Wednesday, July 09, 2014

Confessions of a Computer Modeler - WSJ


After earning a master's degree in environmental engineering in 1982, I spent most of the next 10 years building large-scale environmental computer models. My first job was as a consultant to the Environmental Protection Agency. I was hired to build a model to assess the impact of its Construction Grants Program, a nationwide effort in the 1970s and 1980s to upgrade sewer-treatment plants.
The computer model was huge: it analyzed every river, sewer-treatment plant and drinking-water intake (the places in rivers where municipalities draw their water) in the country. I'll spare you the details, but the model showed huge gains from the program as water quality improved dramatically. By the late 1980s, however, any gains from upgrading sewage treatment would be offset by the additional pollution load coming from people who moved from on-site septic tanks to public sewers, which dump the waste into rivers. Basically, the model said we had hit the point of diminishing returns.
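To see the diminishing-returns arithmetic concretely, here is a toy sketch. Every number in it is invented for illustration (this is not the EPA model): each round of upgrades removes half of the remaining treatable pollution, while each round of septic-to-sewer conversions adds a constant new load, so the net gain per round shrinks and eventually turns negative.

```python
# Toy illustration of diminishing returns. All numbers are invented;
# this is not the EPA model, just the shape of the argument.
remaining_load = 100.0  # treatable pollution still in the rivers (arbitrary units)
hookup_load = 3.0       # new load added each round by septic-to-sewer conversions

for round_num in range(1, 8):
    removed = remaining_load * 0.5    # each upgrade round halves what's left
    remaining_load -= removed
    net_gain = removed - hookup_load  # treatment gains minus the new sewer load
    print(f"round {round_num}: removed {removed:5.1f}, net gain {net_gain:5.1f}")
```

After a few rounds the net gain crosses zero, which is exactly the point of diminishing returns the model was reporting.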
When I presented the results to the EPA official in charge, he said that I should go back and "sharpen my pencil." I did. I reviewed assumptions, tweaked coefficients and recalibrated data. But when I reran everything, the numbers didn't change much. At our next meeting he told me to run the numbers again.
After three iterations I finally blurted out, "What number are you looking for?" He didn't miss a beat: He told me that he needed to show $2 billion of benefits to get the program renewed. I finally turned enough knobs to get the answer he wanted, and everyone was happy.
Was the EPA official asking me to lie? I have to give him the benefit of the doubt and assume he believed in the value of continuing the program. (Congress ended the grants in 1990.) He certainly didn't give any indication otherwise. I also assume he understood the inherent inaccuracies of these types of models. There are no exact values for the coefficients in models such as these, only ranges of potential values. By moving a bunch of these parameters to one side or the other, you can usually get very different results, often (surprise) in line with your initial beliefs.
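That flexibility is easy to demonstrate. Here is a minimal sketch, assuming a made-up benefit model with three uncertain coefficients (the names, ranges and formula are hypothetical inventions, not the EPA model's): sampling each coefficient anywhere inside its defensible range produces a wide spread of "answers," any one of which a modeler could honestly report.

```python
import random

# Hypothetical benefit model with three uncertain coefficients, each
# known only to within a plausible range. All names and numbers are
# invented for illustration.
RANGES = {
    "treatment_gain": (0.5, 1.5),  # $B of benefit per unit of upgrades
    "septic_offset":  (0.2, 1.0),  # $B of new load per unit of sewer hookups
    "decay_factor":   (1.1, 1.5),  # in-stream decay multiplier
}

def benefit(treatment_gain, septic_offset, decay_factor):
    """Toy benefit estimate: gains minus offsets, scaled by decay."""
    return (treatment_gain - septic_offset) * decay_factor * 2.0

# Draw every coefficient from its range many times and record the answer.
estimates = sorted(
    benefit(**{k: random.uniform(lo, hi) for k, (lo, hi) in RANGES.items()})
    for _ in range(10_000)
)

print(f"5th percentile:  ${estimates[500]:.2f}B")
print(f"95th percentile: ${estimates[9500]:.2f}B")
```

Every individual draw is defensible, yet the gap between the low and high ends of that spread can easily cover the $2 billion the official wanted. "Sharpening the pencil" is just choosing where in the ranges to stand.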
I realized that my work for the EPA wasn't that of a scientist, at least in the popular imagination of what a scientist does. It was more like that of a lawyer. My job, as a modeler, was to build the best case for my client's position. The opposition will build its best case for the counterargument, and ultimately the truth should prevail.
If opponents don't like what I did with the coefficients, then they should challenge them. And during my decade as an environmental consultant, I was often hired to do just that to someone else's model. But there is no denying that anyone who makes a living building computer models likely does so for the cause of advocacy, not the search for truth.
Surely the scientific community wouldn't succumb to these pressures like us money-grubbing consultants. Aren't they laboring for knowledge instead of profit? If you believe that, boy do I have a computer model to sell you.
The academic community competes for grants, tenure and recognition; consultants compete for clients. And you should understand that the lines between academia and consultancy are very blurry, as many professors moonlight as consultants, authors, talking heads, etc.
Let's be clear: I am not saying this is a bad thing. The legal system is adversarial and for the most part functions well. The same is true for science. So here is my advice: Those who are convinced that humans are drastically changing the climate for the worse and those who aren't should accept and welcome a vibrant, robust back-and-forth. Let each side make its best case and trust that the truth will emerge.
Instead, those who do believe that humans are driving climate change retort that the science is "settled" and that those who don't agree are "deniers" and "flat-earthers." Even the president mocks anyone who disagrees. But I have been doing this for a long time, and the one thing I have learned is how hard it is to convince people with a computer model. The vast majority of your audience will never, ever understand the math behind it. This does not mean people are dumb. They usually have great BS detectors, and when they see one side of a debate trying to shut down the other side, they will most likely assume it has something to hide, has the weaker argument, or both.
