Does climate science have a confirmation bias problem? Or is it the bias of climate skeptics that is the problem? I suppose how you answer that might reflect your own bias. And so, in light of recent posts that explored issues of trust and polarization, maybe it's a good time for us to examine the bias issue. Once again, it is an exchange between readers (one of them being Gavin Schmidt) that will take center stage, and hopefully serve as a springboard for a productive discussion on bias. Yesterday JohnB waded into an ongoing exchange between Judith Curry and Gavin on this thread. At the end of one comment, Gavin wrote:
The solution to the existence of individual biases (which exist regardless of how many letters have been signed) is the multiple levels of review and collaborations across many people and voices. You don't get rid of biases by pretending they don't exist.
To which JohnB responded:

There is a subtle problem with bias: it changes our perception of when to raise warning flags because of our expectations. The Vulcan scientist is a fantasy and everybody is prey to their own biases, you, me, Judith, Keith, everybody. The problem with bias is that it clouds our ability to detect bias in our own actions.

Where this is relevant: suppose you (or anyone) were running projections to 2100 and you expect 4 degrees of warming. As in, 4 degrees is around what you think the temps in 2100 will be. You know your stuff and have done this before, so you will have a rough idea of what to expect. If the answer comes in between 3.5 and 4.5 degrees, you'd shrug and say to yourself "Round about what I expected" and move on. If the answer came in at 5 degrees, you'd whistle and think "Higher than I expected, this could get bad" and then continue. However, if the answer came in at 3 degrees, you'd most likely think "That's a bit low, I'd better check my figures."

Natural bias tends to make us more liable to doubt when the answer disagrees with our preconceived ideas. You will be more likely to suspect a problem if the answer is lower than expected than when it is higher than expected. I think it's called "being human."

The problem comes in when the next person builds on your research. He assumes your findings are right. Why not? They're peer reviewed, and he also knows that Gavin knows his stuff and is likely right. But researcher number 2 has the same bias. So if his figures come out under yours, they will be immediately suspect (by him, for a start), but if they come out a bit higher, well, both still fall into the error bars of the other, so they should be right. Hence the seemingly never-ending litany of "It's worse than we thought." The simple fact that it's always "worse than we thought" sets alarm bells ringing for Joe Public.
This sort of "compounding of errors" has probably been observed by most people, as it happens in all walks of life. Why should Climate Science be any different? Joe Public knows it happens everywhere else; he's seen it happen; therefore he won't accept "Trust me, I'm a scientist" as an answer. He will have trouble with "It's been checked by my peers" because he's seen corporate plans checked and rechecked and still fail miserably. Joe Public knows all this, which is why he is immediately suspicious when someone says, in effect, "Yes, I'm biased, but it doesn't matter because I'm right." He just won't believe you.

(I will add that a researcher whose bias is towards a low climate sensitivity has exactly the same problem as described above, but in the opposite direction. He will be more likely to check his figures if the answer is above his expectations.)

*********

Responding directly to JohnB, Gavin countered:

You are imagining scenarios that match only your prejudgement of my thinking. You are in fact completely wrong. In the 1990s, the GISS climate model had a sensitivity of 4.2 deg C (or even 5 deg C in some configurations). For the new model that I contributed to for AR4 (Schmidt et al, 2006), the sensitivity was 2.7 C, and at no time ever in the development process did we act as if that was a 'problem' to be fixed. For the vast majority of scientists (and indeed all of the ones I've worked with), the answer is what it is.

*********

JohnB responded again to Gavin, which you can read in its entirety here. This is the first paragraph:
Where did I say anything was a "problem to be fixed"? I was pointing out the fact that our own biases influence our initial reaction to results. Nothing more, nothing less. Physicists have told me that this is so in their field, why would it not be true in others, including Climate Science?
After reading JohnB's initial comment several times, it sounds to me like he is suggesting a bit more than that, something that someone far better known than he seemed to be getting at here:
Because any study where a single team plans the research, carries it out, supervises the analysis, and writes their own final report, carries a very high risk of undetected bias. That risk, for example, would automatically preclude the validity of the results of a similarly structured study that tested the efficacy of a drug. Nobody would believe it.
Now that you've read that quote, here's the source and the full context. Does that bias you? A brief word about JohnB. He is a non-scientist from Australia. He told me via email that for the last six years he's been
hanging out at a place called scienceforums. These guys are Particle Physicists, Astronomers, BioChemists, name a major science and there is an expert there, I mean one moderator studies Time for a living. You want to debate a topic? Fine. But you'd better be able to provide links to the actual papers and quote the relevant passages. Science debate there is hard science. You can perhaps imagine what some of our "Climate" debates were like.
In that email, JohnB also fleshed out how he became increasingly interested in climate science and the issue of confirmation bias:
I know about the bias thing because an atomic physicist told me how he can sometimes throw out data because he knows it's wrong just by looking at the results: if it's too far from expectation, or the wrong sign, you just know there's something wrong with it. (We were debating tree rings, BTW.) But Climate Science isn't Physics, is it? The hard and fast rules aren't there and the error bars are far larger. Knowing it's wrong becomes more of an opinion or educated guess, so the possibility of bias affecting the results is larger.

When I got interested in modern Climate Science, one of the first things I came across was Phil Jones' immortal statement: "Why should I show you my data when all you want to do is find something wrong with it?" This was so far from what normal, hard scientists would say as to be not even in the same galaxy. Pulling that in any of our forums would write you off as a crank there and then. Not willing to show the data? We're not going to bother listening. Climate Science was not meeting the standards of proof that we ask of any poster in any of our science forums. It wasn't meeting the standard that hard, physical scientists had told me for years was the acceptable standard. So I started reading and digging a bit deeper, and frankly I didn't like what I was seeing. A climate scientist publishes a paper using a "new" statistical method. Who reviewed it? Statisticians? Nope, other climate scientists. But it's in the literature, and it just gets cited and reused.

I was introduced to a form of scientific debate six years ago where it doesn't matter who you are, what your education level is, or how many letters you have after your name. Evidence matters, logic matters and proof matters; everything else is irrelevant. You can be a cowtown hick and argue with a physicist. If you're right and can prove it, you're right. Game over. If your theory or model doesn't match the observations, then your theory or model is wrong.
In Climate Science, if your model doesn't match the observations, the first assumption is that the obs are wrong, and they get reworked until they match the model. Note the Allen and Sherwood paper in 2008. Tropospheric warming as measured by the thermometers on weather balloons didn't match the models' predictions. Do you adjust the model, or decide that the airspeed of the balloon is a better proxy for temperature than the actual thermometer carried by the balloon? Your proof that airspeed is better? Because it matches the models' predictions. I know which I would choose, and I know which one Climate Science chose.
I'll leave it to others to engage with JohnB on his grasp of climate science and the profession's protocols. But I highlight his obvious efforts to educate himself about the discipline, which strike me as sincere (perhaps he'll want to own up to any extra-science motivational biases in the thread), because I think there is a tendency to dismiss this kind of public engagement in the climate debate. The meme on skeptics seems fairly one-dimensional and monolithic, as reflected in this Jeffrey Sachs op-ed. On the other hand, Bart Verheggen acknowledges:
Undoubtedly climate skepticism comes in many shades of grey (as does climate concern). How can we distinguish between genuine skeptics and pseudo-skeptics? Undoubtedly, all self-styled skeptics see themselves as genuine. I don't really have an answer to that question.
Well, maybe the answer is to actually engage with them and their arguments. Obviously, Judith Curry is blazing that trail. But I also want to applaud Gavin Schmidt for coming over here and mixing it up with Judith, JohnB and other readers. Now who can help shed some light on the problem of bias? Does it perhaps afflict both climate science and its critics? If so, what can be done about it?