Summary: Roy Spencer’s latest paper, published in Remote Sensing, supposedly “blew a gaping hole” in the standard theory of climate change. A new paper by Andrew Dessler shows that this is just another in a long string of Roy’s faulty claims to prove that climate sensitivity is lower than previously thought. The main problem in all of these attempts has been rampant abuse of statistics. Typically, Roy would brush off such criticisms, relying on the statistical naïveté of his core audience and the media, and claim he is being persecuted by the “IPCC gatekeepers”. In this case, one of Dessler’s figures shows very clearly how Spencer and his co-author Danny Braswell left out of their analysis all the data that didn’t fit with their hypothesis. It’s so clear that even people who don’t know much about statistics can see the problem. There is no running from this one–no claiming that Spencer is being persecuted–unless he wants us to believe he’s being persecuted by his own data.
In Roy Spencer’s recent book, The Great Global Warming Blunder, he portrayed himself as some kind of persecuted Galileo figure, boldly proclaiming the truth about climate sensitivity to a corrupt and oppressive Priesthood–i.e., his colleagues. (Read my review of the book here.)
Ultimately I find enough evidence to virtually prove my theory, but now the research papers that I submit for publication are rejected outright….
The climate modelers and their supporters in government are largely in control of the research funding, which means that most government contracts and grants go toward these investigators who support the party line on global warming. Sympathizers preside as editors overseeing what can and cannot be published in research journals. Now they even rule over several of our professional societies, organizations that should be promoting scientific curiosity no matter where it leads.
In light of these developments, I have decided to take my message to the people. This message is that mankind’s influence on climate is small and will continue to be small. (Roy Spencer, The Great Global Warming Blunder, pp. xi-xii)
Here’s my favorite.
I find it difficult to believe that I am the first researcher to figure out what I describe in this book. Either I am smarter than the rest of the world’s climate scientists–which seems unlikely–or there are other scientists who also have evidence that global warming could be mostly natural, but have been hiding it. That is a serious charge, I know, but it is a conclusion that is difficult for me to avoid. (Roy Spencer, The Great Global Warming Blunder, p. xxvii)
You can probably imagine how this kind of rhetoric goes over with the other scientists, who have patiently played Whack-A-Mole with Spencer’s steady stream of claims that he has blown the consensus view on climate change out of the water.
Here’s how it typically goes.
1. Spencer claims to show that standard climate models don’t reproduce some aspect of the data very well.
2. He then pulls out a very simple 1-box climate model that he claims does reproduce the data well. And invariably, this model incorporates a lower climate sensitivity than the standard models.
3. He writes it up, sends it in to a journal, and many times it is rejected. Why is it rejected? When one makes an argument that a given model is “good” or “bad” at reproducing some data, that argument is inherently statistical, and there are standard statistical methods that are used to determine how “good” or “bad” a model is at reproducing data. Well, Roy pretty much just ignores the statistical methods, and sometimes even makes up his own. For instance, I showed that some of the work he described in his book (which had been rejected from a reputable journal) was based on a made-up statistical method that could have given him any climate sensitivity he wanted, given that he was willing to allow his model parameters to stray into wildly unphysical territory.
4. If the paper gets rejected, Roy writes in a book or on his blog about the work, and claims he has once again been the victim of the “IPCC Gatekeepers,” who are in cahoots to keep any dissenting views out of the literature. If the paper was not rejected, Spencer hits the media with claims that he has all but proven the consensus wrong, even if such strong claims were not made in the paper.
5. The other scientists sigh… or maybe utter a few choice expletives… and go to work taking apart the work. They point out the flaws in Roy’s statistics, among other things.
6. If Roy acknowledges the criticism at all, he usually dismisses the main points, relying on the statistical naïveté of his core audience. For instance, amid pressure to respond to my criticisms of his book, he initially said that he didn’t have time to respond to it, and later said there were so many errors in my analysis that he didn’t know where to start. As noted above, Spencer has a tendency to add charges of bias and even subterfuge to his dismissals.
7. The media may report on all this, but the coverage is mixed, because your average journalist doesn’t know enough about the subject to tell who is right.
Until today, everything was going pretty much according to schedule with regard to Roy Spencer and Danny Braswell’s latest paper, which they published a few weeks ago in Remote Sensing.
Spencer and Braswell used “lag regression analysis” (a statistical technique) on satellite data of changes in radiation flux and temperature over the last ten years. With this technique, they showed that the observations exhibit a certain characteristic pattern, and then claimed to show that the standard climate models used by the IPCC are terrible at reproducing this characteristic pattern (see Fig. 1).
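For readers unfamiliar with the technique, “lag regression analysis” just means regressing one time series against another over a range of leads and lags. A sketch with synthetic data (the variable names and numbers are mine, not S&B’s):

```python
import numpy as np

def lag_regression(temp, flux, max_lag=12):
    """Return OLS slopes of flux on temp for lags -max_lag..+max_lag.

    A positive lag means flux lags temp (i.e., temp leads)."""
    n = len(temp)
    slopes = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = temp[: n - lag], flux[lag:]
        else:
            x, y = temp[-lag:], flux[: n + lag]
        slopes[lag] = np.polyfit(x, y, 1)[0]  # degree-1 fit; [0] is the slope
    return slopes

# Synthetic example: flux responds to temp with a true slope of 2.0
rng = np.random.default_rng(0)
temp = rng.standard_normal(120)                       # 10 years, monthly
flux = 2.0 * temp + 0.5 * rng.standard_normal(120)
slopes = lag_regression(temp, flux)
# slopes[0] should land near the true value of 2.0
```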
Note how the average of the “3 least sensitive models” does a slightly better job at mimicking the real data than the average of the “3 most sensitive models”. This refers to “climate sensitivity,” i.e., the equilibrium temperature change caused by a change in forcing equivalent to doubling the CO2 concentration in the atmosphere. There you go–obviously the real climate must be far less sensitive to greenhouse gas additions than ANY of the climate models! (S&B didn’t go quite that far, but this is how it was exaggerated in the media.)
Spencer and Braswell also used a variation of the “simple climate model” they have used in the past to show that they could reproduce the characteristic pattern in the data IF they assumed that random changes in cloud cover were forcing the system, rather than acting as a feedback in the system. (I.e., clouds drive the climate, rather than responding to and in turn altering changes in climate due to other factors.)
With this stunning new evidence in hand, it appears they submitted their paper to Science magazine, the premier science publication in the world. Oh, it got rejected, but that’s not a big deal, because 1) Science rejects the vast majority of papers submitted, and 2) Roy Spencer had already predicted they might get rejected because the editors are biased against “skeptics”.
They then submitted their paper to Remote Sensing, an odd choice given that it’s a new journal that doesn’t publish much climate science and isn’t yet indexed by the standard databases. This worked out well for them, at first, because the editor handling the manuscript appears to have simply chosen the reviewers S&B suggested.
After the paper came out in Remote Sensing, the University of Alabama in Huntsville issued a press release that somewhat exaggerated the strength of the paper’s claims. James Taylor (who works for the Heartland Institute and blogs for Forbes magazine) blogged that “New NASA Data Blow Gaping Hole in Global Warming Alarmism.” A minor media frenzy ensued when the story was picked up by Yahoo! News and Fox News.
Climatologists Kevin Trenberth and John Fasullo posted a short response to the paper on the RealClimate blog. They showed that Spencer and Braswell had made a statistical blunder by failing to include error bars. If you want to show that one data set is different from another, you need some estimate of the uncertainty in each to back up the claim. In this case, they pointed out that S&B were comparing a 10-year period in the data with a 100-year period in the models. So they broke up the 100 years into 10-year periods, calculated error bars for the model response, and showed that the data fell within those error bars. What’s more, they showed that some of the models (not shown in S&B’s figure) actually did REALLY WELL at mimicking the data. Which models did well? The ones that were already known to do a good job of mimicking El Niño cycles, which is what dominated weather changes over the past decade. Therefore, Trenberth and Fasullo concluded that the skill exhibited by the models in reproducing the pattern S&B identified had nothing to do with climate sensitivity. They also pointed out that the “simple climate model” used by S&B to interpret their results was too simple to include the processes associated with El Niño cycles, and they pointed to my critique of Spencer for evidence that Spencer has a history of abusing simple climate models.
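The resampling idea behind Trenberth and Fasullo’s error bars is simple enough to sketch: chop a long model run into non-overlapping 10-year segments, compute the statistic of interest in each segment, and use the spread across segments as the uncertainty range. A toy version with synthetic monthly data (all numbers illustrative):

```python
import numpy as np

def segment_statistic(series, seg_len, stat):
    """Apply `stat` to each non-overlapping segment of length seg_len."""
    n_seg = len(series) // seg_len
    return np.array([stat(series[i * seg_len:(i + 1) * seg_len])
                     for i in range(n_seg)])

rng = np.random.default_rng(1)
monthly = rng.standard_normal(1200)   # stand-in for 100 years of monthly output
seg_len = 120                         # 10-year windows

# Statistic of interest per decade (here just the standard deviation,
# standing in for whatever regression statistic is being compared):
stds = segment_statistic(monthly, seg_len, np.std)

# The spread across the ten decades serves as a 2-sigma error bar:
mean, spread = stds.mean(), 2 * stds.std()
```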
More news outlets picked up the story, reporting how the paper had been criticized, but also how Spencer disagreed with the criticism. Stephanie Pappas at LiveScience reported that she couldn’t find any climate scientists who agreed with the paper. Still, this wasn’t a game-changer, because Spencer could (and did!) claim his critics were getting it all wrong, and compared them to “the Empire” from Star Wars. (Jedi Mind Trick: “You don’t need to see any error bars. Move along.”)
Things started unravelling a bit last week, when the editor of Remote Sensing announced that he was resigning to take responsibility for publishing S&B’s paper, which should not have been published because it failed to address prior criticisms of related work. But once again, all Spencer had to do was claim that the “IPCC gatekeepers” must have put political pressure on the editor.
I don’t think the same old tactics will work so well this week, however, since Andrew Dessler has published a paper in Geophysical Research Letters critiquing Spencer and Braswell’s paper, as well as another one by Dick Lindzen. (Here is a video where Dessler explains the main results of his paper.) One of the most important criticisms is that S&B had to put unrealistic parameters into their simple climate model to get the answer they wanted–a familiar story.
But the most damning criticism has to do with Spencer and Braswell’s figure, shown above (Fig. 1). Regarding this figure, Spencer and Braswell said,
While we computed results for 14 of the models archived, here we will only present results for the three most sensitive models… and the three least sensitive models…. (Spencer and Braswell, 2011)
Why wouldn’t they report all their results? As Trenberth and Fasullo had already pointed out, Spencer and Braswell were ignoring the models that simulate El Niño well, and some of these models do quite well at reproducing the satellite data. Wait… Spencer and Braswell were ignoring some of the data for which they SAID they had performed their analysis? People make mistakes, but it’s one thing to overlook some data that has been published somewhere out in the literature, and another to overlook data that you have analyzed yourself.
Dessler analyzed the same satellite radiation flux data, but used several different temperature data sets, and calculated error bars for the flux/temperature regressions. He also compared the data to the results from 13 climate models, rather than just two sets of three, averaged together. The resulting plot is shown in Fig. 2. Here the red and blue lines are the combined radiation-temperature data, and the shaded areas are the error bars. (The blue set is the one Spencer and Braswell reported.) The black lines are the results from the 13 climate models, and the lines that have crosses on them are the ones Spencer and Braswell averaged together and plotted in Fig. 1.
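Putting an error bar on a flux/temperature regression slope, as Dessler did, can be done with nothing more than the textbook standard error of an ordinary least-squares slope. A sketch with synthetic data (the variable names and numbers are illustrative, not Dessler’s):

```python
import numpy as np

def slope_with_stderr(x, y):
    """OLS slope of y on x, plus the textbook standard error of the slope."""
    n = len(x)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    slope = np.sum((x - xm) * (y - ym)) / sxx
    resid = y - (ym + slope * (x - xm))
    stderr = np.sqrt(np.sum(resid ** 2) / (n - 2) / sxx)
    return slope, stderr

rng = np.random.default_rng(2)
temp = rng.standard_normal(120)               # monthly temperature anomalies (K)
flux = 1.5 * temp + rng.standard_normal(120)  # flux anomalies (W/m^2)

slope, se = slope_with_stderr(temp, flux)
# A ~95% error bar on the regression is slope +/- 2*se; two regressions
# whose error bars overlap cannot be claimed to differ.
```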
Now, let’s think about the import of Figures 1 and 2.
First, as Dessler points out, some of the climate models do pretty well at simulating the satellite data. Since the models do not drive the El Niño cycles via random cloud variations, Spencer and Braswell’s modeling effort (even if realistic parameters were used) doesn’t show anything about the cause of temperature variations.
Second, Dessler (2011) points out that Spencer and Braswell just happened to choose 1) the temperature series that causes the data in the figures to deviate the most from the models and 2) six of the models that deviate the most from the data.
Look at Fig. 1. Now look at Fig. 2. Note that the blue data set in Fig. 2 is the one Spencer and Braswell used in Fig. 1. Note where all six of the models plotted by Spencer and Braswell (black lines with crosses) lie in Fig. 2, compared with the seven models they… misplaced (plain black lines).
Now look again.
Roy Spencer has some explaining to do.
When I critiqued Spencer’s book, and showed how he had used a bogus statistical technique that was capable of giving him any answer he wanted, I tried to come up with the most charitable interpretation I could. It took me a while, but I figured out a way to explain Spencer’s method as a dumb mistake (we all make them), rather than conscious deception. Well, I’ve been thinking about the figures above, trying to come up with another explanation… and I’m drawing a blank.
Are we supposed to believe that Spencer ran the analysis on all 14 models, and then decided he would only look at six of them? Sorry, but I’m not buying. The only thing I can think of to soften the blow is that I can’t imagine Roy saying he analyzed all 14 models while knowing that someone else would inevitably come along, reanalyze the data, and see what he had left out of his figures. If his intent were to deceive, he would just have to claim he only analyzed the six models, and give a mea culpa when it came out that the others undercut his claims.
We’ll have to see what Spencer comes up with for an explanation, if he even bothers to acknowledge the problem. But for now, there’s no use crying “Persecution!” unless Roy wants to imagine that he’s being persecuted by his own data. No matter how Vader-esque his opponents, Roy Spencer has some explaining to do.
Update 1: Ben Santer e-mailed me this comment about my post.
There are multiple ways in which Roy is being persecuted by his own data. As our new JGR paper indicates, even UAH-based estimates of lower tropospheric temperature change now have a signal-to-noise ratio approaching 4. This is a little ironic. For well over a decade, Roy Spencer and John Christy were claiming that their estimate of global-scale changes in lower tropospheric temperature – which showed little or no warming – was the truth, the whole truth, and nothing but the truth. Now the UAH-inferred warming over the satellite era is nearly four times larger than our current best estimates of natural climate variability on the 32-year timescale. I doubt whether Roy Spencer and John Christy will make mention of this result in upcoming press releases…
Update 3: Where I linked to Dessler’s paper above, it just goes to the abstract. To download the whole paper you have to have a subscription. Here is a pre-print version of the paper on Dessler’s university’s website.