Posted by: Barry Bickmore | March 1, 2011

Roy Spencer’s Great Blunder, Part 3

The following is PART 3 of my extended critique of Roy Spencer’s The Great Global Warming Blunder:  How Mother Nature Fooled the World’s Top Climate Scientists (New York:  Encounter Books, 2010).  In this part I refer constantly to Spencer’s simple climate model, which I explained in Part 1, so make sure to read that first. See also Part 2.

Summary of Part 3: Roy Spencer posits that the Pacific Decadal Oscillation (PDO) is linked to chaotic variations in global cloud cover over multi-decadal timescales, and thus has been the major driver of climate change over the 20th century.  To test this hypothesis, he fit the output of a simple climate model, driven by the PDO, to temperature anomaly data for the 20th century.  He found he could obtain a reasonable fit, but to do so he had to use five (he says four) adjustable parameters.  The values he obtained for these parameters fit well with his overall hypothesis, but other values that are more physically plausible, and that cut against his hypothesis, would give equally good results.  Spencer only reported the values that agreed with his hypothesis, however.  Roy Spencer has established a clear track record of putting forward woefully insufficient evidence for his ideas, and then complaining that his colleagues are intellectually lazy and biased when they are not immediately convinced.

It Must Be the PDO!

As I mentioned in Part 1, Roy Spencer believes that climate change is largely controlled by chaotic, natural variations in cloud cover, rather than by external forcing.  The idea that there are chaotic, natural climate variations over short timescales of up to a decade or so is non-controversial, but Spencer wants to take it a step further.

So, what might yearly, 10-year, or 30-year chaotic fluctuations in cloudiness do?  Maybe the Medieval Warm Period and the Little Ice Age are examples of chaos generated by the climate system itself.  (p. 107)

As we saw in Part 2, Spencer tries to scuttle the standard explanation for the ice ages of the past million years because 1) the standard explanation is consistent with the models the IPCC uses to project future temperature trends, and 2) even he doesn’t hypothesize how “chaos generated by the climate system itself” can cause trends spanning tens of thousands of years.  I.e., the fact that mainstream models of climate change can explain more data is threatening to him.

I believe that the ice core record is largely irrelevant to what is happening today.  (p. 30)

Therefore, it is reasonable to suspect that the ice ages and the interglacial periods of warmth were caused by some as yet undiscovered forcing mechanism.  (p. 69)

To be fair, I should mention that if Spencer’s hypothesis were correct, it would be impossible to apply to the distant past because we have no methods for estimating past cloud cover.  So for the moment, let’s give him a pass on this issue and see how he accounts for the climate change in the more recent past, when we have had decent meteorological records.

Spencer’s hypothesis is that the Pacific Decadal Oscillation (PDO) has been controlling most of the global temperature change over the last century.  The PDO is a mode of natural climate variation in the North Pacific that oscillates over timescales of a few decades.  The Pacific Decadal Oscillation Index (PDOI) is a unitless quantity climatologists have created to describe how strongly the PDO is favoring warming (positive PDOI) or cooling (negative PDOI).  The PDOI over the 20th century (subjected to a 5-year running average) is plotted as the green line in Figure 1.  For comparison, the global average temperature anomaly (HadCRUT3v, subjected to a 5-year running average) is plotted as the blue line.

Figure 1.

You probably immediately noticed that the temperature and PDOI records don’t look exactly the same, although there are some positive and negative humps in similar places.  However, Spencer actually posits that the PDO constitutes a “forcing” in the system, and there can be some time lag before the system responds to a forcing.  “If you understand this distinction, you are doing better than some climate experts” (p. 111).

If the PDO is forcing the system, Spencer reasoned, maybe he could take the PDOI, multiply it by some scaling factor to convert it into W/m^2, and then run that through his simple climate model (see Part 1) to see what comes out.  The problem, of course, is that the simple climate model contains several parameters, so if Roy wanted his scaled PDOI to reproduce the observed temperature data, he had to provide values for those parameters by “adjusting” them to get the best fit.  Here is a list of the adjustable parameters he used (a minimal code sketch of the resulting setup follows the list).

  1. alpha = the feedback parameter (see Part 1 for explanation)
  2. beta = the scaling factor to convert the PDOI into W/m^2
  3. h = the depth of the ocean mixed layer (see Part 1)
  4. ∆To = the temperature deviation from equilibrium at the start of the simulation
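
For readers who prefer code to prose, here is a minimal sketch of that setup: the one-box model from Part 1, driven by the scaled PDOI.  The function below is my own illustration (the variable names, the one-year forward-Euler time step, and the standard values for the density and specific heat of seawater are my choices), not Spencer’s actual code.

```matlab
% Minimal sketch of the simple model from Part 1, forced by a scaled PDO index.
% Save as pdo_model.m.  An illustration only, not Spencer's code.
function dT = pdo_model(pdoi, years, alpha, beta, h, dTo)
    rho = 1000;              % density of seawater, kg/m^3 (approximate)
    c   = 4180;              % specific heat of water, J/(kg C) (approximate)
    Cp  = rho * c * h;       % heat capacity of the mixed layer, J/(m^2 C)
    dt  = 365.25 * 86400;    % one-year time step, in seconds
    dT  = zeros(size(years));
    dT(1) = dTo;             % anomaly at the start of the run (year 1900)
    for i = 2:numel(years)
        forcing  = beta * pdoi(i-1);   % PDOI scaled into W/m^2
        feedback = alpha * dT(i-1);    % radiative feedback, W/m^2
        dT(i) = dT(i-1) + dt * (forcing - feedback) / Cp;
    end
end
```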

Spencer describes how he proceeded; a rough code sketch of the kind of search he describes follows the quotation.

Since we don’t know how to set the four [parameters] on the model to cause it to produce temperature variations like those in [the 20th century temperature record], we will use the brute force of the computer’s great speed to do 100,000 runs, each of which has a unique combination of these four [parameter] settings.  And because spreadsheet programs like Excel aren’t made to run this many experiments, I programmed the model in Fortran.

It took only a few minutes to run the 100,000 different combinations….  Out of all these model simulations, I saved the ones that came close to the observed temperature variations between 1900 and 2000.  Then, I averaged all of these thousands of temperature simulations together….  What we see is that if the computer gets to “choose” how much the clouds change with the PDO, then the PDO alone can explain 75 percent of the warming trend seen during the twentieth century.  In fact, it also does a pretty good job of capturing the warming until about 1940, then the slight cooling until the 1970s, and finally the resumed warming until 2000.

If I instead use the history of anthropogenic forcings that James Hansen has compiled…, somewhat more of the warming trend can be explained, but the temperature variations in the middle of the century are not as well captured.  I should note that the “warm hump” around 1940 and the slight cooling afterward have always been a thorn in the side of climate modelers.  (p. 115)
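
In code, the brute-force search Spencer describes would look something like the sketch below.  The parameter ranges, the “close enough” cutoff, and the variable names (pdoi, years, and obs stand for the PDO index, the model years, and the smoothed temperature anomalies) are my guesses for illustration; Spencer does not give those details.

```matlab
% Sketch of a brute-force search in the spirit Spencer describes: try many
% random parameter combinations, keep those that come close to the observed
% anomalies, then average the keepers.  Ranges and cutoff are illustrative.
nruns = 100000;
kept  = [];                                    % rows: [alpha beta h dTo]
for k = 1:nruns
    alpha = 6 * rand;                          % W/m^2/C
    beta  = 3 * rand;                          % W/m^2 per unit PDOI
    h     = 50 + 1150 * rand;                  % m
    dTo   = -1 + 2 * rand;                     % C
    dT    = pdo_model(pdoi, years, alpha, beta, h, dTo);
    rmse  = sqrt(mean((dT(:) - obs(:)).^2));   % misfit to the observations
    if rmse < 0.1                              % arbitrary "close enough" cutoff
        kept(end+1, :) = [alpha beta h dTo];   %#ok<AGROW>
    end
end
mean_params = mean(kept, 1);                   % average of the retained runs
```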

I digitized the data in Spencer’s figure, and have plotted it here in Figure 2.

Figure 2.

Next, Spencer took nine years (2000-2008) of satellite radiation flux data and removed the influence of feedbacks via a method related to his work in Spencer and Braswell (2008) (which, as we saw in Part 1, has been discredited), leaving only the radiative forcing.  He then plotted the average forcing vs. the average PDOI for each of those years, so that he could use the slope of the data to obtain an empirical estimate of beta, the PDOI scaling factor.  The best-fit slope was 0.97 W/m^2 per unit PDOI, whereas his model fitting had produced a best-fit beta of 1.17 W/m^2.  Pretty close!  I’ve digitized the data in Spencer’s graph (p. 119) and reproduced it in Figure 3.

Figure 3.
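
The slope in Figure 3 amounts to fitting a straight line to the nine annual points.  In MATLAB that is a one-liner; the variable names below are placeholders for the digitized annual averages, and the least-squares fit is my assumption about how the “best-fit slope” was obtained.

```matlab
% Least-squares line through the nine annual points in Figure 3.
p = polyfit(pdoi_annual, flux_anom, 1);   % straight-line fit
beta_empirical = p(1);                    % slope, in W/m^2 per unit PDOI
fprintf('Empirical beta estimate: %.2f W/m^2\n', beta_empirical);
```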

Attack of the Zealots

It looks pretty impressive, doesn’t it?  Roy Spencer thought so, too, so he submitted a paper on these results to a reputable scientific journal.  He describes the result.

In early 2009 I submitted the work I am describing for publication in Geophysical Research Letters, and the paper was quickly rejected by a single reviewer who was very displeased that I was contradicting the IPCC.  Besides, this reviewer argued, because the PDO index and temperature variations… do not look the same, the PDO could not have caused the temperature changes….

This expert’s comments revealed a fundamental misunderstanding of how temperature changes are caused, and as a result my paper was rejected for publication.  In fact, the editor was so annoyed he warned me not to bother changing and then resubmitting it.  My results… had obviously struck a nerve.  This is the sorry state of scientific peer review that can develop when scientists let their preconceived notions get in the way.  (pp. 111-12)

This episode (and perhaps others like it) is one of the main reasons why Spencer says he is taking his message “to the people”.

The peer review process for getting research proposals funded and scientific papers published is no longer objective, but is instead short-circuited by zealots adhering to their faith that humans now control the fate of Earth’s climate.  (p. xvi)

I’d be the last one to claim that the peer review system in science is perfect, but is it really that broken?  Is the research Roy Spencer describes so groundbreaking and brilliant that real scientists–those who are Truly Objective–would have accepted it without such a lot of misguided, trivial objections?  I decided to conduct my own, more thorough peer review to find out.

Adjustable Parameters

Anyone who deals with numerical modeling knows that if you start using too many adjustable parameters, you can often make your model fit the data very well, but the parameters chosen for the model might not be physically meaningful.  That is, there are often a number of distinct combinations of the parameters that would give about equally good results.  So when scientists like me see Roy Spencer curve-fitting with four adjustable parameters, red flags go up right away.  The typical thing to do in this situation would be to see if we can constrain some of the variables into a physically reasonable range.  We can actually go out and measure the depth of the ocean mixed layer, for example.

To hear Roy tell it, he just let all four parameters ride, but no matter, because the values his computer program chose all came out to be physically reasonable!  Here are the “best-fit” values he came up with.

  1. alpha ≈ 3.0 W/m^2/°C  (p. 116; alpha is the feedback parameter)
  2. beta ≈ 1.17 W/m^2  (p. 119, Fig. 26; beta is the PDO scaling factor)
  3. h ≈ 700 m  (pp. 115-116; h is the ocean mixed layer depth)
  4. ∆To ≈ -0.6 °C  (pp. 116-117; ∆To is the starting temperature anomaly in the year 1900)

In the next sections, I’ll look at both 1) how Spencer came up with these values, and 2) what to make of his claims that they are physically reasonable.  Some readers might recognize that some of my criticisms are the same as, or similar to, those made by Ray Pierrehumbert about a related episode of Roy’s curve-fitting.  That’s ok; he didn’t listen the first time either, and in some cases I’m going to go into a little more depth.

Spencer’s Model in MATLAB

To explore Spencer’s claims, I first programmed his model into MATLAB, and connected it to a built-in curve-fitting routine.  (If you want a copy of the m-files, just e-mail me.)  When I plugged in the values listed above, I got the blue curve in Figure 4.  The red curve is the one I digitized from Spencer’s figure (see Fig. 2), and the black curve is the HadCRUT3 temperature anomaly subjected to a 5-year running average.  Since Spencer’s curve is supposedly the average of thousands of individual curves, I’d say mine is quite close, and in any case it’s clear I programmed my model to be identical to his.

Figure 4.

Any Answer I Want?

Having made sure the model was correct, I used the same parameter values as the starting point when I applied the curve-fitting routine.  The fitting routine changed three of the parameter values significantly (alpha = 3.7, beta = 1.55, and ∆To = -0.66), but the ocean mixed-layer depth (h) stayed pegged right at 700 m.  (The resulting model output is shown as the green curve in Figure 4.) So I asked, “What would happen if I set the starting point of h at different values from 50 to 1200 m (in 50 m increments), and then re-fit all of the parameters every time?”  I did it, and had the computer spit out a graph with all the “best-fit” curves, so I could compare them.  The result is in Figure 5.

Figure 5.
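
For anyone who wants to try the sweep themselves, here is a rough sketch of how it can be set up.  This is not my actual code (e-mail me for the m-files); fminsearch stands in here for whatever built-in fitting routine you prefer, and pdoi, years, and obs are again placeholder names for the digitized series.

```matlab
% Sketch of the sweep behind Figures 4 and 5: fit all four parameters, starting
% the mixed-layer depth at 50, 100, ..., 1200 m, and overlay the best-fit curves.
misfit   = @(p) sum((pdo_model(pdoi, years, p(1), p(2), p(3), p(4)) - obs).^2);
h_starts = 50:50:1200;                     % initial mixed-layer depths, m
fits     = zeros(numel(h_starts), 4);      % columns: alpha, beta, h, dTo
opts     = optimset('MaxFunEvals', 1e5, 'MaxIter', 1e5);
figure; hold on
for i = 1:numel(h_starts)
    p0 = [3.0, 1.17, h_starts(i), -0.6];   % start from Spencer's values, vary h
    fits(i, :) = fminsearch(misfit, p0, opts);
    plot(years, pdo_model(pdoi, years, fits(i,1), fits(i,2), fits(i,3), fits(i,4)))
end
plot(years, obs, 'k', 'LineWidth', 2)      % observed anomalies for comparison
```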

Astute readers will be scratching their heads, wondering why there is only one model curve shown.  Well, so was I.  I combed through my code, beating my head on the desk every once in a while.  Finally, I found out that all 24 model curves were there, but they were all exactly on top of one another.  That’s right, what I’m telling you is that I could generate the exact same best-fit model curve by assuming h values anywhere from 50 m to 1200 m.  What happened to the parameter values during the fitting process?  Again, the h values didn’t budge, and ∆To was quite stable at around -0.66.  However, the alpha and beta values both varied dramatically with different depths.  In Figure 6, I’ve plotted the alpha and beta values vs. h.

Figure 6.

What Figure 6 shows is that best-fit alpha, beta, and h values are all perfectly covariant with one another.  That is, no matter what number you pick for h, there will always be a combination of alpha and beta values that will give you the same best-fit model curve.  The exact same model curve.
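
A little algebra shows why this degeneracy is unavoidable.  Writing the model as in Part 1, with rho and c the density and specific heat of seawater (so that rho*c*h is the heat capacity of the mixed layer), we have

$$\rho c h \, \frac{d\,\Delta T}{dt} = \beta \cdot \mathrm{PDOI}(t) - \alpha \, \Delta T.$$

Dividing both sides by h shows that the temperature trajectory depends only on the ratios beta/h and alpha/h (plus ∆To):

$$\frac{d\,\Delta T}{dt} = \frac{\beta}{\rho c h}\,\mathrm{PDOI}(t) - \frac{\alpha}{\rho c h}\,\Delta T.$$

Multiply h by any factor, multiply alpha and beta by the same factor, and the model output is exactly unchanged, which is why the best-fit alpha and beta values scale linearly with the assumed mixed-layer depth.  (Arthur Smith has since worked this out formally; see the update at the end of the post.)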

Roy Spencer said he ran 100,000 different combinations of the fitting parameters, so how on Earth did he just happen to pick a set of values that agreed well with his hypothesis, when he could get the exact same curve no matter how deep he made his model ocean?  Let’s examine that question.

How Deep is the Ocean?

First, if a 700 m mixed layer is a physically reasonable value, then maybe my objections are moot.  Here’s what Spencer says about it.

By coincidence, this figure actually matches the approximate depth over which warming has been observed to occur in the last fifty years, which is something the model did not know beforehand.  (p. 116)

Even if the water temperature has been measurably heating down that deep, however, Spencer’s model assumes that the temperature is uniform throughout that entire 700 m, which is demonstrably false.  The thermocline (i.e., the boundary between the warmer, well-mixed layer at the surface of the ocean and the colder deep ocean water) is typically in the range of 50-100 m deep (Baker and Roe, 2009).  In a simple model like Spencer’s that doesn’t account for upwelling and diffusion of heat into the deep ocean, one needs to fudge that figure a little higher.  Murphy and Forster (2010) discussed previous work on this question, and it appears mixed-layer depths of 100-200 m (probably closer to 100 m) are reasonable for models such as Spencer’s.  The irony, of course, is that Murphy and Forster were criticizing Spencer and Braswell (2008) for using only a 50 m mixed layer, which skewed their results.  (Spencer provides the spreadsheet he used for the 2008 study here.  Go ahead and plug in a 700 m mixed layer, and see what kind of nonsense comes out.  You can compare it to what you’re supposed to get here.)

Automagic!

The key to understanding Spencer’s choice of a 700 m mixed layer depth is in Figure 6.  My best-fit values for alpha and beta at h = 700 m were 3.71 W/m^2/°C and 1.55 W/m^2, respectively.  My technique was somewhat different from Spencer’s–for some reason he averaged together thousands of different curves that seemed to fit the data pretty well, and I assume he averaged the adjustable parameter values from these different model runs, as well.  Therefore, he obtained similar, but not identical, values:  alpha = 3.0 W/m^2/°C and beta = 1.17 W/m^2.  Remember that for Spencer’s hypothesis to work, he needed to obtain an alpha value corresponding to negative (alpha > 3.3) or weakly positive feedback.  The value alpha = 3.0 corresponds to positive feedback, but to feedback much weaker than the range Spencer gives for the IPCC models (alpha = 0.9-1.9) implies.  So why not choose a mixed layer depth of 800 or 1000 m, and obtain an even larger alpha value?  Because the graph in Figure 3 dictates that Spencer also needed a beta value close to 1 W/m^2.  And guess what?  His ad hoc statistical method automatically gave him answers in the right range!

Did he purposefully manipulate his method to produce just the right values?  I actually don’t think so.  Roy’s computer program may have generated just the right values simply due to luck, combined with a marked misunderstanding of his model system and a flawed statistical method.  When I generated the 24 model curves in Figure 5, which all fit the data equally well using widely different parameters, I collected the averages of all the best-fit parameters and got:  alpha = 3.3 W/m^2/°C, beta = 1.38 W/m^2, h = 625 m, and ∆To = -0.66 °C.  Wow, those are close to Roy’s preferred parameters, right?  Well, the truth is that at first I ramped the ocean depth from 50 to 1000 m, and some of my average parameter values were too low.  All I had to do to get what I wanted was change the upper bound to 1200 m.  But that’s the point, isn’t it?  I could get whatever I wanted by judiciously choosing the right boundary conditions… or by dumb luck.

This discussion brings up another intriguing question.  What if we were to choose a realistic mixed-layer depth?  What kind of alpha and beta values would we obtain then?  In Figure 6, the values for h = 100-200 m are alpha = 0.53-1.06 and beta = 0.22-0.44.  In other words, the feedback would have to be just as positive as, or more positive than, that assumed by the IPCC models.  And as for beta, Ray Pierrehumbert pointed out that if it were as high as Roy Spencer wants it to be, it would produce fluctuations in the net radiation flux that are much larger than actually observed via satellite.  He instead suggested a more reasonable value of 0.25 W/m^2 for beta.  So what do you know?  By assuming a reasonable mixed layer depth, you can obtain a beta value that is consistent with satellite observations, and an alpha value that indicates feedback that is at least as positive as the IPCC asserts.  But then, they wouldn’t be consistent with Roy Spencer’s method for estimating beta shown in Figure 3, or with his hypothesis that climate feedbacks are more negative than the IPCC estimates.
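
(Those numbers are just the linear covariance from Figure 6 at work; you can recover them by rescaling my h = 700 m best-fit values directly, as in this quick check.)

```matlab
% Quick check: rescale the h = 700 m best-fit values (alpha = 3.71 W/m^2/C,
% beta = 1.55 W/m^2) linearly to realistic mixed-layer depths.
h = [100 200];                  % m
alpha_scaled = 3.71 * h / 700   % roughly [0.53 1.06] W/m^2/C
beta_scaled  = 1.55 * h / 700   % roughly [0.22 0.44] W/m^2
```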

Another Adjustable Parameter?

What about Roy’s favored value of ∆To ≈ -0.6 °C?  Why, that’s exactly what he would expect, too!

The third parameter is the starting temperature anomaly in 1900:  the model chose a temperature of about 0.6 deg. C below normal.  This choice is interesting because it approximately matches what the thermometer researchers have chosen for their baseline in Fig. 23.  That is, the temperature the model decided is the best transition point between “above normal” and “below normal” is the same as that chosen by the thermometer researchers.

It’s difficult to put into words how strange this statement is.  The HadCRUT3v temperature anomaly, which Spencer uses, is normalized to a 1961-1990 base period.  I.e., they calculated the average temperature from 1961-1990 and then subtracted that value from all the raw temperatures.  Why did they choose 1961-1990?  Because it’s a “climatological standard normal”, as defined by the World Meteorological Organization (WMO).  The WMO explains,

WMO defines climatological standard normals as “averages of climatological data computed for the following consecutive periods of 30 years: January 1, 1901 to December 31, 1930, January 1, 1931 to December 31, 1960, etc.” (WMO, 1984).  The latest global standard normals period is 1961-1990. The next standard normals periode is January 1, [1991] – December 1, 2020.

Therefore, 1961-1990 isn’t the only “normal” period they could have chosen to be in line with WMO guidelines–it’s just the latest one.  Roy Spencer seems to be implying that these “normal” periods approximate some kind of equilibrium state, but the WMO explicitly says otherwise.  In a document called “The Role of Climatological Normals in a Changing Climate”, the WMO explains that this was the view in the early 20th century, when they first started using the concept of “normals”, but that has changed.

It is now well-established (IPCC, 2001) that global mean temperatures have warmed by 0.6 ± 0.2°C over the period from 1900 to 2000, and that further warming is expected as a result of increased concentrations of anthropogenic greenhouse gases. Whilst changes in other elements have not taken place as consistently as for temperature, it cannot be assumed for any element that the possibility of long-term secular change of that element can be ruled out. The importance of such secular trends is that they reduce the representativeness of historical data as a descriptor of the current, or likely future, climate at a given location. Furthermore, the existence of climate fluctuations on a multi-year timescale (Karl, 1988), to an extent greater than can be explained by random variability, suggests that, even in the absence of long-term anthropogenic climate change, there may be no steady state towards which climate converges, but rather an agglomeration of fluctuations on a multitude of timescales.

The near-universal acceptance of the paradigm of a climate undergoing secular long-term change has not, as yet, resulted in any changes in formal WMO guidance on the appropriate period for the calculation of normals (including climatological standard normals).

If there isn’t such a thing as a climate “steady state” (a concept similar to “equilibrium”), that’s mighty inconvenient for people who want to fit temperature data using a simple climate model like Spencer’s.  Just for the sake of argument, however, let’s assume there is such a thing.  Now look at Figure 7, where I have plotted the entire HadCRUT3v temperature series (1850-present).  If you had to pick any period in the entire series where it seems like the system might have been hovering around some kind of “equilibrium” or “steady state”, what would it be?  Personally, I would pick the beginning of the series (1850-1900), and certainly 1961-1990 wouldn’t be near the top of my list.

Figure 7.

Now let’s play around with the model again to see how important the choice of base period and ∆To is.  Figure 8 shows the results when I left the temperature data as is, set the mixed layer depth to 700 m, and re-fit the model to the data with ∆To values ranging from -0.6 to 0.6 °C.  As you can see, in the latter half of the 20th century, it doesn’t make a whole lot of difference what the starting value is, but boy, does it matter in the first half!  If we compare the overall slope of the data in the first half of the century to the model curves, it’s pretty clear that to match the slope of the data you have to have a ∆To value of about -0.4 to -0.6, and again it’s just dumb luck that the base period was chosen so that the actual data starts down in that range.

Figure 8.

What would happen if we chose another base period–one that is more likely to represent something like an “equilibrium state”?  In Figure 9, I adjusted the HadCRUT3v temperature anomaly to have a 1850-1900 base period.  Then I fit the model to the data given 24 different h values ranging from 50-1200 m.  This time, it looks like there are two different model curves, instead of 24.  (Again, there are multiple curves exactly on top of one another.)  One set of model curves (blue) fits the data really well, while the other set (red) fits rather badly.  Unfortunately, the curve that fits really well was generated with unrealistic mixed layer depths (h ≥ 700 m) and negative alpha values, which indicate an unstable system.  In other words, there is no way to fit the adjusted temperature anomaly data with Spencer’s model without making assumptions even he would admit are implausible.  So in effect, the base period chosen for the temperature anomaly data was a fifth adjustable parameter.

Figure 9.
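
The re-baselining itself is a one-liner; here is a sketch, with yrs_all and anom_all as placeholder names for the full annual HadCRUT3v series.  After shifting the anomalies, I simply repeated the h-sweep fits described above with the shifted series as the target.

```matlab
% Shift the anomalies so the 1850-1900 mean defines "zero", then refit as in
% the Figure 5 sweep, using anom_rebased as the observations.
in_base      = (yrs_all >= 1850) & (yrs_all <= 1900);
anom_rebased = anom_all - mean(anom_all(in_base));
```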

The Acid Test:  More Data

Now let’s put the preceding discussion on the shelf, and assume for the sake of argument that Roy Spencer had done everything right in his curve-fitting adventures.  The acid test of a model produced this way is to see if it can predict any data other than that used to calibrate it.  It turns out that MacDonald and Case (2005) used tree rings from a certain type of hydrologically sensitive tree to reconstruct the PDOI from AD 996-1996.  What if we were to use this to drive Spencer’s simple climate model, with his preferred parameters?  The results are in Figure 10.

Figure 10.
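
Producing Figure 10 is just a matter of feeding the longer PDOI series into the same model with Spencer’s preferred parameters.  A sketch (pdoi_996_1996 is a placeholder name for the digitized tree-ring reconstruction; as noted below, the choice of ∆To only matters for roughly the first 50 years):

```matlab
% Drive the one-box model with the MacDonald and Case (2005) PDO reconstruction
% and Spencer's preferred parameter values.
years_mwp = 996:1996;
dT_mwp = pdo_model(pdoi_996_1996, years_mwp, 3.0, 1.17, 700, -0.6);
plot(years_mwp, dT_mwp)
xlabel('Year A.D.'), ylabel('Temperature anomaly (deg C)')
```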

Spencer spends several pages (pp. 2-3, 9-11) bashing the “hockey stick” reconstructions of temperature over the last 1000-2000 years, and instead prefers a particular reconstruction that has a much more prominent Medieval Warm Period (MWP), and which, incidentally, has been shown to be riddled with errors.  But it seems pretty obvious from Figure 10 that if you drive Spencer’s simple climate model with this longer record of the PDOI and Spencer’s preferred parameters, you don’t exactly produce a big MWP.

Oh, I know.  Maybe the tree ring record of the PDOI isn’t reliable, even though it matches the 20th century record pretty well.  Fine, but the failure of Spencer’s model to produce anything even remotely like anyone’s reconstruction of global temperatures over the last 1000 years is truly spectacular.  Is the tree ring reconstruction that far wrong?  (And by the way, the value of ∆To chosen only affects the results for the first 50 years, so there’s no wiggle room there.)

Certainly Spencer might respond that maybe OTHER modes of climate variability were driving the system over this longer time period.  Sure, that could be.  But then, what do we make of his professed devotion to Occam’s Razor?

The simple, natural explanation for most of the global warming experienced from 1900 to 2000 took only a desktop computer and a few days to put together.  In contrast, hundreds of millions of dollars have been invested in explaining those same temperature variations with supercomputers using not just one but two manmade forcings:  warming from manmade carbon dioxide and cooling from particulate pollution.  This looks like a good place to apply Occam’s razor, which states that it is usually better to go with a simpler explanation of some physical phenomenon than a more complicated one.  (p. 120)

My reading of Occam’s Razor tends to favor a model that uses known physical principles to pretty well explain climate changes over timescales from a hundred years to hundreds of millions, rather than a model that explains only the 20th century (sort of, and if you ignore the creative curve-fitting techniques), but has to posit all kinds of unknown climate drivers for time periods that are any longer.

What About Roy?

The take-home message here is that Spencer’s curve-fitting enterprise could (and did!) give him essentially any answer he wanted, as long as he didn’t mind using parameters that don’t make any physical sense.  And let’s face it, Roy Spencer has established something of a track record in this area.  In Part 1, we saw that he plugged unrealistic values (including a 50 m ocean mixed layer depth) into his simple climate model to prove that random variations in cloud cover could skew estimates of the feedback parameter, alpha.  In Part 2, we saw that he glommed onto a single 2004 study that cast doubt on the standard explanation for the ice ages, but since then he has ignored the fact that the objections raised have been adequately answered.  In this installment, I’ve shown that he once again employed unrealistic parameter values (including a 700 m deep ocean, rather than 50 m!!!) to get the answers he wanted.  Finally, it turns out that years ago Roy Spencer and John Christy, who manage the UAH satellite temperature data set, made several mistakes in their data analysis that made it appear the temperature wasn’t rising like all the thermometers were saying.  Ray Pierrehumbert summarized,

We now know, of course, that the satellite data set confirms that the climate is warming, and indeed at very nearly the same rate as indicated by the surface temperature records. Now, there’s nothing wrong with making mistakes when pursuing an innovative observational method, but Spencer and Christy sat by for most of a decade allowing — indeed encouraging — the use of their data set as an icon for global warming skeptics. They committed serial errors in the data analysis, but insisted they were right and models and thermometers were wrong. They did little or nothing to root out possible sources of errors, and left it to others to clean up the mess, as has now been done.

One of the kindest things Roy said about his scientific colleagues in the book was,

I do not believe that there is any widespread conspiracy among the scientists who are supporting the IPCC effort–just misguided good intentions combined with a lack of due diligence in scientific research.  (p. 66)

You’re probably expecting that now I’ll go off on a tirade about how, even though he complains and complains that all his colleagues are intellectually lazy and biased, Roy Spencer is the one who isn’t being Truly Objective, and the editor of Geophysical Research Letters was absolutely right to send him packing with his curve-fitting paper.  He would certainly deserve it, given how he’s treated his colleagues in The Great Global Warming Blunder, but that’s not where I’m going.

It’s true that science is about data, and science is about logic, but to a large extent it’s also about creativity.  Cutting-edge scientific research involves having great ideas, and then following them up to see if they work out.  Since scientists are just people, sometimes they can get a little dogmatic about their hunches, which can cause them to ignore contrary evidence, or glom onto isolated bits of evidence that fit with the hunch.  That’s ok, however, because as the philosopher Paul Feyerabend once pointed out, the history of science has shown that sometimes a little dogmatism can be a good thing.  Even if the evidence doesn’t favor a brilliant scientist’s hunch at the moment, maybe the idea just needs a little work.  Continuing to follow a hunch because you think there’s enough evidence to show “there’s something there,” might be just the thing needed to produce a real breakthrough.  It’s also ok because science is a community effort.  Scientists may have their own hunches, but that doesn’t mean they’ll accept someone else’s hunch without a good deal of evidence!  This serves as an essential check to separate brilliant inspiration from plausible-sounding nonsense.

My point is that if Roy Spencer has a hunch that chaotic variations in cloud cover are controlling the climate, and that the PDO has been driving recent temperature increases, then more power to him.  But let’s face it, trying to play the part of the brilliant iconoclast hasn’t been working out so well for Roy, lately, because he’s been sloppy about lining up his evidence.  Probably every scientist has had papers or proposals rejected based on reviews they thought weren’t entirely fair.  But if every time that happened to me I were to take my ball and go home, as Spencer did when he decided to bypass the peer review system and take his message “to the people,” I would be missing out on something uncommon and valuable.  That is, I would be missing the chance to develop my ideas in the face of unrelenting and, for the most part, intelligent and informed criticism.

I hope he takes some lessons from the criticisms I’ve given, and tones down his wild accusations of impropriety and bias against his colleagues.  Scientists, including Roy Spencer, are just people, and most of them are trying to do their best.

[UPDATE:  Arthur Smith has now done the full mathematical proof for what I showed by playing around with MATLAB.  UPDATED UPDATE:  Arthur went on to show that, given the mathematical form of Spencer’s model, he would have to start the model with a ∆To of a few trillion degrees below zero in 1000 A.D. for it to produce an anomaly in 1900 suitable for fitting the 20th century data.  Ok, so if you keep reading down into the comments, it turns out that there are other ways (that aren’t physically impossible) to drive the model and get the proper starting point for the 20th century, but they are still wildly improbable, and there’s no evidence for anything like that.]

References

Baker, M.B., and Roe, G.H. (2009) The shape of things to come:  Why is climate change so predictable?, Journal of Climate, 22, 4574-4589.

MacDonald, G.M., and Case, R.A. (2005) Variations in the Pacific Decadal Oscillation over the past millennium, Geophysical Research Letters, 32, L08703.

Murphy, D.M., and Forster, P.M. (2010) On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Journal of Climate, 23, 4983-4988.

Spencer, R.W., and Braswell, W.D. (2008) Potential biases in cloud feedback diagnosis: A simple model demonstration, Journal of Climate, 21, 5624-5628.



Responses

  1. I have read carefully through the three parts of your blog article on Roy Spencer’s latest book and I congratulate you on a very clear exposition of what is a fearsomely difficult subject.

    Like you I thought that Dr Spencer’s book was disappointingly combative. All I am interested in is the scientific truth and I thought that his complaints about the behavior of the climate scientific community towards him (whether justified or not) distracted from his thesis. Interestingly, James Hansen’s book, Storms of My Grandchildren, contains exactly the same sort of hand-wringing ‘nobody is listening to me’ rhetoric, but from the diametrically opposite position. It seems like everyone with strong views in this game gets to feel persecuted.

    I think that it is important to distinguish between Dr Spencer’s hypothesis and his proof. His hypothesis is that clouds regulate the Earth’s average temperature via the water vapor cycle so as to keep the planet at a relatively stable temperature, even in the face of forcings such as man-made atmospheric CO2. This is plausible in that it could well explain why the world has not so far experienced nearly as much warming as should have been apparent by now if the CO2 warming theory were correct.

    But for me his proof seemed convoluted and frankly not very well explained. Also as someone involved in software development all my professional life, I am extremely dubious about the use of computer models to prove anything. The outputs of computer models should not be confused with experimental data. They are only as good as the hypotheses built into the model which in the case of modeling the climate are very crude. But even if, at the end of the day it turns out (as you are suggesting) that Dr Spencer has made a total botch-up of his attempt at proving his hypothesis (and this remains to be seen since he has not yet had time to respond), there is still the fundamental issue that simply won’t go away…

    The Earth’s average long-term rate of temperature rise between 1850 and 2010 was about 0.4degC per century. Between 1965 and 2000 it was 4 or 5 times higher than that. This led people to assume that this was a sure sign of the impact of the huge increase in man-made atmospheric CO2 that had occurred over that period. But a closer examination of the temperature record shows that this could equally well be the 35 year upward swing of the natural cyclic ~70 year fluctuation in temperature known as the Atlantic Multi-decadal Oscillation. Either hypothesis could be true, CO2 or AMO. The fact is, it will only be possible to determine between one or the other over the next 35 years when, if the rise was indeed due to the cyclic AMO, we should see a corresponding fall. Skeptics are excited that the rate of rise of temperatures since around 2000 has more-or-less leveled off, but it is still much too early (by decades) to tell if this will be maintained. If on the other hand the trend does carry on up at the rate we were experiencing prior to 2000, then it will be game, set, and match to the warmists.

    Set against this quintessential uncertainty, it’s a good thing that we have scientists like Dr Spencer who are working hard to try to prove their hypotheses. He might just do it (next time!), in which case the world can relax a bit, perhaps. But he might not, in which case the warmists increasingly look likely to win the race.

    The one thing I am certain of is that the science is definitely not settled. The temperature record (not computer models) is the only true data that speaks and, thus far, it speaks ambiguously.

    • Hi David,

      Your main point is correct. Like I said, if Roy wants to pursue his hunch, then more power to him.

      However, if you want to understand the point of view of the vast majority of scientists who work on this, you need to step back from your “either hypothesis could be true” stance, and ask yourself, “Which hypothesis has more evidence–right now?” Like I said, the standard model of climate change can explain the main features of the climate of the last century, the last couple thousand years, the last million years, and the last 500 million years. Why does it work so well? And how much data does your AMO hypothesis explain?

  2. Barry,

    You say: “how much data does your AMO hypothesis explain?”

    A hypothesis doesn’t explain data. It’s the other way round. Data either confirms or refutes a hypothesis.

    The temperature data is quite ambiguous – that was my point. The upward swing in the 35 years up to 2000 was only around 0.5degC. That in itself is not in the least bit alarming but it would become so if it continued upwards at the same rate for the next 35 years, and the next, and the next… If on the other hand, that upswing was due to the natural AMO cycle, and if there were to be a corresponding 0.5degC downswing over the next couple of decades or so, that would mean that the long term trend would continue on as it has done up to now at a decidedly unalarming 0.4degC per century.

    Only time will provide the data to distinguish between the two hypotheses, CO2 and AMO. No amount of computer modelling, arm waving, or playing the “vast majority of scientists” argument will have any effect at all. In the end it is always the data that speaks.

    • David,

      I think you are at least a little bit wrong about this. Scientists make observations (i.e., collect data). They want to explain some set of observations, so they make an explanation–called a hypothesis. The hypothesis always goes beyond the original set of facts, so that gives us an opening to try to make more observations that the hypothesis would predict. These NEW observations are either consistent with the hypothesis or not. (In reality, there’s a lot of grey area. Look at Fig. 3. Is the data slope “consistent” with Roy Spencer’s predicted slope? It depends on what kind of standard you adopt, I guess.)

      In the end, we can have different standards of how much evidence is enough to convince us, but it’s all still about how much data is explained by a hypothesis.

      At this point, the standard model of climate change has hundreds of millions of years of climate data it can explain. Your AMO hypothesis has, well, the fact that it’s sort of been going up at the same time as the temperature… lately. We don’t know how long that relationship has held.

      Since there is never enough data to completely confirm a theory, we have to make decisions based on the best explanations and data we have. Right now, the AMO or PDO hypotheses aren’t even in the running. More power to you if you want to follow it up, though.

    • If AMO were causing the surface warming trend, then the oceans would be cooling. They’re not, they’re warming, which means there must be an external forcing at work.

      Another very good post Barry. I like how you went step by step through Spencer’s modeling and discussed all his parameter choices.

  3. “A hypothesis doesn’t explain data. It’s the other way round. Data either confirms or refutes a hypothesis.”

    If that’s true, then that leaves us with the notion that hypotheses are simply wild guesses based on nothing at all. Hypotheses are expressly about data – although the word “data” is misleading, as it implies numerical data. In similar fashion, some people think that palaeontology can’t be scientific, because you can’t do experiments. Data, experiment – these are just forms of what’s really going on, which is observation.

    The whole scientific enterprise boils down to:
    Step one: “That’s weird.” (observation)
    Step two: “maybe it’s because” (the hypothesis)
    Step three: “If I’m right, then: ” (testability)
    Step four (a – confirmation): Cool. That seems to explain things adequately, for now.
    Step four (b – disconfirmation): return to step 1.

  4. I think you’re bridging between published research and public communication quite well – but I’m a scientist (not climate) so maybe I find it easier to read than most..

    In either case I applaud your tenacity to go to these lengths.

    If only I had a few days to spare I’d get that code into Matlab myself for a play 🙂

    I can relate to Roy as I know only too well how easy it is to fool yourself into believing you’ve solved something with a model – I’ve had several moments where I’ve “found the fit!” for a particular model and data only to discover a little later that it was inevitable due to my choice of constraints or by having way too many degrees of freedom..

    I remember attending a Kalman filtering course where the lecturer demonstrated all too well how you can fit a 4th power polynomial to a straight line dataset with one anomaly and demonstrate some complex behaviour which was much better explained with a slope and intercept 😉

    The important thing is to be rigorous when testing your modelling – they can be incredibly powerful tools when used & understood correctly.

  5. From the PDO site at http://jisao.washington.edu/pdo/ it says: “The Pacific Decadal Oscillation (PDO) Index is defined as the leading principal component of North Pacific monthly sea surface temperature variability (poleward of 20N for the 1900-93 period).” which should make it a big chunk of HadCRUT3.

    On your data link, it says something a little different: “The monthly mean global average SST anomalies are removed to separate this pattern of variability from any ‘global warming’ signal that may be present in the data.” So depending how well they have detrended, deseasonalized, or removed the global SST anomalies, it may still contain an extractable image of the HadCRUT3 “global warming” signal.

    Spencer’s modeling of HadCRUT3 = f(PDO) should work to some extent because PDO is a function of a big portion of HadCRUT3. That doesn’t tell you much though — It is like saying the S&P500 is driven by all the Delaware stocks included in the S&P500 index. So what? It’s an obvious, trivial correlation, and a useless model unless you can somehow predict the Delaware stocks.

  6. Nicely done. Open Mind has a nice analysis of an attempt to reconstruct the AMO. Regional ocean circulation patterns, unless they are reinforced by or reinforce other regional circulation patterns, tend to have a limited regional effect. In the Pacific Northwest, glacier response to the PDO is pretty much indistinguishable unless it is being properly reinforced by ENSO and the PNA. The most crucial periods are when ENSO and the PDO are both negative or both positive.

  7. […] following is reposted from Barry Bickmore's blog – it's PART 3 of my extended critique of Roy […]

  8. David Socrates

    “Dr Spencer’s hypothesis and his proof. His hypothesis is that clouds regulate the Earth’s average temperature via the water vapor cycle so as to keep the planet at a relatively stable temperature, even in the face of forcings such as man-made atmospheric CO2”

    You go on to say that it is a plausible hypothesis. But is it? If clouds are so good at keeping the planet at a stable temperature, why didn’t they do that in past extreme climate changes?

    “This is plausible in that it could well explain why the world has not so far experienced nearly as much warming as should have been apparent by now if the CO2 warming theory were correct.”

    Here’s a relevant article about Lindzen making that claim.

    “The sensitivity of Richard Lindzen:
    Have we warmed as much as expected?”

    http://climateprogress.org/2011/02/23/the-sensitivity-of-richard-lindzen/

  9. My analysis of Spencer’s model here:

    http://arthur.shumwaysmith.com/life/content/mathematical_analysis_of_roy_spencers_climate_model

    It’s fully integrable… even his Fortran simulations were a real waste of computer time!

  10. Re. Barry 02 March at 4.47pm and Paul 02 March at 10.42pm…

    1. Paul: Re. the scientific method, yes, very well put. I agree with you (and Barry) that its essence is the iterative cycle between hypothesis and evidence.

    2. Barry: A small but important quibble: it isn’t my AMO hypothesis – it is just one that has been voiced frequently by skeptics.

    But to avoid long and unnecessary diversions into the physics of ocean cycles, let’s re-define the alternative to the CO2 hypothesis more generally as follows:

    Natural climate variability explains all the small scale changes in temperature recorded in the past few hundred years up to the present date (2010).

    In fact, this is the null hypothesis of climate change because it is the default one that would be left on the table if ‘man-made CO2’ turned out not to fit the facts. (This assumes that you and I agree that there are currently no other serious alternatives.)

    My view is that the temperature record, which is the only objective real-world data we have, has thus far failed to discriminate clearly between the CO2 hypothesis and the null hypothesis. Just take a look at the HadCRUT3 world temperature data in the following chart: http://www.thetruthaboutclimatechange.org/temps.png

    You will see that the average rate of temperature rise between 1850 and 2010 (the blue linear regression line) was 0.41degC per century. Although it is true that the average rate of rise between 1965 and 2000 was some 4 or 5 times higher than that, giving cause for significant alarm, you will also see that over the whole 161 year period the variation, up or down, from the blue long term trend line was still only ±0.25degC (the vertical distance between the two red dotted lines). This is hardly dramatic enough to disturb the man-in-the-street.

    But if the CO2 theory is correct, we should surely see a resumption of the steep rise that occurred between 1965 and 2000 any time soon, and the curve will then break dramatically through the roof of the dotted red ±0.25degC ‘tunnel’ shown in the diagram, at which point things will look increasingly dismal for the climate change skeptics.

    If on the other hand, the strong upward swing is over and the red line stays wandering around within the red ‘tunnel’ for another decade or two, I think it will be very hard indeed to persuade the proverbial man-in-the-street to vote for expensive climate mitigation measures.

    The bottom line is that it is not your view or my view that matters. It’s that pesky man-in-the-street (the voter) who will decide and it is without question the data that will speak to him one way or another.

  11. Natural climate variability explains all the small scale changes in temperature recorded in the past few hundred years up to the present date (2010).

    In fact, this is the null hypothesis of climate change because it is the default one that would be left on the table if ‘man-made CO2′ turned out not to fit the facts.

    No. The null hypothesis is that known natural modes of variability are all that we need. Including the possibility of unknown natural processes renders one’s hypothesis functionally equivalent to ‘a wizard did it’. So if you think that there’s a real 65-ish year oscillation in the temperature record, the question you need to answer is what causes it. Without that, you don’t really have an alternative hypothesis at all. A physical model beats a half-assed curve fit every time.

  12. More fun here:

    http://arthur.shumwaysmith.com/life/content/roy_spencers_six_trillion_degree_warming

  13. Re. Sailrick on March 3, 2011 at 10:20 pm: Dr Spencer’s hypothesis and his proof….You go on to say that it is a plausible hypothesis. But is it? If clouds are so good at keeping the planet at a stable temperature, why didn’t they do that in past extreme climate changes?

    Sailrick,

    Your question is a very good one. And the answer is that I don’t know. First of all, please understand that I am not a defender of any of the various cloud theories, nor am I a detractor. I just simply don’t know if any of them are even remotely correct. But let me explain why that is irrelevant to the current debate.

    The FACT is that the planet has, over periods of hundreds of years, kept itself fairly well regulated (within plus or minus a couple of degrees Celsius). Take a look at the following temperature chart: http://www.thetruthaboutclimatechange.org/temps.png .

    It is apparent that: (1) the world temperature has trended upwards for the last 161 years at what most people would agree is an un-alarming average rate of 0.41degC per century; and (2) over shorter timescales, there are equally un-alarming, up-and-down oscillations departing by up to ±0.25degC from the long term trend (i.e. staying within the red dotted ‘tunnel’ lines in the chart). The problem for the man-made CO2 warming theory is that this un-alarming behavior has continued right up to the present day, even though the post-World War 2 period saw a huge increase in man-made atmospheric CO2 compared with before. The big question is: why has the temperature curve, instead of going through the roof, stayed within the ‘tunnel’?

    As the red temperature curve shows, there was a potentially alarming rise in temperature between 1965 and 2000 but this flattened out somewhat thereafter. Maybe the flattening out is just a short term natural wiggle and very soon the curve will resume its alarming rise, breaking through the roof of the ‘tunnel’. Alternatively, maybe the alarming rise to 2000 was actually just part of the natural up-and-down wiggle which is why it has now flattened out and maybe the temperature will therefore remain within the ‘tunnel’ as the years go by. Tough call, huh?

    Whatever your view (or mine), we are only guessing. That is why I keep on saying that the world will have to wait another decade or so to see whether the red line in my chart does or does not break through the roof of the ‘tunnel’ and go zooming up at the predicted alarming rate.

    Skeptical scientists like Lindzen, Spencer, Svensmark and many others, who (rightly or wrongly) have already anticipated an un-alarming outcome, are not denying radiative transfer theory. That is exactly why they are working on the puzzle of why a large injection of CO2 into the atmosphere has not caused an alarming warming, despite the correctness of the theory. The huge mistake that is being made in this blog trail and elsewhere is to assume that, just because these people have patently not yet come up with convincing proofs of their hypotheses, this somehow means that the man-made CO2 warming theory has been proven. No, it does not! Only the FACTS, that is the shape of the temperature curve over the next decade or two, will decide the matter one way or the other. That’s science!

  14. […] Roy Spencer’s Great Blunder, Part 1 Roy Spencer’s Great Blunder, Part 2 Roy Spencer’s Great Blunder, Part 3 […]

  15. Re. MartinM on March 4, 2011 at 7:02 am

    So if you think that there’s a real 65-ish year oscillation in the temperature record, the question you need to answer is what causes it. Without that, you don’t really have an alternative hypothesis at all. A physical model beats a half-assed curve fit every time.

    Sorry Martin but that is just plain wrong. You have missed the whole point.

    The test of a hypothesis is always the data. I have shown you the temperature data and pointed out that, thus far, it has stayed within reasonable bounds of natural climate variability although there is certainly a possibility it may be about to exceed them alarmingly. Therefore it is too early either to validate or to falsify the man-made CO2 warming hypothesis.

    Tell the man in the street (who like it or not will have the vote on this) that that is a half-assed approach and he will tell you briskly where to get off.

  16. “(who like it or not will have the vote on this)”

    Science is not a popularity contest.

  17. The test of a hypothesis is the data and the theory. Eli don’t care what your data says, if your hypothesis breaks the first law of thermodynamics it is wrong.

  18. Ben,

    Science is not a popularity contest, I agree. But the point of my observation was that, at the end of the day, in democracies voters are in charge of governments. They will only allow big changes if they agree with them. If the temperature resumes the meteoric rise that it appeared to show in the 1990s, then the CO2 theory will be proved by the FACTS and the voters will accept the costs involved. But IF that upward rise, which has since abated, does not resume, voters will not. That was the sense of my statement.

    • David Socrates, now that we are 7 years on from this article and the record for global average temperature has been broken at least 5 times since then, I wonder if your thoughts on this matter have changed just a bit. That there was no pause from 2000 to 2010 should be obvious, and the upward trajectory since then is pretty evident. The AMO index has been steady or declining the last decade, but temperatures are rising faster.

  19. “Natural climate variability” is not a hypothesis, null or otherwise; it is begging the question, because it denotes “something we don’t know about.” Of course, there is always stuff we don’t understand, and there always remains a chance that any theory, no matter how well validated, is incorrect, and the data being observed is due to “something we don’t know about.” Since it is not possible to calculate confidence limits for “something we don’t know about,” it cannot be used as a null hypothesis in any meaningful statistical sense.

    Modern climate science is based upon the idea that the observed variability around the projected climate trends is primarily due to weather. Weather mechanisms are fairly well understood, and can be modeled and shown to produce patterns that are qualitatively consistent with observation, although they are too chaotic to accurately project far into the future. Fortunately, they average out over the time scale of climate. Nobody has been able to come up with a plausible model in which weather mechanisms produce persistent changes on the time scale of climate. Hence the need for some unknown mechanism.

    Scientists are always excited by the possibility that they might discover some novel unknown mechanism, and on rare occasions it actually happens. On the other hand, basing policy on the belief that there is some unknown mechanism which has produced the modern warming that is clearly evident in the temperature record, while the same, or some other, mechanism suppresses (or at least limits) the warming that is predicted based upon CO2, and that this mechanism will prevent the dangerous temperature rise that established theory predicts, is basically pinning the hopes of the world upon a deus ex machina.


