Posted by: Barry Bickmore | February 25, 2011

Roy Spencer’s Great Blunder, Part 1

The following is PART 1 of an extended critique of Roy Spencer’s The Great Global Warming Blunder:  How Mother Nature Fooled the World’s Top Climate Scientists (New York:  Encounter Books, 2010).  See also Part 2 and Part 3.  Previous critiques of Spencer’s general approach to climate have been published by Ray Pierrehumbert and Tamino (here, here, and here).  My Utah readers will remember that Roy Spencer was invited to testify before a committee of the Utah Legislature last year.

Summary of Part 1:  In his latest book, The Great Global Warming Blunder, Roy Spencer lashes out at the rest of the climate science community for either ignoring or suppressing publication of his research.  This research, he claims, virtually proves that the climate models used by the IPCC respond much too sensitively to external “forcing” due to changes in greenhouse gas concentrations, variations in solar radiation, and so on.  Instead, Spencer believes most climate change is caused by chaotic, natural variations in cloud cover.  He and a colleague published a peer-reviewed paper in which they used a simple climate model to show that these chaotic variations could cause patterns in satellite data that would lead climatologists to believe the climate is significantly more sensitive to external forcing than it really is.  Spencer admits, however, that his results may only apply to very short timescales.  Since the publication of his book, furthermore, other scientists (including one that initially gave Spencer’s paper a favorable review) have shown that Spencer was only able to obtain this result by assuming unrealistic values for various model parameters.

Roy Spencer is not your average climate contrarian.  He has a PhD in meteorology from the University of Wisconsin–Madison, is a researcher at the University of Alabama–Huntsville, used to work in one of the climate units at NASA, and has published some well-respected research on climate.  And yet, in his latest book, The Great Global Warming Blunder:  How Mother Nature Fooled the World’s Top Climate Scientists, he isn’t just talking about his accomplishments in mainstream science.  Rather, he’s taking his case “to the people” because he says his latest research has blown the lid off the consensus among climate scientists that humans are causing significant climate change.  But the part of his research that has been published in the peer-reviewed literature has largely been ignored, and the rest has been quashed in the review process.

Ultimately I find enough evidence to virtually prove my theory, but now the research papers that I submit for publication are rejected outright….

The climate modelers and their supporters in government are largely in control of the research funding, which means that most government contracts and grants go toward these investigators who support the party line on global warming.  Sympathizers preside as editors overseeing what can and cannot be published in research journals.  Now they even rule over several of our professional societies, organizations that should be promoting scientific curiosity no matter where it leads.

In light of these developments, I have decided to take my message to the people.  This message is that mankind’s influence on climate is small and will continue to be small.  (pp. xi-xii)

These are serious charges Spencer levels against his fellow scientists, and while he is careful to distinguish between the majority of climate scientists, whom he paints as intellectually lazy malingerers who are “just along for the ride” (p. xvi), and the leadership of the IPCC, whom he paints as conniving, politically driven power-grabbers, he pictures a pretty broad-based conspiracy.

I find it difficult to believe that I am the first researcher to figure out what I describe in this book.  Either I am smarter than the rest of the world’s climate scientists–which seems unlikely–or there are other scientists who also have evidence that global warming could be mostly natural, but have been hiding it.  That is a serious charge, I know, but it is a conclusion that is difficult for me to avoid.  (p. xxvii)

That’s how Roy Spencer sees himself–a persecuted Galileo, boldly speaking scientific truth to power, while most of his fellow scientists succumb to greed and cowardice.  Whether Spencer ultimately turns out to be right or wrong, in this review I will show that at this point, he hasn’t even come close to proving his case.  Furthermore, some of his work has been of demonstrably poor quality, so if his aim is to convince other scientists, he has shot himself in the foot more than once.  Whereas Galileo’s main thesis was eventually universally accepted, the probability of that kind of outcome here seems vanishingly small.

The Gist

Spencer’s two main claims are as follows.  First, “the climate system is much less sensitive to our greenhouse gas emissions than the experts claim it to be” (p. vii).   Second, “the climate system itself is probably responsible for most of the warming we have seen in the last 100 years or so.  Contrary to popular belief, you don’t need a change in the sun or a volcanic eruption or pollution by humankind to cause global warming or cooling” (p. viii).

The problems with Spencer’s arguments take some background knowledge to recognize, so I’m going to start at a pretty basic level, just as he does in his book, but then go beyond his explanations in the book by including a little more math.  (I’m sorry–I’ll try to walk you through it slowly if you’re a mathphobe.)  Also, I’ve included a small “appendix” at the end of this post with a short explanation of climate “forcing” and “feedback.”  If you’re a climate wonk, you undoubtedly already know all about that, but if not, skip down to the end and read the appendix first.

A Simple Climate Model

To explore his ideas, Spencer employed a “simple climate model”.  And by “simple” I mean it treats the Earth as a well-mixed ocean of a certain depth, with a term for each kind of forcing and another for net feedback.  I don’t mean to put down Spencer’s work by pointing this out–this kind of “zero-dimensional” climate model is very commonly used by scientists as a first-order approximation of how the system behaves, at least in situations where they aren’t trying to resolve the spatial distribution of climate effects.  In fact, using a simple model like this can be very informative, because there are so few variables that you can easily examine the effects of changing each one.

Spencer’s model is described qualitatively in the book, and is also programmed into an Excel spreadsheet, which Spencer makes available here.  The model is basically the following.

Equation 1:  d(∆T)/dt = (Forcing – Feedback)/Cp

Here, ∆T is the difference between the temperature at time t and the temperature at equilibrium.  (That is, ∆T is the “temperature anomaly” with respect to equilibrium.)  Cp is the total heat capacity of a column of ocean water with a top surface area of 1 m^2 and a depth of h meters.  (If you’re interested in running such a model yourself, Cp = 4,180,000 J/(m^3 °C) × h, which gives Cp units of J/(m^2 °C).  Pay attention to the ocean water depth.  It will be very important in a future installment of this review.)  The reason this column of ocean water has a top surface area of 1 m^2 is that the Forcing and Feedback fluxes are both in W/m^2–i.e., they are normalized to 1 m^2 of the Earth’s surface.  A Watt (W) is equivalent to 1 J/s, where Joules (J) are units of energy.  So the Forcing tells us the rate at which extra energy is coming in, while the Feedback tells us how the climate system responds to the push, either enhancing the forcing or hitting the brakes.

So what Eqn. 1 is really saying is that the rate of change of the temperature depends on 1) how much water has to be heated by the incoming radiation (Cp), 2) what the forcing is, and 3) how the climate system responds to the forcing in terms of sending more or less radiation back into space.

Feedback is represented by Eqn. 2.

Equation 2:   Feedback = alpha*∆T

When the “feedback parameter” (alpha) is positive, then there are some “brakes” on the system (notice the minus sign in Eqn. 1).  That is, if the forcing pushes the temperature one way, the feedback will put the brakes on and slow it down.  If alpha is negative, then the system will be unstable, because every time the forcing pushes one way, the feedback will keep pushing the system harder and harder in that direction.

This way of defining climate feedback is a bit non-standard, however, so I should explain the difference.  Typically, when climate scientists say there is zero feedback, alpha is actually about 3.3 W/m^2/°C.  This is the amount of extra energy the Earth would radiate back into space (all else being equal) if the temperature were raised 1 °C, simply because hotter objects give off more radiation.  So if alpha is less than 3.3 W/m^2/°C, scientists say there is a net positive feedback in the system, and if it’s more than that, they say there is a net negative feedback.  Essentially nobody thinks alpha should be less than zero, though, because that would lead to really crazy swings in the climate.  For reference, Spencer indicates that the climate models the IPCC uses to make temperature projections (and which incorporate fairly strong positive feedback) have alpha values of 0.9-1.9 W/m^2/°C.
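To make Eqns. 1 and 2 concrete, here is a minimal sketch of the model in Python (my own illustration with round-number parameter values, not Spencer’s actual spreadsheet).  It steps Eqn. 1 forward in time under a constant forcing and shows that the temperature anomaly settles at Forcing/alpha, so a smaller alpha means a more sensitive climate:

```python
# Minimal zero-dimensional climate model (illustrative round numbers,
# not Spencer's actual spreadsheet): d(dT)/dt = (F - alpha*dT) / Cp.

SECONDS_PER_YEAR = 3.156e7

def run_model(forcing, alpha, depth_m=50.0, years=200, dt_years=0.01):
    """Euler-integrate the temperature anomaly dT (degC) under constant forcing.

    forcing in W/m^2, alpha in W/m^2/degC, depth_m = ocean mixed-layer depth.
    """
    cp = 4.18e6 * depth_m        # J/m^2/degC for a 1 m^2 column, depth_m deep
    dt_sec = dt_years * SECONDS_PER_YEAR
    dT = 0.0
    for _ in range(int(years / dt_years)):
        dT += dt_sec * (forcing - alpha * dT) / cp
    return dT

# A 3.7 W/m^2 forcing (roughly doubled CO2) with different feedback parameters:
for alpha in (3.3, 1.2, 6.0):    # "zero feedback", IPCC-like, Spencer-like
    print(alpha, run_model(3.7, alpha))
# dT always settles at forcing/alpha, so smaller alpha means bigger warming.
```

Notice that the ocean depth enters only through Cp, which controls how fast the model warms, not how far.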

Short-Term Cloud Feedbacks

Climate scientists don’t just guess at things like alpha values, however.  They can estimate alpha values from the correlation between satellite measurements of changes in radiation fluxes and changes in temperature.  When Spencer examined this method for estimating alpha, he surmised that it assumes the temperature changes are the cause of the changes in net radiation flux.  But what if the causality were reversed?  What if, at least in part, something internal to the system were causing the changes in radiation flux, and that caused the changes in temperature?  Wouldn’t that screw up this method for estimating alpha?

This is not a crazy idea.  It is well known that weather is chaotic, meaning that slight fluctuations in one part of the system can cause large and unpredictable fluctuations in another part of the system.  (This is also known as the “Butterfly Effect”.)  Climate (which refers to the long-term average of weather) is not necessarily thought to be chaotic, however, except over fairly short time periods.  Over these shorter periods, there are many modes of climate variability, usually involving semi-structured oscillations in sea surface temperatures, like the El Niño-Southern Oscillation, the Pacific Decadal Oscillation, the Arctic Oscillation, and so on.  In turn, random fluctuations in sea surface temperature due to ocean circulation patterns, etc., might cause concomitant changes in cloudiness, which would affect the radiation balance, and hence the temperature.  (If you keep reading, however, you will find that Spencer thinks the causality is the other way around.  Random variations in sea surface temperature are caused by random variations in cloudiness, which are caused by who-knows-what.)

Spencer and his colleague, Danny Braswell, put this idea to the test with their simple climate model, in which they could specify what the alpha value was, and drive the model with a combination of random fluctuations in both external and “internal” forcing.  They could then track both the net radiation flux and the temperature to estimate alpha in the traditional manner.  They found that the traditional estimation method produced systematically low alpha values–i.e., they were skewed toward more positive feedback.  However, they found characteristic patterns in the data (which they called “feedback stripes”) that allowed them to estimate alpha much more accurately.  Furthermore, they could find the same kinds of patterns in the satellite data.  These observations led Spencer and Braswell to conclude that alpha really should be 6 W/m^2/°C or more, indicating very strong negative feedback.
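The diagnosis bias they describe is easy to reproduce with a toy version of their experiment.  The sketch below (my own illustration, with made-up noise and parameter values rather than Spencer and Braswell’s) drives the simple model with red-noise “internal” radiative forcing, then regresses the resulting outgoing-flux anomaly against temperature in the traditional way.  The regression slope comes out far below the true alpha, i.e., skewed toward positive feedback:

```python
# Toy reproduction of the diagnosis bias Spencer and Braswell describe
# (illustrative parameter and noise values, not theirs).
import random

random.seed(42)

ALPHA_TRUE = 3.0          # true feedback parameter, W/m^2/degC
CP = 4.18e6 * 50.0        # heat capacity of a 50 m mixed layer, J/m^2/degC
DT_SEC = 86400.0 * 30     # monthly time step, in seconds
N_STEPS = 12 * 200        # 200 years of monthly data

dT = 0.0                  # temperature anomaly, degC
n = 0.0                   # "internal" radiative forcing, W/m^2
temps, fluxes = [], []
for _ in range(N_STEPS):
    # red-noise internal radiative forcing (e.g. random cloud variations)
    n = 0.9 * n + random.gauss(0.0, 1.0)
    dT += DT_SEC * (n - ALPHA_TRUE * dT) / CP
    temps.append(dT)
    # what a satellite would see: feedback response minus the internal forcing
    fluxes.append(ALPHA_TRUE * dT - n)

# traditional estimate of alpha: OLS slope of flux anomaly vs. temperature
mt = sum(temps) / N_STEPS
mf = sum(fluxes) / N_STEPS
slope = (sum((t - mt) * (f - mf) for t, f in zip(temps, fluxes))
         / sum((t - mt) ** 2 for t in temps))
print(slope)  # comes out well below ALPHA_TRUE: biased toward positive feedback
```

The bias arises because the internal forcing both warms the system and shows up in the flux, so flux and temperature are correlated for a reason that has nothing to do with feedback.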

With this interesting result, Spencer and Braswell decided to submit their paper to the Journal of Climate, an excellent scientific journal that publishes climate research.  What happened?  Did some sniveling cowards trash the paper in review out of fear of future reprisals from the people Spencer sarcastically calls “The Keepers of All Climate Knowledge” (p. xxi)?  Did one of those politically motivated sympathizers who have insinuated themselves as editors of all the major climate journals reject it, despite favorable reviews?  Surprisingly (to Spencer), they did not!

I did not have high hopes for getting the paper accepted, though, because of its potential implications regarding the seriousness of manmade global warming.  To my great surprise, two leading climate experts chosen by the journal’s editor to be peer reviewers agreed that we had raised a legitimate issue.  In fact, each reviewer decided to build his own simple climate model to demonstrate the effect for himself.  Both offered constructive advice on how to improve our model in order to demonstrate the effect more clearly.  One even said it was important that the climate modeling community be made aware of the issue.  We modified the paper according to their advice, and it was published in November 2008.  (p. 73)   [Note:  See Spencer and Braswell (2008)–BB.]

What?  Where were all the caviling intellectual lightweights?

Our university put out a press release on the paper–and the mainstream news media totally ignored it.

As far as I can tell, the results of that published work have been largely ignored by the scientific community too.  Chances are, even if they did read the paper they would not recognize its potential significance.  This is because it is almost impossible to get away with saying anything like “this could throw all of our global warming predictions out the window” in a scientific publication.  There will always be at least one peer reviewer of your paper who has so bought into the theory of anthropogenic global warming that he will not permit you to publish anything that directly calls the prevailing orthodoxy into question.  (p. 73)

Oh, there they were.  But wait, isn’t there some more charitable interpretation that could be made?  Here are a few ideas.

1. Spencer himself says it’s unlikely that most scientists who even read their paper would have recognized its Earth-shattering significance, because he deliberately left that part out to sneak the paper past unsuspecting zealots in the review process.

But they should have immediately recognized the significance, anyway?

2. The results aren’t necessarily as significant as Spencer wants us to believe.

In fact, Spencer himself admits that the huge alpha values he estimated don’t necessarily represent the long-term feedback, which is what climatologists actually care about.  Indeed, later in the book he argues for a long-term alpha value of about 3.0 W/m^2/°C, indicating a weak positive feedback rather than a strongly negative one.  Regarding the 6 W/m^2/°C figure, he says,

Note that I am not necessarily claiming that this is the feedback operating on the long time scales associated with global warming–only that it is the average feedback involved in the climate fluctuations occurring during the period when the satellite was making its measurements.  (p. 118)

3. All the other scientists haven’t been ignoring Spencer’s paper, and Spencer is just being a whiner.

It takes time to explore a new idea, and there aren’t that many people working specifically on cloud feedbacks.  After all, there’s no point in getting all hot and bothered about results that, by Spencer’s own admission, may not amount to much.  Even if another scientist read Spencer and Braswell’s paper immediately after it was published and started working on the problems they identified right away, it might have been several months before a paper was ready to submit, and then several more months before the review process, revisions, editing, and publishing were completed.  In other words, you have to expect several months, and more likely a year or more after a paper is published, before responses start coming out.

Here’s a timeline of how the response to Spencer and Braswell’s paper evolved up until Spencer published his book.  Spencer and Braswell published their paper in November, 2008, but it was originally submitted in September 2007.  (That’s right, the review, revision, editing, and publishing processes for Spencer’s own paper took over a year!)  Piers Forster was one of the reviewers, so he had about 1 year advance notice of the content of the paper.  In December 2008, Gregory and Forster (2008) published a paper on a related topic, in which they mentioned that some of their results were consistent with the earlier paper.  McLean et al. (2009) submitted a paper in December 2008, which was published in July 2009, about how a certain mode of climate variation (the Southern Oscillation) seems to control a lot of the short-term fluctuations in global temperature.  But when they discussed changes in cloud cover, they mentioned that they couldn’t tell from their data whether Spencer and Braswell’s thesis about cause and effect applied.  Spencer’s book came out in April 2010, about 1.5 years after his paper, and I assume it was probably a few months from the time he submitted the final manuscript to the publisher till it actually came off the presses.   The bottom line is that Spencer was ready to start whining about the injustice if his paper was rejected, and he was ready to start whining if all the other scientists didn’t immediately respond to his paper within about a year–which is about the same time it took his paper to be published after he submitted it.  (Who knows how long it took him to do the work and write it up in the first place?)

What has the response to Spencer and Braswell (2008) been like since the publication of The Great Global Warming Blunder?  Three more papers have been published that respond in some way to Spencer and Braswell (2008) and two of them deserve our special attention.

First, Andrew Dessler of Texas A&M University published a paper in Science magazine (Dessler, 2010) in which he estimated the cloud feedback in a way that he claimed gets around the cause-and-effect problem Spencer and Braswell (2008) identified.  He found that the cloud feedback is probably positive, but there is some statistically non-negligible probability that it could be very weakly negative, too.  This result is consistent with the IPCC models.  Roy Spencer actually held a press conference (!) to talk about how he disagreed with Dessler’s interpretations, and Spencer and Dessler had an e-mail exchange, which is discussed (and linked) at the RealClimate site.  The crux of the issue is that, based on a single figure in another modeling paper he published last year (Spencer and Braswell, 2010), Spencer thinks clouds cause El Niño, which would go against decades of research.

Second, do you remember Piers Forster?  One of the scientists who gave a favorable review of Spencer and Braswell’s paper, and even suggested ways to improve it?  The guy who published a paper mentioning Spencer and Braswell’s work, and saying his results were consistent with theirs?  Well, Murphy and Forster (2010) went ahead and did a more thorough examination of Spencer and Braswell’s approach, and the result wasn’t pretty.  Here’s the abstract of their paper.

Changes in outgoing radiation are both a consequence and a cause of changes in the earth’s temperature. Spencer and Braswell recently showed that in a simple box model for the earth the regression of outgoing radiation against surface temperature gave a slope that differed from the model’s true feedback parameter. They went on to select input parameters for the box model based on observations, computed the difference for those conditions, and asserted that there is a significant bias for climate studies. This paper shows that Spencer and Braswell overestimated the difference. Differences between the regression slope and the true feedback parameter are significantly reduced when 1) a more realistic value for the ocean mixed layer depth is used, 2) a corrected standard deviation of outgoing radiation is used, and 3) the model temperature variability is computed over the same time interval as the observations. When all three changes are made, the difference between the slope and feedback parameter is less than one-tenth of that estimated by Spencer and Braswell. Absolute values of the difference for realistic cases are less than 0.05 W/m^2/K, which is not significant for climate studies that employ regressions of outgoing radiation against temperature. Previously published results show that the difference is negligible in the Hadley Centre Slab Climate Model, version 3 (HadSM3).  (Murphy and Forster, 2010)

Ouch.  The short version is that Spencer and Braswell plugged some unrealistic values of the main variables into their model, and automagically got answers that confirmed their hypothesis that standard climate models might be greatly overestimating climate sensitivity.  When someone else plugged in realistic values, it turned out that Spencer and Braswell’s hypothesis was not confirmed in any significant sense.  We’ll see in a future installment of this review that this kind of sloppy modeling work is one of Roy Spencer’s hallmarks.

[UPDATE:  A reader pointed out that Spencer responded to Murphy and Forster’s paper on his blog.  He acknowledges some of the mistakes Murphy and Forster pointed out, and objects to others.  It’s worth reading, but also keep in mind that even if he ends up being right about this, he admits that it may not be indicative of long-term climate sensitivity.]

4. Maybe the other climate scientists have other reasons to believe the climate is pretty sensitive to forcing (i.e., dominated by positive feedbacks.)

In fact, they do… whether Roy Spencer likes it, or not.  That’s the topic of my next installment.

References

Dessler, A.E. (2010) A determination of the cloud feedback from climate variations over the past decade, Science, 330, 1523-1527.

Gregory, J.M., and Forster, P.M. (2008) Transient climate response estimated from radiative forcing and observed temperature change, Journal of Geophysical Research, 113, D23105.

McLean, J.D., de Freitas, C.R., and Carter, R. M. (2009) Influence of the Southern Oscillation on tropospheric temperature,  Journal of Geophysical Research, 114, D14104.

Murphy, D.M., and Forster, P.M. (2010) On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation, Journal of Climate, 23, 4983-4988.

Spencer, R.W., and Braswell, W. D. (2008) Potential biases in cloud feedback diagnosis: A simple model demonstration, Journal of Climate, 21, 5624-5628.

Spencer, R.W., and Braswell, W. D. (2010) On the diagnosis of radiative feedback in the presence of unknown radiative forcing, Journal of Geophysical Research, 115, D16109.

Appendix:  Forcing vs. Feedback

After some introductory material, Spencer begins his exposition with two chapters devoted to explaining the concepts of “forcing” and “feedback” in the climate system.  Here’s the idea in a nutshell.

The temperature of the part of the atmosphere where we live depends on how fast energy comes into the system and how fast it goes out.  Spencer explains it as analogous to a pot of water on a stove.  Energy comes from the stove into the pot, heating up the water, but as the water heats up, more heat energy leaves the pot and goes out into the air.  At some point, the water will have heated up to the point where the rate of heat input from the stove is exactly balanced by the rate of heat outflow from the water.  At that point, the temperature becomes stable, and the system is in “equilibrium”.

“Forcings” are factors you treat as external to the system that either change the rate at which energy comes in, or change the rate of energy outflow.  Suppose your pot of water is in thermal equilibrium (stable temperature), and then you turn up the heat.  You have now “forced” the system, making it so the heat inflow temporarily outmatches the outflow.  The temperature of the water will go up until those rates are matched again, but now the equilibrium temperature will be higher.  Likewise, if you cover the pot with a lid, you have forced the system by making it so heat can’t escape the pot as rapidly.  The climate system might be forced when there is a change in the amount of radiation coming in from the Sun, or when we pump extra greenhouse gases into the atmosphere, slowing the rate at which energy can leave the system.  In either case, something external to the system has caused a change in the energy inflow or outflow rates, and so the system has to adjust to a new temperature.  (For more information on climate forcing, click here.)

A feedback is an internal response to forcing.  Say the climate system is forced by an increase in the incoming solar radiation, and it gets a little hotter.  This initial increase in temperature then causes other things to happen, e.g., more water vapor can be evaporated into the air at a hotter temperature, and water vapor is a greenhouse gas.  This extra greenhouse gas in the atmosphere slows down the rate of outgoing energy, and the temperature becomes even hotter!  In other words, the forcing gave the system an initial push in one direction, and then the feedback enhanced that initial push.  If the feedback enhances the forcing, it’s called a “positive feedback”.  On the other hand, extra water vapor in the air might lead to more clouds forming, and certain types of clouds tend to reflect back more of the incoming solar radiation.  This would tend to cool the system–i.e., push it in a direction opposite the forcing–and so it is called a “negative feedback”.  (For more information on climate feedbacks, click here.)

Different kinds of clouds have different characteristics with respect to how much solar radiation they reflect, and they also can trap outgoing radiation, which would constitute a positive feedback.  Therefore, it’s pretty complicated to sort out whether cloud feedback is net negative or net positive, and that’s one area where Roy Spencer takes issue with the consensus position.  All the climate models used by the IPCC assume that if you add up all the feedbacks in the climate system, the total is net positive.  So if you give the models a push (forcing) toward hotter or cooler temperatures, that push will be enhanced.  Spencer, however, thinks that the feedbacks are near zero or net negative, largely due to clouds.
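Whichever side is right, the stakes are easy to see in the simple-model framework from the main text: at equilibrium the forcing and feedback balance, so the warming is just the forcing divided by alpha.  A quick back-of-envelope comparison (using the roughly 3.7 W/m^2 figure usually quoted for doubled CO2; the alpha values are illustrative):

```python
# Equilibrium warming dT = forcing / alpha.  The 3.7 W/m^2 doubled-CO2
# forcing and the alpha values below are illustrative round numbers.
forcing = 3.7  # W/m^2

scenarios = {
    "zero feedback (alpha = 3.3)": 3.3,
    "IPCC-like net positive (alpha = 1.2)": 1.2,
    "Spencer's short-term estimate (alpha = 6.0)": 6.0,
}
for label, alpha in scenarios.items():
    print(f"{label}: {forcing / alpha:.1f} degC")
```

The same forcing thus produces anywhere from well under a degree to several degrees of eventual warming, depending on whose feedback estimate you believe.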


Responses

  1. Barry, thanks for doing this, I had a discussion with Spencer way back somewhere on a blog – may even have been climateaudit – about his model, and some questions and problems I had with it. He responded in a way but didn’t seem to understand my complaints – in fact here it is:

    Spencer on Cloud Feedback

    (and see some of my earlier comments there):
    “So in my view the lesson is simply – you can’t take these averages at time-scales less than that relaxation time or you’ll get bogus results. But take the averages over 3 or 4 times the decay time and it should be fine.”

    which seems to agree with the Murphy and Forster conclusion. I can’t believe he based his book on this!

    By the way, your equation 1 has Cp on the wrong side (or it should be 1/Cp).

    • Thanks! I fixed the equation. And remember that this is the part of his research that he actually got published in the peer-reviewed literature. It gets much, much worse. Stay tuned for Parts 2 and 3.

  2. RS: “Either I am smarter than the rest of the world’s climate scientists–which seems unlikely–or there are other scientists who also have evidence that global warming could be mostly natural, but have been hiding it.”

    Spencer is forgetting a third possibility. Remember Feynman: “The first rule is to not fool yourself, and you are the easiest person to fool”.

    Barry, you nail it here: “Spencer and Braswell plugged in some unrealistic values of the main variables into their model, and automagically got answers that confirmed their hypothesis that standard climate models might be greatly overestimating climate sensitivity.”

  3. Very interesting post. I look forward to the next two parts!

  4. Dr. Bickmore, it is with great dismay that I must conclude that an otherwise seemingly intelligent person as yourself could be utterly hoodwinked by the AGW movement. Please, remove yourself from this scenario and be objective here. This whole movement would be hilarious if I didn’t know so many people have bought into it. The fact is, AGW can so easily be refuted, it’s amazing. I called and e-mailed every natural sciences faculty I could find at Northern Arizona U and ASU to challenge them to a debate. The only one to return my call was Robert Balling, a climatologist at ASU who actually agreed with me that AGW theory is grossly exaggerated. I truly can’t understand how this theory can continue to have a following (other than the money trail I guess).

    • Scott,

      So are you saying that Roy Spencer’s modeling effort was really legitimate? I’m thinking you didn’t really stop and think about what I said in this particular post. If you want me to change my mind, you (and Roy Spencer) are going to have to do a better job at making your case.

    • Mr. Hastings says that AGW is easily refuted, but does not tell us why or how. I wonder how all those folks at the NAS and AGU have been hoodwinked? If Mr. Hastings knows and can articulate it, he would be doing humankind a favor.



  7. Dr Bickmore, Interesting post. You stated: “Over these shorter periods, there are many modes of climate variability, usually involving semi-structured oscillations in sea surface temperatures, like the El Niño-Southern Oscillation, the Pacific Decadal Oscillation, the Arctic Oscillation, and so on”.

    If you look at the PDO Index (1900-2010) in Wikipedia you will see that the PDO has barely completed one cycle from 1970 to 2010. Since most satellite data has only been available since the 1970s, and since significant greenhouse gas emissions started in the 1960s, I don’t think we can say that the PDO index occurs over a short time. Notice also that the PDO is not a nice symmetric sine wave, and the amplitude of this imperfect oscillation may also be modulated by a longer time scale signal.

    • Good point, Raymond. I’ll be addressing Spencer’s claims about the PDO in Part 3.


  9. I am not sure if Roy Spencer promotes bad-science accidentally or on purpose, but this is not the first time he has blundered. Recently I was searching through a database of the stolen emails from East Anglia colloquially known as Climate Gate. I found an email where researchers Mears and Wentz are mentioned as the discoverers of major mathematical errors in the algorithms used by Roy Spencer and John Christy. It turns out that the bad-science published by these researchers from the University of Alabama has been the primary reason why the climate models were questioned by the public at large. The errors were published in SCIENCE in 2005, with Spencer and Christy acknowledging the errors in the letters section of the September 2005 issue. So why do Spencer and Christy continue to deny the science?

    http://www3.sympatico.ca/n.rieck/docs/climate_science.html#climategate

  10. […] Roy Spencer’s Great Blunder, Part 1 Roy Spencer’s Great Blunder, Part 2 Roy Spencer’s Great Blunder, Part 3 […]

  11. Barry, you stated:
    “If you want me to change my mind, you (and Roy Spencer) are going to have to do a better job at making your case”.
    The fact of the matter is that the above will never happen. You see yourself as an arbiter of all things objective. In your mind you are able to sift opinion from fact and flawlessly take the side of objectivity and “truth”. I don’t blame you for trying; most people feel this way about themselves. However, I have found that most climate scientists don’t have a background in a. psychology and b. history. If they did, I think this debate would be over.

    Why? I could go through several examples from history where groups of people developed cultural beliefs and practices based on “cognitive bias”. They don’t realize how far off course from reality they have become until something stark shoves them into place. I freely admit that I am a biased person. It’s human nature to be biased; to say otherwise is disingenuous or ignorant. As an alumnus, I love BYU football. Win or lose, I’m going to be pulling for my team. At times, I find myself arguing a call that goes against BYU. I never argue for the other team, other than to say “good call” if it goes in BYU’s favor.

    Look, I know I will never bring you over to the “dark side” of the global warming debate, just as much as I know you will never be able to do the same to me. I have taken a side, and I admit it. I think that’s the difference between you and me. I could cite several resources to support my position, from the subtle to the downright hilarious (e.g. “Climate is NOT weather!” (news flash: eventually it has to be related somehow! when will that occur?!), or the 97% “consensus” of climate researchers (ask the authors what their margin of error is. Surprise! They didn’t do one; their “97%” number came from 3% of the total sample), or El Niño in ’98 as “absolute proof of global warming” while La Niña is just “La Niña”, or one of my personal favorites, “global warming leads to less snow, I mean more snow, I mean whatever is outside your window” (my words)). Please don’t try to deny that the above statements have happened in discussions by climate scientists on their blogs and media outlets, including the IPCC, because I have personally witnessed all of these claims.

    Climate scientists present themselves in a very nuanced way. For proof, go to RC’s post today. Their satellite “did not make orbit”. This is hilarious! In my world, it “crashed into the ocean and was destroyed after multiple system failures”. Both statements are verifiably true. Your reviews are no different in their wording. You just probably don’t even realize it.

    I read Roy Spencer and get a very different interpretation than you get. My cognitive bias. Yep. I admit it. I believe you provide no “smoking gun” to disprove Spencer, other than your colored viewpoint, which is pervasive throughout your reviews of both Monckton and Spencer. The AGW crowd will undoubtedly say “great job. You crushed him”. And then you can stay inside your tidy box with everyone who thinks like you and pat each other on the back. That’s great. In fact, I’m all for clubs. The problem with clubs like the AGW movement is that they want me to pay their dues (in the form of taxes and draconian measures) and become a member. I don’t like that. Pay your own dues to your own club and leave me, and the millions of other intelligent, practical, sincere people who have decided to agree with me, alone. That’s the only reason I’m here. And thank you for not deleting my posts like others do.

    • So let me get this straight, Scott. First you write in with a plea for me to “be objective,” but now your train of thought seems to have veered off into roughly this:

      1. You think you’re being all “objective,” Bickmore, but you’re not! Everyone is biased, including me, but at least I admit it.

      2. Climate scientists say all kinds of things like “Climate is NOT weather,” and since I’m too willfully ignorant to figure out what they’re talking about, I’ll use that as an example of THEM not being objective.

      3. You can point to some supposed consensus among climate scientists, but the study that established that 97% of them agree humans are significantly affecting the climate had a small sample size of actual climate scientists. I’m going to ignore the fact that another study, with a much larger sample size, produced essentially the same results. I’m also going to ignore the fact that the original study had a much larger sample size of “Earth Scientists,” and the vast majority of them also agreed with the consensus, but the subsample of working climate scientists agreed even more strongly. I’m going to ignore these things because I haven’t actually read these studies–I just read what some right-wing nut said about them on the Internet.

      4. And so, since everyone is biased, my reading of Roy Spencer (i.e., that he’s a genius) is just as good as yours!

      Here’s the thing, Scott. Everyone has biases, but we can at least TRY to overcome them, to some extent. For instance, I was once skeptical of AGW, but I decided to investigate, and ended up CHANGING MY MIND. In fact, when I reviewed Roy Spencer’s material, I even programmed his “simple climate model” into my computer, and played around with it. (See Part 3.) So don’t come around here telling me I CAN’T change my mind about an issue like this, because I already have. And don’t tell me that one opinion about Roy Spencer’s work is just as good as another, when I’ve actually taken his model apart for a test drive, and all you’ve done is pat yourself on the back for sticking to your biases.

      • Barry,

        Thanks for the clear-headed rebuttal to Hastings. It is pretty rich for someone to ask you to be objective, and then turn around and say “I’m biased and you can’t make me change.”

      • Barry,

        I’ve re-read my post a few times, and nowhere do I mention a “plea to be objective”. I never made any plea for you to be objective. Of course, this is your interpretation of what I said, which is exactly the point I am trying to make. You have made an assumption based on your internal biases.

        You stated regarding me, “I’m going to ignore these things because I haven’t actually read these studies–I just read what some right-wing nut said about them on the Internet”. Please Barry, do me a favor and continue thinking this. Please ignore the fact that I actually downloaded and read through the entire text of the study, the EOS summary and its slanted conclusion, and interacted a few times with the lead author, Peter Doran, of the University of Illinois at Chicago, regarding the methodology of his survey.

        Forget the fact that as a physician, I frequently read studies of multiple types (i.e. case control, meta-analyses, placebo controlled, blinded, etc.) and have some experience (I’m no expert) in statistical methodology. Forget the fact that while I’m no physicist, as a physician I do have to have a breadth of working knowledge in the basic sciences. Forget the fact that I have spent the last three years exhaustively following climate change, and reading articles from peer-reviewed journals from both sides of the aisle. The more ignorant you believe we are, the more powerful we become.

        Look, I apologize if you feel I was attacking you or your views. However, you do need to be aware that if you make it a practice to criticize others’ work (e.g. Spencer and Monckton), you need to be open to receiving the same. Would you honestly expect it not to happen? I’m signing off, as I have a busy life outside of climate research. I wish you well in your newfound conversion to the AGW movement. Just remember: a. psychology, b. history. Also remember the null hypothesis. The ball’s in your court, not mine. We are innocent until proven guilty.

  12. Barry, I’m sure you will agree that the climate is a very complex system and that the computer models are all imprecise analogs of it. I’m a complete amateur observer, but I have quite a bit of common sense and a pretty good understanding of control systems. I have written software to control environmental chambers and greenhouses. That’s the crude limit of my expertise, and no doubt it makes me dangerous. I must admit that at this point I’m skeptical of GW, but I hope to remain open-minded.

    That said, please allow me to humbly make the following hypothesis.

    1) If you look at generally accepted data you will conclude that the world’s average temperature remains relatively stable over long time periods.

    2) Temperature is never in equilibrium, due to varying short- and long-term forcings of, let’s say, “unknown or known magnitude”.

    3) During the 4-billion-year geological period in which it is believed solar output has increased an estimated 25 to 30%, liquid water has been present on earth. So the temperature variations have remained within a range of roughly ±50 °C or less.

    4) Due to points 1 through 3 above I leap to the conclusion that a control system of some sort is at work. I don’t even begin to pretend that I understand how it works. I simply say it sure looks like there is one.

    5) All my experience tells me that for a control system to remain stable, NET negative feedback is ALWAYS required. If not, the system will either tend to oscillate or run away when “kicked” by something. In terms of the earth’s estimated thermal time constant, the oscillation frequency would be quite high (i.e., a period of maybe decades). The climate does not seem to do that.

    CONCLUSION #1: NET negative feedback is normally present in climate. If not, damped or continuous “high frequency” oscillation would very likely be observable whenever a new equilibrium temperature is forced.

    Now let’s double the CO2 concentration, or for comparison, light a match in a big room. Sorry for that analogy; I’m sure you’ll have fun with that one. IPCC models estimate a net forcing of 2.6 W/m2. How much is that going to increase temperature? Without positive feedback, the IPCC estimates the effect would be about 0.7 C in 100 years.

    FINAL CONCLUSION: In order to get more than 0.7 C in 100 years you would need NET positive feedback. If the feedback is in fact negative, the increase will not even be 0.7 C due to CO2 in the next 100 years.

    • Hi Dan,

      Here’s where you are getting confused (and I don’t blame you). Climatologists talk about “positive feedback” in systems where alpha in the simple model above is less than 3.3 W/m^2/K, and “negative feedback” in systems where alpha is greater than 3.3 W/m^2/K. The “positive feedback” scenario you describe would be a case where alpha is actually negative, not just “less than 3.3 W/m^2/K”. Roy Spencer says the IPCC models have the equivalent of alpha = +0.9-1.9 W/m^2/K, so we’re still talking about stable systems here. So where does that 3.3 W/m^2/K come from? This is called the “Planck Response”, which is just the amount that IR emissions from the Earth would go up (all else being equal) from raising the temperature one degree. That is, hot objects emit more grey body radiation than colder ones. So yes, even in situations where climatologists say there is positive feedback in the system, there are still some “brakes” on the system.

      Another place you go wrong: the IPCC estimate of the radiative forcing for 2x CO2 is 3.7 W/m^2, which in the “no feedbacks” case amounts to about 1.2 °C of equilibrium warming. (This is not scaled to time, though. It takes time for a system to reach a new equilibrium.) Anyway, that kind of “no feedbacks” estimate is referenced to alpha = 3.3 W/m^2/K, not zero.
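      The arithmetic here is simple enough to sketch in a few lines of Python. This is just my toy illustration of the numbers quoted above (the 0.9-1.9 W/m^2/K range is Spencer’s characterization of the IPCC models, not a measured value):

```python
# Equilibrium warming for a given forcing F and feedback parameter alpha:
# at equilibrium the forcing is balanced, F = alpha * dT, so dT = F / alpha.

F_2XCO2 = 3.7        # W/m^2, standard radiative forcing for doubled CO2
ALPHA_PLANCK = 3.3   # W/m^2/K, Planck response alone ("no feedbacks")

def equilibrium_warming(forcing, alpha):
    """Equilibrium temperature change (K) for a forcing (W/m^2) and alpha (W/m^2/K)."""
    return forcing / alpha

# "No feedbacks" case: roughly 1.1 C with these round numbers
# (commonly quoted as ~1.2 C).
print(equilibrium_warming(F_2XCO2, ALPHA_PLANCK))

# Net positive feedback *relative to the Planck response* (alpha still > 0,
# so the system is still stable) amplifies the warming:
for alpha in (1.9, 0.9):
    print(alpha, equilibrium_warming(F_2XCO2, alpha))
```

      Notice that “positive feedback” in the climatologists’ sense just means alpha below 3.3 W/m^2/K, not alpha below zero; every case above is a stable system with a finite equilibrium.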

      Clear as mud?

      • Hi Barry,

        I understand that the value of Alpha at 3.3W/m2 exactly cancels theoretical forcing, thus resulting in an error signal of zero in the formula you present. I think you require an absolute value for feedback because you are trying to describe the current steady state of climate and you know the heater is ON.

        I’m arguing that feedback is not positive, it’s negative because climate is stable. Further it’s probably stable because it’s a type of control system. Control systems don’t work with net positive feedback; they will always tend to run away when disturbed.

        I then used the IPCC estimates of forcing, i.e. the power of the heater, not alpha, to predict delta T with no change in “current” feedback. Maybe I have the forcing numbers wrong. That doesn’t matter, because I’m trying to refute the claim that CO2 creates NET positive feedback; if it did, we would have fried already. Thus CO2 effects must either do nothing to feedback or, at worst, reduce the amount of negative feedback.

        If it does reduce the amount of negative feedback, we could still have a problem, because then it could cause more warming than its small direct forcing effect. When you think about it, if you remove your offset of 3.3 W/m2 we end up with the same conclusion. Therefore I need to retract my final conclusion and say that if CO2 reduces NET negative feedback, its forcing effects would be magnified.

        If you can prove that CO2 does in fact reduce negative feedback I guess I’ll need to acknowledge the GW camp. Can you do that?

  13. […] [2] Roy Spencer, see: https://bbickmore.wordpress.com/2011/02/25/roy-spencers-great-blunder-part-1/ […]

  14. Hi Dan,

    I’m having a little trouble following what you’re saying, probably because I’m not a control systems guy. In any case, I’ll give a shot at replying, and hopefully you’ll correct me where I’ve misinterpreted you.

    You say, “I understand that the value of Alpha at 3.3W/m2 exactly cancels theoretical forcing, thus resulting in an error signal of zero in the formula you present. I think you require an absolute value for feedback because you are trying to describe the current steady state of climate and you know the heater is ON.”

    The alpha value for zero feedbacks is 3.3 W/m^2/K, not 3.3 W/m^2. Therefore, the magnitude of the Planck response depends on how far out of equilibrium you are. (Notice that you multiply alpha by the temperature anomaly, so it isn’t just an absolute value for the feedback.)

    “I’m arguing that feedback is not positive, it’s negative because climate is stable. Further it’s probably stable because it’s a type of control system. Control systems don’t work with net positive feedback; they will always tend to run away when disturbed.”

    As I mentioned, Roy Spencer says that alpha in the IPCC models is in the neighborhood of 0.9-1.9 W/m^2/K. A positive alpha value means there is a net negative feedback if you count the Planck response. The problem is that when climatologists talk about feedbacks in the system, they usually ARE NOT counting the Planck response. In that case, alpha values of 0.9-1.9 represent positive feedback, RELATIVE to 3.3 W/m^2/K.

    Sit there and stare at that last paragraph for a minute to digest it, because I don’t think you understood that I was agreeing with you last time. IF the net feedback in the system were positive (i.e., alpha < 0), the system would be unstable. But nobody says that.

    So as I understand it, you aren't arguing against standard climate models. You're arguing against a misinterpretation of those models.
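    To make the stability point concrete, here is a toy forward integration of the simple energy balance equation, C dT/dt = F - alpha*T. The heat capacity and step size are made-up illustrative values; only the alpha values come from the discussion above:

```python
# Integrate C * dT/dt = F - alpha * T with forward Euler.
# Any alpha > 0 relaxes toward the finite equilibrium T = F / alpha;
# only alpha < 0 actually runs away.

def integrate(alpha, forcing=3.7, heat_capacity=10.0, dt=0.1, steps=2000):
    T = 0.0
    for _ in range(steps):
        T += dt * (forcing - alpha * T) / heat_capacity
    return T

print(integrate(3.3))   # "no feedbacks": settles near 3.7 / 3.3, about 1.1
print(integrate(1.5))   # net positive feedback, but still stable: about 2.5
print(integrate(-0.5))  # alpha < 0: grows without bound (runaway)
```

    So an alpha of 0.9-1.9 W/m^2/K gives more warming than the Planck response alone, but it is still a damped, stable system, which is the point of the paragraph above.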

    • Hi Barry,

      Yes I see your point; we are actually on the same side. Thank you for taking the time to show me that. My excuse is that I’m not a climate guy so the formula is foreign to me and, in my ignorance, did misunderstand it.

      I think we agree then that the climate is a control system and that NET feedback therefore must be negative. I see now that you did acknowledge that apparent fact in your article.

      The only thing left now is to determine whether CO2 does in fact cause a reduction in the amount of NET negative feedback, and by how much. That’s a tall order, isn’t it? If it reduces negative feedback to zero or beyond, we agree we’ll get thermal runaway, but I admit we don’t need that extreme just to get a magnified CO2 temperature effect.

      So I ask: is there proof that CO2 does reduce NET negative feedback, and if there is, what is it? I understand intuitively that increased temperature, for any reason, could be self-reinforcing due to increasing earth reflectivity etc. Further, increased CO2 might also be self-reinforced by warming oceans. But this intuition could be wrong due to my lack of understanding of the system as a whole.

      We cannot accept GCMs as proof of reduced NET negative feedback because, correct me if I’m wrong, we already know they don’t yet work in the real world. Thus their estimated effects could (read: likely will) also be wrong.

      As a complete aside topic, I think that we can learn something about even a “black box” control system by making observations about what “kicks” do to it. Volcanoes come to mind, or large meteor impacts, as they do produce key SUDDEN kicks and could give us quite a bit of insight about the climate control system. Trouble with that I’d guess is we would first need to find a way of quantifying the nature, magnitude and duration of the “kick”. I wonder if any such analysis has been attempted?

      • Oops! In proof reading my last comment I spotted an error in paragraph four. The words “increasing reflectively” should read “reduced reflectivity”.

        • Thanks so much for this discussion. I’m sure it took a lot of time and effort, but I’m glad both of you took that trouble. It was very informative and interesting. I wish this could be more widely distributed as a great example of how a technical disagreement (if that’s the right word) works.

      • Hi Dan,

        CO2 is treated as a forcing in the present system, not a feedback. So the relevant questions are:

        1. Does adding CO2 cause an increase in global temperature? Answer: Yes–that part is just basic physics.

        2. Once you raise the temperature (or lower it) for any reason–it doesn’t have to be CO2–does the system have positive feedbacks in place that would enhance the response relative to what it would be if you only had the Planck response? Answer: Yes, and that has been estimated in a number of ways, not just GCMs.

        Here’s a really great web page that should explain everything for you. And in fact, your intuition about looking at “kicks” to the system like volcanic eruptions was spot-on. That’s one of the ways climatologists have used to estimate climate sensitivity.

        http://www.skepticalscience.com/climate-sensitivity-advanced.htm
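        To show what the “kick” idea looks like in practice, here is a toy sketch of my own (not from the linked page): hit the same simple energy balance model with a brief forcing pulse, then read the feedback parameter off the decay of the response. All numbers are illustrative.

```python
import math

# Toy "volcanic kick" experiment on C * dT/dt = F(t) - alpha * T.
# After the pulse ends, T decays as exp(-alpha * t / C), so alpha can be
# recovered from the e-folding of the recovery.

ALPHA_TRUE, C, DT = 1.5, 10.0, 0.01

T, history = 0.0, []
for i in range(20000):
    t = i * DT
    forcing = -3.0 if t < 1.0 else 0.0   # brief negative pulse, then nothing
    T += DT * (forcing - ALPHA_TRUE * T) / C
    history.append(T)

# Sample the recovery at two times after the pulse and solve the
# exponential decay for alpha.
t1, t2 = 5.0, 15.0
T1, T2 = history[round(t1 / DT)], history[round(t2 / DT)]
alpha_est = C * math.log(T1 / T2) / (t2 - t1)
print(alpha_est)  # close to ALPHA_TRUE
```

        Real volcanic-eruption studies are of course far messier (the forcing itself must be estimated, and the ocean adds long time constants), but this is the basic logic.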

  15. Quote: The “ad hominem” is a classic logical fallacy, but it is not always fallacious; in some instances, questions of personal conduct, character, motives, etc., are legitimate and relevant to the issue.

    Roy Spencer is on the board of directors of “The Marshall Institute” (a conservative think tank that also denies the connection between smoking tobacco and various health issues like cardiovascular disease and cancer) and I find it a little odd that a scientist would do this. People worried about this seemingly duplicitous behaviour should watch this 55 minute video:

  16. When I heard that Spencer was a creationist I decided I wouldn’t track his work in any detail. James Hansen is not a creationist, but I find it hard to take him seriously either. In his 2008 testimony to Congress he detailed a steady increase in global temperature since the 70’s, and claimed it dovetailed with proxy temperatures from paleoclimatic data which indicate a CO2 forcing of T. Not only am I aware of no such data, but I don’t see how a linear trend in modern T rise can be identified with an asymptotic response to an exponential growth in CO2 concentration!

    But this trend, which Hansen takes for granted to be anthropogenic, is the source of a putative acceleration in sea level rise which, barring the melting of Greenland, will flood us by 7 cm (GHG forced) per century. Hansen’s worries then depend on Greenland’s ice collapsing. Even if this happened it would be the equivalent of a slow, expected hurricane hitting all the world’s coasts, giving us plenty of time to prepare and respond. The Chinese wouldn’t think twice about creating an artificial lake displacing a population equivalent to that of Miami or Amsterdam, and the oddsmakers have no idea whether Greenland will survive this particular interstadial.

    In the unlikely event that current polar melting were both long term and GHG induced, I see little room for rational hysteria regarding the continued burning of fossil fuels, except that eventually they will run out. We probably have a hundred more serious problems lined up, most of which we don’t even know about yet. For one, population density will increase due to current growth rates at between 10 and a thousand times the rate of continental inundation, depending on what happens to Greenland’s ice. For another, every medical advance which remedies a genetic defect in such a way as to ensure its replication results in the degeneration of the gene pool. Global warming? It just doesn’t register on a rational alarm scale. –AGF

  17. Hi Barry,

    Thanks for that link, very interesting, it has taken me some time to respond, as I wanted to take the time to review the claims made. This most certainly is a complex and important issue. Allow me to focus on the facts we have as I see them and please correct me where I go wrong.

    So-called climate sensitivity is the equilibrium temperature change resulting from any NET RF change, in °C/(W/m2). A numeric value can be expressed as delta T for 2x CO2, but this sensitivity is assumed to apply to any type of radiative forcing. It’s really just a convenient value for working with CO2 effects. The value represents the combined feedback result of “turning up the heat” by any means.

    That value, together with the “efficacy” of a particular RF, is the key to predicting the amount of warming that will occur due to added CO2. The studies to determine its value take a number of different approaches: namely computer models, empirical data from paleoclimate, empirical data from ice cores, empirical data from recent climate, and empirical data from recent volcanic activity.

    I first looked at the empirical studies and found the results disappointing. I somewhat understand the difficulty of determining these values. Short-term climate has the advantage of direct data measurement, but suffers from the “noise” effects of short-duration measurements. Long-term studies have a noise-averaging advantage, but they suffer from the need to rely on proxies, which add their own set of unknowns. Empirical studies show a sensitivity range of 0.4 °C to 10 °C (most in the range of 1.3 to 4.5 °C).

    I also looked at the data obtained from climate models. Climate models indicate a range of values from 1.5 to 4.5 °C and “are converging on 3 °C”. My understanding of how the models work is that they integrate to a future result based on a set of initial conditions. The integration is a series of small time slices, and each time slice sets the initial conditions for the next iteration. The calculations also include spatial integration over two- or three-dimensional analogs of simulated atmosphere, land and ocean of the entire earth.

    I understand the spatial and time resolution is continuously being improved. A tall order, and thus, even on a supercomputer, a single run can take months because of the large amount of computation needed. The results they give depend on the initial conditions, the variables used to modify step integration, the granularity of space and time, and naturally the algorithm used in a particular calculation. Changing these parameters is called plugging. By plugging in different values they can modify the results obtained. They can then check the results against empirical data from the climate record to see “how well” the simulation worked. Through various “cut and try” runs they can today simulate a given slice of climate history quite well with the right mix of variables and algorithms.

    Problems arise when they believe they have the “right” set of variables for one simulation, and try to run the same variables from a different set of initial conditions. Correlation with the climate record starts to go off course. This indicates the model isn’t working too well, and this seems at this point in time to be the state of the art.

    In summary, when you have error ranges on the order of half or more of the final computed value, you really don’t have much to go on. The signal-to-noise ratio is just too low to get any meaningful intelligence from them.

    If you look at this objectively, this science is still in its infancy. Alternately, there may well be cause for alarm, time is of the essence, and thus waiting for a “finer tuned” answer may not be prudent.

    I find the need to examine my own biases when looking at this information. The fact is I don’t really want to believe in AGW, and thus I’m trying to find a way of minimizing any proof of it. I have to admit the science has progressed to a point that tells us there is going to be warming from CO2 for sure, but we don’t really have much of a clue yet as to how much. I’ll be sure to keep watching this site for updates.

    • The situation with regard to climate modeling is a bit different than what you suggest. Global climate simulation is not an initial condition problem; it is a boundary value problem. If I start a global climate model from some initial point and march forward in time, the atmospheric portion of the model will forget its initial starting point in a relatively short time, the mixed layer and ice portions will take longer, and the deep ocean longest of all. If we ignore the deep ocean for the moment, the time to achieve some sort of equilibration state to static forcing is on the order of months for the atmosphere and perhaps a decade for the mixed layer. Static forcing here means diurnal and annual solar variability but not steadily increasing (or decreasing) greenhouse gas concentrations.

      The better global climate models (GCMs) are run for centuries of simulated time with current forcing in order to compare their performance against our understanding of current climate. The most recent IPCC report devotes a chapter to assessing the answer to this question. To some extent, all GCMs have internal adjustment knobs and these are used to provide the best fit to current climate. These knobs are physical in the sense that they represent real processes but cannot be set from observations because we lack the comprehensive observations we would need. One example of such a knob is the typical fall speed of an ice crystal. While we can compute this for a particular ice crystal given information on its 3D size and density (ice crystals are not solid ice), we cannot do this globally because we don’t know the global distribution of ice crystal sizes and shapes. Thus we specify a fall speed within the constraints of physical understanding. We then adjust this up or down to give us about the right amount of ice clouds.

      We then run the climate model to simulate the last 100 to 150 years to see if we can simulate temperature change on that time scale. We force the model with observed changes in GHG concentrations and estimated changes in atmospheric particle concentration, including volcanic eruptions. Again, the IPCC report discusses these tests in great detail. The simple answer is that the models by and large do a very good job of simulating this temperature change due to changes in external forcing. The more complicated answer is that there is one adjustable knob that can be used – the effect of aerosol (small atmospheric particles) on incoming solar radiation. This setting affects the sensitivity of the surface temperature to the total number of particles in the atmospheric column. In essence, you get to pick one value and you have to stick to that value for the entire run. You also know this value within some reasonable range, so you don’t get to pick an arbitrary number.

      After all this, the GCM is run forward into the future using assumed changes in GHG forcing over this century. The model has no adjustable knobs at this point and the future is conditioned by the knobs set to get (a) as realistic current climate as possible and (b) the observed climate change over the last 150 years or so.

      So, in summary, your comments about initial conditions and running off the rails are not correct. Climate model results are quite robust to different initial conditions. Simulations of the last 1000 years (or longer) are very difficult to use to assess the quality of climate models because we don’t know the forcing (what was the aerosol loading of the atmosphere? what was the variability of the solar input?) and we don’t have a tight constraint on the global temperature, although we do know the general envelope from proxy temperature records. We do have an uncertainty in model climate sensitivity because the data are currently inadequate to constrain the model knob settings (such as the ice fall speed), but we are working hard on this problem and I expect that we will get better results. One of our biggest problems is going to be the imminent failure of our global observing systems (satellites) and no willingness by the current Congress to fund instrument development and continuous observations of the climate system. Finally, I think climate modeling is quite far removed from its infancy. I started working in this area in the early 70’s, which was truly modeling infancy. We are well into our late teens, I think, which means we still have a lot to learn and are perhaps sometimes too sure of our knowledge, but we also have a great deal of vigor and enthusiasm, as well as considerable knowledge!

      Hope this helps with your continued investigation of your biases.

      • Hi Tom,

        Thank you for clarifying my misconception about the initial condition problem. You also provided some detail on one particular aspect of the modeling problem that shows me that you have a far more detailed understanding than I do of GCMs. However being a programmer myself, when it comes to predicting a complex system like climate I can see lots of potential pitfalls. So please allow me to ramble on and correct my errors.

        All GCMs integrate results over relatively small steps, so any errors they may have get compounded. I understand that the “knob tweaking” is not arbitrary and needs to be based on some reasonable range of assumptions. The problem is that the tweaking done is based on, shall we say, “questionable” input data. I say questionable because we already have 50+ years of increased CO2 levels, and we still can’t tell for sure what’s going on from them, because the real data we need are almost buried in noise, or possibly even missing entirely.

        If the input data were 100% complete and accurate, we would have less need of any predictive models because we would be able to see the effect in the raw data itself. How then can we make a model tweaked by noisy data, while admitting we don’t fully understand all the variables, and expect it to predict anything of much value? It seems to me that until we are able to say with reasonable certainty that CO2 effects thus far have been “X”, we don’t have much chance of building a predictive model that can tell us what future effects will be.

    • Hi Dan,

      Your comment about errors growing with small steps once again confuses initial condition problems with boundary value problems. Atmospheric scientists are very aware of computational error issues – a large number of us (not me!) make our daily bread through forecasting where error propagation by finite differencing is a big deal. There is a great deal known about this and we test our schemes by doing finite difference solutions of analytically solvable problems so that we can document error growth. Numerical schemes can be designed to make computational error growth insignificant.

      On the climate side, numerical schemes are primarily concerned with stability. The solutions to our equations are constrained over time by the physical forcing of the system (e. g., variations in solar radiation, CO2 concentrations, etc.) and internal variability (e. g., ocean-atmosphere interactions), so we want schemes that prevent numerical instability from appearing in our results. Again, this is a well understood field. There are textbooks, as well as hundreds of articles, devoted to this problem. At this point, I don’t think any of us are really very concerned that forward integration errors contribute significantly to model uncertainty.
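
      The stability point above can be illustrated with a toy sketch (my example, not a scheme from any actual GCM): forward-Euler time stepping of the simple relaxation equation dT/dt = -λT is stable only when the step size satisfies Δt < 2/λ; beyond that, numerical error grows without bound even though the exact solution decays.

```python
# Toy illustration of finite-difference stability (not a GCM scheme):
# integrate dT/dt = -lam * T with forward Euler. The exact solution
# decays to zero; the numerical scheme does too, but only if dt < 2/lam.

def euler_decay(lam, dt, steps, T0=1.0):
    """Integrate dT/dt = -lam*T with forward Euler; return the final value."""
    T = T0
    for _ in range(steps):
        T += dt * (-lam * T)
    return T

lam = 1.0
print(euler_decay(lam, dt=0.1, steps=100))  # stable: decays toward zero
print(euler_decay(lam, dt=2.5, steps=100))  # unstable: grows without bound
```

      This is the kind of behavior schemes are tested against using analytically solvable problems, as described in the comment above.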

      As I tried to say but apparently failed to communicate, knob tweaking is not arbitrary. Climate models are built on physical equations that contain real, definable climate system parameters. All of these are physically based and, by and large, we know what their acceptable ranges are. Some are easily measured, well-known, and global in value (gravity, heat capacity, Clausius-Clapeyron equation); some are reasonably well understood, but only measured locally (cloud physics properties); some are measured globally by satellite but only at relatively coarse resolution (cloud fraction, surface vegetative type). So our knob tweaking has some uncertainty due to our lack of precise knowledge but is not arbitrary in the sense that we can choose any value we like.

      We use the high resolution data acquired during the last 1-2 decades to test model behavior. We put in the current CO2 concentration, current solar constant value, etc. and run our model for a long simulation period to obtain an energetically converged model. We then compare that model to measurements such as the radiation budget (reflected solar, outgoing thermal energy) at the top of atmosphere. These measured quantities are not prescribed in our model; they come out of the full equation set. The fact that CO2 is slowly increasing over this time (2-3 ppm per year) is a small concern because it means that atmospheric statistics are not quite stationary. However, we are primarily concerned at this stage with getting the model internal variability correct and the internal variations are much larger than the climate change on the period of a decade. This recent data record is a way for us to examine the internal consistency of the model and make sure that the knob settings are approximately correct. Remember that the knob settings are for physical parameters – they should be constant in value regardless of CO2 concentration because we don’t expect the physics to change.

      Last point! We will never have enough data to dispense with models. This is equivalent to saying that I wouldn’t need a forecast model for weather if I had perfect knowledge of the current state of the atmosphere. Even if I had that knowledge, I could not do an extrapolation to the weather three days from now without a detailed model because the system is not linear and doesn’t work by simple extrapolation. Besides, it is impossible for us to measure the state of the climate system at sufficient time and space resolution to know everything we need to know. This is a very different topic, but you can read some stuff on data assimilation if you want to pursue this further. Our current research with climate models is intended to answer your question about what the influence of CO2 changes has been AND what further changes will do. Our knowledge is not perfect, but it is quite good. If we double CO2 concentrations, the planet will warm by at least 2 K and may warm by 10 K. There is no existing model, from the simplest to the most complex, that gives you less than about 1.6 K; almost all of them are greater than 2 K, and some are greater than 10 K. How about if we think about the consequences of the fat tail of predictions, rather than always thinking about the smallest possible changes? Do you have fire insurance?

  18. Hi Dan,

    One thing to think about is the shape of the probability distribution for climate sensitivity–skewed with a fat tail toward the high side. And it turns out that it’s almost impossible to get rid of that fat tail. The upshot (for me) is that there is a pretty significant chance (maybe 20%) of an “apocalyptic” sort of outcome for continuing to burn fossil fuels like we have been. Since there is no fat tail on the low side, the likelihood of a “dodged that bullet” sort of outcome is much smaller. But by far the greatest probability lies in the “We’d better do something, or else, but at least maybe doing something will have an appreciable effect” range. If Risk equals probability times cost, then you’re right on the money when you say it “may not be prudent” to put off doing anything.

    If you’re interested in the probability distributions, see the paper by Baker and Roe that I linked in Part 3.

  19. Hi Dan,

    I wanted to add one more comment about your last post. That is, when you discuss the results of empirical methods for estimating climate sensitivity, you say, “Empirical studies show sensitivity range of .4C to 10C (most in the range of 1.3 to 4.5C).” From that you seem to conclude that we don’t have any clear idea what the sensitivity is. I don’t think you’re looking at this quite right. These estimates were done by a bunch of different methods, and some are naturally more precise than others. Therefore, some have wide error bars, and others smaller. Given that, I don’t think it’s really fair to take the lowest and highest ends of ALL the error bars, and say that’s the probable range. Rather, I think it would be more instructive to find the range where all the error bars overlap. It seems like the true value (if there is just one) would probably be somewhere in there. Just eyeballing the graph in the link I gave you, that range looks more like 1.5-6ish. And if climate sensitivity is anywhere in this range, it’s probably a very good idea to try to mitigate.

    • Hi Barry,

      You could look at it that way, but I can’t determine which of those studies is more or less correct and have to respect their stated error ranges. So I can’t discard any that don’t fit a particular grouping or error bar length. I agree that most cluster around the 1.5 to 6C range. But I think you’ll agree history shows us a consensus does not translate to correctness. That’s why I don’t actually care too much how many people there are on each side of this issue. Man really isn’t as smart as he thinks he is (my bias).

      It seems the now 100-year-old AGW theory range was 1.6 to 6C. So it looks like all of our latest technology hasn’t been too effective in narrowing it down much. That shows me this is no slam-dunk problem.

      I also believe the empirical studies tend toward the lower side. That’s hopeful to me, because I can see pitfalls in modeling.

      I agree that if we’re looking at even 1.5 and climate theory works at all, we could be in for some trouble. I don’t see any way out because we’re not going to reduce our use of oil anytime soon unless we get an economic collapse to boot. More taxes are a very bad idea. With what’s unfolding now in Japan it looks like even nuclear is going to get yet another death knell.

      Anyway I wanted to run this one by you for your thoughts. Is it junk science?

      Case #1 For Strong Negative Feedback

      Assumption #1 – Liquid water on earth for 4 Billion years?

      Assumption #2 – Solar variation approx. +7% (or more) per billion years?

      Assumption #3 – Earth’s self-heating is relatively small compared to solar effects?

      Based on these assumptions we’ve had around a +28% variation in solar output, yet temperature hasn’t moved any more than about +25 C (probably less). That would indicate a major negative feedback case, would it not? A 600 million-year picture with a +4.2% solar constant change paints an even more negative feedback picture.

      The entire history of temperature we think we know stays within a +/- 8C band. I think that if you want to say CO2 has any major effect you have to say the feedback is highly non-linear within that +/- 8C band.

      Case #2 Volcanic Ash Ends Ice Ages

      We think we know that volcanic activity coincides with the end of ice ages. Ash from volcanoes could have reduced the reflectivity of the ice cover. Water vapor levels were low, so there would be little mechanism for covering the ash, further darkening an increasing area of the ice and causing it to absorb more heat. This heating, even without melting, would have increased sublimation of water and eventual melting in a positive feedback loop due to the relatively strong GH effect of water vapor. As the liquid ocean water warmed, it would then release CO2, increasing its concentration. While ice still remained on land, chemical sequestering would remain inhibited. This could help explain the apparent 200-800 year lag in CO2 concentration in air when compared to increased temperature. I do also understand that the apparent lag does not nullify the CO2 theory, BUT I LIKE IT BECAUSE IT LETS US OFF THE HOOK A BIT!

      Here is a paper from 1963 making this hypothesis:

      Click to access igs_journal_vol05_issue038_pg241-244.pdf

      • Hi Dan,

        Case #1:

        This makes no sense to me. Royer et al. combined changes in solar output and changes in atmospheric CO2 over about 420,000,000 years, and found that climate sensitivity must always have been at least 1.5 °C. See here:

        http://www.nature.com/nature/journal/v446/n7135/full/nature05699.html

        Case #2:

        The Milankovitch theory is the current explanation, and it works very well. See Part 2 of my review.

  20. Scott,

    Look about 4 comments from the top. You said:

    “Dr. Bickmore, it is with great dismay that I must conclude that an otherwise seemingly intelligent person as yourself could be utterly hoodwinked by the AGW movement. Please, remove yourself from this scenario and be objective here.”

    Your second post went downhill from there.

  21. great post – i really enjoyed your piece on the ABC too. thanks

  22. […] to an extended blog critique by Barry Bickmore of my book,” he wouldn’t be responding to my 3-part review.  Now, I don’t mind if Roy doesn’t have time to respond to my critique–everyone has to […]

  23. Barry, this is a nicely written and interesting story but with the clear aim of defaming Roy Spencer rather than trying to get to the truth. Personally, I think that his work is very interesting and inspiring albeit far from perfect, as he would be the first to admit. Take a look at his comments on the Murphy and Forster paper from 2010: http://www.drroyspencer.com/2010/07/can-climate-feedbacks-be-diagnosed-from-satellite-data-comments-on-the-murphy-forster-2010-critique-of-spencer-braswell-2008/, where he admits to some mistakes and gives thanks for the constructive criticism. On the issue of cloud feedback I think his latest work in J. Geophys. Res. 115, D16109 (2010) gets much closer to the truth than the recent paper by Dessler, which you cite. The argument made by Dessler, that the main temperature fluctuations on a time scale of a few years are from ENSO and that El Nino is not caused by radiative forcing, is nonsense. The question is clearly whether the heating and cooling during ENSO events is mediated by variations in the radiation balance, and there is good evidence that this is the case.

    I agree on much of your criticism. The model forwarded at one point by Spencer to explain the temperature variation during the last 150 years by natural oscillations, mainly the PDO, was not very serious and is easy to attack. This does not mean, however, that the combined PDO and ENSO have not played an important role. Also, I agree with you that the recent revival of the much criticised Milankovitch theory of ice ages is convincing. However, I think you exaggerate greatly the success of the models with CO2 forcing in explaining the past climate. For example, the role of the CO2 concentration ([CO2]) in the interglacial periods is very questionable. Not only is there a long delay from the onset of warming till the increase of [CO2], but the decrease of [CO2] is also greatly delayed when the temperature starts to drop.

    Finally, please discuss the science which you do so well and refrain from character defamation. You ridicule Spencer for his complaints about refereeing, but the strong bias against skeptics in many journals is well known – just read some of the climategate emails.

    • Hi Jens,

      If I went about to “defame” Roy Spencer, then it was in a far less obnoxious manner than he went about to “defame” his colleagues in his book. You should read it–it’s amazing. At least I didn’t accuse him of fraud, or “hiding” results out of fear.

      I plan on wading into Spencer’s 2010 paper in depth at some point, but I didn’t do much with it in the review, because the review was about his book, which was published earlier. I didn’t think it was that important, because as I mentioned, even if Spencer is right about that issue, it may not have much to do with long-term cloud feedback (which is what everyone cares about.) Also, since Spencer and Braswell (2010) didn’t cite Murphy and Forster, I assumed that they hadn’t taken those criticisms into account, yet. I did notice that they used a 100 m mixed layer, though.

      As for CO2 forcing of past climate, the lag time during the glacial-interglacial cycles is honestly not that big of a problem. There are a number of different plausible explanations, but it’s difficult to tell which ones might be the big contributors. All your observations tell us, really, is that CO2 wasn’t driving the system at the time–it was a feedback. But everyone agrees about this point. The bottom line is that if you estimate climate sensitivity from the glacial-interglacial data, or data from the whole Phanerozoic, you get numbers in the same range the IPCC has been saying. If you’re right, and the CO2 doesn’t have much to do with it, then that must mean that the climate is EXTREMELY sensitive to very minor variations in insolation. But why would it be so sensitive to that, and not greenhouse gases?

      Finally, your defense of Spencer’s complaints about peer review is unconvincing. His 2008 paper passed peer review even though it was severely flawed. And yet, when he submitted a paper that you agree was “not very serious and… easy to attack,” and was rightfully rejected, he went into a snit and published a book largely about how corrupt the peer review system is. Do scientific rivals ever smack down each others’ papers (sometimes unfairly) in review? Of course, and it’s happened to me before. But that’s just people being people, and Spencer ought to buck up and play the game. Is there any doubt that he was being a bit whiny about his treatment, just like I said?

    • Also, regarding your reference to “climategate,” I think you’re blowing that way out of proportion. Did the e-mailers sometimes crow about trashing skeptical papers in review? Yes, but as far as I have been able to tell, they truly did think the papers they talked about in this way were bad. And they were right.

      What if some new e-mails come out, in which the person who reviewed Spencer’s PDO paper crowed about trashing it in review? Would such an e-mail be inappropriate? Perhaps, but given that Spencer’s paper truly was awful, nobody could legitimately say that the review process treated him unfairly.

      This is one of my pet peeves about climate contrarians. If you are pushing a view that is far outside the mainstream, OF COURSE your papers are going to get scrutinized more closely in review, because you are far more likely to draw reviewers that disagree with you. It’s not a scandal. It’s not even mildly surprising. And yet, the contrarians so often try to push it as some kind of scandal, and whine about how unfair it all is. It never seems to cross their minds that maybe their papers got rejected because they really do have serious problems. Well, here we have a case (Roy Spencer’s PDO paper) where we can both agree that the paper was rightfully rejected because it was awful, and yet it doesn’t seem to bother you that Spencer used THIS example as his basis for criticizing the peer review process. It doesn’t throw up a red flag about Spencer in your mind–rather, it throws up a red flag about ME, because I called him on it.

  24. […] demolishes the book in which Roy Spencer contradicts himself, already badly battered by the reviews of several other scientists, and Greenfyre demolishes Richard Muller, it’s worth […]

  25. […] Spencer is a prime example of a contrarian scientist who exhibits this tendency.  As I noted in my recent review of Spencer’s The Great Global Warming Blunder, he has a history of publishing dramatic new […]

  26. […] wrote about this modeling effort in Part 3 of my recent review of Spencer’s book.  I even went to the trouble of programming his model into MATLAB and fitting […]


  28. […] discussing bad ideas being dangerous, this person offers one study, as a supposed refutation to the scientific consensus and we’re supposed to give up on all the evidence for climate change (see a critique of […]

  29. […] wrote about this modeling effort in Part 3 of my recent review of Spencer’s book.  I even went to the trouble of programming his model into MATLAB and […]

  30. Barry,

    I’m reading your critiques of Dr. Roy’s work. How do you reconcile the fact that his sensitivity estimates are within the measured bounds of the system’s response to solar forcing, while the IPCC’s estimates are not?

    • Hi RW,

      I’m not sure what you are claiming. Would you mind rephrasing the question?

  31. Barry,

    The IPCC claims that 3.7 W/m^2 of ‘forcing’ from 2xCO2 will become +16.6 W/m^2 at the surface (+3C), requiring an amplification factor of about 4.5 (16.6/3.7 = 4.49). Post albedo power coming in from the Sun is only amplified by a factor of about 1.6 (390/240 = 1.625). The 3.7 W/m^2 of ‘forcing’ from 2xCO2 is supposed to be the equivalent of post albedo solar power, is it not? If watts are watts, how can watts of GHG ‘forcing’ be nearly 3 times more effective at warming the surface than watts from the Sun?

    Dr. Roy’s sensitivity estimates fall within the boundary of the 1.6 amplification factor to solar forcing, as I believe his estimates from various means are about 0.5-1C for a doubling of CO2.

    • Hi RW,

      This seems like an interesting problem. Can you give me a link to where the IPCC claims what you say they do? I’d like to check out what they say about the issue before trying to answer. Thanks for your help.

    • BTW, I see that you say 16.6 W/m^2 is equivalent to 3 °C change in Temperature. I’m guessing the link to the IPCC is the 3 °C. In that case, can you just show me the math you use to make that conversion? Once I see that it seems like I’ll be able to see where you’re coming from. Thanks again.

  32. Barry,

    The IPCC claims the sensitivity to a doubling of CO2 is about 3 C, right? I presume you are familiar with the Stefan-Boltzmann law? If the surface is to warm by 3 C (from 288K to 291K), it must emit 406.6 W/m^2, which is 16.6 W/m^2 more than the 390 W/m^2 it’s currently emitting. Conservation of Energy dictates that this +16.6 W/m^2 flux has to be coming into the surface from somewhere if it’s to warm by 3 C.

    There is about 240 W/m^2 of post albedo solar power entering the surface and the surface is emitting 390 W/m^2 as a result of the GHE and all the physical processes and feedbacks in the system. 390/240 = 1.625. In energy balance terms this just means it takes about 1.6 W/m^2 of radiative surface emission to allow 1 W/m^2 to leave the system, offsetting each 1 W/m^2 entering the surface from the Sun.

    If you really think 3.7 W/m^2 is to become 16.6 W/m^2, you need to explain why it doesn’t take 1077 W/m^2 of surface power to offset the 240 W/m^2 of incident solar power. (16.6/3.7)*240 = 1077.

    Spencer’s estimates fall within the measured bounds of the system. The IPCC’s estimate is way, way outside the system’s bounds by nearly a factor of 3.

  33. I meant to say “(from 288K to 291K)”
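
    The Stefan-Boltzmann arithmetic in the comment above is easy to check with a short sketch (my code; σ ≈ 5.670e-8 W m^-2 K^-4, and using the corrected 291 K figure):

```python
# Check the claimed surface fluxes: ~390 W/m^2 at 288 K, ~406.6 W/m^2
# at 291 K, a difference of roughly 16.5 W/m^2.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_flux(T):
    """Blackbody emission (W/m^2) at temperature T (K)."""
    return SIGMA * T**4

f288 = sb_flux(288.0)  # current surface emission, ~390 W/m^2
f291 = sb_flux(291.0)  # emission after 3 C of warming, ~406.6 W/m^2
print(round(f288, 1), round(f291, 1), round(f291 - f288, 1))
```

    The numbers come out close to those quoted (the difference is ~16.5 W/m^2, near RW’s 16.6), so the dispute is over the interpretation, not the arithmetic.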

  34. I don’t see the issue I’m talking about addressed there.

    I was under the impression that you understood the science well enough to discuss issues and questions directly without resorting to mostly outsourced knowledge?

    How about I break down the issue into a series of separate yes/no questions and we’ll take it from there?

    • RW,

      You know, I have seen you criticized over at Roy Spencer’s blog for making this kind of argument, but not being very clear about it. If I fail to understand you, is the fault necessarily all mine? Since this keeps happening, why don’t you write up a web page where you spell the whole argument out?

      In any case, it seems to me that you are trying to make an argument based on treating the Earth like a blackbody, and ignoring feedbacks. The article I linked addresses a similar argument.

      Other things that strike me as odd are that you make a ratio of incoming radiation vs. surface emission, and expect that ratio to be the same for small changes in the net flux. Why would it be? After all, if the input from the Sun were quite a bit smaller, the planet would be a giant iceball, and the feedback system would be FAR different than it is, now.

    • I just thought of another thing to mention. As I said above, you are essentially claiming that the Earth is a simple black body (or gray body), or at least that the feedback ratio is always the same, no matter how much incoming radiation there is. Now, as Roy Spencer mentions in his book and a number of places on his blog, a feedback factor (alpha) of 3.3 in Eqn. 2 above would mean that there are no net feedbacks–just gray body emission. (Alpha would be 3.7 for a black body.) Since, no matter what data he’s using to drive his model, he always magically comes up with an alpha value of about 3, that means his models are pretty much saying the Earth is acting like a simple grey body. No wonder his models match pretty well with your analysis, since you are just basing your conclusions on simple manipulations of the Stefan-Boltzmann equation.

  35. No, it’s not your fault if you don’t understand me. That is why I’m offering to break it down into a series of separate yes/no questions.

    I’m not claiming the Earth is a “simple black body”, nor am I claiming the “feedback ratio is always the same”. I’m also not claiming the system is linear. It is indeed non-linear.

  36. Barry,

    Let me start by asking you just one question. The Earth is said to have an “effective” emissivity of about 0.62. Do you know what this means?

    • Sure. It’s the fudge factor on the Stefan-Boltzmann law to account for “grey body” behavior.

  37. Can you define what you mean by ‘fudge factor’ in this particular case? Why does the Earth appear as a grey body from space?

    • A grey body doesn’t absorb or emit the flux perfectly, like an idealized black body. So you multiply the black body flux by the “emissivity” to get the grey body flux.

      As I understand it, the effective emissivity of the Earth is so low because of the greenhouse effect, and because clouds have a low emissivity.

      Thanks for stepping me through your reasoning.

  38. Moreover, what does the 0.62 mean physically in terms of energy flow in and out of the system?

  39. Effective emissivity is basically the answer to a schoolboy question in planetary physics. Assume that the Earth has a uniform surface temperature, all solar radiation is absorbed at that surface and the atmosphere is gray with a fixed emissivity. Given that the total solar radiation absorbed amounts to 239 W/m2 and the mean surface temperature is 280 K, what is the emissivity and temperature of the one-layer atmosphere? [Note that there are 3 unknowns in the problem (surface temperature, atmosphere temperature, and atmospheric emissivity) and two equations, so one of the three must be given in order to find the other two.]

    Since the atmosphere is not gray and the surface temperature is not uniform, the concept of effective emissivity is of little value except in some simple exercise of how the surface temperature would change with increasing emissivity. It certainly cannot be used to calculate feedback factors or climate sensitivity.

    In this simple 1-layer model, the greenhouse effect is of course directly dependent on the emissivity. A higher emissivity means more downward radiation from the atmosphere and a warmer surface. Unlike the actual atmosphere, a gray atmosphere “saturates” when the emissivity reaches one (the atmosphere becomes black) and the surface temperature reaches a maximum. This is totally unrealistic as witnessed by the atmosphere of Venus and arises because the simple model has no vertical temperature structure.

    Gray atmosphere approximations are highly appealing because of their simplicity, but they simply don’t represent the real world. Sorry.
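
    Tom’s schoolboy exercise can be solved directly. A sketch under his stated assumptions (all solar absorbed at the surface, S = 239 W/m^2, Ts = 280 K, one gray atmospheric layer): the atmospheric balance εσTs⁴ = 2εσTa⁴ gives Ta = Ts/2^(1/4), and the top-of-atmosphere balance (1-ε)σTs⁴ + εσTa⁴ = S then gives ε = 2(1 - S/σTs⁴).

```python
# Solve the one-layer gray-atmosphere exercise stated above.
# Assumptions (from the comment): S = 239 W/m^2 absorbed at the surface,
# uniform surface temperature Ts = 280 K, gray atmosphere of emissivity eps.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 239.0         # absorbed solar radiation, W/m^2
Ts = 280.0        # surface temperature, K

surface_emission = SIGMA * Ts**4          # ~348.5 W/m^2
eps = 2.0 * (1.0 - S / surface_emission)  # atmospheric emissivity, ~0.63
Ta = Ts / 2.0**0.25                       # atmospheric temperature, ~235 K

print(eps, Ta)
```

    With these numbers ε comes out near 0.63, close to the 0.62 quoted earlier in the thread, which illustrates why the two framings give similar-looking values despite Tom’s caveats.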

    • Thanks, Tom. I knew students have to do these sorts of calculations in atmospheric science courses, and that they add layers to the atmosphere to see what that does, and so forth.

      It seems to me that RW is, in essence, arguing for a pretty nearly fixed emissivity for net solar input ranging from 0 to somewhere above the present 240-ish W/m^2. He appears to deny this is what he’s doing, but if not, I don’t know what the point of his surface emission/influx ratios is. Maybe he’ll explain.

  40. RW,

    I think I understand where you are headed in the near term. Your ratio of 390/240 = 1.625 is the reciprocal of 240/390 = 0.62. So the Earth’s “emissivity” is directly related to the gain in surface emission vs. solar input.

    Anyway, it seems to me that if the climate sensitivity is 3 °C, then that means the emissivity would change to 240/406.6 = 0.59. If the climate sensitivity is only 1.2 °C (i.e., the “no-feedbacks” value for 2x CO2), then that’s saying the emissivity would be 240/396.6 = 0.61. So in other words, if you’re boiling the whole system down to a 1-layer atmosphere and grey body behavior, you can couch “climate sensitivity” in terms of changes in emissivity.

    You appear to be saying that a change to emissivity = 0.59 with 2xCO2 is “outside the system’s bounds”, whereas Spencer’s estimates of a change to emissivity = 0.61 are “within the measured bounds of the system.” But I haven’t seen where you have defined any “bounds”. You just calculated one value.

    Since the effective emissivity of the Earth is so highly dependent on clouds (emissivity = 0.5), for instance, I cannot fathom why you would believe that such a small change in effective emissivity is impossible. After all, Roy Spencer thinks that fairly large changes in the surface temperature are due to UNFORCED changes in cloud cover.

    Maybe you can set me straight by showing me how you calculate your “bounds” on what the emissivity of the Earth has to be.
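
    The mapping described above, from a claimed climate sensitivity to an implied effective emissivity, can be sketched numerically (my code; the 240 W/m^2 solar input and 288 K surface temperature are the round figures used throughout this thread):

```python
# Each claimed sensitivity (warming for 2xCO2) implies a new surface
# emission via Stefan-Boltzmann, and hence a new effective emissivity
# defined as solar input / surface emission.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_IN = 240.0   # post-albedo solar input, W/m^2
T_NOW = 288.0      # current mean surface temperature, K

def implied_emissivity(sensitivity):
    """Effective emissivity after warming by `sensitivity` K."""
    return SOLAR_IN / (SIGMA * (T_NOW + sensitivity)**4)

print(round(implied_emissivity(0.0), 3))  # current value, ~0.615
print(round(implied_emissivity(3.0), 3))  # 3 C sensitivity, ~0.59
print(round(implied_emissivity(1.2), 3))  # "no-feedback" 1.2 C, ~0.605
```

    The point of the comment above is visible in the output: the difference between the IPCC-range value and Spencer’s is a change in effective emissivity of only about 0.02.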

  41. One step at a time, please. The emissivity of the planet is simply the actual emitted power flux to space divided by the surface emitted power flux. 240/390 = 0.615. Physically it means the atmosphere is acting as a ‘filter’ between the surface and space, where on each ‘pass’ through the ‘filter’ about 62% of the emitted surface power escapes to space and 38% is returned or re-circulated back to the surface.

    Do you agree with this? Mind you, I’m not inferring anything about feedback here.

    • That’s how I understand it.

    • BTW, do you agree with my assessment that different climate sensitivities with respect to 2xCO2 can be translated in terms of different changes in emissivity? In other words, do you agree that even if Roy is right about the magnitude of the sensitivity, he is still talking about a change in effective emissivity?

      • Yes.

  42. Do you agree that the so-called “Planck response” (i.e. about 3.3 W/m^2 per 1 C of warming) is directly derived from the surface response to solar forcing (3.3 W/m^2 x 1.625 = +5.4 W/m^2 = +1 C from S-B)? Put another way, it takes about 5.4 W/m^2 of surface emitted power to allow the 3.3 W/m^2 to leave the system, offsetting the 3.3 W/m^2 entering the surface. If not, explain how it’s derived.

    If yes, do you agree that the 1.625 W/m^2 to 1 W/m^2 ratio of emitted surface power to incident solar power (390/240 = 1.625) accounts for all of the physical processes and feedbacks in the system (positive or negative, known and unknown)? If not, why haven’t the feedbacks fully manifested themselves after hundreds, thousands, millions of years of incident solar energy?

    Please note these are not ‘trick’ questions or an attempt to ‘trap’ you. I’m just trying to establish some common ground from which we can further discuss in more detail.
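
    The arithmetic behind the numbers quoted above can be reproduced in a few lines (a check on the numbers only, not an endorsement of the derivation, which is exactly what is in dispute in this exchange): the Stefan-Boltzmann derivative at 288 K gives about 5.4 W/m^2 per K at the surface, and dividing by the 390/240 ratio gives about 3.3 W/m^2 per K.

```python
# Reproduce the 5.4 and 3.3 W/m^2 per K figures from the comment above.
# dF/dT for a blackbody is 4*sigma*T^3 (derivative of the S-B law).

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T_SURF = 288.0    # current mean surface temperature, K

dF_dT_surface = 4.0 * SIGMA * T_SURF**3   # ~5.4 W/m^2 per K at the surface
ratio = 390.0 / 240.0                     # surface emission / solar input
dF_dT_toa = dF_dT_surface / ratio         # ~3.3 W/m^2 per K

print(round(dF_dT_surface, 2), round(dF_dT_toa, 2))
```
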

    • Hi RW,

      I’m not following this step. The alpha = 3.3 W/m^2/K value does result in an equilibrium warming of 1.2 °C (I’ve run the simple climate model to make sure). However, since we already agreed that any change in temperature due to adding GHGs would also change the effective emissivity of the Earth, I don’t see how the feedback factor can be “directly derived from” the 1.625 figure, which is just the reciprocal of the effective emissivity. In other words, the 1.625 ratio refers to present emissivity, while the feedback factor has to be related to how the emissivity would change in response to temperature changes.

      • The reason why the reciprocal of the emissivity is valid for the derivation is because the energy arriving and leaving is the same (i.e. about 240 W/m^2 in and out), where 240 W/m^2 is proportionally equal to the value of ‘0.615’ and ‘1’ respectively.

        1/0.615 = 1.625 and 1/1.625 = 0.615

        390/240 = 1.625 and 240/390 = 0.615

        Does this help?

      • I didn’t realize what step you meant. I’m not referring to the feedback factor for changes in ‘forcing’ – just that the net surface power flux of 390 W/m^2 is maintained by all the physical processes and feedbacks in the system (latent heat, evapotranspiration, convection, clouds, precipitation, etc., etc.).

        Does this help?

      • Let me try to ask this question another way: Why specifically is the “Planck response” 3.3 W/m^2 per 1 C of warming? Why isn’t it 33 W/m^2 per 1 degree or 0.33 W/m^2, for example?

        If it’s not derived from the surface response to solar forcing (the 1.6 W/m^2 to 1 W/m^2 ratio), then what is it derived from?

        If not the net result of all the physical processes and feedbacks in the system, then what?

      • And yes, you are correct that the effective emissivity would change with a 1.1 C rise in temperature from GHG ‘forcing’. It would decrease from 0.615 to about 0.606 (240/396 = 0.606). The new surface reciprocal would be 1.65 (1/0.606 = 1.650). So yes, there is a small difference between the incremental and absolute response. Is this what you mean?

        • That’s what I mean, but I’m still getting hung up. Shouldn’t the 3.3 W/m^2/K (or whatever the feedback factor is) be more or less directly related to that **change** in emissivity, rather than whatever the emissivity happens to be right now?

  43. Or even more simply, it takes about 3.3 W/m^2 of surface ‘forcing’ to effect a 1 C rise in temperature. This is the origin of the so-called ‘intrinsic’ or ‘no-feedback’ response of 1.1 C from 2xCO2 (3.7/3.3 = 1.1, or 3.7 x 1.625 = 6 W/m^2 = 1.1 C from S-B).

    Do you agree?

  44. Also, I’m not implying anything at this point in regards to potential feedbacks acting on additional surface ‘forcing’ (like from increased GHGs). Eventually, I’m headed there of course, but I’m simply trying to establish agreement that the system’s energy balance is highly dynamic and there are numerous complex and chaotic physical processes and feedbacks that maintain the net surface flux of 390 W/m^2.

    Do you agree?

  45. Barry,

    You write: “That’s what I mean, but I’m still getting hung up. Shouldn’t the 3.3 W/m^2/K (or whatever the feedback factor is) be more or less directly related to that **change** in emissivity, rather than whatever the emissivity happens to be right now?”

    Yes, but the surface response of 5.4 W/m^2 (1C) to the 3.3 W/m^2 of ‘forcing’ would not. The change in the emissivity and surface reciprocal from 3.3 W/m^2 of GHG ‘forcing’ would be quite small – only about 1.3% and is negligible (0.615 – 0.607 = 0.008; 0.008/0.615 = 0.013).
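The size of the emissivity shift being called negligible here can be tallied directly. A minimal sketch using only the 240, 390, and +5.4 W/m^2 fluxes quoted in the thread (checking the arithmetic, not endorsing the argument):

```python
# Effective emissivity before and after the claimed +5.4 W/m^2
# surface response, using the fluxes quoted in the thread (W/m^2).
e_before = 240.0 / 390.0         # ~0.615
e_after = 240.0 / (390.0 + 5.4)  # ~0.607
diff = e_before - e_after
pct = 100.0 * diff / e_before
print(round(diff, 4), round(pct, 2))  # ~0.0084, ~1.4%, close to the 1.3% quoted
```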

  46. The change in emissivity or atmospheric opacity is largely due to the increase in GHG atmospheric absorption. Is this what you mean? That the actual 3.3 W/m^2 of surface ‘forcing’ comes from increased atmospheric absorption?

  47. Here is a summary explanation:

    When there is a radiative imbalance, i.e. from additional CO2 added to the atmosphere which redirects more outgoing surface radiation back to the surface, there is a reduction in the amount of LW radiation leaving at the top of the atmosphere (more radiation is arriving from the Sun than is leaving at the top of the atmosphere). To achieve equilibrium, the system warms up until it again radiates the same amount of energy as is arriving from the Sun.

    Currently there is about 240 W/m^2 arriving post albedo from the Sun and 240 W/m^2 leaving at the top of the atmosphere. This represents the system in equilibrium (energy in = energy out). If there were a radiative imbalance (or ‘radiative forcing’) of 3.3 W/m^2 from increased GHG absorption, the energy leaving at the top of the atmosphere would be reduced by 3.3 W/m^2 to 236.7 W/m^2.

    Currently, there is about 390 W/m^2 emitted by the surface. In this example, an additional 3.3 W/m^2 is received by the surface for a total of 393.3 W/m^2. If the +3.3 W/m^2 at the surface responded the same as the 240 W/m^2 arriving from the Sun, it would be amplified by a factor of about 1.625 (390/240 = 1.625). 3.3 W/m^2 x 1.625 = +5.4 W/m^2, allowing the 3.3 W/m^2 to leave at the TOA and restore equilibrium (240 W/m^2 in and out). The new surface emitted radiation would be 395.4 W/m^2 (390 W/m^2 + 5.4 W/m^2), which corresponds to about a 1 C rise in temperature.

    Does this make more sense?

    • RW,

      Sorry, it looks like here’s where I jump ship.

      You realize that adding GHGs to the atmosphere would lower the effective emissivity, and that adding more GHGs would change it more. It appears that your entire argument boils down to this. You think a smaller change in emissivity (that you can round off so it doesn’t make much difference) is more plausible than a larger one, but you give no physical reason why this would be the case.

      Unless you have something up your sleeve in the way of a physical reason why effective emissivity has to stay about the same, regardless of forcing, I don’t think the discussion needs to go any further.

      As Tom (who is an atmospheric scientist) said, this kind of analysis simply isn’t suited for saying anything about climate sensitivity.

      • The smaller change in emissivity I’m referring to is from direct GHG ‘forcing’. And yes, the effective emissivity does not have to stay about the same, regardless of forcing. In theory, it could increase or decrease depending on the ‘feedback’. Also, I’m not claiming the 1.625 factor gives the precise or actual sensitivity. It doesn’t.

        I’m eventually getting to that question of course, but I’m just trying to take it one step at a time. Again, I’m just trying to establish some common ground from which we can further discuss.

      • Ok, as long as we agree that the emissivity can change, lead on.
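The energy-balance walk-through in comment 47 above can be checked numerically. A sketch using only the commenter's own numbers plus the Stefan-Boltzmann law (this verifies the arithmetic, not the physical reasoning):

```python
# Check of the 240/390/1.625 walk-through: 3.3 W/m^2 of 'forcing',
# amplified by the 390/240 ratio, raises surface emission from 390 to
# ~395.4 W/m^2. Stefan-Boltzmann converts that to a temperature change.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sb_temperature(flux):
    """Blackbody temperature (K) that emits the given flux (W/m^2)."""
    return (flux / SIGMA) ** 0.25

amplification = 390.0 / 240.0        # ~1.625, the ratio used in the thread
surface_delta = 3.3 * amplification  # ~5.4 W/m^2
t_before = sb_temperature(390.0)     # ~288 K
t_after = sb_temperature(390.0 + surface_delta)
print(round(t_after - t_before, 2))  # just under 1 K, roughly the claimed 1 C
```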

  48. OK, but I need you to answer these questions:

    Do you agree that the so-called “Planck response” is derived from the surface response to solar forcing, as I outlined above (i.e. from the 1.6 to 1 ratio)?

    If yes, why specifically is the “Planck response” 3.3 W/m^2 per 1 C of warming? Why isn’t it 33 W/m^2 per 1 degree or 0.33 W/m^2, or some other number, for example?

    If not the net result of all the physical processes and feedbacks in the system, then what?

    • What I mean here is do you agree that the 390 W/m^2 flux from the surface in response to the incident 240 W/m^2 flux from the Sun is the net result of all the physical processes and feedbacks in the system?

      If not, then why is the net surface flux about 390 W/m^2?

    • If I’m interpreting you correctly, I think I can agree.

      • OK, my next question is can the physical processes and feedbacks that control and maintain the 390 W/m^2 net surface flux be separated from those that will act on an additional ‘forcing’ – say from increased GHGs?

        If so, how and why?

  49. I would like to clarify a few things:

    I do not think the effective emissivity of about 0.615 is a fixed value. It’s only an approximate average. No doubt it fluctuates somewhat higher and lower, as the system’s energy balance is very dynamically maintained.

    I also do not think the atmosphere acts as a single layer, as implied by Tom Ackerman. It definitely does not.

  50. “Ok, as long as we agree that the emissivity can change, lead on”

    Yes, I agree.

  51. RW,

    You said,

    “OK, my next question is can the physical processes and feedbacks that control and maintain the 390 W/m^2 net surface flux be separated from those that will act on an additional ‘forcing’ – say from increased GHGs?”

    Not that I know of.

    • OK, the feedbacks that control and maintain the net surface flux of about 390 W/m^2, do you think they are net negative or net positive?

      If you think the feedbacks are net negative, why do you think they would suddenly turn positive on additional ‘forcings’, let alone the 300% net positive required for a 3C rise from 2xCO2?

      If you think they are net positive, how do you explain that the system is so tightly constrained, given that the global temperature anomaly barely moves by more than a few tenths of a degree from year to year despite such a large degree of shorter-term, local, regional, seasonal, hemispheric, and even sometimes globally averaged variability? And even when the globally averaged temperature does move by a half degree or more, as during the 1998 and 2010 El Nino events, it always tends to revert quickly to its prior equilibrium. This would seem to be inconsistent with net positive feedback on additional forcings, let alone net positive feedback of 300%, would it not?

      • The feedbacks are net negative IF you include the Planck response as a feedback. Most of the time climatologists talk about positive or negative feedback relative to the normal Planck response, i.e., if alpha in Eqn. 2 above is greater than 3.3, it’s net negative, and if less than 3.3, it’s net positive. If you include the Planck response, negative feedback corresponds to any alpha value above zero. (If you don’t believe me about this, read Roy’s book. He’s very clear about this issue, which I appreciated.)

        I’ve noticed that a lot of electrical engineers and control systems engineers get confused about this point, because they use “feedback” in the (more proper) second sense. (I don’t know what your specialty is–just thought I’d throw that out.)

        So anyway, my point is that NOBODY argues that net feedbacks are positive, if you include the Planck response. Ever. There are always brakes on the system–the only question is how strong.

  52. […] to a corrupt and oppressive Priesthood–i.e., his colleagues.  (Read my review of the book here.) Ultimately I find enough evidence to virtually prove my theory, but now the research papers that […]

  53. I just took some time to read through this entire exchange and would like to make a couple of points.

    The first is that RW’s analysis is stretching the simple one-layer atmosphere model completely beyond its usefulness. The surface temperature of any planet with an atmosphere depends on the composition and thermal structure (temperature profile) of the atmosphere. [If you want a practical application, try to extend this analysis to Venus which has an effective radiating temperature of only 185 K (because it is highly reflective) with a surface temperature of 760 K!] The radiative forcing discussed by the IPCC is the reduction in longwave emission at the tropopause (and let’s not even talk about the stratospheric adjustment part of this). That reduction is NOT the same as an increase in longwave radiation received at the surface because there is an intervening atmosphere.

    The second is that this discussion fails to distinguish clearly between what maintains the current climate and what happens when climate changes due to external radiative forcing. Yes, you can define an effective emissivity for the current Earth atmosphere. No, you cannot then use that emissivity to investigate what will happen if you change the external forcing, because that emissivity is an artificial construct that does not account for the actual physical processes at work. One would need to break the emissivity down into a series of partial derivatives, figure out the value of each one for a change in forcing, and then add them back up. And you cannot simply add changes in partial derivatives of emissivity, because the atmosphere is not gray, so the derivatives are spectrally dependent.

    If you want to understand this problem, I suggest that you start with a diagram of energy fluxes in the current climate system (see either Kiehl and Trenberth, 1997 or Trenberth et al., 2009, both in the Bulletin of the AMS). The Earth surface absorbs only about 170 W/m2 of solar radiation; the other 65 to 70 W/m2 is actually absorbed in the atmosphere. The Earth surface receives about 324 W/m2 of longwave radiation from the atmosphere, so the amount of radiative heating from the atmosphere is close to double that of the sun. In addition, there is a non-radiative transfer of about 100 W/m2 from the surface to the atmosphere via convection and evaporation. The asymmetry in atmospheric emission (more radiated down to the surface than out to space) comes about because of the thermal structure of the atmosphere (warm near the surface and cold aloft) and the distribution of water vapor. You simply cannot account for this in a one-layer atmosphere or a gray emissivity.

    The amplification of the CO2 radiative forcing comes by increasing water vapor concentrations in a warmer atmosphere which then increases the downwelling radiation. If you double CO2 and then wait for equilibrium, you can compute an effective emissivity for the system, but you cannot a priori calculate the change in gray emissivity from knowing that CO2 increased.

    Sorry for the lengthy response. I hope it clarifies why the basic premises of this discussion are wrong.

    • Agreed. I’ve just been curious to see where RW was going. If it all boils down to “the effective emissivity can’t change much because the net feedbacks are negative,” but he doesn’t give any physical reasoning to put any boundaries on it, then it’s not much of an argument.
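Tom's surface energy budget numbers above can be tallied directly. A minimal sketch using the Kiehl & Trenberth (1997) global-mean values he cites (my transcription of the published figures, so treat the exact numbers as approximate):

```python
# Global-mean surface energy budget, Kiehl & Trenberth (1997) values
# in W/m^2 (transcribed here; exact figures are approximate). The point:
# the surface gains nearly twice as much longwave from the atmosphere as
# shortwave from the Sun, and a ~100 W/m^2 non-radiative flux closes the
# budget -- none of which a one-layer gray model captures.
gains = {
    "solar absorbed at surface": 168,
    "longwave back radiation from atmosphere": 324,
}
losses = {
    "surface longwave emission": 390,
    "thermals (convection)": 24,
    "evapotranspiration (latent heat)": 78,
}
print(sum(gains.values()), sum(losses.values()))  # both 492: budget closes
print(round(324 / 168, 2))  # ~1.93: back radiation vs. direct solar heating
```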

  54. Barry,

    “The feedbacks are net negative IF you include the Planck response as a feedback. Most of the time climatologists talk about positive or negative feedback relative to the normal Planck response, i.e., if alpha in Eqn. 2 above is greater than 3.3, it’s net negative, and if less than 3.3, it’s net positive. If you include the Planck response, negative feedback corresponds to any alpha value above zero.”

    I know. This is the problem. The so-called “Planck response” already includes the lion’s share of all the feedbacks in the system from decades, centuries, and millennia of solar forcing. How could it not, especially given that such a tightly constrained energy balance is so dynamically maintained? It’s not a ‘zero-feedback’ value, but more an upper limit on incremental ‘forcings’. If the net feedback on the system is negative, as you agree is required for basic stability, then by the very nature of net negative feedback, responses to incremental ‘forcings’ would need to be less than the absolute response (i.e., less than the 1.6 to 1 ratio) – not more. Otherwise, the current energy balance could not be maintained.

    How would the system ‘know’ the difference from a change in GHG ‘forcing’ from that of a slightly varying cloud albedo, for example? Or some other variable that modifies the energy fluxes somewhat?

    “So anyway, my point is that NOBODY argues that net feedbacks are positive, if you include the Planck response. Ever. There are always brakes on the system–the only question is how strong.”

    But haven’t the ‘brakes’ already been put on the system? If they haven’t, why is the average temperature and net surface flux so tightly constrained? Why didn’t the feedbacks in the system ultimately manifest themselves to an effective emissivity of 0.22 (3.7/16.6 = 0.22), where a net surface flux of 1077 W/m^2 has 837 W/m^2 ‘blocked’ by the atmosphere and re-circulated back to the surface (240/1077 = 0.22)?

    • I wrote:

      “How would the system ‘know’ the difference from a change in GHG ‘forcing’ from that of a slightly varying cloud albedo, for example? Or some other variable that modifies the energy fluxes somewhat?”

      What I mean here is how would the physical processes and feedbacks in the system ‘know’ the difference between energy flux changes from a slightly varying cloud albedo, for example, and changes in GHG ‘forcing’? Why would the physical processes and feedbacks not respond to them and limit them in the same way they do all the other dynamic variables in the system that change energy fluxes?

    • Sorry, but I’m not buying this. As Tom pointed out, the emissivity is not some physical constant for the system. It simply describes the situation as it exists at the present time. It says absolutely nothing about how the system might change in response to forcing.

      You admit that the emissivity can change. Can you derive some boundaries for that change?

      • The emissivity is also derived from the surface response to solar forcing (the actual surface response is just the inverse), so it too is the net result of all the feedbacks and physical processes in the system just the same.

        But I’m not quite sure exactly what you’re asking or what your objection is.

  55. Tom,

    “The first is that RW’s analysis is stretching the simple one-layer atmosphere model completely beyond its usefulness. The surface temperature of any planet with an atmosphere depends on the composition and thermal structure (temperature profile) of the atmosphere. [If you want a practical application, try to extend this analysis to Venus which has an effective radiating temperature of only 185 K (because it is highly reflective) with a surface temperature of 760 K!] The radiative forcing discussed by the IPCC is the reduction in longwave emission at the tropopause (and let’s not even talk about the stratospheric adjustment part of this). That reduction is NOT the same as an increase in longwave radiation received at the surface because there is an intervening atmosphere.”

    Before I address the rest of your post, can you explain to me how the 3.7 W/m^2 of radiative ‘forcing’ turns into +6 W/m^2 at the surface for the 1.1-1.2 C ‘Planck’ or ‘no-feedback’ response from 2xCO2? I mean specifically where the watts are coming from (the +6 W/m^2 flux into the surface)?

    If the 3.7 W/m^2 is not incident on the surface, as you seem to be claiming, then how much of it? How are you deriving the amount?

    BTW, my ‘analysis’ is not using or assuming a one-layered atmosphere, but I will explain this after you answer.

    • I wrote:

      “If the 3.7 W/m^2 is not incident on the surface”

      I meant to say: If the 3.7 W/m^2 is not all incident on the surface…..

    • BTW, the reason I’m asking this is so I can know how best to respond to the rest of your post.

  56. I think you are treating this like an accounting problem instead of a physics problem. When you increase the concentration of CO2 or any greenhouse gas in the troposphere, you reduce the amount of upwelling longwave radiation at the tropopause because you increase the opacity of the atmosphere (which reduces the energy radiated directly from the surface through the atmosphere) AND because the atmosphere radiates less despite having a greater emissivity. This occurs because the atmosphere has a lapse rate with temperature becoming colder with altitude. For more CO2, emission occurs at a higher altitude, which has a colder temperature, hence less radiation. As I have said repeatedly, you cannot mimic this response with a gray emissivity, no lapse rate atmosphere.

    The surface+troposphere must warm due to this reduced loss of longwave radiation. The warming occurs throughout the depth of the troposphere because the surface and troposphere are convectively linked together. This warming increases the downwelling radiation from the atmosphere. Warming continues until the upwelling radiation at the tropopause is once again in equilibrium with the absorbed solar radiation below the tropopause.

    So, in answer to your question, the initial radiative forcing at the tropopause is actually a reduction in the loss of longwave energy. The surface receives more longwave because (1) the opacity (emissivity) of the atmosphere in the CO2 band increases and (2) the atmosphere is warmer which increases the emission in both the CO2 band and the water vapor bands. The surface temperature increases because of this increase in downwelling longwave. The amount of temperature increase depends on the wavelength-dependent opacity of the atmosphere and the convolution of the Planck function with that opacity. I add this statement because there is no universal constant that relates a change in outgoing longwave radiation to a change in surface temperature.

    Finally, I would like to note that none of this is new. There is a wonderful paper written by Manabe and Wetherald in 1967 that explores all of this using a simple radiative-convective model. If you are interested, you can download it from http://www.atmos.washington.edu/~ackerman/Manabe_etal.pdf.

    • Just as I suspected, you didn’t really answer my question, which tells me you don’t really understand where the watts (the energy) are coming from to cause the 1.1 C so-called “Planck” response.

      But it’s late. I’ll respond to you – probably tomorrow, in more detail.

  57. Barry,

    “As Tom pointed out, the emissivity is not some physical constant for the system. It simply describes the situation as it exists at the present time. It says absolutely nothing about how the system might change in response to forcing.”

    I’ve acknowledged that the emissivity is not a physical constant or fixed value – but only an approximate average. But I don’t see how it not being a constant helps the case for net positive feedback on incremental ‘forcings’ or energy imbalances. If anything, it’s more consistent with net negative feedback in response to incremental ‘forcings’, because it’s always changing somewhat higher or lower, but averaged over the longer term it stays about the same.

    • Then why was the Earth several degrees hotter a few hundred million years ago? Why was it several degrees cooler during the ice ages? It seems to me that some change in the forcing and subsequent response caused the system to approach a different equilibrium state. It’s like Le Chatelier’s principle for climate.

      • I’m not claiming or implying the Earth’s equilibrium surface temperature cannot change from changes in ‘forcings’. Obviously it can and has. It’s the magnitude of the change to the given ‘forcing’ in the current system that I’m disputing (and Roy is disputing). I agree the physics and data support a likelihood of some effect.

        Exactly why the Earth was hotter a few hundred million years ago – I don’t know. We don’t have enough data to reliably determine why. As for the ice ages, they are most likely driven by changes in the Earth’s orbit, which change the distribution of the incident solar energy into the system.

      • You say we don’t have enough data, but here’s the scoop.

        1. Solar physicists can estimate the change in solar output back that far by studying other, similar stars. It was likely significantly less bright back then. Sure, they probably can’t be too precise with these estimates, but if the change was large, the absolute precision doesn’t matter as much.

        2. Geochemists can estimate the CO2 concentration and temperature at the time from ocean sediments. Sure, there are big errors associated with this, too, but when the changes are large, over a long period of time, they are less significant.

        3. Geologists can estimate the extent of any ice sheets at the time.

        4. Putting all this together, Royer et al. (you can look up at least a couple of their papers) were able to estimate climate sensitivity over time. They got about the same probability distribution as the IPCC models.

        5. Others have estimated climate sensitivity from the glacial/interglacial cycles, and guess what? They get about the same spread, too.

        People like Spencer and Dick Lindzen want to wave all this away, but the fact is that when we keep getting the same answer over and over, we start suspecting we might be in the right ballpark.

  58. Don’t bother responding to my comment. I tried to give you an answer to your question by explaining the physics of the atmosphere and providing you with some articles to read. Perhaps my answer was less clear than I thought. I’m sorry if that is the case.

    Your response however is simply rude and unacceptable and I see no point in continuing this discussion with you.

    • I apologize if I came across as rude.

  59. Barry,

    You’re obviously a smart guy. I just think you have accepted far too much up front at face value without really thinking things through very carefully (or very thoroughly). Specifically, the claimed ‘no-feedback’ response of 1.1-1.2 C and the ‘radiative forcing’ of 3.7 W/m^2 from 2xCO2. The IPCC and pro-AGW climate science community are suspiciously vague on exactly what these things mean and how they are derived.

    I know that Roy has accepted them for his work, but I (and others) do not for very good reasons. You seemed to agree that the physical processes and feedbacks that maintain the net surface flux of 390 W/m^2 from the post albedo solar flux cannot be separated from those that will act on additional ‘forcings’, but this is in fact exactly what the IPCC has done by claiming the 1.1 C ‘Planck’ response represents the ‘no-feedback’ response to 2xCO2.

    Also, I agree that the 3.7 W/m^2 of ‘forcing’ would not all be incident on (or received by) the surface, but this is what they are assuming to arrive at the 1.1-1.2 C value. They say the 3.7 W/m^2 is the net reduction in outgoing LW at the tropopause when CO2 is doubled, which is then assumed to increase the net LW flux into the surface by 3.7 W/m^2. Because about 38% of the outgoing surface power is ‘filtered’ by the atmosphere and recirculated back to the surface, the surface then has to warm up an additional 62% (2.3 W/m^2) in order to emit the +3.7 W/m^2 at the surface back out to space to achieve equilibrium (3.7 W/m^2 x 0.62 = 2.3 W/m^2; 3.7 + 2.3 = 6 W/m^2 = +1.1 C).

    I’ve asked the question I asked Tom above on a few other pro-CAGW blogs, and so far not a single one has been able to provide the answer, which is telling to me.

  60. Okay, I am going to try one more time.

    1. You created an ill-posed problem by starting with numbers derived from a physically complete longwave radiative transfer model and then trying to explain them with an overly-simple emissivity model. Longwave radiation is NOT filtered or re-circulated by the atmosphere. It is absorbed and re-emitted. Until you understand those processes and their spectral properties, you are not going to get this problem right.

    2. I told you exactly where the absorption and re-emission occurs spectrally in the atmosphere. If you want to know the exact numbers, then you need to run a spectrally resolved radiative transfer code. I have one and have done it. I referred you to a paper written in 1967 that describes exactly what happens when you double CO2, but you didn’t read it.

    3. You don’t like the answer I gave, not because you can show it is wrong (which it isn’t) but because it doesn’t agree with your preconceived ideas about what ought to happen. You refuse to accept the fact that this problem cannot be explained adequately with a toy model.

    4. You then make a snarky remark (“I’ve asked the question I asked Tom above on a few other pro-CAGW blogs, and so far not a single one has been able to provide the answer, which is telling to me.”) stating that I didn’t provide “the answer” and implying that I am either dumb or dishonest, or both. (Not to mention all the other people who couldn’t answer your ill-posed question to your satisfaction.)

    Now, do you understand why I think you are rude? Maybe you can explain to me why I should think anything else.

    Oh, by the way, I published my first article on longwave radiative transfer in the atmosphere in 1973 and my first article specifically on CO2 radiative transfer in 1979 (long before anyone was worrying about global warming). I have taught atmospheric radiative transfer and remote sensing at two of the leading atmospheric science departments in the country, and will teach the radiative transfer course again this winter. I have published nearly two hundred peer-reviewed articles in atmospheric science, the vast majority of which deal with radiative transfer. And I am a Fellow of two scientific societies. So when you tell me that I don’t understand radiative transfer, you just might want to reconsider.

    Have a nice day.

    • 1. I think you’ve misunderstood what I was saying. Did you notice the word ‘filter’ was in quotes? Essentially the atmosphere does act as a ‘filter’ between the surface and space, where only a portion of the emitted surface power is allowed to leave at the TOA. This is why the surface is warmer than it would be otherwise (i.e. the greenhouse effect). I’m well aware that the process of the ‘filtration’ is the absorption and isotropic re-emission of outgoing LW radiation by GHGs and clouds in the atmosphere.

      2. I know what happens when CO2 is doubled. The whole of the atmosphere absorbs an additional 3.7 W/m^2 of outgoing surface radiation that previously passed straight through the atmosphere as if the atmosphere wasn’t even there. It’s the reduction in ‘window’ transmittance from the surface directly to space and from the heated atmosphere itself directly to space (some of which also passes through the ‘window’). For example, Trenberth has a ‘window’ transmittance of 70 W/m^2 in his paper (40 W/m^2 through the clear sky and 30 W/m^2 through the cloudy sky). Using his number, if CO2 is doubled, this value reduces to 66.3 W/m^2 (70 – 3.7 = 66.3) and the atmosphere absorbs an additional 3.7 W/m^2.

      3. I’m not sure what you are referring to here.

      4. You seem to be awfully defensive here. It’s nice that you have an accomplished resume, but that doesn’t make what you say (or claim) any more correct than anyone else. I’m not sure what else to say in regards to this.

    • Tom,

      You said earlier: “The radiative forcing discussed by the IPCC is the reduction in longwave emission at the tropopause (and let’s not even talk about the stratospheric adjustment part of this). That reduction is NOT the same as an increase in longwave radiation received at the surface because there is an intervening atmosphere.”

      I actually agree with this completely, which is why I asked where the watts are coming from to cause the 1.1-1.2 C rise. No one seems to know, and the IPCC doesn’t explicitly say anywhere that I’m aware of. Of course, only the portion of the 3.7 W/m^2 absorbed by the atmosphere that ends up being emitted back to the surface can increase its temperature. A large portion of what the atmosphere absorbs and re-emits goes to space as part of the 240 W/m^2 flux leaving at the TOA.

      • Hi RW,

        Here’s how I understand it. The radiative transfer models Tom talked about do a great job of reproducing the spectral characteristics of incoming and outgoing radiation. Therefore, atmospheric scientists have high confidence that they know what the different greenhouse gases are doing (since their emissions have different spectral characteristics). Then they double the CO2 concentration in the models, and see what the change in the radiation flux would be. Assuming no feedbacks, it’s easy to calculate what the subsequent change in temperature would be.

        Try to understand where Tom is coming from. This is textbook stuff that’s been hashed and rehashed for decades. As you noted, it’s not controversial in the least, even for the likes of Roy Spencer and Dick Lindzen. Tom has been publishing and teaching this stuff for a long time, and yet you won’t believe a word he says about it. Can you see why someone might get the impression that nothing he says will make any impression? He really went out of his way to give you a pretty detailed explanation.
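The "no feedbacks" calculation Barry describes as easy can be sketched in two lines. A hedged back-of-envelope using only the numbers already in this thread (the undisputed 3.7 W/m^2 forcing and the ~3.3 W/m^2/K Planck response):

```python
# No-feedback warming estimate: radiative forcing from a spectrally
# resolved model divided by the Planck response. Values from the thread.
forcing_2xco2 = 3.7    # W/m^2 for doubled CO2 (undisputed in the thread)
planck_response = 3.3  # W/m^2 per K of warming
delta_t = forcing_2xco2 / planck_response
print(round(delta_t, 2))  # ~1.12 C, the familiar 1.1-1.2 C figure
```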

  61. Barry,

    “Others have estimated climate sensitivity from the glacial/interglacial cycles, and guess what? They get about the same spread, too.”

    I’ve seen this. There is absolutely no way this is even remotely applicable to the current system. For starters, you can’t equate the positive feedback effect of melting ice starting from maximum ice with that starting from minimum ice, which is where the climate is now (and is during every interglacial period). There just isn’t much ice left, and what is left would be very hard to melt, as most of it is located at high latitudes around the poles, which are mostly dark six months out of the year with temperatures way below freezing. A lot of the ice is thousands of feet above sea level too, where the air is significantly colder. Unless you wait a few tens of millions of years for plate tectonics to move Antarctica and Greenland to lower latitudes (if they are even moving in that direction), no significant amount of ice is going to melt from just a 1 C rise in global average temperature. Furthermore, the high ‘sensitivity’ from glacial to interglacial is largely driven by the change in the orbit relative to the Sun, which changes the distribution of incident solar energy into the system quite dramatically (more energy is distributed to the higher latitudes in the NH summer, in particular). This, combined with the positive feedback effect of melting surface ice, is enough to overcome the net negative feedback and cause the 5-6 C rise. The roughly +7 W/m^2 increase from the Sun is only a minor contributor. We are also nearing the end of this interglacial period, so if anything the orbital component has already flipped back in the direction of glaciation and cooling.

    With regard to your points 1-4, there are just too many unknowns and not enough reliable data to infer anything useful.

    • Hi RW,

      Monkey Wrench: When they do these calculations, they can do a direct estimate of the change in albedo due to ice melt/growth, because the geologists can estimate the extent of the ice sheets. So that part is already factored into the calculation, and what we are talking about is the “fast feedbacks” sensitivity.

      Oh, I know. All this paleoclimate data isn’t of sufficient quality for your standards.

      But the fact is that the data is what it is, and the standard models can explain it just fine, while low climate sensitivity cannot.

      • The glacial to interglacial cycle is primarily driven by orbital variation combined with the positive feedback from melting a large amount of surface ice – not the increased net radiative ‘forcing’ of about 7 W/m^2 that occurs.

        The orbital component driving the bulk of the change is totally non-existent in the current system and the melting ice is a largely ‘clamped’ effect for the reasons I outlined. Basically, they are trying to equate the 0.8C per 1 W/m^2 needed for the 3C rise from the 3.7 W/m^2 from 2xCO2 (0.8 x 3.7 = 3C) to the +7 W/m^2 net incident solar from the orbital change that occurs from glacial to interglacial (0.8 x 7.0 = 5.6C).

        It’s apples to oranges.

  62. Barry,

    “Here’s how I understand it. The radiative transfer models Tom talked about do a great job of reproducing the spectral characteristics of incoming and outgoing radiation. Therefore, atmospheric scientists have high confidence that they know what the different greenhouse gases are doing (since their emissions have different spectral characteristics). Then they double the CO2 concentration in the models, and see what the change in the radiation flux would be. Assuming no feedbacks, it’s easy to calculate what the subsequent change in temperature would be.”

    OK – if it’s easy, can you show me how it’s calculated (the ‘no-feedback’ temperature change).

    Also, I don’t dispute the 3.7 W/m^2 number or the radiative transfer models and methodology used to get it, so I’m not sure what you’re implying.

  63. Barry,

    “Try to understand where Tom is coming from. This is textbook stuff that’s been hashed and rehashed for decades. As you noted, it’s not controversial in the least, even for the likes of Roy Spencer and Dick Lindzen. Tom has been publishing and teaching this stuff for a long time, and yet you won’t believe a word he says about it. Can you see why someone might get the impression that nothing he says will make any impression? He really went out of his way to give you a pretty detailed explanation.”

    Again, I’m not sure what you’re implying here. Other than what I’ve specifically responded to or tried to clarify, I don’t really disagree with what Tom has said about the atmosphere.

  64. I think one of the biggest problems here is that many seem to be forgetting or overlooking that what matters in the system (any system) is net energy flow.

    The main point of contention here is that in the Earth’s climate system it only takes about 1.6 W/m^2 of surface radiative flux to allow 1 W/m^2 of radiative flux to leave at the TOA. This includes all the various complex and chaotic non-radiative energy transports from the surface to the atmosphere, from the atmosphere to other parts of the atmosphere, and from the atmosphere back to surface.

    Let me try to approach this from another angle. The surface is emitting about 390 W/m^2, right? The 390 W/m^2 just represents the net flux of energy entering the surface, right? There is 240 W/m^2 entering the system and 240 W/m^2 leaving at the TOA. Where is the +150 W/m^2 flux into the surface coming from (240 + 150 = 390)???

  65. Let me elaborate on what I’m referring to in more detail. Using the Trenberth energy flows diagram here:

    He has the surface emitting 390 W/m^2 and a ‘window’ transmittance of 70 W/m^2 (40 W/m^2 through the clear sky and 30 W/m^2 through the clouds). This means that of the 390 W/m^2 emitted at the surface, 320 W/m^2 is absorbed by the atmosphere (390 – 70 = 320). He then has the atmosphere emitting 165 W/m^2 to space (165 + 70 = 235 leaving at the TOA), so by deduction 155 W/m^2 of what’s absorbed by the atmosphere is returned to the surface (155 coming back from the atmosphere + 235 post albedo from the Sun = 390, the net flux coming into the surface).

    Using these numbers, a little over half of the surface radiative flux absorbed by the atmosphere is ultimately emitted to space rather than returned to the surface.

    Do you see what I’m getting at here? Not all of what’s absorbed by the atmosphere is incident on the surface to influence its temperature – only about half is.
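The flux bookkeeping described in this comment can be checked line by line. A minimal sketch in Python, using only the figures quoted above (390, 70, 165, and 235 W/m^2); this verifies the arithmetic, not the underlying physics:

```python
# Bookkeeping check for the Trenberth-style flux split quoted above.
surface_emitted = 390.0     # W/m^2 longwave emitted by the surface
window = 70.0               # W/m^2 passing straight through to space
atm_to_space = 165.0        # W/m^2 emitted by the atmosphere to space
solar_post_albedo = 235.0   # W/m^2 absorbed solar (post-albedo)

absorbed_by_atm = surface_emitted - window        # 390 - 70 = 320
toa_out = atm_to_space + window                   # 165 + 70 = 235
back_to_surface = absorbed_by_atm - atm_to_space  # 320 - 165 = 155

# Surface balance: absorbed solar + back-radiation = surface emission
surface_in = solar_post_albedo + back_to_surface  # 235 + 155 = 390

# "A little over half" of what the atmosphere absorbs goes to space
fraction_to_space = atm_to_space / absorbed_by_atm
print(absorbed_by_atm, toa_out, back_to_surface, surface_in)
print(round(fraction_to_space, 3))  # -> 0.516
```

Every intermediate number matches the ones quoted in the comment, including the roughly 52%/48% space/surface split mentioned further down the thread.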

    • Yes, I see it. I don’t know why you would think this point is in dispute. Here are the things I don’t understand.

      1. You say you don’t dispute the results of radiative transfer codes, but then you ask me to reproduce the calculations in a blog comment. Well, I’m sure I could program my own radiative transfer code if I had the slightest inclination, but I don’t. If you want to see how it’s done, maybe you can download one or ask Tom for his.

      2. You keep saying that you don’t think emissivity is constant, but as far as I can tell, every time you seem to be making an actual argument, it looks like you are assuming a linear response (i.e., constant emissivity). So you seem to be saying it shouldn’t change “much”, but you don’t provide any argument for what the bounds should be.

      3. I don’t know if you don’t think I understand the idea of net fluxes, or what. The fact is that I don’t think an emissivity change implies that energy is coming from nowhere. It just means something about the fluxes has changed.

      • 1. The radiative transfer codes are used to calculate the reduction in ‘window’ transmittance when CO2 is doubled (i.e. the amount of incremental atmospheric absorption from 2xCO2). As I’ve said, I don’t have an issue with the 3.7 W/m^2 number or the methods used to get it.

        My point is the radiative transfer codes you’re referring to are NOT what is used to derive the ‘no-feedback’ response of 1.1-1.2 C from 2xCO2. This is why I’m asking you how they are getting it.

        They don’t explicitly say, but they have just assumed that the 3.7 W/m^2 of increased radiative ‘forcing’ will result in a +3.7 W/m^2 increase in net flux into the surface, and that the opacity of the atmosphere then requires the surface to emit an additional 2.3 W/m^2 in order to re-emit the 3.7 W/m^2 at the TOA and restore equilibrium. This arises because about 38% of what’s emitted from the surface is ‘blocked’ by the atmosphere, so the surface has to emit about 62% more than the forcing (+2.3 W/m^2, for +6 W/m^2 in total) to get 3.7 W/m^2 back out to space. +6 W/m^2 = 1.1 C from S-B.

        If only half of what’s absorbed goes back to the surface, then only about +1.85 W/m^2 returns to increase its temperature.

        Do you see what I mean?
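The final Stefan–Boltzmann step quoted above (“+6 W/m^2 = 1.1 C from S-B”) can be checked numerically. A minimal sketch; the 288 K baseline surface temperature is a conventional round figure assumed here, not a number from the thread:

```python
# Linearized Stefan-Boltzmann check of the "no-feedback" warming arithmetic.
SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W/m^2/K^4

T_surface = 288.0               # assumed mean surface temperature, K
flux = SIGMA * T_surface ** 4   # ~390 W/m^2, matching the figure quoted above

# Sensitivity of emission to temperature: dF/dT = 4 * sigma * T^3
dF_dT = 4 * SIGMA * T_surface ** 3   # ~5.4 W/m^2 per K

delta_flux = 6.0                # W/m^2, the surface flux increase in question
delta_T = delta_flux / dF_dT    # ~1.1 K

print(round(flux), round(dF_dT, 2), round(delta_T, 2))
```

So a +6 W/m^2 surface flux increase does correspond to roughly a 1.1 C warming under this linearization, whatever one makes of the rest of the argument.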

      • 2 & 3. I’m not assuming a fixed emissivity or linear response. The average emissivity would change, yes. I thought I clarified this before. I’m not quite sure what you’re asking here. Can you elaborate in more detail?

      • “Forcing” is, by definition, a NET change in the energy flux about the Troposphere (or the surface, or wherever you define it), isn’t it? If so, I think this is a simple case of mixed-up definitions.

        As for the linear response issue, my point is that your arguments involving the flux ratios don’t make any sense to me unless you are assuming a linear response.

  66. Barry,

    ““Forcing” is, by definition, a NET change in the energy flux about the Troposphere (or the surface, or wherever you define it), isn’t it?”

    Yes it is, but it’s before any half-up/half-down effect resulting from the absorption by the atmosphere. In other words, the 3.7 W/m^2 is the instantaneous change at the tropopause, or the reduction in ‘window’ transmittance at the tropopause (i.e. the incremental power absorbed by the atmosphere).

    Or are you claiming the incremental absorption from 2xCO2 is 7.4 W/m^2? I’ve been asking all around the climate science community and no one can give me a straight answer on this. I’m more than willing to be shown incorrect if this is the case.

    “As for the linear response issue, my point is that your arguments involving the flux ratios don’t make any sense to me unless you are assuming a linear response.”

    I’m not assuming a linear response. The system is indeed non-linear, but the problem is the non-linearity is in the opposite direction of the positive feedback case.

    To see what I mean, take a look at this plot derived from the ISCCP data (1983-2008). The ‘surface gain’ is the ratio of emitted surface power divided by the incident post albedo solar power:

    As you can see very clearly, as the incident solar power increases, the ratio of surface emitted power to incident solar power decreases – meaning the net feedback becomes more and more strongly negative as the surface incident energy increases. This is the exact opposite of the behavior required for positive feedback on incremental forcings.

    • In the graph referenced above, the green and blue dots represent 2.5-degree latitude slices from the tropics (far right) to the poles (far left).

  67. This is addressed at RW. First, let me go back to a question RW posed that may not have been answered.

    RW stated: “The 3.7 W/m^2 of ‘forcing’ from 2xCO2 is supposed to be the equivalent of post albedo solar power, is it not?”

    Can someone answer this? [Was it answered?]

    If the answer is no, then the rest of RW’s analysis does not follow. RW uses a simple model without a lot of layers and interacting systems, so if the answer is no, RW is not modelling this effect at all and the “discrepancy” with the IPCC model is a figment of a broken assumption.

    If the answer is yes, then I think Barry provided a decent reply (quoted at the bottom), and I will try to explain it in different words.

    RW asked, “If watts are watts, how can watts of GHG ‘forcing’ be nearly 3 times more effective at warming the surface than watts from the Sun?”

    No, this 3x discrepancy is not something that must exist.

    If we have S1 (space), S2 (atmosphere), and S3 (Earth), then we can’t draw any conclusions from the S1 S2 boundary about the S2 S3 boundary, at least not without more detailed modelling. We would need this detailed modelling whether we look at CO2, at 2xCO2, or at any other hypothetical condition.

    One problem is that RW is using a very simple model (essentially no S2). RW’s model doesn’t account for the possibility that Earth’s energy release is being reflected back towards it by the atmosphere. The degree of this reflection back down can be rather large and invisible to satellites.

    We can have 3.7 at the S1 S2 boundary lead to a net 3.7*1.6=5.92 making its way partially into S2, while at the same time having a completely different interaction between S2 and S3. We need to model S2 to have an idea of how the S1 S2 boundary will play into the value at S2 S3. We can assume simple pass-through, but that model would likely be very wrong.

    My interpretation of a few other events:

    Barry covered the essence of what I just stated. RW apparently didn’t follow, so Barry started listening to see where RW might come to realize the problem (and to see if Barry had in fact understood RW).

    Specifically, Barry said way back:

    “Anyway, it seems to me that if the climate sensitivity is 3 °C, then that means the emissivity would change to 240/406.6 = 0.59. If the climate sensitivity is only 1.2 °C (i.e., the “no-feedbacks” value for 2x CO2), then that’s saying the emissivity would be 240/396.6 = 0.61. So in other words, if you’re boiling the whole system down to a 1-layer atmosphere and grey body behavior, you can couch “climate sensitivity” in terms of changes in emissivity.

    “You appear to be saying that a change to emissivity = 0.59 with 2xCO2 is “outside the system’s bounds”, whereas Spencer’s estimates of a change to emissivity = 0.61 is “within the measured bounds of the system.” But I haven’t seen where you have defined any “bounds”. ”

    Basically, Barry was pointing out that it appears RW is de facto assuming that an S2 with different conditions at the lower and upper boundaries can’t exist. Since RW hasn’t given any description of S2, we can’t know. RW appears to be ignoring this issue, not seeing it as something of value.

    My answer above assumes this last case (that RW doesn’t see the problem of de facto assuming S2 is homogeneous), so I try to explain to RW one simple effect that might lead to S1 S2 being rather distinct from S2 S3 (Earth’s heat being reflected at a rate that has little connection to the Earth+atmosphere’s emissivity as seen from space/satellite).

    Tom also added details as to how S2 might be interacting with S1 and with S3, even mentioning a few times that the upper atmosphere is colder than the lower atmosphere. The implication of upper and lower having different temperatures — that S1 S2 and S2 S3 exist and are distinct — was missed by RW.

    Or so I think.

    • I thought I clarified this. The atmosphere (S2) definitely does not act as a single layer and is not homogeneous. No doubt there are multiple GHG absorptions and re-emissions throughout the whole of the atmosphere, as well as a large amount of non-radiative energy transport.

      I’ve been focusing mostly on net energy flow in the system, because it’s what matters most in regards to how the system will likely respond to additional ‘forcing’.

      The IPCC is claiming that 3.7 W/m^2 of ‘forcing’ will become +16.6 W/m^2 (3 C) into the surface. This is an amplification factor of about 4.5 (16.6/3.7 = 4.49), whereas the 240 W/m^2 forcing the system from the Sun is only amplified by a factor of about 1.6 (390/240 = 1.625). Do you see the discrepancy? How can watts of GHG ‘forcing’ have a greater ability to warm the surface than watts from the Sun?
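The two ‘amplification factors’ in this comment and the effective emissivities Barry quoted earlier (240/406.6 ≈ 0.59 and 240/396.6 ≈ 0.61) are two framings of the same grey-body arithmetic, which can be reproduced directly. A sketch under the same one-layer, grey-body simplification used in the thread; the 288 K baseline is an assumption:

```python
# Effective grey-body emissivity implied by a given surface warming.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4
TOA_OUT = 240.0          # W/m^2 leaving at the TOA in equilibrium
T0 = 288.0               # assumed baseline mean surface temperature, K

for dT in (1.2, 3.0):    # "no-feedback" rise vs. IPCC central estimate
    surface_flux = SIGMA * (T0 + dT) ** 4   # emission from the warmer surface
    emissivity = TOA_OUT / surface_flux     # effective grey-body emissivity
    print(dT, round(surface_flux, 1), round(emissivity, 2))
```

This reproduces the 396.6 and 406.6 W/m^2 surface fluxes (and the 0.61 and 0.59 emissivities) from Barry’s earlier comment; nothing in the grey-body framing itself forces a fixed 1.6 ratio between surface flux and TOA flux.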

  68. RW said, “Spencer’s estimates fall within the measured bounds of the system.”

    I’m not sure if that comment is sensible.

    Barry already asked what bounds.

    I want to add that you can’t know the “measured bounds of the system” if by that you mean taking some measurement or other at 2xCO2. We don’t have 2xCO2, and I am pretty sure Spencer did not release enough CO2 into today’s atmosphere to recreate 2xCO2 and confirm the measurements predicted by his model.

    Anyway, this is a minor point because I think what was intended by RW by that quote above was that under the simplified model RW used, and using values from IPCC, an “absurd” 1/emissivity of greater than 5 is implied; however, as already argued in the last comment, that simplified model used by RW is too simple and almost surely incorrect. [It’s a foolish assumption that the atmosphere boundary with the Earth is transferring energy anything like the atmosphere boundary with space — perhaps similar to how we can’t ignore a capacitor that lies in between two adjoining circuit components.]

    …or so I am guessing since I am not in this field and have not read that report etc etc etc etc etc.

  69. >> so by deduction 155 W/m^2 of what’s absorbed by the atmosphere is returned to the surface (155 coming back from the atmosphere + 235 post albedo from the Sun = 390, the net flux coming into the surface)

    390 is blackbody from Earth into the atmosphere

    235 is from atmosphere back into space

    the difference of these is some number, 155.

    Looking at the diagram, there are other heat sources going into the atmosphere besides 390, so I am not sure what you think is significant about 390 – 235. 390 is not the total heat going into the atmosphere. It’s not even the total coming from the Earth’s surface (we have 78 and 24 as well).

    • The latent heat and thermals are non-radiative fluxes from the surface to the atmosphere. Whatever amount leaves the surface is generally returned in equal and opposite amounts. If there is an imbalance (i.e. more is leaving the surface than is returning on average), non-radiative flux is just being traded off for radiative flux at the surface, requiring the surface to emit less to achieve equilibrium output power at the TOA.

      The diagram is assuming a steady-state condition, so any trade-off effects like this are already embodied in the net surface flux of 390 W/m^2.

      It’s important to remember that all the energy entering and leaving at the TOA is radiative, so the non-radiative fluxes are just moving energy around from the surface to the atmosphere, from the atmosphere to other parts of the atmosphere, and from the atmosphere back to the surface in a way that the planet’s energy balance is what it is (about 390 W/m^2 net flux into the surface).

      So I’m not ignoring latent heat and thermals, as they are no doubt contributing greatly to the system’s energy balance as a whole. A huge portion of the 320 W/m^2 absorbed by the atmosphere is from water vapor and clouds, which are moved non-radiatively from the surface to the atmosphere.

    • Also, the non-radiative fluxes don’t affect what fraction of atmospheric absorption is ultimately emitted to space and what fraction is returned to the surface (to influence its temperature). The split is generally very close to 50/50 (half to space and half back to the surface). This arises because when GHG molecules absorb and re-emit upwelling photons, the photons are flung equally in all directions, and if the absorbed energy is instead transferred to the other gas molecules in the atmosphere via collisions, the heated gases also emit the absorbed energy isotropically. The net result is that, at the radiative flux boundaries of the surface and the TOA, half of what’s absorbed ultimately goes to space and half is returned to the surface. The half returned to the surface is not necessarily all radiative flux, but it is the net equivalent of that radiative flux.

      Again, net energy flow is what matters most in the system.
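The isotropic re-emission argument above gives exactly a 50/50 split for a single fully absorbing layer, which can be checked with a toy random walk. This is a sketch only, not a radiative transfer model, and the multi-layer cases illustrate the other side’s point in this thread that the split depends on the structure of the atmosphere:

```python
import random

def escape_fraction(n_layers, trials=200_000, seed=42):
    """Monte Carlo toy model: a surface photon is absorbed in layer 0, and
    each re-emission moves it up or down one layer with equal probability.
    Returns the fraction of photons reaching space instead of the surface."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(trials):
        layer = 0
        while -1 < layer < n_layers:
            layer += 1 if rng.random() < 0.5 else -1
        if layer == n_layers:
            escaped += 1
    return escaped / trials

print(escape_fraction(1))  # close to 0.5: the 50/50 split described above
print(escape_fraction(4))  # close to 0.2: with more layers, more returns
```

For one layer the escape fraction is indeed about one half; for n layers it falls to about 1/(n+1), so the 50/50 figure is a property of the single-layer idealization rather than of isotropic emission as such.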

    • I should also note that Trenberth has a little more than half of what’s absorbed being emitted to space and less than half returned to the surface (about 52% to space and 48% returned to the surface). This is probably because his ‘window’ transmittance is a bit too low. The 70 W/m^2 in the figure isn’t really referenced in the paper and only appears to be a rough estimate or guess. Other estimates I’ve seen put this number more like 90 W/m^2.

  70. >> This arises because about 38% of what’s emitted from the surface is ‘blocked’ by the atmosphere, so the surface has to warm up about 62% more (+2.3 W/m^2) to emit 3.7 W/m^2 back out to space.

    First, keep in mind that we are looking at numbers that represent values from today (I think) when we look at the diagram. If you make a prediction about 100 years out or even 1 year out, you will end up with a different set of values. So your 38% only applies to today. In 100 years, it might be 50%. A priori, there is no reason to justify the emissivity value to be constant or to be similar 100 years out when compared to today’s value.

    Second, the 38% comes from the diagram: the surface emits about 390 W/m^2 but only about 240 W/m^2 leaves at the TOA, so roughly (390 – 240)/390 ≈ 38% stays within the system. But, as mentioned earlier, this is the view from space, and by itself it says nothing about the temperature on Earth’s surface or the heat transfers at the surface and throughout the lower atmosphere.

    To use a greenhouse analogy: if you look at a greenhouse from above, you only see the “little bit” of energy leaving the greenhouse; you don’t see the energy shielded by the greenhouse ceiling, which stays inside the greenhouse.

    Another example: when you look at a house, you don’t see the visible light energy inside of the house (unless the house is made out of glass). This energy is blocked by the inside of the house walls. Similarly, if you put on night goggles at night, you won’t see the people’s infrared images when they are inside the house. The walls also block this. In either case (day or night), you can see them when they go outside, if you observe at a proper frequency range relative to the ambient environment and if at least some of the identifying energy coming from them is not blocked or completely “scrambled” (unlike when all of it was “blocked” when they were inside the house).

    Again, I might be wrong about all of this. I was, however, at least confident enough to post this comment tonight.

    • Again, I put the word ‘blocked’ in quotes for a reason. It’s not really blocked in the literal sense – it’s just redirected, back down toward the surface.

    • It’s also important to note that none of the energy entering the surface, either directly from the Sun or from GHG ‘forcing’, is trapped anywhere. Its exit at the TOA is just delayed (and not for very long).

  71. RW >> This is an amplification factor of about 4.5 (16.6/3.7 = 4.49), where as the 240 W/m^2 forcing the system from the Sun is only amplified by a factor of about 1.6 (390/240 = 1.625).

    I see that 3.7 forcing is how they model the CO2 doubling. A value of “forcing” depends on the model. Why should a value used in one model/eqns work with a different one (such as the one Spencer uses)? I could use more information to see the context of that 3.7 number.

    As one guess… That 3.7 forcing may be related to the infrared region only. In other words, it could be 3.7 W/m^2 of energy in some limited spectrum region. The mathematics/physics might then lead to an equivalent Sun energy of a greater magnitude if we were to use a different model (such as what you are using).

    Put differently, if that forcing is of infrared energy adding up to 3.7, that is a different beast than forcing 3.7 distributed across the energy spectrum as if from the Sun. [Adding a Joule of visible light is not the same as adding a Joule of x-ray. One will, eg, stimulate an optical sensor differently than would the other.]

    • The 3.7 W/m^2 from 2xCO2 is the increased atmospheric absorption. In the current system, about 300 W/m^2 of the 390 W/m^2 emitted at the surface is absorbed by the atmosphere on average. If CO2 is doubled, the average total atmospheric absorption increases to 303.7 W/m^2.

      The roughly 90 W/m^2 passing straight through reduces by 3.7 W/m^2 to 86.3 W/m^2.

      • I am trying to help resolve the question you posed, paraphrased, “isn’t a Watt a Watt no matter the source”? This was in reference to the 1.6 ratio.

        I don’t see the problem, but please ask the question again after reading the comments above if you still think there is a problem.

        Briefly, there is no rule that says that the blackbody radiation of the Earth must go up a certain amount because of some 3.7 forcing value that gets modeled into some unspecified equation. Perhaps you can point out the IPCC equations associated with that 3.7 and we can look at those. Remember, adding 3.7 Joules of red light is not the same as adding 3.7 Joules of microwave energy or of energy distributed across a very wide spectrum. The context, the equations, matter. The IPCC likely used different equations than you are using and their 3.7 does lead to the other value below. Their equations don’t say at any point that the 1.6 ratio must hold between the relevant two measured points.

        I’ll give another analogy.

        If I raise my hand 1 meter, then Person A might raise his hand 1.6 meters and Person B her hand 3.6 meters (and there is a nonlinear relationship between the 1 and the 3.6). If I …. OK, never mind this analogy, because I don’t know what you are talking about specifically, so I don’t know what further analogy to use.

        Please restate any question you want and justify physically why you believe some relationship must hold. You have not to this point justified why that 1.6 ratio must hold between the two values you mentioned.

    • Also, watts are watts, independent of where they last originate from. If the 3.7 W/m^2 of ‘forcing’ from 2xCO2 is not equal to 3.7 W/m^2 of net incident solar power, then CO2 ‘forcing’ cannot be quantified in units of W/m^2 as it is.

      • You haven’t even specified what equations you are talking about! No one has said a Watt is not a Watt. We have said that a Watt in one model leads to something that doesn’t follow your simplified “analysis” of the 1.6 ratio.

        I have said that a Watt of red light is different than a Watt of blue light. At least my eyes notice a difference and I doubt they would notice a difference if there wasn’t any (well, assuming our “light” models are relatively sane so that I can use that example).

        A kg of rice is different than a kg of feces. You get different results in many systems when you apply these distinct kilograms each separately, even though they are each “kg”.

  72. Spencer (if I understand correctly; I did read a bit of one of his papers and the 3 parts here) is reacting to the past by curve fitting. As new data come in, his preferred curve-fitting solution of today will very likely be wrong, and he will have to refit, over and over. And the values chosen may be all over the board from run to run, or he may settle on a set of values that hardly match what the parameters are claimed to represent (if he gets away with a small number of parameters to tune, it’s only because the observed data, out of all possible data, happen to have lots of correlation among the many variables for the particular reality we observed through some limited data set). [The approach he is taking offers potential as a tool, however, as Barry pointed out.]

    When you instead use complex models and base them on understood science, you are much more likely to get a model with at least quasi predictive potential. And it can be a dangerous game to ignore warnings about the future from physics you understand today (even if imperfect).

  73. I also want to point out, RW, that we can easily cook humanity down here on Earth if we allow sufficient energy to accumulate. The pdf you mentioned stated that we have presumably measured (or maybe it’s calculated) that the amount of energy leaving the TOA is not the same as is entering (despite being very near steady state). Such a difference (assuming a net increase) over time can lead to a very hot environment. As an analogy, if we have a large reservoir being filled very slowly at the top and drained even more slowly at the bottom, it can go from nearly empty to rather full despite the small trickle coming in and the even smaller trickle going out.

    Also, when you have a larger energy reservoir and potentially greater temperature imbalances, you open the door to more powerful and more frequent “bursts” of energy (e.g., more powerful lightning and tornadoes).

    • It doesn’t work like this. Only the Sun adds energy to the system and no energy is trapped – its exit from the system is just delayed (and not delayed for very long). If the surface is to warm by ‘x’ amount, it has to be receiving ‘x’ amount more energy and this increased energy flux into the surface has to be continually replaced or else the surface will lose energy and cool back down.

      You can’t simply create more energy out of thin air, or just by arbitrarily claiming “it will come from the feedback”, as many seem to be doing on this subject.

      • Listen to what you are saying. You are saying the Earth’s atmosphere cannot trap energy.

        You are making this up. It even goes against the analogy of the covered pot on the stove used by Spencer.

      • You are saying, essentially, that we have the same temperature today in our atmosphere as we would if there were no atmosphere. The pdf you linked to on those very pages says the exact opposite.

      • I don’t know if you are playing around wasting people’s time, but let me ask this. What justification do you have to say that energy is not trapped in our atmosphere?

  74. Barry,

    Let me pose the issue/question to you from another angle. If, to cause the 1.1 C ‘no-feedback’ rise from 2xCO2, about 3.7 W/m^2 comes into the surface directly from CO2 ‘forcing’ and the remaining 2.3 W/m^2 comes from the current average opacity of the atmosphere, where is the additional 10.6 W/m^2 required for the 3 C rise coming from (16.6 – 6 = 10.6)??? In other words, where is the energy coming from that is supposed to be causing all the enhanced positive feedback warming?

    Can you specifically explain how about a 1 C rise in temperature causes a +10.6 W/m^2 flux into the surface? If you think it will come primarily from increased water vapor, are you claiming that the water vapor absorption will increase by 10.6 W/m^2 from a 1 C rise in temperature (actually more than 10.6 W/m^2 because half of what’s absorbed escapes to space regardless)???

    Surely, you are aware that the distribution of water vapor in the atmosphere is not homogeneous, but highly dynamic and constantly changing spatially and in time – all the time, right?

    Because of this, how does the 1.6 to 1 ratio from solar forcing not already account for the lion’s share of water vapor feedback in the system?

    • RW, before I read this comment, I hope you explain your justification for saying the atmosphere does not trap energy. I think you are wasting people’s time here. I don’t know how much time people have to read over your broken problems to try and fix your broken conclusions, only to have you reword things again.

      Honestly, are you saying the atmosphere doesn’t trap energy? If you are saying this, let’s just get down to that simpler problem instead of these other more elaborate problems that rely on such broken assumptions.

      • RW, you are pulling explanations out of the air that make no sense.

        It would get pretty boring if I stood here and said:

        Barry, why did my dog have 3 puppies when I fed her 1 kg of food daily for 1 year, yet the next year she had 7 puppies even though I fed her 1.2 kg daily???

  75. Barry, why did my dog have 3 puppies when I fed her 1 kg of food daily for 1 year, yet the next year she had 7 puppies even though I fed her 1.2 kg daily??? Isn’t energy conserved????? How could she get more puppies if energy is conserved????????????????????? 1/3 does not equal (1.2)/7 !!! How can you justify this ratio perversion? I continue to ask climate scientists this, and they haven’t given me a good answer I would accept.

    RW, stop asking questions that are not supported by reality. We have a process that has been shown to be a powerful analytical tool. You are not using it. You are making things up, just like I made things up with that puppy analogy.

    RW, are you being paid to fill up forums with complaints? Is this your way of creating “doubt” in the scientific community?

    …blah, she had 3 puppies one year and 7 the next. Energy is not being conserved .. !!!#$#@$@#$@#$#@$!@$@#$@#$

  76. Jose_X,

    “Listen to what you are saying. You are saying the Earth’s atmosphere cannot trap energy.”

    Yes, this is exactly what I’m saying. What GHGs and clouds in the atmosphere do is slow down the release of energy emitted from the surface by re-directing some of it back to the surface, requiring the surface to emit more energy in order to emit the same amount back out to space as is arriving from the Sun. No energy is being trapped anywhere or the surface temperature would be continuously increasing indefinitely. If you doubt this, just think about how much it cools down overnight. If GHGs and clouds were trapping energy in the system, this would not happen.

    Now, this does not mean that an increase in average atmospheric opacity can’t raise the equilibrium surface temperature. It can, but no energy is trapped anywhere.

  77. >> No energy is being trapped anywhere or the surface temperature would be continuously increasing indefinitely.

    WHAT!

    Food stores energy from the Sun. Do you see an apple hanging on a tree increase its temperature indefinitely?

    >> If you doubt this, just think about how much it cools down overnight. If GHGs and clouds were trapping energy in the system, this would not happen.

    WHAT!

    The fact that we can cool overnight is because energy had been trapped. If we didn’t trap energy, we couldn’t cool down.

    Hello?

    • Apparently you have a different definition of the word ‘trap’ than me.

      • Yes, we might need to go back to physics 101 to try and discover why you keep asking your questions.

        Trap is a loose term used by many to mean that something goes in somewhere and doesn’t come out right away. You could be trapped for minutes, days, or years. It’s a common nonscientific term.

        Do you want to explain what you meant by trapped?

      • We can use “storage” instead of “trapping”. Deal?

        Heat from the Sun is stored in our atmosphere. Doubling CO2 over what we have today will lead to more energy stored in our atmosphere.

  78. RW,

    CO2 in our atmosphere leads to a higher equilibrium point in temperature, just as we add blankets on our bodies on cold nights to keep more heat near our bodies and less escaping into the night. This imbalance of less heat escaping than what is being generated by our warm bodies (biological processes inside of us) only lasts for a little while. At some point a higher equilibrium temperature is reached and heat into our little blanket atmosphere equals heat lost.

    We have life on Earth as we know it because of the heat-trapping gases. Adding more of these heat-trapping gases leads to more heat trapped and higher temperatures, which leads to worse weather and many other things, some potentially catastrophic if the temperature rise is sufficient.

    • RW, consider restating your problem using the equations used by the IPCC. Otherwise, there is no basis to justify why some ratio should or should not hold.

      Energy conservation is fundamental, but you must analyze the system properly first. You must make sure you are measuring the proper quantities and using the proper formulas.

      Just throwing numbers out there is a waste of time. You must justify the connection among the values being measured.

  79. Jose_X,

    By the word ‘trap’, I generally mean to stop and hold in place.

    • To prevent from eventually escaping into space.

      • To make life easier, I’d rather use “storage” or “store”. How do you define either of these, and let’s consider using them instead.

        Also, things can be stored (trapped) for a **period of time** and then no longer trapped as the environment changes (eg, as I unlock your jail cell).

  80. Jose_X,

    “We can use “storage” instead of “trapping”. Deal?

    Heat from the Sun is stored in our atmosphere. Doubling CO2 over what we have today will lead to more energy stored in our atmosphere.”

    No, I do not agree. No energy is stored either. This is getting silly though. I’m afraid I can’t help you anymore if you don’t know what I mean.

  81. >> No, I do not agree. No energy is stored either.

    Very well, don’t agree. You can use whatever inventions you want. I was just trying to help. You seemed rather confused by something that should not have been such a problem to someone who presumably has done some modelling and physics before.

    In case you want to take a look, http://en.wikipedia.org/wiki/Energy_storage . You will notice our atmosphere is loaded with matter capable of storing energy in many forms.

    You should also read the pdf you linked, since it may help you understand how our atmosphere helps keep Earth a more stable and hotter environment than would be the case without an atmosphere.

  82. I don’t understand something about Spencer and Braswell 2008. I assumed that their point was simply about regression dilution, where adding noise to a predictor biases the regression slope towards zero.

    But they regress -alpha*T + N against T, where T is the independent variable, right?

    Their equation 4 expresses the difference between the true alpha and its estimator as Sum(NT)/Sum(T^2), and they comment that “It is apparent from Eq. (4) that one gets a biased estimate for to the degree that the summation of NT is nonzero.” But the summation of NT can be either positive or negative, so there is no reason alpha-prime should be biased low.

    Am I missing something obvious?

  83. Barry,

    Are you still participating in the discussion here?

    • Hi RW,

      My car died, and I had to shop for a new one, so I’ve been mostly out of commission for a few days. I’ll try to get back in the swing in the next day or two.

      • Sorry to hear that. I hope you find a new car you like.

  84. RW, in case you are being serious in this discussion and not just trying to waste time, a quick googling:

    http://www.gfdl.noaa.gov/bibliography/related_files/ao7601.pdf : First page makes numerous references to energy storage, such as, “[f]irst of all, the oceans and to a lesser degree the atmosphere serve as reservoirs where large amounts of energy are stored during summer and released during winter.”

    Notice how we are talking about storage and about a time period of a year.

    Similarly, energy above and beyond the amount that would exist without an atmosphere is stored in our atmosphere continually year after year (so that our nights and Winters don’t get as cold as they otherwise could get).

    Another mention of energy storage and net energy changes:

    http://www.sciencefile.org/system/component/k2/item/2668-a-new-global-climate-change-equation.html : “All planetary warming or cooling in any period occurs because there is a difference between incoming and outgoing energy, an energy imbalance. The imbalance results in changes to the amount of energy stored, mostly as heat in the atmosphere and oceans, in Earth’s climate system. If more energy enters the atmosphere from the Sun than is reradiated back out into space – the planet warms. Conversely, if less energy enters the atmosphere than leaves – the planet cools. Thus Earth’s energy budget can be completely defined in three terms. In any period, energy in is equal to energy out plus the change in the amount of stored energy.”

    Another:

    http://geography.uoregon.edu/envchange/clim_animations/ : First paragraph makes reference to energy storage “and the change in energy storage in water or substrate on land.”

    A reference to “trap”:

    http://www.columbia.edu/~vjd1/carbon.htm : “[Carbon dioxide] is a greenhouse gas that traps infrared radiation heat in the atmosphere.”

    The greenhouse effect:

    http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/grnhse.html : “The greenhouse effect refers to circumstances where the short wavelengths of visible light from the sun pass through a transparent medium and are absorbed, but the longer wavelengths of the infrared re-radiation from the heated objects are unable to pass through that medium. “

    • @RW, greenhouse part 2

      The greenhouse effect link just given gives the example of being inside a car with the windows up. The visible light energy gets in, but the infrared energy doesn’t leave, or rather, it leaves slowly. [But why doesn’t the temperature go to infinity? I’ll get back to this below.]

      Back to the atmosphere on Earth.

      The lower atmosphere radiates heat as you mentioned (this may be related to the 155 figure you stated). Some of this is energy from the ground. This absorbed energy is radiated in all directions but only some goes up above that lower atmosphere.

      The energy that goes down comes back up to re-radiate with some part leaving again. This second cycle release comes after a delay from the first cycle release. The third such cycle with what was left over (sent downward) happens after yet another delay. In this way, if we shut the sun off, we see that after many up/down cycles, the energy that was trapped initially leaves in decreasing parts over the sum of many delays. I believe a simple model of this would give exponential decay. [The “cycles” analogy is a simplification.]
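
      As a rough numerical sketch of that decay (my own toy numbers, with a made-up re-absorbed fraction), the bookkeeping looks like this:

      ```python
      # Toy "cycles" model of the up/down bookkeeping above: each cycle a
      # fraction f of the remaining energy is re-directed downward and stays;
      # the rest escapes upward. What remains shrinks geometrically, which is
      # the discrete version of an exponential decay.
      def remaining_after(cycles, f=0.4, e0=100.0):
          """Energy (arbitrary units) still held after a number of cycles."""
          energy = e0
          for _ in range(cycles):
              energy *= f      # only the re-directed fraction survives a cycle
          return energy

      print([round(remaining_after(n), 2) for n in range(5)])
      # -> [100.0, 40.0, 16.0, 6.4, 2.56]
      ```

      A larger f (more greenhouse gas, more re-direction downward) means the energy lingers for more cycles before it is gone.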

      However, the sun is not turned off on Earth. Before all of this energy accumulated during daylight hours would leave at night, we have another day arriving. Also, even during the night, we have convection currents from the side of the Earth that is being heated which make their way to the night side. Thus, the temperature never decays fully.

      So the effect is that the NET energy that once came in never fully dissipates, and we have year after year a net increase in energy over what we would have without an atmosphere. This is energy storage from the point of view of the atmosphere (and even from the pov of a particular molecule, there is a certain time lapse between energy absorption and release).

      Evidence of this storage is (a) the higher temperature readings near the ground all throughout the year in contrast to the upper atmosphere and (b) the sharp changes in temperature above the lower atmosphere where we’d find the greenhouse gases (including water vapor).

      Let’s get back to the greenhouse effect we observe inside a car, which leads to higher temperature (and not infinite temperature).

      When we first close up the car, energy comes in at some rate but leaves more slowly (the windows act like greenhouse gases). More energy keeps coming inside, leading to extra energy that must also “decay” towards the outside environment temperature. Over and over this happens, raising the temperature inside the car; at some point, however, the energy released through the windows is high enough, due to the higher temperature inside, to match the constant energy coming in from the sun (and from the cooler outside air in contact with the window), so a balance is established at this higher temperature. Yes, at higher temperatures, the windows allow a greater amount to escape.

      Whenever we have a “surface boundary” that slows heat transfer, we can end up with different temperatures on either side of the surface. In the case of the lower atmosphere, a “boundary”, we have the higher temperature below, next to the Earth, which is warmer (because of heat absorbed from the sun, volcanoes, etc), and the colder temperature above, next to empty cold space.

      Some other notes on the analogy:
      — Another contributor to the higher temperature inside is the heat from humans inside the car if there are any.
      — The car hull (and to a lesser degree tinted windows) serves to block most of the sun energy. As the day starts (if no humans are in the car and some other ifs), this may serve to keep the temperature inside the car a little cooler than the increasing outside temperature. You observe this effect when you walk inside a large closed off cavity early in the day after the temperature outside has risen some but the cavity is still nearer to the night time temperature. [An example is closed off non air conditioned stairwells. Another example is a cave.]
      — In contrast, clear windows allow some of the sun energy inside. This is captured by the car interior (including molecules in the air) and released as (I believe) blackbody radiation. We already have blackbody radiation with the car mostly open, but the release to the outside slows down when the car is closed and the windows are up. The “cycles” above mean that molecules are radiating photons based on other photons captured both from the sun and from the blackbody radiation of other car molecules.
      — Windows allow visible radiation from the sun to enter and visible radiation from the inside of the car to leave, but they block most of the infrared radiation crossing them. The problem is that the radiation coming from the sun has a high visible component relative to its total energy, at least in comparison to the visible fraction of the blackbody radiation released from inside the car.

  85. Jose_X,

    You’re engaged in a semantics argument. Of course, energy can accumulate in the system, but that does not mean it’s ‘trapped’ in the system.

    • I’m not playing games (or at least I haven’t yet), but it’s good we finally are understanding each other a little better.

      Since I am new to this, I really want to understand and that will take time. I am sure you have much to learn as well.

  86. Barry,

    No reply to my post on September 11, 2011 at 4:39 pm???

  87. Hi RW,

    The extra flux into the surface comes mostly from backscattered radiation.

    • This is your answer?

    • From what and where, and how are you coming up with the quantifications?

      I don’t feel you really answered my questions in that post.

  88. Barry,

    Do you at least see the discrepancy? How can watts of GHG ‘forcing’ have a greater ability to warm the surface than watts from the Sun, especially given the non-linearity of the system – the ratio of surface emitted power to net incident solar power decreases as the net incident solar power and surface temperature increase?

  89. Barry,

    Before you answer, let me approach this issue from yet another angle. The IPCC defines climate sensitivity in units of degrees C per W/m^2, right? That is about 0.8 C per 1 W/m^2 of incremental radiative ‘forcing’ (0.8 x 3.7 = 3C), right?

    Why even define sensitivity this way? Units of power in W/m^2 are already tied to degrees C by the Stefan-Boltzmann law! Do you see how defining sensitivity the way they do essentially hides its applicability to solar forcing? If they instead defined sensitivity as a ratio of power densities, it would be very easy to see they are claiming watts of GHG ‘forcing’ have a greater ability to warm the surface than watts of solar forcing, wouldn’t it?

  90. Many people have a hard time accepting this because of its relative simplicity. They think it couldn’t be that simple.

    Namely, that the sensitivity to 2xCO2 has an upper limit of about 1.1 C, because there is no physical or logical reason why the system would respond to an additional 3.7 W/m^2 more powerfully than it does to the original 98+% (240 W/m^2) from the Sun (let alone 3 times more powerfully!).

    The only way to reconcile this discrepancy is to claim watts of GHG ‘forcing’ have a greater ability to warm the surface than watts from the Sun. As you can see, I can’t accept this.

    • RW, you make no sense.

      If I give someone a little tap and that person turns around and decks me across the face, I just provided an example where a little stimulus (in W/m^2 or whatever related unit you want) results in a much larger energy release.

      The new energy comes from stored energy, eg, energy stored in body tissue (eg, blood stream, fat cells, muscle cells, etc).

      Thus, even with a ratio of apples to apples (as you seem to want), we still can get arbitrary values depending on the system at that point in time.

      Tomorrow, that person might not push back as hard. Our linear analysis (a constant ratio expresses a linear relationship) is only a hint of the current rate and says nothing about future or past rates if the overall function is nonlinear. The constant rate comes from the instantaneous derivative of a nonlinear function.

      Thus, again, what you say about 3x ratio doesn’t make sense because that 3x is an average and we are looking at instantaneous values of a nonlinear function.

      You are giving no analysis or equations for the Earth system, so you can’t argue very much here. As far as you know, the Earth system is like a person that erupts when prodded or, in contrast, like a timid individual that absorbs the extra energy. You have to analyze and model the system to get a decent idea of what is happening. So it makes no sense to say that we can’t have a ratio of [pick your number here]. To argue that, you need to say something more meaningful, because it is too easy to give counterexamples to your statements above.

      • >> you make no sense.

        Just wanted to clarify I was referring to my interpretation of what you said about how the ratios can’t change or must be x or y just because. If I understand correctly, I don’t see how that comment makes any sense in the general case of a physical process that could potentially be modeled by an arbitrary function. Obviously (and assuming I understood most of your concern), it is reasonable to be confused about certain things, but that won’t make the math any different. Your question seemed to be about something where it is easy to find contradictory examples (and not a question of shades of gray or subjective judgements).

      • So what’s so special about the next few watts that the system will respond to it in a 3x more powerful way?

        Can you quantify specifically why the system will respond to the next 3.7 W/m^2 so much more powerfully, and specifically why it does not respond that way to the original 98+% (240 W/m^2) from the Sun?

        You should also explain why it doesn’t take 1077 W/m^2 of surface power to offset the 240 W/m^2 of incident solar power.

      • >> That the sensitivity to 2xCO2 has an upper limit of about 1.1 C

        I have no idea how you came to this conclusion.

      • This is getting silly again. I didn’t say or imply that it’s hypothetically impossible.

        I’m asking what specifically is the physical basis? So far no one, including Barry, seems to be able to provide any, other than vague, generalized statements like “it comes from the feedback” or “from downward LW”. This kind of incomplete and sloppy scientific reasoning seems to be pervasive in the pro-AGW climate science community.

        You can’t create more energy out of nothing. It has to be coming from somewhere specific and from some specific physical process or processes that can be corroborated by real, measurable physics and data.

      • I gave other analogies below, but to get back to the pushing match. Pushing someone with 20 Newtons won’t necessarily result in the same instantaneous ratio in reaction (ie, a hypothetical +1 change) as would pushing with 10 Newtons.

        The relationships are not simple or linear. So instantaneous rates will generally differ from each other (at different points in time, for example) and from the averages.

        If you give me the formulas, I will try to be more specific. If you don’t have the formulas, then I don’t see why you would eliminate some options over others.

        OK, let me return to your early comments:

        >>The IPCC claims that 3.7 W/m^2 of ‘forcing’ from 2xCO2 will become +16.6 W/m^2 at the surface (+3C), requiring an amplification factor of about 4.5 (16.6/3.7 = 4.49). Post albedo power coming in from the Sun is only amplified by a factor of about 1.6 (390/240 = 1.625).

        You are saying that 390 results on Earth *at the time when* 240 is at TOA. You aren’t saying anything about how that 390 got built up as that 240 got built up. You also aren’t saying anything about the current rate of growth. It’s possible that for our current system/atmosphere, 100 at TOA today would give 100 at Earth. I mean, without equations, there is no way to know. Then at 200, we might find 300 at Earth. Etc. So the instantaneous ratio today might be 4 rather than the smaller ratio you get by averaging from way back when.

        Look at the T=t^2 example. The average (secant) “slope” from t=0 to t=10 is very different than the instantaneous (tangent) slope at 10. The secant average would be (10^2 – 0)/10=10. The instantaneous tangent slope at 10, however, would be 2*10 = 20. The current instantaneous rate is always faster than past such rates and that is why the rise in T is faster than linear (for this example). Draw a parabola and measure the inclination of the tangent at some point vs the inclination of a line drawn from the bottom point to that same point. [I wish I had a graph.]
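
        A quick numerical check of those numbers (plain Python, nothing climate-specific):

        ```python
        # Secant (average) slope vs. tangent (instantaneous) slope for T = t^2.
        def T(t):
            return t ** 2

        def secant_slope(t0, t1):
            """Average slope between two points, (T(t1) - T(t0)) / (t1 - t0)."""
            return (T(t1) - T(t0)) / (t1 - t0)

        def tangent_slope(t, h=1e-6):
            """Instantaneous slope, approximated by a central difference."""
            return (T(t + h) - T(t - h)) / (2 * h)

        print(secant_slope(0, 10))   # average slope from t=0 to t=10: 10.0
        print(tangent_slope(10))     # instantaneous slope at t=10: ~20.0
        ```

        The average slope over the whole interval is 10, while the slope right at t=10 is twice that.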

      • OK, one more time. Look at this table of made up values. The first column is the TOA value and the second is the Earth surface value:

        0 0
        50 40
        100 100
        150 180
        200 280
        240 390
        250 400

        Note the pattern I used (roughly): with no sun, the earth has no heat (let’s pretend). Adding 50 to TOA at first leads to a 40 rise on Earth. Adding another 50 afterward leads to a 60 rise. Adding another 50 leads to an 80 rise. Another 50 leads to a 100 rise. Another 50 leads to a 120 rise (and 240 falls just before that last step, where the 40 from 200 to 240 leads to a 110 rise).

        This shows that the growth was not linear. The “instantaneous” rate of growth today would be about 110/40=2.75. Meanwhile, the overall average rate would be 390/240=1.625.

        So we note that the Earth is heating up faster per addition of new energy at TOA.

        Now, I made this up. The TOA energy has not changed that much, but it shows a plausible scenario if we today were to play god and start the oven slowly and measure the reaction on earth.

        The 3 C represents a nonlinear (marginal) rate. The 1.6 you calculated is an average over the entire 240 W/m^2, not the response to a hypothetical extra 1 W/m^2 change.
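
        For the record, here are the two ratios computed from the made-up table above:

        ```python
        # Made-up TOA vs. surface values from the table above (illustrative only).
        toa     = [0, 50, 100, 150, 200, 240, 250]
        surface = [0, 40, 100, 180, 280, 390, 400]

        # Overall average ratio at the 240 -> 390 point:
        average = surface[5] / toa[5]                              # 390/240 = 1.625

        # "Instantaneous" ratio over the last step before that point (200 -> 240):
        marginal = (surface[5] - surface[4]) / (toa[5] - toa[4])   # 110/40 = 2.75

        print(average, marginal)   # -> 1.625 2.75
        ```

        Same data, two very different ratios, which is the whole point about average vs. current rates.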

        As another example of calculations:

        You drop a ball from a tower on some planet. The time vs the distance traveled is shown:

        0 0
        1 1
        2 4
        3 9
        4 16

        OK. Let’s compute the “instantaneous” rate at t=4: (16-9)/(4-3)=7. Now, for the total average rate during this whole time period: (16-0)/(4-0)=4.

        Do you see how the overall rate says that 4 meters pass between each second (ie, 16 meters in 4 seconds) while the current rate is faster at 7 (ie, in the last second, we went 7 meters)?

        The Earth is warming up faster at today’s sun temperature than it would if the sun energy was much less.

        Now, again, this is hypothetical, and in fact the atmosphere has changed much even while the sun energy remained about constant.

        A better analogy would be how, by covering a pot of heated water, we can get the air above the water to be hotter than if we left it open. And the rate of rise will be greatest just as we are about to fully cover the pot, not when we first start to cover it (see the example of the water and the door shutting at different angles). We still have the same heat underneath, but the rise in temperature per extra inch of cover is faster when we are near covering the whole pot than when we covered its first inch. It’s a nonlinear relationship between the temperature and the inches we advance the pot cover from open to fully shut.

    • Let me add another analogy.

      Let’s say the earth has been loaded with nuclear reactors that will “explode” if the sunlight hitting them goes up by a very small amount or if the temperature average over a year goes up by .1C.

      Then the temp does go up and these start to explode. At times the Earth air surface temp is as high as 200C. Overall, the average is much higher than now.

      So what happened was that a small signal led to higher temperatures from the release of stored energy.

      This example is not like GHG (and is rather contrived), but the point is to show that we can end up with higher temperatures on our atmosphere from small triggers.

      Without modeling GHG, you won’t know that temperature will rise 1 C or 10C from some particular trigger. And either of these is possible a priori.

      Now, as Barry mentioned, backscatter is one way to look at how GHGs behave a bit like the nuclear explosion example I gave. We essentially end up with energy hitting the Earth in addition to the direct energy from the sun. In one case, this happens because of atomic energy released. In the other, it’s because of ordinary molecular vibrations from GHGs that capture some of the upward-going energy from the planet and nearby blackbody entities and re-release it back down. The GHGs act a bit like a door that allows energy to come in fast (at a wide range of frequencies that can sneak in the door) but to exit slowly. Think of skinny, fast-moving people coming into a room and getting embraced by bigger, fatter people. The two in an embrace cannot readily leave. Thus, the room keeps getting more and more filled with moving people and energy. An even cruder example would be a roach trap designed to let roaches check in but not check out. The “temperature” within the box will keep rising even if little energy trickles in.

  91. Temperature today (or at any point in time) is not our only problem. Consider that the ice that is melting faster than expected, if it weren’t there, might lead to a faster rise in temperature. The heat it would have absorbed would instead remain in the atmosphere, leading to a higher average temp and greater future climate sensitivity values. [Of course, ice over land melting in large quantities would pose the other problem of raising ocean levels, which might potentially destroy many coastal cities. Structures like levees designed to withstand a likely scenario at the time they were built may no longer serve their purpose.]

    There is much we don’t understand about nature and life, and any stark changes in balance can lead to us potentially losing things we always took for granted (eg, perhaps a large drop in fresh water, fertile soil, key organisms, air clean of certain agents, .. whatever).

  92. RW,

    Take a function, T=t^2.

    It has a derivative of 2*t.

    At time t=0, the rate of change of this function is the derivative evaluated at t=0. That rate is 0.

    At time t=1, the rate is 2.

    At time t=2, the rate is 4.

    At time t=3, the rate is 6.

    Thus, we see with such a simple example that the rate of change at the time of 3 is:

    — 1.5X the rate at time 2,
    — 3X the rate at time 1, and
    — infinitely faster than at time 0.

    Also, the average rate from 0 to 3 is (T(3)-T(0))/3 = (3^2 – 0^2)/3 = 3; thus, we see that the (instantaneous) rate at time 3 is 2X the average rate from t=0 to that point in time.

    The function is nonlinear, so the rate of change (the ratio) changes over time. As with essentially any nonlinear function, the rate of change at one point in time will generally differ from the rate at the next point we measure. [The example I gave above always has that rate growing, but that need not be the case in general, eg, y=sqrt(x) always has the rate decreasing, and y=3x^3-2x+1 has both increasing and decreasing sections.]
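
    Those ratios are easy to verify in code:

    ```python
    # Rate of change of T = t^2 is its derivative, dT/dt = 2*t.
    def rate(t):
        return 2 * t

    assert rate(3) / rate(2) == 1.5    # 1.5x the rate at t=2
    assert rate(3) / rate(1) == 3.0    # 3x the rate at t=1

    # Average rate from t=0 to t=3 vs. instantaneous rate at t=3:
    average_rate = (3 ** 2 - 0 ** 2) / 3
    assert rate(3) / average_rate == 2.0   # instantaneous is 2x the average
    print("ratios check out")
    ```

    Nothing special is happening physically; the differing ratios fall straight out of the nonlinearity.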

  93. A ball falls to the ground similar to that function mentioned earlier. The distance grows with the square of the time.

    Even though we have the same ball and the same gravity, the distance changes faster and faster per same increment of time. So, yes, it is very possible and very normal for the same thing (same intensity of gravity) to have a different effect on the same item at different times.

    **At the later time, the distance is growing faster under this same Earth gravity (for the same ball).

    Similarly..

    **At a later time, the temperature is growing faster under the same sun intensity (for the same “surface”).

    See the identical pattern?

    By the way, “instantaneous” rate of change is like looking at two frames of a video shot at something. The time delta is very small (it’s approximately instantaneous). In contrast, an average over a long period of time would be like a slow speed video that shoots each frame say a minute apart.

    A ball dropping has a very slow change in distance as soon as it is released but after a few seconds is going very fast if we look at two consecutive frames. So the instantaneous rate is slow at first and gets faster and faster (assuming no wind resistance, etc). If we wait longer, the speed is even faster. At any given point in time, its instantaneous rate is much faster than its average rate because the average rate includes the time when its rate was very slow (at the beginning) and every other point in between leading up to the current fastest rate. It’s like the average of 1, 2, 3, 4, 5, 6, 7 (which is 4) is always less than the largest value (7). In this case, 7 (like the instantaneous rate.. high speed camera) is almost 2x of the overall average (the slow speed camera), but using different numbers we could end up with 3x, 10x, 1000x, or something even higher.

    So it makes no sense to categorically claim that the current rate of change of something under similar conditions cannot be 3x faster than the old rate (or the average) under similar conditions. [Similar here would be a similar sun intensity, or a hypothetical small change from it. Just replace sun intensity with gravitational intensity. The gravity case is not the same system or mathematical model, but it shows that your claim of impossibility makes no sense as a universal rule without looking at the specific equations.]

    I could try to be clearer if this description doesn’t make sense (or has mistakes), and it is possible I am not understanding your question.

  94. Another analogy.

    A door/latch allows water to escape a container. If the door is at 180 degrees, there is no extra obstruction to the water as it pours away (as if the door wasn’t there). At 90 degrees there may be some but not much at all. At 45 degrees there is a more noticeable change. By the time we get to 10 degrees, we might really notice the difference and closing it 1 extra degree at a time will make increasing amounts of a difference until the door is entirely closed. The relationship is not linear. Clearly, at some points, a similar addition can create a much greater effect than earlier similarly sized additions.

    And we also note that the door is clearly related to the amount of water; however, the “units” in which door obstruction is measured is at least a bit different than how water flow is measured.

    This is not a direct analogy, but it is not too dissimilar. I think it addresses what appears to me to be the thrust of your complaint. [I am not sure about the climatology equations, so if you provide more clues, maybe I can understand better.]

  95. Jose_X,

    “I have no idea how you came to this conclusion.”

    Based on how you are responding to me, I’m not surprised. I suggest you go back and read everything from the start of the discussion again, as I think you likely missed or misunderstood some critical things along the way.

    • Look at the last two comments I wrote above. I use a table of values (eg, look up “50 40”).

      You are confusing instantaneous rate of change with average rate of change. You are comparing the change in temp per 1 W/m^2 and assuming this rate must be the same as the average ratio of 390 over 240.

      • I’m not really. The system, as Barry correctly pointed out, is non-linear. The problem is the non-linearity is in the opposite direction of the positive feedback case on incremental forcings (i.e. incremental warming). The evidence of this in the data is the ratio of surface emitted power to incident solar power decreasing as the climate warms.

        BTW, there is a specific physical reason for this: as the climate warms, it evaporates more and more water from the oceans, which removes more energy from the surface (as latent heat), and the evaporated water condenses to form clouds, which block more of the energy from the Sun, resulting in even more surface cooling.

        In your last two comments, you’re really just talking in hypothetical terms.

  96. >> You can’t create more energy out of nothing. It has to be coming from somewhere specific and from some specific physical process or processes that can be corroborated by real, measurable physics and data.

    Who is creating energy? If you excite more and more moving particles in a box without allowing them to slow down, then you will raise the temperature. And you can achieve this by using a good insulator and simply adding little bits of energy at a time.

    For example, one mistake appears to be in thinking that the energy must only come from that second in time, rather than recognizing that it can be accumulating slowly over time.

    • >> accumulating slowly over time.

      I’m going to bed now so may not reply right away, but first let me add this one note.

      When atoms absorb energy, it takes time for them to release it. With perfect information, we could calculate the quantum state probability distribution (wave equation solution) and from it derive the expected time to release of energy and drop to a less excited or to a ground state. The point is that it takes time, even if it is very fast by our standards. Now, consider how many particles are in the atmosphere that might absorb LW released by the Earth, and you can get an idea that it might take a long time before all of that absorption and release cycles to eventually “dissipate” into TOA. Before it has fully dissipated, however, we get more sun energy. The point is that if we didn’t have GHG, we would have less energy bouncing around in the lower atmosphere, and this would translate to a lower temperature.

      So the key to this temperature rise was the GHG “insulator” “trapping” heat near the Earth for a longer period and not any increase in the sun’s energy. Without changing anything else, we would get more warmth because of the GHG effect. Note that the 240 stays the same, yet the 390 might keep rising. If we calculate instantaneous rates again in the future, we might get a higher value than 3 C (after converting to temp).

      It’s not a simple 1 TOA source, 1 Earth blackbody example. There are many blackbodies radiating, and as we add more GHG, we get even more blackbodies to model.
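      The “more blackbodies” picture can be made concrete with the standard N-layer gray-atmosphere toy model (a deliberately crude textbook sketch, not anyone’s actual model in this thread): each fully absorbing layer radiates both up and down, the flux leaving the top stays fixed at the absorbed solar value, and the surface warms as layers are added.

```python
# Toy N-layer gray atmosphere (illustration only): every layer absorbs all
# longwave from below and re-emits half up, half down. Energy balance then
# gives T_surface = T_effective * (N + 1)**0.25, while the flux escaping the
# top of the atmosphere remains equal to the absorbed solar flux (~240 W/m^2).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def surface_temp(absorbed_solar, n_layers):
    """Equilibrium surface temperature with n fully absorbing layers."""
    t_eff = (absorbed_solar / SIGMA) ** 0.25  # ~255 K for 240 W/m^2
    return t_eff * (n_layers + 1) ** 0.25

for n in range(3):
    print(n, "layers ->", round(surface_temp(240.0, n), 1), "K")
```

      Adding layers raises the surface temperature without changing the 240 W/m^2 leaving the top, which matches the point above about the 240 staying the same while the 390 rises.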

  97. RW,

    I re-read the whole conversation here. Here are my observations.

    1. Your only argument boils down to a claim that the gain (the ratio of surface radiation to incoming solar radiation) is the same going from 0 to 1 K as it is going from 288 K to 289 K.

    2. You keep denying that’s your argument, so I wonder what I missed. Then, after a lot of roundabout talk, you say the same thing again.

    3. You sweep all the paleoclimate evidence away, saying it isn’t good enough. Sorry, but I usually pick the models that explain more evidence, not less.

    • 1. How do you figure? What is it that I’ve said that makes you think this?

      2. It’s really not my argument. I’m not sure what you may have missed.

      3. Yes I do for the many reasons I outlined.

      • >> 1. How do you figure? What is it that I’ve said that makes you think this?

        To give one example, the following does:

        “The IPCC claims that 3.7 W/m^2 of ‘forcing’ from 2xCO2 will become +16.6 W/m^2 at the surface (+3C), requiring an amplification factor of about 4.5 (16.6/3.7 = 4.49). Post albedo power coming in from the Sun is only amplified by a factor of about 1.6 (390/240 = 1.625).”

        Look at how you calculated that 1.625 “amplification factor”. That is how you calculate average rate.

        Then you calculate 4.5 essentially as an approximation to the current instantaneous rate.

        And then you say,

        “You can’t create more energy out of nothing.”

        This, along with other comments (such as references to “3X”), suggests to me that you think the 1.6 and the 4.5 are supposed to be the same.

        That is why I gave examples of system responses that don’t show a linear relationship and where the average rates are not equal to the instantaneous rates.
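        The distinction is the same one from introductory calculus; here is a tiny numeric illustration (a generic nonlinear curve, not climate data):

```python
# Average rate (secant slope from the origin) vs. instantaneous rate
# (tangent slope) for a nonlinear response y = x**3. For a curve like this
# the two are never equal, and neither one is "wrong" -- they answer
# different questions.
def y(x):
    return x ** 3

x = 2.0
average_rate = y(x) / x  # secant from 0: (y(x) - y(0)) / (x - 0) = x**2
h = 1e-6
instantaneous_rate = (y(x + h) - y(x - h)) / (2 * h)  # ~dy/dx = 3 * x**2

print(average_rate)               # 4.0
print(round(instantaneous_rate))  # 12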

      • Jose is right. Tom was right. I was right.

        When you take the ratio of surface radiation to incoming radiation, you are calculating the AVERAGE gain from 0 to 288 K. And as I pointed out long ago, over this temperature range the Earth would have passed through a “giant iceball” stage, and so on, where the climate dynamics would have been entirely different than they are now. Above some threshold (say somewhere below 273 K avg. temperature) water vapor would have started playing a much larger role, so perhaps the gain in this regime is now higher than it would be at much lower temperatures. And yet, the AVERAGE gain works out to about 1.6 right now. If climate sensitivity were 3 °C, and we actually doubled CO2, the AVERAGE gain would still work out to about 1.6.

        I was hesitant to say this before, because you always said you weren’t assuming a linear response. I kept listening, but you would always come around to the same argument. I’ve finally come to believe that it’s not me who isn’t understanding your argument–it’s you. (Note that Jose, Tom, and I have all come to the same conclusion about what your argument means.)

        Nobody is saying the gain from solar energy would be much different than for forcing from GHGs. We’re just saying that the incremental response to forcing is different in this temperature regime than the average over 0-288 K.

        Nobody is saying that energy is made from nothing. We’re just saying that the structure of the atmosphere changes in response to forcing. And HOW MUCH it changes is different from 288 to 289 K than it is from 0 to 1 K, and hence, different than the AVERAGE response over 0 to 288 K.

  98. >> you’re really just talking in hypothetical terms.

    I am using hypothetical examples, as I stated, but that doesn’t mean they aren’t accurate. It’s just that I can’t conduct those experiments without turning off the sun, etc. The examples convey the difference between instantaneous rates and average rates.

    >> The problem is the non-linearity is in the opposite direction of the positive feedback case on incremental forcings (i.e. incremental warming).

    It doesn’t seem this is what you have been arguing. You haven’t been arguing about a change of sign but instead about a change in ratios.

    >> BTW, there is specific physical reason for this and it’s that as the climate warms, it evaporates more and more water from the oceans, which removes more energy from the surface (as latent heat), which the evaporated water condenses to form clouds which block more the energy from the Sun, resulting in even more surface cooling.

    First of all, this is just a theory. You aren’t providing experimental evidence or a convincing experiment to go along with it.

    Let me suggest a possible variation on that story.

    Some more water evaporates but not nearly enough to affect the temperature very much. Some extra heat does go into the ocean and then makes its way to the cold regions to melt extra ice (so some of the heat is siphoned off to convert ice into water). The extra evaporation goes up and then condenses releasing the heat higher up. This does help relieve the surface temperature average a little but not nearly enough to make up for the GHG effect. In fact, dare I say that if it weren’t for this extra evaporation you mentioned, the sensitivity today might be 5.83250280734653365663466734567778453445 instead of 3.

    Now, I didn’t provide numbers for my version of the theory either, so feel free to ignore it. It was only a hypothetical, after all.

  99. >> If the surface is to warm by 3 C (from 288K to 291K), it must emit 406.6 W/m^2, which is 16.6 W/m^2 more than the 390 W/m^2 it’s currently emitting. Conservation of Energy dictates that this +16.6 W/m^2 flux has to be coming into the surface from somewhere

    Back-scattered radiation was already mentioned (i.e., energy that came from the sun earlier in time but which still has not “dissipated” into outer space… “lingering” longer because of the extra GHG).

  100. Barry,

    Maybe I haven’t explained this as well as I could have, so I will try again.

    You say:

    “When you take the ratio of surface radiation to incoming radiation, you are calculating the AVERAGE gain from 0 to 288 K.”

    Yes, agreed.

    “And as I pointed out long ago, over this temperature range the Earth would have passed through a “giant iceball” stage, and so on, where the climate dynamics would have been entirely different than they are now.”

    Yes, agreed.

    “Above some threshold (say somewhere below 273 K avg. temperature) water vapor would have started playing a much larger role, so perhaps the gain in this regime is now higher than it would be at much lower temperatures.”

    This is the part that I think you missed, that I didn’t explain quite right or that you perhaps didn’t understand. The system’s ‘gain’ becomes less and less the higher the temperature and the greater the incident energy.

    Take a look at this graph, which plots the basic climate system ‘gain’ relative to temperature:

    As the temperature increases, the ratio of surface emitted power to net incident solar power decreases. If the behavior you’re describing were true, the opposite would be the case – the ‘gain’ would increase as the temperature increased due to the increased water vapor from warmer air and increased water vapor feedback in response.

    Do you see? Above the current global average temperature (288K), the ‘gain’ is less – not more.

    This is what I mean by the non-linearity of the system being in the opposite direction of the positive feedback case on incremental warming. Each incremental watt of net ‘forcing’ causes proportionally less and less warming in the system.

    Do you also see that Spencer’s sensitivity estimates are located on the far right side of the graph above 288K (i.e. net negative feedback or a gain of less than the global average of about 1.6 from which the ‘no-feedback’ response of 1.1 C is derived)???
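    For what it’s worth, one narrow piece of this is uncontroversial and easy to check: from Stefan-Boltzmann alone (no feedbacks of any kind), each additional watt does produce a little less warming at higher temperatures. The disputed part of the argument is what the feedbacks do on top of that, which this sketch deliberately leaves out.

```python
# Stefan-Boltzmann only, no feedbacks: emitted flux F = sigma * T**4,
# so dT/dF = 1 / (4 * sigma * T**3), which shrinks as T rises.
SIGMA = 5.67e-8  # W/m^2/K^4

def dT_dF(T):
    """No-feedback warming (K) per extra W/m^2, at temperature T (K)."""
    return 1.0 / (4.0 * SIGMA * T ** 3)

print(round(dT_dF(255.0), 3))  # a colder emitter warms more per watt
print(round(dT_dF(288.0), 3))
```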

    • Here also is the system’s gain graph that I referenced earlier, with net incident solar power instead of temperature on the ‘x’ axis. Also plotted is the IPCC’s gain, for reference purposes (to show it is far outside the system’s measured bounds).

      • RW,

        I looked at the first graph when you posted it before, but …

        1. it didn’t have any units on the y-axis, so I didn’t have much idea what it was talking about,

        2. it had no context, so I don’t know where the data was supposed to be coming from, and

        3. when I backtracked to one of the parent directories, it was the website of some guy who can’t spell, and who makes some elementary mistakes about climate science.

        Color me suspicious.

  101. 1. The ‘y’ axis is the ‘gain’ – the ratio of surface power to incident solar power.

    2. It’s taken from 25 years of ISCCP satellite data (1983-2008).

    3. What mistakes, specifically?

    • Here’s one. I’m looking through his slides, and he’s trying to say that there have been times with a greater rate of temperature change than what has been going on lately. I wouldn’t be too surprised if that turned out to be true, but he’s using one local record (Vostok). A single local record could have pretty significant temperature swings that are simply due to changes in circulation patterns.

      Very Moncktonesque.

      • That’s it? You’re dismissing the graphs just because of this?

        Do you at least agree with my interpretation of the meaning of the gain graphs? That is, assuming they are plotted correctly, they show that the non-linearity of the system is in the opposite direction of net positive feedback on incremental warming (i.e., each additional watt of net incident solar power causes proportionally less and less warming).

      • I also agree that the ice core data from Vostok is not necessarily indicative of the magnitude of global change, but I think the author wouldn’t disagree.

      • Really? Here’s a quotation:

        “This indicates that the short term changes seen in both data sets is the likely result of aliasing to even faster, shorter term variability. To get an idea of an accurate historic rate of change, consider the interval from 12.5K to 12K years ago, where the temperature increased about 5˚C or about 1˚C per century. This doesn’t even include the effect of the more rapid short term changes during this interval and already far exceeds the few tenths of a degree average increase we’ve experienced during the last 100 years.”

        If the author would agree with me, then he is playing with his audience.

        Much of the presentation is also about how temperature changes preceded changes in CO2 in the ice cores. This is totally non-controversial, and yet the author presents it like it was some kind of new revelation. If CO2 has been a feedback in the past, you have to try to nail down the physics going on to predict what will happen when it becomes a forcing agent. No physics here. Just correlations. Once again, the author is either an ignoramus, or is trying to manipulate ignoramuses.

        As for the graph you linked, I don’t know where he got the data, how he analyzed it, or anything. Has it been published in a peer-reviewed journal? Or am I just supposed to believe it because some guy put it on the Internet?

        So yes, I’m going to dismiss it, at least for now. If I were ever feeling motivated, I might try to figure out how to reproduce the analysis. But given that it’s from some unknown person who makes very poor arguments in other cases, and who hasn’t bothered to subject his analysis to peer review, and given that you’re the only one I’ve ever heard pushing these kinds of arguments… I’m not feeling motivated enough, at the moment.

      • Here’s another question about the graph here:

        The grey data points seem to form lines that have positive slopes, but George White has drawn some green and blue dots that cut across the grey lines. Intuitively, I would expect the separate grey lines to be data points from individual locations. (I may be wrong. Maybe you can clarify, RW.) If so, then it seems to me that almost any given location would have a positive change in gain with a rise in temperature.

        Anyway, maybe you can now see why that graph doesn’t do it for me. I need a lot more context to make any sort of judgement.

  102. Here’s another example. He says to solve global warming, “all we have to do is wait” because “another ice age is eminent [sic].” Yeah–expected in about 30,000 years.

    • How do you figure this? The interglacials typically only last about 10,000 years, and we’re about 10,000 years into this one, are we not?

  103. Looking around for anything about this George White guy, I think I found an explanation for what he’s doing. I think he’s looking at very short-term gains from sinusoidal inter-annual solar forcing. But given that some feedbacks in the climate system are much slower than that, such an analysis would ALWAYS bias the sensitivity low.

    So yes, I’m dismissing your graph.

    • Roy Spencer knows about this. It takes a fairly long time to heat up a lot of water, for one thing.

    • I don’t know if this is related, but looking at graphs of temperatures over the years, it’s very possible we are currently due for some sort of pullback. At least the rate of rise has slowed down (short-term). So any data that focuses only on the last few decades (or worse, the last few years) may well show a rate of growth that keeps slowing down. It would be like taking a little bit of data from a Dow Jones Industrial Average plot near the high part of a swing. Odds are that we are headed back down after that point, at least for a while, even if any moving average of significant size is still going up (e.g., because constant inflation keeps weakening the dollar). Extrapolating from such short-term movement could lead to absurd results. [“30% gain for this hot stock in 1 day means I will be a trillionaire in no time!!!”]

      I think that graph might be derived from “slope” data. I think this was the case for a Forster/Gregory 2006 paper covered in a critique here: http://judithcurry.com/2011/07/05/the-ipccs-alteration-of-forster-gregorys-model-independent-climate-sensitivity-results/ . Like I have said, I am very new to these studies and this field (and an amateur participant), but I got the impression that the paper simply derived a single value within certain error bars at some confidence level, and that the critique (and maybe the paper itself) assumed a normal distribution would apply, centered on the calculated average value. Since that FG06 study is based on (I think) slope data from recent satellite flux values, it would be biased toward small climate sensitivity values (i.e., larger Y values). It would make no sense to believe a normal distribution applied. In fact, the critique praises the paper for being the only one to depend solely on data (and no models), but we can see how relying on models supported by physics is the better way to go if the data is likely to be so skewed. At least we should use the model predictions to conclude something other than a normal distribution curve. [The IPCC report appears to do just that.]
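      The short-window pitfall described here is easy to demonstrate with synthetic data (every number below is invented purely for illustration): a steady uptrend plus a slow oscillation and noise, where a least-squares slope fitted just past an oscillation peak comes out near zero or negative even though the underlying trend is positive.

```python
import math
import random

random.seed(0)

# Synthetic record: trend of +0.02 per step, a slow oscillation (period 60),
# and small noise. Illustration only -- not real temperature data.
def series(n):
    return [0.02 * t + 0.3 * math.sin(2 * math.pi * t / 60.0)
            + random.gauss(0, 0.05) for t in range(n)]

def slope(ys):
    """Ordinary least-squares slope of ys against its index."""
    n = len(ys)
    xm = (n - 1) / 2.0
    ym = sum(ys) / n
    num = sum((i - xm) * (y - ym) for i, y in enumerate(ys))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

data = series(300)
print("full record:", round(slope(data), 4))  # close to the true +0.02
print("window just past a peak:", round(slope(data[15:45]), 4))
```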

      • I think you’re right, Jose. But the fact is that satellite data isn’t the only data that can be used to calculate climate sensitivities. You can also use paleoclimate data. I find it interesting that George White did an extensive series of analyses on ice core data, but never touched climate sensitivity. To do that, he focused solely on a few years of satellite data, and it looks like the slopes he was calculating were over very short time periods.

        In fact, since his graph goes far below the freezing point of water, I’m betting that he is using LOCAL slopes, which is nonsensical, given the amount of “sideways” heat transfer that goes on.

        Once again, it comes down to the fact that simple models of complicated systems are good for some things, and not for others.

  104. Here’s another comment I found from George White here:

    http://www.climateark.org/blog/2008/08/climate-skeptic-slap-down-the.asp#comment-192364

    “Why is water vapor ignored. This contributes almost 2/3 of surface warming (CO2 is the other 1/3). Combustion produces twice as many water vapor molecules than it does CO2 molecules. Burning H2 produces only water vapor. Of course, I understand that regulating evaporation makes about as much sense as regulating breathing.”

    George doesn’t appear to understand that water vapor molecules added to the atmosphere typically condense and precipitate out within a few days. That’s why water vapor can only act as a feedback.

    Get me some information from a credible source, and maybe I’ll listen.

  105. George White was also complaining about how the IPCC posits “runaway warming” due to positive feedback. As I noted above, this is a very common misconception among electrical engineers. On the one hand, I can understand the source of the confusion. On the other hand, what kind of arrogant person does a full-blown critique of another discipline, without asking a few experts in that discipline to give him some feedback to make sure he isn’t saying anything stupid?

    • P.S. George White appears to be an electrical engineer.

    • The “positive feedback” explanation is very important. I came across it for the first time in this article. From above:

      “Typically, when climate scientists say there is zero feedback, alpha is actually about 3.3 W/m^2/°C. This is the amount of extra energy the Earth would radiate back into space (all else being equal) if the temperature were raised 1 °C, simply because hotter objects give off more radiation. So if alpha is less than 3.3 W/m^2/°C, scientists say there is a net positive feedback in the system, and if it’s more than that, they say there is a net negative feedback. Essentially nobody thinks alpha should be less than zero, though, because that would lead to really crazy swings in the climate.”

      A lot of people coming from other disciplines are likely to be confused and likely to be attracted to a number of things Spencer might say.

      However, I think I read somewhere yesterday that some people think actual runaway positive feedback may be what occurred on Venus. It seems such a tipping point might exist for Earth too (maybe if we lose a large amount of our ice). That is a rather scary thought.
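      The ~3.3 W/m²/°C figure in the quotation above is close to what a bare blackbody calculation gives at Earth’s effective emitting temperature (the gap comes from atmospheric details), and it is where the oft-quoted ~1.1 °C “no-feedback” response to doubled CO2 comes from. A quick check:

```python
# "Planck response": extra outgoing flux per 1 K of warming, d(sigma*T^4)/dT
# = 4*sigma*T^3, evaluated at Earth's effective emitting temperature (~255 K).
# The ~3.3 W/m^2/K used in the thread folds in atmospheric structure; the
# bare-blackbody number is a bit higher.
SIGMA = 5.67e-8

alpha_planck = 4.0 * SIGMA * 255.0 ** 3  # ~3.8 W/m^2/K for a bare blackbody
alpha_quoted = 3.3                       # value quoted in the article
forcing_2xco2 = 3.7                      # W/m^2 for doubled CO2

print(round(alpha_planck, 2))
print(round(forcing_2xco2 / alpha_quoted, 2))  # the "no-feedback" ~1.1 C
```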

      • This is actually one thing I think Spencer explained really well in his book.

      • I agree with this too. The problem with it, though, is that it ignores specifically why the alpha is 3.3 W/m^2/°C. The reason is that it’s the net result of all the physical processes and feedbacks in the system that have manifested themselves from the forcing of the Sun over centuries, millennia, millions of years, etc.

        The 3.7 W/m^2 from 2xCO2 causes about 0.7 C of ‘direct’ warming from S-B. The additional 0.4-0.5 C added to get the 1.1-1.2 C is the system feedback amount added on, only it’s an upper limit because net negative feedback is required for basic stability.

        You can’t just arbitrarily ignore the feedbacks that result in the 3.3 W/m^2 alpha and then claim there is some nebulous new feedback acting on top of this. Not without explaining specifically why the alpha isn’t 1.2 W/m^2/°C (3.7/1.2 = 3C), or why the emissivity isn’t 0.22 (3.7/16.6 = 0.22), or what is so special about the next 3.7 W/m^2 that the system will respond to it in some radically more powerful way than the last 3.7 W/m^2 from the Sun.

  106. Barry,

    Yes, George White is an electrical engineer.

    He does have an extensive analysis on climate sensitivity here:

    http://www.palisad.com/co2/eb/eb.html

    I understand your reservations about the gain graphs and that you would need more information in order to verify their validity. I will try to get more detailed information on them or invite George here to explain them himself if he wishes to.

    As far as the quotes of his you cite, I think you’re largely misinterpreting the context of his remarks.

    Also, I agree that longer-term feedbacks (ice albedo) are not really captured by the kind of measured data involved here, but they are generally small even by the IPCC’s quantification. Most of the enhanced positive feedback warming needed to get the 3 C rise comes from positive water vapor and cloud feedback, which operate on very short time scales. Very little comes from ice albedo.

    I’m short on time at the moment, but I will return to discuss in more detail.

    • Even “fast feedbacks” take a while because of the thermal inertia of the ocean. I wonder what kind of time scale George White was using for his correlations.

      • The thermal inertia of the ocean does not take all that long. Certainly not multiple years or decades. If it did we wouldn’t see anywhere near the magnitude of seasonal change that occurs each year.

      • I’m pretty sure he’s using 25 years of averaged data from ISCCP (1983-2008), at least for his sensitivity analysis. This is plenty long enough for ‘fast feedbacks’ to fully manifest themselves in the system.

      • RW,

        You are wrong about thermal inertia being so insignificant that it wouldn’t damp oscillations. I just ran a couple tests using a simple climate model, with a climate sensitivity of 2.5 °C, and an ocean mixed layer depth of 25 or 110 m. I forced the system using either a sine wave that went between 1 and -1 W/m^2 each year, or a constant forcing of 1 W/m^2. I also used a time step of 0.01 years.

        In the case of the sinusoidal oscillation, the variation in temperature was about +/- 0.012 °C for a 110 m mixed layer and +/- 0.05 °C for a 25 m mixed layer. In the 110 m case, the system reached equilibrium (+0.67 °C) after about 50 years. In the 25 m case, it reached the same equilibrium in about 15 years. There’s no way the effective mixed layer is less than that over any time scale of interest, so we’re looking at damping of at least a factor of 12, due solely to thermal inertia.

        The model treats feedbacks as instantaneous, so the ONLY source of delay is thermal inertia.
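        The comment doesn’t give the model code, so here is a hedged sketch of the kind of one-box energy-balance model it describes. The seawater heat-capacity values and the simple Euler integration are my assumptions; λ is set so that 3.7 W/m² yields 2.5 °C at equilibrium.

```python
import math

# One-box energy-balance model: C * dT/dt = F(t) - LAM * T.
# LAM is chosen so 3.7 W/m^2 (2xCO2) gives 2.5 C at equilibrium, matching
# the stated sensitivity. Other parameter values are assumptions.
SECONDS_PER_YEAR = 3.156e7
RHO, CP = 1025.0, 4000.0  # seawater density (kg/m^3), heat capacity (J/kg/K)
LAM = 3.7 / 2.5           # W/m^2 per K

def run(depth_m, forcing, years=100.0, dt=0.01):
    """Euler-integrate; forcing(t_years) -> W/m^2. Returns (min, max, final)
    temperature anomaly, with min/max taken over the last simulated year."""
    C = RHO * CP * depth_m  # J/K per m^2 of ocean surface
    T, t = 0.0, 0.0
    steps = round(years / dt)
    last_year = []
    for i in range(steps):
        T += (forcing(t) - LAM * T) / C * SECONDS_PER_YEAR * dt
        t += dt
        if i >= steps - round(1.0 / dt):
            last_year.append(T)
    return min(last_year), max(last_year), T

def sine(t): return math.sin(2.0 * math.pi * t)  # +/- 1 W/m^2 annual cycle
def const(t): return 1.0                         # steady 1 W/m^2

for depth in (110.0, 25.0):
    lo, hi, _ = run(depth, sine)
    print(depth, "m mixed layer: sine response ~ +/-", round((hi - lo) / 2, 3), "C")
_, _, t_eq = run(110.0, const)
print("constant forcing equilibrium ~", round(t_eq, 2), "C")
```

        With these assumptions, the ±1 W/m² annual cycle is damped to roughly ±0.01 °C (110 m) or ±0.05 °C (25 m), while the same forcing applied steadily approaches about +0.68 °C, so the large damping described above comes entirely from thermal inertia.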

  107. One point I wanted to clarify is that I referred to the rate of change of temp in a few places, but I was actually getting at acceleration (the rate of change of the rate of change). So while current temperatures are going up, it seems there has been a slowdown (IIRC, and I may not). It is this slowdown in the rate of increase (such as when you approach a local maximum) that might be abused in some studies. For example, arguments in favor of low climate sensitivity might gain superficial support from plots of “recent” temp changes. [Sorry if what I said in places was disconcerting.]

    Also, this video is interesting (to readers who might want to catch up) http://www.agu.org/meetings/fm09/lectures/lecture_videos/A23A.shtml

    As is this page addressing many challenges to currently accepted theories and explanations (again, particularly valuable to an audience getting their feet wet with climate issues) http://www.skepticalscience.com/argument.php

  108. >> I’m looking through his slides, and he’s trying to say that there have been times with a greater rate of temperature change than what has been going on lately. I wouldn’t be too surprised if that turned out to be true, but he’s using one local record (Vostok). A single local record could have pretty significant temperature swings that are simply due to changes in circulation patterns.

    On this topic of local variations, here is something fresh by the Associated Press: “Canadian Arctic nearly loses entire ice shelf” By CHARMAINE NORONHA

    “Copland said mean winter temperatures have risen by about 1 degree Celsius (1.8 degrees Fahrenheit) per decade for the past five to six decades on northern Ellesmere Island.”

  109. Barry,

    “You are wrong about thermal inertia being so insignificant that it wouldn’t damp oscillations. I just ran a couple tests using a simple climate model, with a climate sensitivity of 2.5 °C, and an ocean mixed layer depth of 25 or 110 m. I forced the system using either a sine wave that went between 1 and -1 W/m^2 each year, or a constant forcing of 1 W/m^2. I also used a time step of 0.01 years.

    In the case of the sinusoidal oscillation, the variation in temperature was about +/- 0.012 °C for a 110 m mixed layer and +/- 0.05 °C for a 25 m mixed layer. In the 110 m case, the system reached equilibrium (+0.67 °C) after about 50 years. In the 25 m case, it reached the same equilibrium in about 15 years. There’s no way the effective mixed layer is less than that over any time scale of interest, so we’re looking at damping of at least a factor of 12, due solely to thermal inertia.”

    If this were true, how do you even explain the seasonal change that occurs in the system? How do you explain that global average temperature changes by over 3 C in just 6 months’ time? There is definitely a delay or dampening due to the ocean’s thermal inertia, but it’s not decades and certainly not 50 years. If the system were this sluggish in responding to changes in forcing, there would not even be any difference between night and day. At most the response time is a few years, with most of the effect coming in the first year.

    I suggest you run your model with seasonal changes in flux and see if it matches the actual measured change in temperature in the system that occurs.

    • Please, RW, consider the possibility that George White might be pulling your chain. I think I remember him talking about such huge seasonal swings in his slideshow, but I just downloaded the HadCRUT3v monthly global average temperature series, and it appears to me that monthly global avg. temperature only varies by a few tenths of a degree. So he’s off by about a factor of 10. See the data here:

      http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3vgl.txt

      Now take a look at the following comment on another blog, where George White makes the same argument you have about the factor of 1.6 gain, and so on.

      http://www.climate-skeptic.com/2009/09/what-a-daring-guy.html#comment-5642

      George says, “We can also measure the dynamic gain by examining the response of the climate to the +/- 40 W/m^2 sinusoidal variability in solar energy that occurs between perihelion and aphelion.”

      But if you look at the daily data on solar irradiance, it looks to me like it only varies RANDOMLY (not sinusoidally) by less than maybe +/- 2 W, which would be divided by 4 to normalize to surface area. So here George is off by maybe a factor of 100 or so.

      I’ll tell you what. I won’t trust a single number you get from George White. You would be wise to do the same.

      • Barry,

        I don’t have it at my fingertips, but I’ve looked at the monthly global average temperature data. It is about 3 degrees C cooler in January than it is in July. If you doubt this, look it up as I have.

        The change in solar flux from perihelion to aphelion is well documented. It’s about 80 W/m^2 (+/- 40 W/m^2 from the average). Wikipedia says the range is 1,413 – 1,321 W/m^2:

        http://en.wikipedia.org/wiki/Sunlight#Intensity_in_the_Solar_System

        I don’t get what you’re referring to. He’s not talking about daily fluctuations.

        I sense you’re nitpicking to try to justify some kind of broad rationale for dismissing anything and everything GW claims or has put forth. That seems pretty silly to me.

      • Hi RW,

        I understand the bit about seasonal variations, now. George says they are normalized out of the plot I linked, and I believe it. See below.

        I still don’t understand how you would get around the monthly averages. George says it’s because I’m using temperature anomalies, but that just involves picking an arbitrary zero point.

  110. The graphs you saw are plots of ISCCP data. Each small dot represents the relationship between 2 climate variables. Each dot is the intersection of the monthly averages for a 2.5 degree slice of latitude, with complete coverage from pole to pole. The larger green (N hemisphere) and blue (S hemisphere) dots are the average over all samples for each slice.

    The data spans nearly 3 decades of 3-hour samples for each 2.5 degree square spot (~280 km per side) on the surface, which I’ve aggregated into a hierarchy of spatial and temporal averages. The variables in the data set are many and include surface and cloud reflectivity (albedo), surface and cloud emitted power (temperature), the fraction of the surface covered by clouds, incident solar power, water column, cloud emissivity, and optical depth. If you look at the path of monthly averages over time for each slice and any variable combination, they follow Lissajous patterns centered on the path defined by the larger green and blue dots.

    The reason for choosing slices of latitude is that the only thing that differs between slices is solar forcing, so the resulting plots illustrate the response of the climate system to changing forcing, with topographic variability seen as the difference between the blue and green dots. Keep in mind that the green and blue dots are not a short-term response but the average response over the full multi-decade record.

    This is the same kind of critical analysis one would apply to reverse engineer an unknown system. The method compares measurable attributes against each other, from which you calculate their interdependencies and the system’s response to change. This ultimately leads to a derivation of a circuit model for the system. Both the climate and electrical circuits are described by similar forms of differential equations, and both must follow the same rules. The responses I’ve plotted do. The way CAGW climate science is quantified defies these rules.

    Chaos can be modeled as ‘noise’, but noise cancels over time, so chaos has no effect on the average, only the path it took to get to that average. The critical concept is that there’s an infinite number of equivalent states that have the same average.

    • Thanks, George. But as you can see from previous posts, I don’t trust any of your numbers, due to past experience with your numbers. E.g., I pointed out to RW that your statistical analyses seem to ignore thermal inertia (among other things) in the system. So maybe you can “reverse engineer” a circuit system in this way, but in this case I think you are ignoring too much physics.

    • co2isnotevil, you are not revealing the whole picture. It would be useful to us if you gave a better description of the analysis you are undertaking.

      For starters, data points won’t match up with any function solution except the most general of functions, which simply map the given points a to b and nothing else. So almost surely (especially given you mentioned differential equations), you are approximating, and you should quantify that approach (e.g., give the errors and the methodology).

      After you come up with your approximate answer, we can just add another point afterward or replace one of the existing points and it would foil whatever parameters you had built up.

      In short, it seems you are curve fitting, and, by using only recent data, you will have an infinite number of approximate solutions, and your chances of picking one that matches past temperatures outside your range (e.g., from earlier in the century or before) will approach zero. You are repeating what Spencer was criticized for doing right in this article by Barry.

      Since it’s very easy to speak mumbo jumbo jargon and make wild claims that appear to defy current scientific understanding, unless you specify precisely your methodology so it can be placed under the microscope as was done for Spencer’s work, you are wasting people’s time and will likely be ignored by most people doing serious work. [For example, I may not totally ignore you since I am learning and currently find it interesting enough to dedicate some time to such discussions.]

    • Let me add, and I am sure you know this, but the less data you use or the more localized the data, the easier it is to get tight agreement with just that data. However, if you can’t associate such a solution with physical processes with proven mathematical models, or if the equations do a horrible job with past data, then those solutions will fall by the wayside, as do most such results.

      I think it is great that you are analyzing, but I would not focus too much on such a short period of time. If you are honest about letting the data lead you to a solution, then it’s very likely you will end up with a solution not too unlike current climate models (but I may be wrong). And I would make sure you have a theory that allows you to explain a fair amount of paleoclimatic evidence.

    • Another specific problem is that climate data comes in many shapes and sizes. We have a very large and complex system not at all like a small local circuit with well defined input and output ports, well-defined interpretations to the values on those ports, and a comfortable mostly unambiguous understanding of the measuring process.

      With the climate data, you need a sense of the physics and the details of the measuring instruments in order to recognize “flaws” in the data. [see for example this: http://www.skepticalscience.com/satellite-measurements-warming-troposphere-advanced.htm ]

  111. A) You are probably looking at AU normalized solar radiance. All of the data from NASA is of this variety. Look at the fine print. The difference between perihelion and aphelion is 80 W/m^2, or an average of 20 W/m^2 across the whole surface.

    B) You are probably looking at anomaly temperature graphs, not actual temperature graphs. This is why you fail to notice the large swings arising from seasonal temperature variability, which is a response to seasonal solar variability. If the Earth responded as slowly as CAGW requires, or as anomaly plots suggest, there would be no difference between night and day and little seasonal temperature variability.

    When you concentrate your analysis on data that removes seasonal and orbital variability by ‘averaging it out’, you miss out on the richness in the data which is necessary to see before you can gain an understanding of how the climate works.
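    For what it’s worth, the perihelion/aphelion swing can be checked with a quick inverse-square calculation. The solar constant and orbital distances below are standard textbook values, not taken from this thread; they give roughly 91 W/m^2 at the top of the atmosphere, or about 23 W/m^2 spread over the sphere, in the same ballpark as the figures quoted above.

```python
# Inverse-square check of the perihelion/aphelion irradiance swing.
# S0 and the orbital distances are standard values (assumed, not from the post).
S0 = 1361.0                      # solar constant at 1 AU, W/m^2
r_peri, r_aph = 0.9833, 1.0167   # Earth-Sun distance in AU

swing = S0 / r_peri**2 - S0 / r_aph**2   # top-of-atmosphere difference
spread = swing / 4                       # averaged over the whole sphere
print(round(swing), round(spread))
```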

    • Hi George,

      A) I had not heard that before, but after I did an inverse-square law calculation given the different distances from the Sun, this makes sense.

      B) Anomaly plots just choose an arbitrary zero point. Why would that have anything to do with it?

    • >> at AU normalized solar radiance

      And? Assuming that is true, you personally don’t like those units, fine, but why should that bother me?

      >> You are probably looking at anomaly temperature graphs

      I looked that up (http://www.ncdc.noaa.gov/gcag/ and the faq), and it almost seems to me that you are complaining because the data encoding of each frame includes deltas rather than bulkier absolute values.

      Are you suggesting the reference point is non-existent?

      In fact, by focusing on deltas, we can get better resolution and the reference points can actually be adjusted a bit over time in order to accommodate for things like imperfect thermometers or measurement procedures.

      The NOAA happens to like that model. Would you like to argue why you think it is flawed?

      >> This why you fail to notice the large swings arising from seasonal temperature variability which is a response to seasonal solar variability

      I am not sure what you mean. Are you suggesting that the reference values don’t exist?

      >> If the Earth responded as slow as CAGW requires, or as anomaly plots suggest, there would be no differences between night and day and little seasonal temperature variability.

      Are you suggesting that the models don’t provide that analysis and adjustments if such is necessary in order to carry out their differential equations evaluations?

      BTW, I am new to this, so feel free to take a stand or enlighten me.

      >> by ‘averaging it out’, you miss out on the richness in the data which is necessary to see before you can gain an understanding of how the climate works.

      Assuming all we have are these averages, I can see how that would be a major problem for the analysis you are doing, but I really think the models in use are based on an analysis of the physics of the system and many types of measurements.

      We are not in a situation where reverse engineering is the only path. We have access to the internals of our system. You are reducing your options if the extent of your study is reverse engineering.

      Don’t mean to sound harsh. Feel free to make your case (for my benefit).

  112. What you are calling thermal inertia is modeled by the time constant of the system. Certainly, the land has a much shorter time constant, but the oceans have a time constant too, and it’s measured in years, not decades.

    Perform this thought experiment. Suppose the Sun stopped shining. How long would it take for the Earth to become a frozen ball? The point here is that the time constants are all short enough that all of the effects of incremental CO2 have already manifested themselves. The idea that there’s deferred warming because the ocean responds too slowly doesn’t work. Besides, half of the CO2 man has emitted has been in the system for many decades.

    BTW, the alpha of 3.3 W/m^2 is not a constant, but nonlinear as a consequence of Stefan-Boltzmann. At lower temperatures, feedback is more positive, and at higher temperatures it’s more negative. At the current average temperature of the planet (about 278 K), the feedback is somewhat negative, and it becomes increasingly negative as the temperature rises.
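    The Stefan-Boltzmann part of that claim can be sketched directly. The blackbody restoring term dP/dT = 4*sigma*T^3 does grow with temperature; note this is only the radiative term, not a full feedback analysis, and says nothing about clouds or water vapor.

```python
# Derivative of the Stefan-Boltzmann law, dP/dT = 4*sigma*T^3: the
# blackbody restoring term alone, which stiffens as temperature rises.
# (This says nothing about cloud or water vapor feedbacks.)
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_restoring(T):
    """W/m^2 of extra emission per kelvin of warming, blackbody only."""
    return 4 * SIGMA * T**3

for T in (255.0, 278.0, 288.0):
    print(T, round(planck_restoring(T), 2))  # ~3.76, ~4.87, ~5.42
```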

    • At least short enough for the bulk of the additional CO2 already in the atmosphere. I suppose one could correctly argue that the last 4 or 5 ppm have not yet fully reached equilibrium, but such a small quantity represents a negligible forcing.

    • Hi George,

      I just ran a zero-D climate model again, this time with the climate sensitivity set to 2.5 °C, the ocean depth to 25 m, and sinusoidal forcing (1 period per year) of +/- 40 W/m^2. The response is temperature oscillations of about +/- 0.2 °C, which is in the ballpark of the real oscillations in the temperature anomaly data from month to month.

      I’m guessing your whole argument comes down to this: if you look at a short enough timescale (e.g., 3 hours), the whole thing boils down to the Planck response. Big surprise.

      The fact is that the ocean does heat up, and there is considerable thermal inertia in the system. So maybe there are many similarities between a circuit system and the climate system, but there are some differences.
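      A minimal sketch of the kind of zero-D energy balance model described above. The parameter values here are my assumptions, and the exact response amplitude depends on them and on the integration details, so treat this as illustrating the damping, not as reproducing any particular run.

```python
import math

# Zero-dimensional energy balance model: C * dT/dt = F(t) - lam * T,
# forced by an annual-period sinusoid. All parameters are illustrative.
SECONDS_PER_YEAR = 3.156e7
rho, cp, depth = 1025.0, 3985.0, 25.0    # sea water over a 25 m mixed layer
C = rho * cp * depth                     # heat capacity, J m^-2 K^-1
lam = 3.7 / 2.5                          # W m^-2 K^-1, i.e. 2.5 C per 2xCO2
F0 = 40.0                                # forcing amplitude, W/m^2
omega = 2.0 * math.pi / SECONDS_PER_YEAR

dt = SECONDS_PER_YEAR / 1000.0
T, t, peak = 0.0, 0.0, 0.0
for step in range(20_000):               # 20 years; skip the initial transient
    T += dt * (F0 * math.sin(omega * t) - lam * T) / C
    t += dt
    if step > 10_000:
        peak = max(peak, abs(T))

equilibrium = F0 / lam                   # response if the forcing were held fixed
print(peak, equilibrium)                 # peak is far below the equilibrium value
```

      The thermal inertia of the mixed layer strongly attenuates forcing that varies faster than the system’s time constant, which is why a large seasonal swing in forcing need not imply a small equilibrium sensitivity.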

      • Barry,

        The thermal inertia itself does not have any direct effect on the net direction of the actual feedbacks in the system or the ultimate magnitude of the response to perturbations. In other words, even if the response time were actually 50+ years, the net feedback could still be strongly negative.

        The 1.6 to 1 power densities ratio certainly includes virtually all of the feedbacks in the system, both long and shorter term. How could it not? The Sun has been forcing the system for billions of years. The one exception is the ice albedo feedback, which is very small by comparison. That is, unless you want to argue that the system prior to human CO2 emissions was not in a relatively steady-state condition and had much more warming in the pipeline because the response time was not sufficient for the feedbacks to fully manifest themselves, and I know you don’t want to argue this. Obviously, this doesn’t make any sense, but it illustrates the main point of contention with GW’s analysis, I think.

      • And yes, it largely boils down to the so-called ‘Planck’ response, because the ‘Planck’ response is derived from the surface response to solar forcing, which is the net result of the physical processes and feedbacks in the system.

        This is what I was trying to illustrate much earlier in the discussion and again in my post on Sept. 29th at 9:59pm.

        • Remember also that near the beginning of this discussion, I pointed you to this piece at RealClimate.

          http://www.realclimate.org/index.php/archives/2006/11/cuckoo-science/

          You said it didn’t really address what you were talking about, but let’s review some passages from the RC piece.

          “So for our next trick, try dividing energy fluxes at the surface by temperature changes at the surface. As is obvious, this isn’t the same as the definition of climate sensitivity – it is in fact the same as the black body (no feedback case) discussed above – and so, again it’s no surprise when the numbers come up as similar to the black body case.”

          Here’s another one.

          “But we are still not done! The next thing to conviently forget is that climate sensitivity is an equilibrium concept. It tells you the temperature that you get to eventually. In a transient situation (such as we have at present), there is a lag related to the slow warm up of the oceans, which implies that the temperature takes a number of decades to catch up with the forcings. This lag is associated with the planetary energy imbalance and the rise in ocean heat content. If you don’t take that into account it will always make the observed ‘sensitivity’ smaller than it should be. Therefore if you take the observed warming (0.6°C) and divide by the estimated total forcings (~1.6 +/- 1W/m2) you get a number that is roughly half the one expected. You can even go one better – if you ignore the fact that there are negative forcings in the system as well (cheifly aerosols and land use changes), the forcing from all the warming effects is larger still (~2.6 W/m2), and so the implied sensitivity even smaller! Of course, you could take the imbalance (~0.33 +/- 0.23 W/m2 in a recent paper) into account and use the total net forcing, but that would give you something that includes 3°C for 2xCO2 in the error bars, and that wouldn’t be useful, would it?”

          “And finally, you can completely contradict all your prior working by implying that all the warming is due to solar forcing. Why is this contradictory? Because all of the above tricks work for solar forcings as well as greenhouse gas forcings. Either there are important feedbacks or there aren’t. You can’t have them for solar and not for greenhouse gases. Our best estimates of solar are that it is about 10 to 15% the magnitude of the greenhouse gas forcing over the 20th Century. Even if that is wrong by a factor of 2 (which is conceivable), it’s still less than half of the GHG changes. And of course, when you look at the last 50 years, there are no trends in solar forcing at all. Maybe it’s best not to mention that.”

          It seems to me that the issues Gavin was addressing are quite similar to the ones you have been bringing up. This has been quite frustrating for me. You say one thing, and I think I understand, but when I say so you reply that, no, you were saying something different. So I keep listening and you keep coming back to the same arguments, which are addressed (or close enough) in the article I pointed you to in the first place!

          Now George has shown up. He knows better than all the atmospheric physicists and oceanographers, because he can use the S-B equation like nobody’s business! They don’t know how to do the spatial statistics and correct for all the errors and do the data homogenization right, because they’ve only consulted with statisticians, rather than electrical engineers. Roy Spencer and John Christy are some of the schmucks who apparently don’t know how to properly do statistics on the satellite data, because George’s huge seasonal swings in the global average temperature don’t show up in their data product, either. See here:

          http://www.drroyspencer.com/2010/03/february-2010-uah-global-temperature-update-version-5-3-unveiled/

          They don’t know how deep the mixed layer of the ocean is–it must be much more shallow than all those thermometers have indicated, because that would work perfectly with George’s idea that everything can be described by the S-B equation.

          I pointed out earlier some things George said that were patently false (the water vapor one was the worst, I thought), but now I’m supposed to believe his statistics dump because he knows what a time constant is, and so forth. I must have taken those false statements out of context, anyway.

          I’ve been pretty patient with all this. You’ve now done 99 comments on this single page, and George seems to be just warming up. You’ve made your case, and it all boils down to the fact that you don’t want to deal with thermal inertia. I think that’s nonsense. Your only answer is to simply contradict what all the atmospheric scientists and oceanographers say. End of story. I don’t want this to become a launching pad for George’s crank science, so you two can make one last statement, and then I’m cutting you off.

      • So once again, you are arguing for an unchanging gain ratio from 0 to 288 K and above. Oh, I know… you’ll deny that’s what you are doing and say the gain actually gets smaller as you go up in temperature, and point to George’s plot. But that plot is built from 3-hr data points, which would not include thermal inertia effects. Therefore, it is inevitable that it boils down to a simple black body problem.

        Not buying.

    • >> but the oceans have a time constant too and it’s measured in years.,not decades.
      >> Perform this thought experiment. Suppose the Sun stopped shining. How long would it take for the Earth to become a frozen ball?

      I don’t know if I am deviating from your main point in answering this as follows.

      If we are talking about how long for certain effects to take place, then if the time constants are beyond 1 day, the day to day cycles will be neutered at least somewhat. If the time constants are beyond 1 year, then the yearly cycle will keep resetting the clock (at least to first approximation and perhaps more), and we will only get effects as we deviate from the past averages.

      Anyway, I am not disagreeing or agreeing with claims about “ocean time constants”. I don’t know the answer (or the question exactly), yet I would not be surprised to learn that at least some of them are on the order of years.

      Note that this reply has little to do with man-produced and un-recycled CO2 since I was only looking at variables like temperature and anything else that is basically cyclical.

      >> all short enough that all of the effects of incremental CO2 have already manifested themselves

      Well, there are important chemical reactions (eg, weathering) involving CO2 that have time constants much longer than mere years (or so I think I heard and can easily believe).

      This question of time constants needs to be formulated better. For example, we have a very complex system here on Earth. If today the US passes certain laws to curtail CO2 release, then the human-to-law time constant to react to CO2 was obviously much more than a few years. I don’t mean this as a joke. We can assume that such a law might be needed to ultimately achieve some desired result with the planet.

      Looking solely to the non-man natural world, there can be many physical/chemical reactions that only really engage sufficiently after other reactions have largely been carried out (ie, each evolved way beyond a single time constant length). An analogy would be with cell biology or any complex chemistry where things like the presence of catalysts/hormones/vitamins/proteins/whatever at certain levels are needed to essentially allow the next stage of the global reaction to proceed.

      We can get staggered effects. [Another hypothetical and non-serious example would be that CO2 and H2O in X:Y ratio in the atmosphere at below 100 K explodes when a comet passes within a certain distance of this gas mixture. In this case, the full effect of unchecked CO2 release might require many stages, including that a comet approaches the Earth.]

      In these cases, we are looking at potentially a large number of time constant lengths which effectively would need to be added up to produce an effective global one. Sure, calculating an “ocean time constant” obviously involves lots of little cases, but I reject your conclusion that the effects of, say, CO2 have obviously already been felt. There might be numerous unknown trigger levels to more serious effects awaiting over the horizon.

      >> The idea that there’s deferred warming because the ocean responds too slowly doesn’t work.

      Read what I just wrote. In any complex system, the eventual result might be carried out in many steps.

      >> At lower temperatures, feedback is more positive and at higher temperatures it’s more negative.

      I don’t know what you mean. It’s not clear what equations you are using (ie, what your model is like). I really expect you aren’t just using S-B if you are modeling the Earth’s climate, so I am not sure.

      • >> I reject your conclusion that the effects of, say, CO2 have obviously already been felt. There might be numerous unknown trigger levels to more serious effects awaiting over the horizon.

        I should have added that one possible real example is that as ice from certain regions melts, more greenhouse gases are released. There might even be a tipping point past which we’d get some degree of actual net positive feedback (at least short-term), leading to a very distinct and potentially much less hospitable climate.

  113. I am adding to the comment above about anomaly temperature graphs and reference points.

    Let me just quote from http://www.ncdc.noaa.gov/cmb-faq/anomalies.php

    > For example, a summer month over an area may be cooler than average, both at a mountain top and in a nearby valley, but the absolute temperatures will be quite different at the two locations. The use of anomalies in this case will show that temperatures for both locations were below average.

    > Using reference values computed on smaller [more local] scales over the same time period establishes a baseline from which anomalies are calculated. This effectively normalizes the data so they can be compared and combined to more accurately represent temperature patterns with respect to what is normal for different places within a region.

    > For these reasons, large-area summaries incorporate anomalies, not the temperature itself. Anomalies more accurately describe climate variability over larger areas than absolute temperatures do, and they give a frame of reference that allows more meaningful comparisons between locations and more accurate calculations of temperature trends.
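    The mountain/valley example in the FAQ amounts to this (the numbers below are made up for illustration):

```python
# Anomalies: subtract each station's own baseline ("normal") so stations
# with very different absolute climates can be compared and combined.
# All numbers here are invented for illustration.
baseline = {"mountain": 5.0, "valley": 20.0}   # July normals, deg C
observed = {"mountain": 4.2, "valley": 19.1}   # this July's means, deg C

anomalies = {k: observed[k] - baseline[k] for k in baseline}
print(anomalies)   # both stations read below average despite a 15 C gap
```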

    • FWIW, from that same faq, #11:

      > What is the difference between the gridded dataset and the index values?

      > The land and ocean gridded dataset is a large file (~24 mb) that contains monthly temperature anomalies across the globe on a 5 deg x 5 deg grid. The anomalies are calculated with respect to the 1971-2000 base period. Gridded data is available for every month from January 1880 to the most recent month available. You can use it to examine anomalies in different regions of the earth on a month-by-month basis. The index values are an average of the gridded values (see question #7); however, the anomalies are provided with respect to the 20th century (1901-2000) average. They are most useful for tracking the big-picture evolution of temperatures across larger parts of the planet, up to and including the entire global surface temperature.

      Thus, we have access to a comprehensive per region data set as well as to the “index” (global average) values.

      You can graphically see this data on a world map here http://www.ncdc.noaa.gov/gcag/app.html

      The directory to the full data set http://www1.ncdc.noaa.gov/pub/data/cmb/GCAG/data/

      The reference values are found below #11 in a list of tables.

      Then there is analysis software you can install from GISS that may leverage this or related data (I don’t know). The main page and a link to the source code (at the bottom of page) is here http://data.giss.nasa.gov/gistemp/ And looking inside the source “tarball” we find a link to the data files ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/

  114. There’s some push back about the intrinsic problems with anomaly analysis, so let me address that issue first. I will do so by example. The first plot shows the monthly average temperature of the planet as a function of time, shown in red, and is direct from the on-line ISCCP data. You should notice the 3-4 C p-p yearly cycle. The black dotted line is the running 12-month average. As another rhetorical question, can you say why this is 180 degrees out of phase with max solar power at perihelion, which occurs in early January? (Hint: consider the hemispheres as acting independently.)

    You will notice a sharp jump in temperature between September and October 2001. This occurred when NOAA-14 was replaced with NOAA-16 and there was no overlapping polar orbiter coverage to connect the two differently calibrated satellite sensors together, which were also used as the baseline for the other satellites. Most of the time there’s redundant polar orbiter coverage; this specific switch-over was the exception. This was a shortfall in Rossow’s cross-satellite calibration method, which I pointed out to him many years ago, and the data has still not been corrected; moreover, it’s the biggest error in the record and isn’t even mentioned in the list of known errors. He has privately acknowledged the error, but it’s only vaguely mentioned in an obscure part of the documentation describing satellite sensor responses. This error turns out to be the biggest single problem with the data, and it prevents its use for temperature trend analysis. Unfortunately, some people still use it for that.

    A 5-year anomaly representation of this data is shown in the next plot.

    The red monthly average is the average of the previous 5 years of that month’s measurements and becomes the base of a 5-year anomaly plot. The black dotted line is a running 60-month average and is equivalent to an anomaly plot, except shown with an absolute scale. But as you can see, this incorrect data correlates quite closely to some hockey stick anomaly plots. When the cross-calibration error is fixed, the monthly data becomes this,

    http://www.palisad.com/co2/bias/temp_fb.gif.

    When you apply 5-year averaging, you get this,

    The moral of this story is that anomaly analysis can hide a multitude of sins, turning data-processing errors and anomalous data into false trends.
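    The general hazard being described here, an uncorrected step discontinuity masquerading as a trend once the data are smoothed, is easy to demonstrate with synthetic data (this is a toy series, not the ISCCP data):

```python
import math

# Toy demonstration: a calibration step in an otherwise trendless seasonal
# series shows up as an apparent long-term shift after heavy averaging.
months = 240
series = [2.0 * math.sin(2.0 * math.pi * m / 12.0) for m in range(months)]
for m in range(120, months):
    series[m] += 0.5   # sudden 0.5 C offset, e.g. a satellite change-over

def running_mean(x, window):
    """Trailing running mean over `window` samples."""
    return [sum(x[i - window:i]) / window for i in range(window, len(x) + 1)]

smooth = running_mean(series, 60)       # 60-month (5-year) average
print(smooth[0], smooth[-1])            # drifts from ~0.0 to ~0.5
```

    Whether real temperature records contain such uncorrected steps is exactly what is in dispute in this thread; the sketch only shows why an uncorrected one would matter.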

    • George,

      Do you have data on how much fluctuation there is globally averaged on very short time periods – say days to weeks?

  115. The 3-hour samples reflect the completeness of the raw data for the purpose of calculating a high-certainty average. Each monthly average is the average of 240 3-hr samples, where 100 or so raw pixels per sample are aggregated into each 280 km on a side, equal-area cell. Each monthly average per cell is the average of tens of thousands of individual measurements, and each 2.5 degree slice consists of an average of about 100 cells.

    You should note that this differs from Hansen/Lebedeff homogenization, which would take a measurement representing a tiny fraction of one pixel and extrapolate it to the rest of a cell 500 to 1000 km on a side. This is just another technique to smooth the relevance out of measurements.

    • It’s still too short if there is thermal inertia, which there is. Wave action mixes the upper layer of the ocean quite a ways down, and on a longer time scale, ocean currents circulate between the surface and the deep ocean.

    • My guess is that you’ve done the spatial statistics wrong. If you think differently, take it up with Roy Spencer and John Christy.

  116. The time constant is defined as how long it takes for 63% of the final effect to occur in response to a step change in stimulus. This is an exponential response of the form e^-(t/tau), where tau is the time constant. This rate is independent of the size of the change. If tau were 6 months and the Sun increased by 100 W/m^2, 63 W/m^2 of this effect would be felt within 6 months. If the Sun decreased by 10 W/m^2, 6.3 W/m^2 of this effect would be apparent within 6 months.

    http://en.wikipedia.org/wiki/Time_constant

    It’s important to understand that this exponential response is a solution to the differential equations describing the system. Functions of the form e^(jωt) are also valid solutions, representing a sinusoidal response to sinusoidal stimulus, which is clearly evident in the raw data, but averaged away by the presentation as smoothed, homogenized anomaly plots.
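    In code, the step-response arithmetic above looks like this:

```python
import math

# First-order step response: the fraction of the final effect felt by
# time t is 1 - exp(-t / tau), so about 63.2% after one time constant.
def fraction_felt(t, tau):
    return 1.0 - math.exp(-t / tau)

tau = 0.5      # time constant in years (illustrative value)
step = 100.0   # W/m^2 step in forcing, as in the example above
print(round(step * fraction_felt(tau, tau), 1))   # -> 63.2
```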

    Any reason my previous post didn’t post? Is there a limit on links? The post only had 4, but that many were required to explain in detail how anomaly analysis can easily manufacture trends that aren’t there.

  117. NOTICE:

    As I mentioned to RW above, I’m letting RW and George post any last words, and then I’m cutting this discussion off. It all boils down to the fact that they don’t accept that there’s significant thermal inertia in the system, as all the climate scientists I know of (including Roy Spencer) believe. They also don’t seem to believe the monthly global temperature series, including the one managed by Roy Spencer. Fine. But if they want to dispute basic points like that, they need to publish a paper on it. We’re not going to settle it here.

    • If I can have a last “word” as well, it would be a very short reply to co2isnotevil’s last comment+4 figures on anomaly data:

      Diff/delta/anomaly encoding and transmission is very useful for numerous reasons. You bring up a good point on the robustness (eg, “error correcting”) of the data. We had an unplanned satellite mishap and lost the baseline for the diffs. A well engineered system takes this into account, and it is a distinct issue. What I assume is the solution here is that there are ground stations and a number of other overlapping ways to recalibrate this data.

  118. You seem to be under the impression that thermal inertia is like the inertia of mass, where if you apply a force to a mass resulting in motion, inertia will keep it moving once the force is removed.

    If the thermal mass of the planet is being heated by the Sun and the Sun stops shining, that thermal mass will immediately start to cool, not continue to warm as you seem to think it does. This is as true for Venus as it is for the Moon. Just think about what happens when the Sun sets.

    You’re also overemphasizing the use of SB. This equation is just a tiny part of the whole and simply establishes the relationship between surface temperature and the power flux emitted by the surface. SB is certainly one of the relevant equations in the differential equation formulation of the climate system, but it’s only one of many.

    Once more I will remind you that the reason you don’t see the 3 C of seasonal variability in the planet’s global average temperature is because you’re blinded by anomaly analysis. Even Spencer and Christy report results in terms of anomalies, simply because the idiots reviewing papers ignorantly fail to comprehend any other metric. This ignorance is part of the problem.

    Here are 2 more plots supporting 3 C of global variability. The first plots global reflectivity against global temperature. As you can see, the planet is colder in January, despite increased solar power at perihelion, because the global surface reflectivity is higher. Similarly, the planet is warmer in July because the global reflectivity is less. Notice how quickly the surface temperature adapts to changes in surface reflectivity and vice versa? Notice how this data set includes the effects of ice albedo feedback?

    The blip in 1995 was caused by another case of satellite mis-calibration, where one of the GOES satellites failed and the other was moved to improve coverage across the US and this shift was not properly accounted for by the cross satellite calibration method.

    The next plot shows the relative variability of the 2 hemispheres (green and blue), where the red line is the global average temperature with the NOAA-16 bias removed.

    Notice that the shorter N hemisphere time constant results in a wider range, which, when summed with the narrower range of the S hemisphere, produces a result with the signature of the N hemisphere. This is why the global response echoes the response of the N hemisphere.
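    The hemispheric-summation point can be sketched with two out-of-phase sinusoids; the amplitudes below are hypothetical, chosen only to show the mechanism, not fitted to any data.

```python
import math

# Toy model: two hemispheres roughly out of phase. If the N hemisphere's
# seasonal swing is larger, the global mean inherits the N hemisphere's
# phase, with amplitude (A_NH - A_SH) / 2. Amplitudes are hypothetical.
A_NH, A_SH = 10.0, 4.0   # seasonal swing amplitudes, deg C

def global_mean(month):
    nh = A_NH * math.sin(2.0 * math.pi * month / 12.0)
    sh = -A_SH * math.sin(2.0 * math.pi * month / 12.0)  # opposite season
    return (nh + sh) / 2.0

print(global_mean(3))   # peaks with the NH peak: (10 - 4) / 2 = 3
```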

    • See my last reply to RW for comments on thermal inertia. No, I don’t think of it like inertia of mass. I think of it as the result of mixing surface water with deeper water. We know it happens. We know how deep the mixed layer is, on average. And a mixed layer depth in any reasonable range would result in decades-long time lags.

  119. “I’ve been pretty patient with all this. You’ve now done 99 comments on this single page, and George seems to be just warming up. You’ve made your case, and it all boils down to the fact that you don’t want to deal with thermal inertia. I think that’s nonsense. Your only answer is to simply contradict what all the atmospheric scientists and oceanographers say. End of story. I don’t want this to become a launching pad for George’s crank science, so you two can make one last statement, and then I’m cutting you off.”

    Barry,

    This is bizarre. Yes, you have been patient, but so have I (this is a good thing for both of us and for rational, productive discussion in general). And no one is forcing you to ‘buy’ anything here. You are of course free to make up your own mind – as everyone should, based on your assessment of the credibility of the evidence and logic put forth.

    First of all, the notion that what’s been presented here must be wrong primarily because it supposedly “contradicts what all the atmospheric scientists and oceanographers say” (i.e. what you believe) goes against even the most basic scientific logic.

    You know, many people on pro-CAGW boards have accused me of being George’s disciple, but the primary reason I’ve latched on to these things is because, when push comes to shove, no one ever seems to be able to dispute them with any real science or data. And I’ve watched many try, including Gavin over at Real Climate, only to completely fail. And that’s only when comments weren’t censored or removed beforehand.

    You criticized and didn’t accept the global gain graphs because you didn’t have enough information about how they were done, and I understood and even appreciated that. George then came and laid out the details and methodology for the graphs, which are a little complex but pretty logical nonetheless. If you can’t accept those graphs, consider these more straightforward hemispheric gain graphs:


    The gain is 180 degrees out of phase with the incident energy. When the net energy flux increases, the gain goes down below the average and vice versa. These are huge increases in energy flux – far larger than the small increase that would come from 2xCO2. Unless you want to argue that the system obeys different physics hemispherically than it does globally or that watts of GHG ‘forcing’ obey different physics in the system than watts from solar forcing, I don’t see how this glaring discrepancy can be reconciled. The measured behavior is exactly the opposite of that predicted by the IPCC’s computer models. If these plots don’t convince you that the net feedback to increasing radiative forcing is negative and the models are wrong then I don’t know what could.

    You know also, sometimes I wonder if I’m in the twilight zone with this issue. I mean there are so many glaring flaws in the CAGW hypothesis, I am truly baffled as to how many obviously intelligent people, such as yourself, don’t see them (I’m being totally serious). Take for example the idea that the combined feedback of water vapor and clouds is a strong net positive. This ignores the basic physics of water vapor and clouds, which are obviously the primary mechanisms maintaining the planet’s energy balance (i.e. the tightly constrained net surface flux of about 390 W/m^2 despite such a large amount of local, regional, hemispheric and even sometimes globally averaged variability).

    Isn’t it obvious that the atmospheric water cycle (ground state water -> evaporation -> water vapor -> clouds -> precipitation -> ground state water) is the thermostat controlling the system’s energy balance and ultimately the globally averaged surface temperature? Is it just a coincidence that energy from the Sun drives evaporation of water? Is it just coincidence that the evaporated water removes heat from the surface, condenses to form clouds and clouds reflect the sun’s energy? Is it just a coincidence that water precipitated out of the atmosphere emanates from clouds?

    If water vapor is the primary amplifier of warming, as claimed, what then is the controller? If not clouds, via their ability to reflect incoming solar energy and precipitate out the water from the atmosphere, then what?

    I just cannot reconcile basic stuff like this, among many other things. Call me crazy if you want.

    • Hi RW,

      You have probably been more patient than me, which I appreciate, so I hope you don’t feel too miffed about this. I usually don’t cut off comments, unless I think they are getting useless. In this case, I think we are going around in circles, and the important points have been made.

      In George’s case, all I can do is look over what he has written, and I notice a bunch of rookie mistakes that I am in a position to recognize. (I’m talking about how we’re supposed to be going into an ice age, talking about how water vapor is a more important GHG than CO2 and so why aren’t we regulating evaporation, dismissing thermal inertia, and so forth.) Now he comes along and makes his case by dumping all these statistics on us, in which he dismisses all the standard temperature records, and so forth. For one thing, people spend their whole careers handling this kind of data, but he just dismisses them as “idiots”. I could go to all the trouble of learning how to handle all that data, but given the rookie mistakes I know of, I’m not feeling motivated. It seems to me to be a recipe for crank science.

      What’s more, even if all these statistics George is laying on us are right, his conclusions are still wrong. One would expect the more-or-less instantaneous response to follow the S-B law. The question is simply whether there is significant thermal inertia in the system, and whether some of the feedbacks are more long term. The key to understanding this is that it is ONLY the surface that radiates energy back into space. If water on the surface absorbs some energy, and then gets mixed down lower into the ocean, then there will be thermal inertia. We KNOW this happens, because of the existence of the mixed layer at the top of the ocean. If we use some reasonable value for the mixed layer depth (maybe around 100 m), or even one that is probably too low (maybe around 25 m), we can use a simple climate model to show that this would result in thermal inertia significant enough to cause time lags on the order of decades. And that is BEFORE we account for longer-term feedbacks.

      As far as I’m concerned, this one point completely undercuts George’s conclusions, whether he did the spatial statistics right, or not. I’m sure Gavin pointed this out, as have I. If you don’t think this undercuts George’s point, that’s fine, but unless you have some way of showing that thermal mixing in the ocean doesn’t occur to any significant degree, I don’t think there’s any point to continuing the thread.

      Feel free to respond if you feel you have something cogent to say, and I’ll let you have the last word.
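[Editor's note: the mixed-layer thermal inertia estimate above can be put in rough numbers. This is a sketch under my own assumptions – a one-box energy-balance model C·dT/dt = F − λT, a feedback parameter λ of about 1.25 W/m^2/K (roughly 3 °C per 3.7 W/m^2 of forcing), and standard seawater properties – not Barry's actual calculation:]

```python
# Back-of-the-envelope check: e-folding time of a one-box climate model
# whose heat capacity is set by the ocean mixed-layer depth.
# All parameter values here are illustrative assumptions.

SECONDS_PER_YEAR = 3.156e7
RHO_WATER = 1025.0   # kg/m^3, seawater density
CP_WATER = 3990.0    # J/(kg K), seawater specific heat
LAM = 1.25           # W/(m^2 K), assumed feedback parameter

def time_constant_years(mixed_layer_depth_m: float) -> float:
    """tau = C / lambda for the one-box model C*dT/dt = F - lambda*T."""
    heat_capacity = RHO_WATER * CP_WATER * mixed_layer_depth_m  # J/(m^2 K)
    return heat_capacity / LAM / SECONDS_PER_YEAR

for depth in (25.0, 100.0):
    print(f"{depth:5.0f} m mixed layer -> tau = {time_constant_years(depth):4.1f} yr")
```

The e-folding time comes out at a few years for a 25 m layer and about a decade for 100 m; since full equilibration takes several e-folding times, time lags "on the order of decades" follow.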

      • Perhaps I’m missing something, but I do not understand the fundamental physics behind the logic you seem to be using here.

        Firstly, neither George nor I am claiming an instantaneous response. There is definitely a delay due to the thermal inertia of the ocean. We are saying the delay is on the order of a few years and not decades, but even if it were decades, I don’t understand how this signifies anything in regard to the direction of the feedback and the net final response to the perturbation (let alone an indication it’s 300% net positive). I also don’t understand how the physics of the measured shorter-term behavior (the yearly hemispheric gain graphs, for example) would become the opposite long term. Now, I do understand that the measured data does not include ice albedo feedback, which is long term. But as I mentioned before, this is relatively small and probably negligible, especially if the net feedback is negative. I also don’t understand how the measured behavior could be what it is unless the response time was relatively short (at least much shorter than decades or 50 years). If the response time was as long as you claim, the measured behavior would not occur, would it? If yes, how?

        I do understand fundamentally how the thermal inertia of the oceans works. The question seems to be the amount of inertia (the time constant) and the relative depth of water involved in responding to a change in ‘forcing’. Once these are known, I don’t see how you can make the case as you have, unless you want to argue that GHG ‘forcing’ heats the ocean from the bottom up and solar forcing heats it from the top down, and I know you don’t want to argue that (just kidding).

      • I should more correctly say the magnitude of the measured behavior would not occur if the response time is as long as you claim it is.

  120. If you’ll permit me one other last thing, I think you are giving the IPCC and much of the so-called climate science community way, way too much benefit of the doubt. The IPCC is primarily a political organization – not a scientific one. Their goals with this issue are political first and foremost. They’re not in the business of getting the science right. This is not an accusation of conspiracy, but of strong bias toward anything supporting CAGW and a strong bias against anything that would undermine or discredit it. Because of this, I think it would be wise of you to be a little more dubious of their purported or assumed authority on the subject.

  121. The way to understand ‘thermal inertia’ is to notice that the hottest/coldest days of the year occur about 2 months after the longest/shortest days. This lag isn’t the result of static inertia. The temperature continues to rise/fall as the power falls/rises because the temperature hasn’t reached an equilibrium value with respect to the incident power yet. Once it does, with the power still falling/rising, temperatures will turn around and fall/rise, manifesting a phase delay. While the S hemisphere lag is slightly longer than the N, the time constant is more closely related to how fast the output is changing, relative to how fast the input is changing, or in other words, how quickly the system responds to change.

    Part of your misunderstanding arises from your impression that the effective thermal mass of the planet is very large. You only consider the positive work energy stored as surface waters heated above the planet’s average temperature. You neglect the equal and opposite negative work energy stored as water at a temperature below the average of the planet. These effectively cancel making the net energy stored relatively small, enabling the planet to react far faster to change than you believe, even as the magnitude squared of the stored energy is large. The total net energy stored in the climate system is better quantified by the temperature difference across the thermocline, where the thermocline stores energy by insulating warm surface waters from the deep ocean cold. The evidence of this is the temperature profile of the thermocline, which mimics that across a wall insulating a warm space from a cold one.

    I would also suggest you refrain from speculating on the specific positions of scientists like Spencer and Christy, for if you understood their position, you would have a different opinion yourself. You should also refrain from assuming that I’m not accounting for different things. For example, I fully account for water vapor, its dominance in atmospheric absorption and its relationship to clouds, and I properly account for the effects you attribute to ‘thermal inertia’. I would also prefer if you refer to my work as well tested hypotheses, rather than ‘crackpot’ theories. A crackpot theory would be asserting that CAGW even rises to the level of a theory, when in fact it’s no more than a weakly tested and widely falsified hypothesis.

    And you are right about me just getting started. I have so many lines of evidence of low sensitivity and falsification tests of CAGW, it will make your head spin; moreover, I can also explain how and why the climate behaves as it does.

    • George,

      If Barry will permit me to ask, am I correct in assuming that the thermal inertia, in and of itself, is not at all an indication of the direction of the feedback in response to changes in forcings? Meaning if it was 50 years, the feedback could still be strongly negative.

      Mind you, I do understand that if the feedback was positive in the way the IPCC defines, the time between initial forcing and final effect would be longer than if the feedback was negative.

      • I should clarify. What I mean is even if the time constant was 10 years (instead of only about 1 year) for a total response time of 50 years, this in and of itself is no indication of the direction of the feedback.

    • I’m feeling a bit left out here.

      >> the time constant is more closely related to how fast the output is changing, relative to how fast the input is changing, or in other words, how quickly the system responds to change

      A system can show a lag of 2 months yet have a time constant of 100 years.

      For example, the top layer of the ocean might take 2 months to reply (hypothetically), but the ocean in its entirety might take 100 years to respond fully at its slower rate and reach its final steady state in response to a net increase in flux over these 100 years (ie, to an increase in the bias level of the sinusoid flux forcing).

      If you work with circuits, you probably know this. I’m not sure why your comment sounds to me as if you don’t understand this.

      >> You neglect the equal and opposite negative work energy stored as water at a temperature below the average of the planet. These effectively cancel making the net energy stored relatively small

      Maybe *you* are neglecting that, but climatologists are talking about average gains/losses over time.

      Perhaps you think they “effectively” cancel. I haven’t seen anything convincing from RW or you on why some numbers should be anything in particular.

      Remember, if you have analysis that can hold up to scrutiny, consider publishing it as Spencer has (despite the number of flaws in some of those papers).

      >> I would also suggest you refrain from speculating on the specific positions of scientists like Spencer and Christy, for if you understood their position, you would have a different opinion yourself.

      And so does this mean *you* should refrain from speculating on the specific positions of most climatologists, for if you understood their position, you would have a different opinion yourself?

      >> I fully account for water vapor

      Words are easy to say.

      >> I have so many lines of evidence of low sensitivity and falsification tests of CAGW, it will make your head spin; moreover, I can also explain how and why the climate behaves as it does.

      Yawn.

  122. BTW, Barry – moving on to another topic, are you aware that Dessler cannot explain the actual physics to support the conclusions in his 2010 cloud feedback paper? All he’s done is just correlate TOA fluxes with temperature changes, more or less assuming the temperature changes caused the flux changes.

    For one, a significant portion of the alleged positive feedback comes from increased SW radiation. Does increased water vapor from warming cause decreasing clouds? Also, at the very beginning of the paper, he states that the net effect of clouds in the current climate is to cool by about 20 W/m^2, yet ascribes absolutely no significance to this and just brushes it aside. In light of his conclusions this would have to be explained, wouldn’t it?

    It amazes me how basic stuff like this can be so callously ignored by the pro-CAGW science community.

    • I’m not in a position to go out and rush to read and analyze that paper. I presume a broken paper will be criticized by others in time. [Do you have a write-up on it?] And I don’t see what this has to do with any other paper (assuming you were correct).

  123. RW,

    Yes, the time constant and the phase delay are independent of the sign of the feedback. In fact, the time constant and the phase delay must always be positive for a causative correlation. Only coincidence or magic can result in a signal occurring before its presumed cause; otherwise, the assumed causality is falsified. Moreover, a negative time constant is a physical impossibility. This is the fundamental problem with mutual feedback models which attempt to explain why the CO2 signal lags the temperature signal in the ice core records even though the presumed causality suggests the opposite.

    The whole discussion about climate feedback is broken anyway since historically, climate scientists do not comprehend, or even acknowledge, the fundamental differences between gain and feedback, nor do they understand the restrictions imposed by Conservation of Energy. These are yet more examples of the basic stuff CAGW centric ‘science’ has wrong.

    • >> This is the fundamental problem with mutual feedback models which attempt to explain why the CO2 signal lags the temperature signal in the ice core records even though the presumed causality suggests the opposite.

      I think you are misunderstanding things.

      CO2 can both lag and lead global warming. 1: There is stored CO2 that is released as the temperatures rises. 2: As CO2 levels are raised, the temperature rises.

      I am not sure what papers you have been reading that have attempted to use a negative time constant.

      BTW, for another simple example of lag/lead: Light fireworks and watch the intense sparks afterward. Alternatively, first create intense sparks around fireworks, and watch the fireworks light up.

      >> climate scientists do not comprehend, or even acknowledge, the fundamental differences between gain and feedback, nor do they understand the restrictions imposed by Conservation of Energy

      Yawn. I have a feeling there is much you don’t understand and much correct analysis you have not undertaken.

      RW, I have already replied to you enough, and you apparently don’t understand what I write.

      • ..and I know I sound harsh sometimes. That’s mostly a writing failure on my part… and now also because we are on our final words.

      • Sorry Jose, it’s not me that has a hole in my understanding.

        The IPCC claims that the 1.1C intrinsic effect of 3.7 W/m^2 of incremental forcing is amplified to 3C by feedback. How much extra CO2 do you expect the 1.1C of ‘pre feedback’ warming to unleash, another 10 ppm or so? Well, if an increase of 280 ppm (doubling) causes 1.1C, the subsequent 10 ppm adds about 0.05C, which unleashes another fraction of ppm of CO2, etc. Now, the original 1.1C is amplified to 1.15C. Where is the rest of your feedback coming from?

        You also need to rethink your statement that “CO2 can both lag and lead global warming.”. Only one of these behaviors will be evident in the paleo record and this will be the net, or dominant effect. Yes, there is a minor effect on temperature from incremental CO2, but it manifests almost instantly with the change in CO2 concentrations. The delayed effect of temperature on CO2, as seen in the ice cores, is the signature of evolutionary biology sequestering more CO2 in the biosphere as the conditions for life become more favorable, i.e., it gets warmer and CO2 levels rise.

      • >> Where is the rest of your feedback coming from?

        The models come up with 3 C. Are you asking me specifically to detail the models? I do not know those details (although I came across a couple of source code repos for such models). I am new to this.

        Without those equations, we aren’t going to get anywhere in arguing the warming should be any particular value. See, the 1.1 forcing (and I really am trying to follow along here because I am new to this) is what one puts in as forcing terms **into the climate equations they are using**. If you don’t use those equations but use something else, then adding 1.1 forcing to your equations will produce.. something.. anything else.

        To understand the 3 deg C, you have to use their complex physical models and equations. Without doing that, I don’t understand why you think you can rationalize that value.

        >> Now, the original 1.1C is amplified to 1.15C.

        Your logic was broken. I thought maybe you wanted to assume the IPCC’s values and then come to a contradiction, but you did no such thing. If you assume 1.1 gives 3 C, then what you ended up doing was to conclude (from that assumption), that there might be another bit tagged onto the 3 C.

        If you didn’t assume that 1.1 goes to 3 C, then I suppose you simply concluded that you don’t know what their equations are. [see first part of this comment.]

        If you want to make an argument that their equations don’t make sense, then do so.. but I guess you will first need to know their equations.

        >> You also need to rethink your statement

        If we assume for a second that the IPCC might be wrong and that you might be wrong, then, in general, we can have an effect possible both as a lead and as a lag (I gave the fireworks sparks example).

        So, I don’t see why I have to rethink my point that CO2 can potentially be both a lead and a lag.

        By appealing to existing paleoclimatic records showing essentially lag of CO2, at best you pointed out that in our planet’s believed history CO2 was not released like crazy as it is today from ancient hydrocarbon sources. More generally, you are pointing out that current models include something that is different today than thousands and millions of years back, because back then CO2 was a lagger. But that is exactly the point! Things are different today. Today, CO2 is rising ahead of (or alongside of) temp, and many believe there is a causality relationship there — not millions of years back — but from the last couple of centuries.

        So you can’t disprove CO2 causality by pointing out that in our past we didn’t get such effects. The justifications climatologists are using for this lead are conditions introduced recently, so this is consistent with older records not showing a CO2 lead.

  124. RW, [look, I’m not the only one having trouble putting in final words 🙂 ]

    >> [two pictures shown] When the net energy flux increases, the gain goes down below the average and vice versa. These are huge increases in energy flux – far larger than the small increase that would come from 2xCO2.

    Let me ask this. How long does this flux you pictured remain high before it goes down to negative values? That’s right, months… and then it goes negative.

    Now, let me ask the obvious follow up: How long is the man-released CO2 in the atmosphere before that gets undone? Aha, it has been up there for decades and decades without any reversal. I see.

    Now, I suspect your response to this might be to invent out of the air that you just know that the time constants related to CO2 are days or a few years at most (although at least in two comments you did consider hypothetically that it might take decades.. that’s a start).

    I haven’t seen any analysis on your time constant values, and co2isnotevil perhaps thinks one can look at a particular lag and infer the time constant from that. No, you can’t. Not without a set of equations or something similar.

    Also, the other argument I think co2isnotevil and/or you might be hinting at is that because we have yearly seasonal waves, somehow everything cancels out. Yawn. To use an analogy, the suggestion seems to be that if I throw a bunch of positive and negative numbers together, the sum is always zero no matter what these numbers actually are. No, that would be a wrong conclusion, if an easy one to make when you don’t have any models or real analysis (“it’s a gut feeling everything cancels out”).

    >> but even if it were decades, I don’t understand how this signifies anything in regard to the direction of the feedback and the net final response to the perturbation (let alone an indication it’s 300% net positive).

    Well, I see this quote came afterwards. You sound honest here.

    To repeat the gist of some of what I recently said, if the forcing is still being applied (ie, the extra CO2 is still in the air) and if the response time is very long, then we still have not approached the final value and it might be growing and growing (we’d have to calculate this final value based on some model).

    I’m not trying to predict this final value’s size. I’m only explaining how the response time duration is important if you want to measure the final value. Your measurement will keep going up at least until you get to the time constant range and can guesstimate the rest.

    But to add to that, CO2 is still being added, so even if the majority of the response to what happened one year ago has been seen, we are not close to ending our CO2 releases.

  125. 1 — It will convince many more “in the know” people if you take a paper and write up a formal reaction to it pointing out mistakes.. essentially engage the community formally, like Spencer appears to be doing now. Just making crazy allegations without a careful analysis worthy of peer review will not be as successful in changing direction. You have to address adequately what is in the literature, or have such a compelling argument that others will quickly see the flaws themselves and help you.

    2 — Short lag vs long time constant response for electrical capacitor (an example): If a large value capacitor is in a circuit with other minor storage elements, then “forcing” to a new level will correspond with a small lag; however, the overall “time constant” may be very very large (long duration) if the capacitor is very large and has only a small amount of current coming in (eg, large series resistance).

    3 — Correlation may not prove something, but it can still be useful observation. The main point is to avoid jumping to conclusions.
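[Editor's note: point 2 above can be simulated directly. This is a sketch with illustrative component values of my own choosing, not a model of any real circuit: node A sits behind R1 with a small C1, while a large C2 charges slowly through a large R2.]

```python
# Euler integration of a two-capacitor circuit:
#   Vin --R1-- A (C1 to ground) --R2-- B (C2 to ground)
# Node A shows only a short lag after a step, but the full system
# settles on a much longer time constant set by the large C2.
# Component values are purely illustrative.

def simulate(t_end, dt=0.2, r1=1.0, c1=1.0, r2=100.0, c2=100.0, v_in=1.0):
    va = vb = 0.0
    t = 0.0
    while t < t_end:
        dva = ((v_in - va) / r1 - (va - vb) / r2) / c1
        dvb = ((va - vb) / r2) / c2
        va += dva * dt
        vb += dvb * dt
        t += dt
    return va, vb

va_fast, vb_fast = simulate(5.0)       # a few seconds after the step
va_slow, vb_slow = simulate(50000.0)   # roughly (R1 + R2) * C2 * 5 seconds

print(f"t=5 s:     Va={va_fast:.3f}, Vb={vb_fast:.4f}")
print(f"t=50000 s: Va={va_slow:.3f}, Vb={vb_slow:.3f}")
```

Node A reaches within about a percent of its final value in a few seconds (short lag), but node B – and with it the last sliver of node A's response – settles only on the order of (R1+R2)·C2 ≈ 10^4 seconds (long time constant).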

  126. Jose,

    The time constant can be determined from the magnitude of the solar change, the magnitude of the surface change, and the expected magnitude when the average gain of 1.6 is applied instantaneously to the changing input. Knowing that (1-e^-(t/tau)) is the step response and that after 1 time constant, 63% of the total change towards equilibrium will have occurred, we can then calculate from the data an upper bound for the time constant of the N hemisphere of about 15 months and that for the S hemisphere of about 30 months. The difference between hemispheres is due to the relative orientation and fraction of ocean coverage.

    Note that if the time constant was a decade, 3 months of seasonal change with 170 W/m^2 of solar change would only achieve 2.5% of the expected change of about 1.6*170 = 272 W/m^2, or about 6.7 W/m^2. The observed change on the surface is closer to 25 W/m^2. For comparison, a time constant of 1 year results in 22% of any final change occurring within 3 months. Note that this is the time constant for how fast/slow the planet heats/cools in response to forcing power.

    The time constant for how quickly atmospheric CO2 starts to affect surface temperatures is seconds.

    The time constant for how quickly evolutionary biology adapts to changing temperature is on the order of centuries.
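[Editor's note: for what it's worth, the exponential arithmetic in the comment above is internally consistent, granting its own assumptions – a single first-order response with the stated gain of 1.6 and a 170 W/m^2 seasonal swing. Whether those assumptions hold is exactly what is disputed later in this thread.]

```python
import math

# Check of the stated step-response numbers: fraction of the final
# change completed after t months for a first-order system with
# time constant tau (all values are the commenter's, not mine).

def fraction_complete(t_months, tau_months):
    return 1.0 - math.exp(-t_months / tau_months)

expected_change = 1.6 * 170.0  # ~272 W/m^2 if fully equilibrated

# tau = 10 years: ~2.5% of the change in 3 months (~6.7 W/m^2)
f_decade = fraction_complete(3.0, 120.0)
print(f"tau=10 yr: {f_decade:.1%} -> {f_decade * expected_change:.1f} W/m^2")

# tau = 1 year: ~22% of the change in 3 months
f_year = fraction_complete(3.0, 12.0)
print(f"tau=1 yr:  {f_year:.1%}")
```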

    • >> The time constant can be determined from the magnitude of the solar change, the magnitude of the surface change, and the expected magnitude when the average gain of 1.6 is applied instantaneously to the changing input.

      First, thank goodness you did not imply lag is used in the calculations.

      Second, you can’t test 2xCO2 or capture it from data because that doesn’t exist today and hasn’t in the past. You can’t use 1.6 on 2xCO2. OR… dare I ask, what analysis did you do to relate 1.6 to 2xCO2?

      >> Knowing that (1-e^-(t/tau)) is the step response

      What analysis leads you to that conclusion? What makes you think the earth system is described entirely by a relatively simple linear differential equation?

      The rest of your comment assumes this as well.

  127. Hi RW,

    Since you’ve been so polite (probably more so than me), and since I think I understand what is being plotted, now, I’ll take one last stab to answer you about George’s graph of surface gain vs. temperature.

    As I understand it, each of the little grey points is for a small area, 3 mo. average. So any gains that they represent are pretty instantaneous (especially since there is no lag time included). The green and blue points are the hemispheric averages.

    Now, instead of just looking at the trend in the averages, let’s look at the spread of the points. If we look at the bottom, where the biggest concentration of points is at all temperatures, the gain starts out around 1.3 and goes up with temperature to about 1.6. This is consistent with the fact that we are talking about effectively instantaneous temperature changes, so it is pretty well approximated by the S-B response, and increasing gain with higher temperatures.

    The question now becomes, why does the average gain go down with temperature? The average at lower temperatures (right around or below the freezing point) is higher because the little grey dots spread WAY out. Why would that be?

    My bet is that it has something to do with freeze-thaw cycles and albedo. At these low temperatures, some places will be covered in ice, and others won’t. The ones that are not covered (or are always covered, no matter what) won’t have a big change in gain when it heats up a little. Other places might have a drastic change in albedo when it heats up.

    This is only a stab in the dark, but here are the advantages of my explanation:

    1. It explains the odd distribution of data points, especially around the freezing point, rather than just the averages.

    2. It is consistent with the near-instantaneous nature of the data, which would require an S-B type response.

    3. It is consistent with a higher equilibrium response, required by the paleoclimate data.

    4. It is consistent with a decades-long temperature lag, which is required by the known depth of the ocean mixed layer.

    • Barry,

      My understanding of why the grey points’ range of variation increases with latitude is because of increased seasonal variability with latitude. Because they are 3 month averages, that means averages from Summer, Fall, Winter and Spring – where the ratio of emitted surface power to incident solar power changes significantly. It would be interesting to know whether the ratio is higher or lower in winter or summer. If it’s consistent with what’s been claimed here, I would guess the ratio is lower in summer and higher in winter.

      Also, George can correct me if I’m wrong, but I think the primary reason the gain increases as the temperature decreases is because energy in the system is largely distributed from the tropics to the poles and not the other way around.

      • That and warmer temperatures are associated with increased evaporation from the oceans, which removes more energy from the surface as the latent heat of evaporation.

      • The idea here being that increased latent heat of evaporation with warming makes warmer and warmer temperatures harder and harder to sustain. Also, increased evaporation is associated with increased or denser cloud coverage, which reflects more and more of the sun’s energy, which causes more surface cooling.

      • So what you’re saying is that the gain at low latitudes (and high temperatures) would be systematically too low, and the gain at high latitudes (and low temperatures) would be systematically too high.

        The graph is even more meaningless than I thought. The instantaneous values (rather than monthly averages) might actually be easier to interpret.

        Let’s just mercifully end this conversation now, please. Last comments today, and then I’ll cut it off.

    • BTW,

      Here is the global gain monthly plot:

      This too shows the gain out of phase with the incident energy, but a lot of this is likely the result of perihelion coinciding with maximum reflectivity.

      This is why the hemispheric gain graphs showing strong net negative feedback to changes in forcing are more significant and what really sold me.

      • The global ‘power in’ is also largely out of phase with the global gain, which is consistent with everything else.

      • I cyclically forced a simple climate model, and it turns out that the surface temperature is always out of phase with the forcing by about the same amount (a few months), no matter how deep I make the ocean (and hence change the time constant).

        Isaac Held published some work where he optimized a simple climate model to reproduce the output of a GCM. Whereas the GCM had a climate sensitivity of something like 3 °C, the simple climate model sensitivity was only 1.5 °C. And yet, they BOTH gave the same response over a century. The reason for the discrepancy was that the GCM incorporated feedbacks that operated on different time scales, while the SCM only had one time constant, based on the thermal lag.

        My point is that I just don’t think you can extract as much information as you seem to think out of the response to short-term cyclic forcing.
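[Editor's note: Barry's numerical experiment has a simple analytic counterpart. For a one-box model C·dT/dt = F0·cos(wt) − λT, the temperature lags the forcing by arctan(wτ)/w, which saturates at a quarter period – 3 months for the annual cycle – as τ grows, so the observed lag barely discriminates between time constants. A sketch under my own assumed numbers (λ = 1.25 W/m^2/K, seawater heat capacity):]

```python
import math

# Phase lag of a one-box climate model under annual sinusoidal forcing,
# as a function of ocean mixed-layer depth. Parameter values are
# illustrative assumptions, not anyone's published numbers.

LAM = 1.25                       # W/(m^2 K), assumed feedback parameter
OMEGA = 2.0 * math.pi / 12.0     # annual cycle, radians per month
SECONDS_PER_MONTH = 3.156e7 / 12.0

def phase_lag_months(depth_m: float) -> float:
    """Lag = arctan(omega * tau) / omega, with tau = rho*cp*depth/lambda."""
    tau_months = 1025.0 * 3990.0 * depth_m / LAM / SECONDS_PER_MONTH
    return math.atan(OMEGA * tau_months) / OMEGA

for depth in (5.0, 25.0, 100.0, 500.0):
    print(f"depth {depth:5.0f} m -> lag {phase_lag_months(depth):.2f} months")
```

Mixed-layer depths from a few meters to hundreds of meters all give lags between roughly 2.4 and 3 months, which is why the short-term phase lag cannot pin down the time constant.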

    • Also, what matters relative to GHG ‘forcing’ is the globally averaged long term behavior, which is precisely what is plotted in the graph (25 year average). The additional watts of GHG ‘forcing’ are just added on top of the changes in solar flux, whatever those changes may be throughout the system.

  128. I agree with all of Jose’s last points. If the short-term data George plots are cyclical on time scales much less than the time constant of the system (as required by the known depth of the mixed layer), they don’t tell us much.

    What’s more, I agree that George should definitely try to submit all this to reputable climate journals. It involves some pretty hairy spatial statistics, and must be interpreted in the context of a lot of physics. Frankly, I don’t feel like I’m in a position to criticize a lot of it (except to point out some obvious mistakes that cause me to mistrust George’s work, in general) without months of work, which I’m not in a position to give, at the moment. However, I have been around the block enough to know that there are many ways these kinds of statistics can be screwed up, so given the mistakes I’ve already seen, I’m simply not going to believe it until some real experts have combed through it.

  129. [Barry] >> I’ll take one last stab to answer you about George’s graph of surface gain vs. temperature.

    That graph can have any number of varying parameters across its data points. There are an infinite number of curves one can draw in there potentially (eg, you had mentioned lines at one point in time.. and in this comment mentioned a number of diverse scenarios/variables that might also explain certain parts of it).

    co2isnotevil is perhaps trying to pretend the Earth system is like a circuit. A circuit is designed by humans to give interesting results in a limited input context. In contrast, the Earth can have perhaps an unbounded number of significant variables. The graphs I’m seeing aren’t holding all variables constant except the two being shown. There is no rhyme or reason: many things vary within the dump of points, and all of the interesting relationships are hidden. It would take a lot of work, potentially replicating work climatologists have already done (and coming to similar conclusions), to find sensible data points to analyze in isolation, holding the most important variables constant, so as to see a relationship that might be insightful.

    Again, what he is doing isn’t “wrong.” It’s potentially an early step taken to try to gain insight, but this problem is expected to be much more difficult than analyzing a typical circuit from a market competitor. The approach doesn’t replace creating good physical models. Rather, to make good use of it, such graphs should be more finicky and “smart,” and should be used to support or weaken specific physical models.

    As concerns climate sensitivity, I don’t see what this graph you point out here could say about 2xCO2 (or even CO2 over past years).

    It would be more interesting to me to see the data in the graph limited to a decade, and then to compare graphs decade by decade (since CO2 has grown with each decade). However, we would still have many other variables (ENSO, etc.).

    PS: Maybe we can throw into that dump of points all the measurements we can come up with from Mars and Jupiter. That would really make it lively. Don’t separate out each planet (or any other variable), ever. Each 2-variable graph will include the full dump. … Really, I am not trying to be cynical, disrespectful, or discouraging.. except in retaliation to the thought that this approach and its associated premature guesses are the holy grail, that all climatologists are buffoons, that all their existing work is clearly wrong and nonsensical, etc., etc.

  130. Each little gray dot is the 1 month average for a 2.5 degree slice of latitude. Each monthly average accumulates about 240 3-hour measurements across an average of about 100 cells per 2.5 degree slice.

    The larger green and blue dots are the per 2.5 degree slice averages across all samples, spanning about 3 decades. As you can see, the 30 year average lines up right in the middle of the monthly distributions, making the monthly data completely consistent with 30 year averages, specifically relative to the extracted response. The criticism that this is short term, not long term data, can’t be substantiated.
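
    The binning George describes (monthly means per 2.5-degree latitude slice, then an all-sample mean per slice) can be sketched as below; the record layout and values are hypothetical, standing in for the ~240 3-hour measurements per cell per month:

```python
from collections import defaultdict

# Hypothetical records: (latitude_band, month, value). The numbers are made
# up purely to show the two levels of averaging in the plot.
samples = [
    (0, 0, 1.0), (0, 0, 1.2), (0, 1, 0.8),
    (1, 0, 2.0), (1, 1, 2.4), (1, 1, 2.2),
]

# Grey dots: one mean per (latitude band, month).
monthly = defaultdict(list)
for band, month, value in samples:
    monthly[(band, month)].append(value)
monthly_means = {k: sum(v) / len(v) for k, v in monthly.items()}

# Green/blue dots: the all-sample mean per band, across every month.
band_totals = defaultdict(list)
for band, _, value in samples:
    band_totals[band].append(value)
band_means = {b: sum(v) / len(v) for b, v in band_totals.items()}

print(monthly_means)  # per-month grey dots
print(band_means)     # long-term per-band averages
```

    By construction the long-term per-band average sits in the middle of that band’s monthly distribution, which is the consistency George points to; whether that settles the short-term versus long-term criticism is a separate question.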

    There are 2 reasons why the gain increases at lower temperatures. First is the T^4 relationship between temperature and power which makes it incrementally more and more difficult to maintain higher and higher temperatures. Second is a net transfer of heat from the equator to the poles which inflates the gain since there is an alternate source of input power. All of the very high gain values occur at the poles during months with little or no Sun, so they really have little significance. It’s also important to note that overall behavior of the climate must be weighted by energy, that is, the poles contribute far, far less to the whole than the tropics.
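
    The T^4 point can be made quantitative. For a blackbody, emitted power is P = sigma*T^4, so the incremental warming per extra W/m^2 is dT/dP = 1/(4*sigma*T^3), which shrinks as the surface warms. A quick check with three representative temperatures:

```python
# Blackbody sensitivity dT/dP = 1/(4*sigma*T^3): the warmer the surface,
# the more extra power is needed per degree, which is one reason the
# "gain" comes out lower in the warm tropics than at the cold poles.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def dT_per_Wm2(T):
    """Temperature rise (K) per extra W/m^2 for a blackbody at temperature T (K)."""
    return 1.0 / (4.0 * SIGMA * T**3)

for T in (230.0, 255.0, 288.0):  # polar winter, effective Earth, global mean surface
    print(f"T = {T:5.1f} K -> {dT_per_Wm2(T):.3f} K per W/m^2")
```

    At 288 K this gives roughly 0.18 K per W/m^2, versus about 0.36 K per W/m^2 at 230 K, so the latitudinal trend in gain has a purely radiative component before any heat transport is considered.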

    Also, the measured time constants of the 2 hemispheres are on the order of 1 year for the N hemisphere and 2 years for the S hemisphere, both of which are well within the detection range of the data. The data is absolutely unambiguous about demonstrating relatively short time constants. Once you can internalize the fact that the system time constants are far shorter than you think, then I can show you how and why this is the case.

    This is how science is supposed to work. First understand what the data is telling you, and then you can determine why the data is telling you this. CAGW presupposes so many things that simply aren’t true, which makes it impossible to reconcile the data within the context of CAGW. I’m far more inclined to believe the data than the word of an IPCC bureaucrat or ‘mainstream climate scientist’, both of whom have a vested interest in ensuring that CAGW is not invalidated.

    • Oh, so the grey dots are 1-month averages (not 3 months). I assume the range of grey dots widens with latitude because seasonal variability grows larger at higher latitudes, correct?

  131. Jose,

    It’s not necessary to have 2xCO2 data to extrapolate what 2xCO2 does. The IPCC claims it causes 3.7 W/m^2 of incremental forcing. My analysis shows how the climate responds to 3.7 W/m^2 of forcing, independent of its source. 3.7 W/m^2 of post-albedo solar power has a measured, post-feedback effect on the surface temperature of about 1.1 °C.

    If you wish to continue to insist that CAGW is valid, you must show how and why a W/m^2 of forcing from incremental CO2 is 3–4 times more powerful at warming the surface than an incremental W/m^2 of post-albedo power from the Sun. As far as I’m concerned, this is the definitive falsification test of CAGW.

    The top level differential equation for the climate system is,

    Pi = Po + dE/dt,

    Pi is the power coming from the Sun, Po is the power reflected and radiated by the planet, and dE/dt is the sensible heat, or the power flux in and out of the planet’s thermal mass. When dE/dt is positive, the planet warms, and when it’s negative, the planet cools. E is the total energy stored in the planet’s thermal mass. Note that this equation is always true, both instantaneously and in the aggregate. In the steady state, dE/dt = 0 when integrated over one period of the stimulus.

    dE/dt is linearly proportional to dT/dt, where T is the temperature of the planet’s thermal mass (approximately the surface temperature). The reflective component of Po is proportional to the albedo, and the radiated component is proportional to T^4, cloud coverage, and cloud temperatures, which are linearly proportional to surface temperatures. This can be cast into the same kind of LTI system as an RC circuit. Look at section 4.2 here to see how thermal time constants work and how they are related to RC time constants.

    http://en.wikipedia.org/wiki/Time_constant

    You should look here as well. Laplace transforms allow us to solve these kinds of systems in a general way.

    http://en.wikipedia.org/wiki/LTI_system_theory

    This math is not specific to electric circuits, but applicable to natural and synthetic systems of all sorts.
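
    As a concrete (and purely illustrative) instance of the thermal-RC analogy, the one-box model C*dT/dt = F - lam*T can be stepped forward numerically and compared against the analytic 1 - e^(-t/tau) response. The parameter values below are placeholders, not the values from George's model:

```python
import math

# One-box "thermal RC" model: C*dT/dt = F - lam*T, the thermal analog of an
# RC circuit driven by a current step. Illustrative parameter values only.
C = 2.0e8     # heat capacity, J m^-2 K^-1 (roughly a ~50 m ocean mixed layer)
lam = 2.0     # net feedback parameter, W m^-2 K^-1
tau = C / lam # time constant, seconds

F = 3.7       # step forcing, W/m^2 (the canonical 2xCO2 figure)
dt = 3600.0   # 1-hour time step
T = 0.0       # temperature anomaly, K

t, steps = 0.0, int(5 * tau / dt)  # integrate out to 5 time constants
for _ in range(steps):
    T += dt * (F - lam * T) / C    # forward-Euler update
    t += dt

T_eq = F / lam                     # equilibrium response, 1.85 K here
analytic = T_eq * (1 - math.exp(-t / tau))
print(f"tau = {tau/86400:.0f} days, T(5*tau) = {T:.3f} K (analytic {analytic:.3f} K)")
```

    With these placeholder numbers tau works out to a bit over three years, and the Euler integration lands on the analytic exponential, which is exactly the behavior the RC analogy predicts for a single time constant; the dispute in this thread is over whether one time constant is enough.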

  132. >> First is the T^4 relationship between temperature and power which makes it incrementally more and more difficult to maintain higher and higher temperatures.

    Generally true (and it draws to my attention that one hypothetical example I used before you joined the conversation was likely very unrealistic to first order, although I used it to argue a different point related to graphs).

    >> Second is a net transfer of heat from the equator to the poles which inflates the gain since there is an alternate source of input power.

    Again, this seems correct.

    >> All of the very high gain values occur at the poles during months with little or no Sun, so they really have little significance.

    I think you mean little significance in terms of contributing overall flux to the planet; however, it might be very significant in a negative sense to have our planetary cold regions warm up too much and, among other concerns, lose so much capacity to regulate heat through ice melts/freezes.

    >> the measured time constants of the 2 hemispheres are on the order of 1 year for the N hemisphere and 2 years for the S hemisphere, both of which are well within the detection range of the data.

    We part ways here, as you seem to be using a very simple model that ignores effects that occur on much longer time constants. The seasonal solar flux drives most of the changes we see in any given location year round, but not the very important accumulated effects that build up over much longer periods of time. [I.e., daily weather traits, and also the drift of operating temperature and its other effects on the environment.]

    I will continue to be wary of any changes to our environment that are fast as judged by societies (e.g., disruption of water and food sources and of established infrastructure) and as judged by the process of species evolution (including what we lose when many species disappear, among them checks on the proliferation of certain other species).

    >> CAGW presupposes so many things that simply aren’t true which makes it impossible to reconcile the data within the context of CAGW.

    Ignoring that errors almost surely exist, a greater flaw than getting some of the complex details wrong is assuming too much simplicity and ignoring necessary complexity. The main flaw of simple models is that they are inaccurate away from the limited data set used to fit their parameters. The IPCC is ahead of the game relative to those who insist there is little complexity to the Earth system.

    >> I’m far more inclined to believe the data

    You are ignoring a lot of data.

    >> It’s not necessary to have 2XCO2 data to extrapolate what 2xCO2 does. The IPCC claims it causes 3.7 W/m^2 of incremental forcing, My analysis shows how the climate responds to 3.7 W/m^2 of forcing, independent of it’s source.

    Those claims are interpreted with respect to their models. Their models include CO2 levels that don’t exist in the data you are analyzing. You can’t test their “claims” by adding their forcing values to different models. [This is my interpretation of what is going on here.]

    >> If you wish to continue and insist that CAGW is valid, you must show how and why W/m^2 of forcing from incremental CO2 are 3-4 times more powerful at warming the surface than incremental W/m^2 of post albedo power from the Sun.

    There are different equations being used. An effect seen through some equations is not seen through others. [Assuming I knew the details,] I can’t show you with simple equations. And I can’t show you with data that doesn’t include some of the necessary parameters (like higher CO2 than we have had in the past).

    >> The top level differential equation for the climate system is,
    >> Pi = Po + dE/dt,
    >> dE/dt is the sensible heat

    I had to look up sensible heat. Why do you assume everything beyond Po is sensible heat? Wouldn’t ice melting, to pick a major example, be latent rather than sensible heat? [And ice plays a greater role than simply adjusting the albedo.]

    Of course, this equation doesn’t yet have temperature variables, which is what most of this discussion is about and why I criticized “1-e^-(t/tau)”.

    >> dE/dt is linearly proportional to dT/dt, where T is the temperature of the planet’s thermal mass (approximately the surface temperature).

    As you ignore some possibilities for E, approximate its relationship to T, and simplify how that T is distributed throughout the planet, you end up with assumed error bars of at least a few degrees here or there. But this means you can’t really argue against currently accepted climate models, because they likely all fall within these assumed error bars.

    Alternatively, if you fit these simple curves to a limited data set, then you will likely be way off when we test them on data further removed in time.

    >> This can be cast into the same kind of LTI system as an RC circuit.

    Only in a crude approximate sense where a “few” degrees here or there doesn’t matter, and only in the sense where we will avoid driving or using the circuit (but not the planet) with values that deviate from our analysis.

    >> This math is not specific to electric circuits, but applicable to natural and synthetic systems of all sorts.

    To first order approximation, but the Earth is a huge system relative to humans. We humans have to care about what amounts to “noise” in this simplified analysis of the Earth system. We care about “small” deviations in operating temperature.

    Our bodies, for example, can’t just change their 98.6 °F reading. We won’t live the same lifestyle if all of us have to live as desert nomads.

    • > >> Second is a net transfer of heat from the equator to the poles which inflates the gain since there is an alternate source of input power.

      > Again, this seems correct.

      I want to qualify this.

      If we look at the Earth as one system, then conduction would cause the whole body to radiate at a similar temperature (as a first-order guess). There should be no ice at the poles. Essentially, in this simple model, the poles should be warmer than they are. This is the opposite of what I agreed to above. In essence, this argument is that the poles, as part of the Earth, should radiate about as much heat as the rest of the Earth.

      However, I took your comment to be a point about how the direct flux on the poles is less than on other parts of the Earth. So I then took the view that if these ice-covered regions were a distinct entity separate from the whole, then black-body considerations would dictate an even lower temperature (I’m going to assume so, in lieu of plugging values into a calculator and looking up constants to verify), with the regions warmed by convection or other transport from the “other” part of the Earth that gets more flux.

      An important point to take from these two views is that the Earth is a complex system.

      • Jose,

        The hype about rising sea levels is nothing but scare mongering. The Antarctic ice cap will not melt until plate tectonics moves Antarctica much closer to the equator. Its average interior elevation is thousands of meters, which by itself is enough to keep it frozen. The fact that it’s dark half the year also keeps it from melting. The average temperature in Antarctica is well below freezing, and no amount of CO2 will make enough difference. The same is true for Greenland. Only the N polar ice melts and freezes seasonally, and that’s already floating.

        You underestimate the completeness of my ‘simple model’. This model matches the satellite data exactly. It connects together surface and cloud temperatures, surface and cloud reflectivity, solar input, the fraction of cloud coverage, water vapor concentrations, and more. Moreover, unlike GCMs, which get a different answer each time, this gets the same answer each time.

        You overestimate the legitimacy of more complex GCMs. My model calculates the energy fluxes directly from thermodynamic requirements and has only one solution. A GCM has an infinite number of solutions, each exhibiting the same thermodynamic behavior. It’s the difference between simulating an engine from thermodynamic considerations, and simulating the combustion process at the atomic level while hoping that the correct thermodynamics emerges. The latter is possible, but requires absolutely accurate knowledge of all conditions and extreme computational complexity.

        Globally, the effects of ENSO’s etc. are strikingly small. When it gets hot in one place, it gets cold in another. I’m not trying to predict the distribution of hot and cold, just how much hot and cold there is.

        We also really don’t care about small changes in temperature. As a species we’ve easily adapted to seasonal change orders of magnitude larger than the change predicted by the wildest claims of alarmists. We have also survived ice ages and interglacial periods even warmer than the current one.

      • >> You underestimate the completeness of my ‘simple model’. This model matches the satellite data exactly

        “Exactly” to me is that all data points match exactly; however, for science, we need to match within error bars. What are the error bars to your “model”?

        If you just plot points, that gives no predictive power.

        If you use simple curves, you will likely be far off, or will be curve fitting and likely fail a reality check when testing years away from that data. A model that only agrees with the satellite record over your own lifespan is not a useful model. That is not useful science.

        Finally, satellite data is not all the data that matters. It may not even be the most reliable data, and certainly by itself is no match for observations made by many in independent ways.

      • There is power in utilizing the greatest tool ever created for processing data (the computer). Before it, people limited themselves largely to simple linear and explicitly solvable models whenever possible.

        There is power in relying on simple indeterminate models that are closer to representing the many unknowns we have and then seeing order arise from that variability. In a sense, this helps build somewhat more independent confirmation and doesn’t attempt to reduce everything to simple models ahead of time. If nothing else, you need to “prove” to others these claims you make about the wonders of your model. Until you go open with the details and entice experts to look, you won’t get too far, I suspect.

      • As for humans’ ability to adapt, there are several concerns not covered by simple temperature differences. [And refined models might eventually conclude worse scenarios.]

        One is quality of life, eg, for nearby future generations. This is an important issue whenever we “go too fast”.

        A “specific” example is that we depend on a large ecosystem of species of all sorts, and any drastic changes can lead to significant problems for many years. Just as destroying all evidence of works created by man would truly send us back to the cave, destroying many evolved species can leave us with many problems we are unable to solve for a while (e.g., maybe we can’t get proper nutrients, or we lose lots of vegetation and important nutrient sources to pathogen proliferation). We might end up with too many unchecked diseases, etc.

        And rising water levels are no joke. They hurt the economy and many people a lot, and can lead to significant upheaval and the abandonment of wise investments as people go nuts and become susceptible in large numbers to radical revolutions hurtful to our future.

        Humility is valuable.

      • BTW, note that we are good at surviving cold, while many other, simpler organisms don’t do as good a job. More heat creates more competition, as simpler species can survive for longer durations, and this creates a different set of rules.

        I don’t think global warming is our only potential threat, of course.

      • Jose,

        I have no problem using lots of computers and do so on a daily basis and to an extent beyond the comprehension of most.

        The point is that GCMs have dozens of dials and arbitrary curve-fit constants. My model has none; in fact, it’s overconstrained, with more equations than unknowns, considering that most of its variables are trivial functions of direct satellite measurements and are no longer unknowns.

        As a result, GCM’s can be tuned to exhibit any behavior you want to see. My model only behaves one way and the way it behaves is the same as the way that aggregate satellite measurements behave.

        You can throw as many computers at a problem as you want, and all it does is solve the problem a little quicker. Garbage in, garbage out still applies, and in the case of GCMs, all that happens is the model spews out garbage at a faster rate. Unless each and every one of the hundreds of tweakable parameters in a typical GCM is exactly right, you can have no confidence in the veracity of its results, no matter how many times you run the model with different initial conditions.

        Again, I wouldn’t worry about rising sea levels. It ain’t happening. Even the IPCC has backed off from these silly claims. Subsidence is a far larger problem, relative to low lying cities. If you want to live at or below sea level, you must do so with your eyes wide open. Hurricanes, tsunamis and even the run of the mill storm have always been a bigger problem than a few mm of estimated sea level rise.

        We are just past the peak of the current interglacial. Pretty much all of the ice that can melt has already melted. It’s not like ice can melt at the same rate as it was coming out of the last ice age, when there was a whole lot more of it to melt. In the summer time, most of the N hemisphere ice/snow already melts and the lack of Sun in the winter means it will definitely be coming back year after year.

      • >> The point is that GCM’s have dozens of dials and arbitrary curve fit constants. My model has none

        We have a very complex Earth system. I think that as one tries to include more physics modeling and to match more and more values farther and farther away in time, one has no choice but to add dials. Also, those models make predictions for numerous variables.

        Show me a formula that you think expresses temperature in a simple way. I’ll take that formula and put in the years -10,000 BC, 0, 1000, 1600, and 1900, and see what I get. If you can’t give me such a formula, or if it can’t do a good job handling those dates, then I think it is silly to mock the models that do give sane values for such predictions.

        >> in fact, it’s over constrained with more equations than unknowns

        That is true for pretty much any climate model. The satellite data by itself provides this overconstraining, unless you have a model with more variables than there are such data points. The idea of curve fitting wouldn’t really exist except that we are always overconstrained.

        >> As a result, GCM’s can be tuned to exhibit any behavior you want to see.

        Right, and they are tuned to come close to matching the climate average values we have observed and believe to have existed over many decades, centuries, and “eons”.

        They don’t suffer from this, http://arthur.shumwaysmith.com/life/content/roy_spencers_six_trillion_degree_warming “Roy Spencer’s six trillion degree warming”, exactly because they recognize the Earth is complex and requires a lot of knobs to try and “match” real values.

        >> My model only behaves one way and the way it behaves is the same as the way that aggregate satellite measurements behave.

        Right, just like Spencer’s paper and what I have been guessing at. Did you not read this 3 part article Barry wrote?

        >> You can throw as many computers at a problem as you want and all it does is solve the problem a little quicker.

        I’ll assume you were using a figure of speech by saying “little”.

        I mention the (super)computer because it allows you to do calculations within seconds that 1000 humans working for years could not achieve.

        >> in the case of GCM’s, all that happens is the model spews out garbage at a faster rate.

        That is your opinion, but the public data doesn’t seem to support your view.

        Again, can you point to your precise equations or source code and the results produced for average global surface temperature when I test the following years -10,000 BC, 0, 1000, 1600, and 1900?

        You talk big, but I don’t see you producing anything.

        >> Unless each and every one of the hundreds of tweakable parameters in a typical GCM is exactly right, you can have no confidence in the veracity of it’s results, no matter how many times you run the model with different initial conditions.

        We can have lots of confidence because we are constraining the system to approach measured and sane results.

        The Earth is complex; it’s too complex to measure every point or to conduct the actual experiments. You haven’t measured the temperature at every inch in our atmosphere and square inch on the land and water surface. You haven’t even measured 0.00000001% of those values. How can you have confidence if you haven’t done that?

        Fact is it all comes down to the accepted models approaching reality and Spencer’s and others’ broken models not doing so.

        >> Pretty much all of the ice that can melt has already melted.

        If I believed your view, the melt rate should have been coming to a standstill, but from what I have been hearing, that does not appear to be what has been happening over the last few decades.

        >> In the summer time, most of the N hemisphere ice/snow already melts and the lack of Sun in the winter means it will definitely be coming back year after year.

        And your commenter name is co2isnotevil, so we aren’t surprised to get these opinions from you, opinions that appear to contradict a lot of measured data and the consensus among the experts in the field.

        Your graphs clearly show that the colder regions are getting heated at a large rate through some means besides the sun, so why would you boil it all down to an issue of the sun?

    • >> electric circuits

      We should also keep in mind that most analog circuits have a wide range of tolerable operating temperatures and voltage/current values.

      In fact, it’s hard to control these too much. Fortunately, we realized we could have much more success in many cases by working even more precisely at the information level — digitalization — and where we are allowed much wider variations in voltages and currents.

      Our minds might largely work in the digital domain, but our bodies are still analog.

      *****

      BTW, I’m sure there are arguments suggesting a warmer earth would have some benefits in the long run (eg, after cities are reconstructed inland.. maybe after population levels go down). Personally, I have respect for evolution and recognize its slow pace. There are too many unknowns. There are also too few planets we have come across in observing the rest of the universe that we believe might be able to support life as here on Earth, so I have a great deal of humility in that regard. I’m open-minded to what life could be like under a warming environment, but we have to try and understand and anticipate as much as possible. Humans have a bad track record in that we use up the planet but don’t clean up well and leave too many things broken. At some point we might go too far.

      So, it’s better if we acknowledge the good things in existing models and resist the temptation to abandon diligent work others have done. As an extreme case, I hope we don’t return to an intellectual Dark Ages period because we fail to value the fruits and investments of others which we might not appreciate immediately.

      • >> As an extreme case, I hope we don’t return to an intellectual Dark Ages period because we fail to value the fruits and investments of others which we might not appreciate immediately.

        Since the comments are scheduled to be closed forever tonight at the strike of midnight (yeah, right), I guess there is little to lose by bringing up the Nazis.

        In jury duty yesterday (after not getting picked for a jury), I got to watch Indiana Jones and the Last Crusade. One scene takes place at one of the book burnings in Germany from the WWII era. I guess that recent experience unconsciously worked its way into my comment above.

        One of the primary advantages of humans against other species and the forces of nature is the wisdom stored away in “books”. If nature turns really sour for future generations, it would be horrible if we abandoned and destroyed our best hope to taming it.

  133. Barry,

    “I cyclically forced a simple climate model, and it turns out that the surface temperature is always out of phase with the forcing by about the same amount (a few months), no matter how deep I make the ocean (and hence change the time constant).”

    I meant 180 degrees out of phase or ‘antiphase’.

    I know the surface temperature will always be out of phase with a forcing due to thermal inertia.

    • I know. I’m just saying that the amount it’s out of phase doesn’t seem to have much to do with the time constant.

  134. Barry,

    “So what you’re saying is that the gain at low latitudes (and high temperatures) would be systematically too low, and the gain at high latitudes (and low temperatures) would be systematically too high.”

    No. That the gain is lower where the temperature is higher means that incrementally higher and higher temperatures in the system are harder and harder to sustain (i.e., they require incrementally more and more net incident power to achieve the same amount of temperature increase).

    • I wasn’t talking about the S-B response. I was talking about the fact that heat gets moved from the equator to the poles, contaminating the data for this kind of analysis. The temperature response isn’t all due to the “surface gain,” in other words.

      • Yes, but I don’t see how this changes anything. Even in a warmer world, energy will still be largely distributed from the tropics to the poles. If anything, the poles would be warmer in a warmer world, so the difference in average ‘gain’ from the tropics to the poles would become less in a warmer world – not more.

  135. Barry,

    If you will permit me, might I ask what you would consider an accurate measure of the non-linearity of the system, if not the ratio of surface emitted power (temperature) to incident solar power by latitude?

    I’m also not clear what you mean by ‘the S-B response’. The Stefan–Boltzmann law just quantifies the emitted power flux at the surface in W/m^2 as a result of the temperature of the surface, and vice versa (assuming an emissivity of 1 or very close to 1). Can you clarify?

    • By S-B I mean the “no-feedbacks” response with an essentially unchanging emissivity.

      Look, if energy is getting shifted sideways, that contaminates the data if we are trying to find a simple relationship between solar power and temperature response.

      • I’m not following this. Are you saying GHG ‘forcing’ will not be shifted the same way in the system as solar forcing?

      • My understanding is we are trying to find the relationship between incident solar power and emitted surface power (temperature) within the system, which automatically includes all the non-radiative energy transport in the system, including from the tropics to the poles.

        How could it not?

      • I’m saying that the temperature response around the poles, for instance, is not all due to the incident solar power around the poles.

  136. I do understand that, but I’m not following the significance of this relative to the non-linearity issue we have been discussing.

    That would also be true anywhere to some degree, even in the tropics, because of oceanic circulation currents moving energy around the system non-radiatively.

    • That’s what I said above.

      In any case, my only point here is that I don’t think you can pin down the kind of relationship you seem to want to pin down from a graph like that. You know going in that the spread of the data is badly contaminated, especially at high latitudes, where the response would be spread out to higher values. It looks to me like if you got rid of the spread out part, you would have a positive correlation between gain and temperature.

      I would actually be interested in seeing it if George were to make a graph of the instantaneous (3-hour) values. That would have the same thermal lag problems (or worse), but at least it couldn’t be so severely contaminated by convection.

      • “It looks to me like if you got rid of the spread out part, you would have a positive correlation between gain and temperature.”

        Really? How are you getting that?

  137. Again, might I ask what would then be an accurate measure of the direction of the non-linearity in the system?

    In other words, if the non-linearity was consistent with an increased response with temperature, what type of data or measurement would show this?

    • Data from very long time periods, i.e., paleoclimate data.

      • Anything else? What about in the current or recent climate? I don’t really consider the paleo data to be applicable to the current climate for the reasons I outlined.

        If you could come up with something specifically measurable that I could see and understand, I would be willing to question the apparent definitiveness of the referenced gain graphs in regards to the direction of the non-linearity of the system on incremental forcings.

      • Also, care to offer an explanation of why the surface ‘gain’ isn’t highest in the tropics and lowest at the poles? This is the behavior that would seem to be consistent with an increased response on incremental warming, which is why I ask.

      • >> Anything else? What about in the current or recent climate? I don’t really consider the paleo data to be applicable to the current climate for the reasons I outlined.

        Since that data is based upon a lot of hard work and forms part of the theories and physics many support, you can’t turn a blind eye towards it and hope to get very far. At a minimum, you would need to critique the supporting works in a formal manner to discount those results.

        No matter what temps you want to believe from the past centuries, you need to believe something within some reasonable range (add your own error bars), and then you need to make sure your model doesn’t come with this problem: http://arthur.shumwaysmith.com/life/content/roy_spencers_six_trillion_degree_warming “Roy Spencer’s six trillion degree warming” which I believe is a simple indication that curve fitting tightly to limited data using simple models very likely leads to nonsensical results when you extrapolate outwards from that limited data.

        If you can’t create a response that would produce reasonable temperatures for years in the past (eg, something not too unlike today’s temperatures for dates in the human era), then your model will not hold up as a whole and any controversial ideas will be less likely to grab the attention of others working in the field.

        Truly, merely “outlining” reasons that reject lots of science and effort by others is not going to carry too much weight in the wider community and among policy-makers. The odds are just too high that you are deluding yourself. Having a hunch is very cheap and doesn’t match the power of surviving the scrutiny of many others.

  138. Might I also ask what you would consider as a falsification test for the 3 C rise hypothesis?

    • Doubling CO2 and waiting to see what temperature the system equilibrates to.

      BTW, if you are using Popper’s falsificationism as your science/non-science demarcation criterion, your philosophy of science is about 50 years out of date. Read Imre Lakatos.

      • I meant, aside from doubling CO2 and waiting to see what happens?

  139. I know this isn’t scientific, but how confident are you that the 3 C rise theory – give or take a degree – is correct, in percentage terms?

    • Speaking for myself, part of the confidence level comes from the information being in the open and vetted by others. There are a lot of results, and it takes a lot of time to become comfortable with it.

      Also, an important issue is confidence in this projection relative to other projections which are based on “hidden” research or upon research that did not fare well in the peer-review process. The more these alternative theories take the formal route and have successes or failures in the peer-review process, the greater the confidence level will be in whatever survives.

      • I usually don’t go back and fix the numerous grammatical and other mistakes I notice after submitting a comment. Nevertheless:

        “projections” -> predictions

        I have no idea why I used the word “projections”.

  140. Barry,

    The phase delay is best thought of as being in degrees (or radians), where 12 months is 2PI radians or 360 degrees, making each month 30 degrees (PI/6). The range of possible lag is 0 to PI/2, or 0 to 90 degrees. If the time constant is much less than the period, the phase lag approaches 0 and the magnitude of the response approaches 100% of the equilibrium response to the change. If the time constant is much larger than the period, the phase lag approaches PI/2 and the magnitude of the response, relative to final equilibrium, approaches 0. Losses in the system will limit the maximum phase delay to less than PI/2.

    For sinusoidal stimulus, when the time constant is equal to the period/2PI, the phase lag is PI/4 (45 degrees) and the magnitude of the response is 1/sqrt(2) (sin(PI/4)) of the final equilibrium value. If the time constant doubles (or the period halves), the magnitude of the response decreases to 38% of equilibrium (sin(PI/8)) and the phase delay increases to PI/4+PI/8 (67.5 degrees), and if it doubles again, it drops to 19% (sin(PI/16)) and the phase delay increases to PI/4+PI/8+PI/16 (78.75 degrees). A system with a time constant of 1.9 months (12 months/2PI) would produce a response whose p-p change was 70.7% of the final equilibrium p-p variability with an apparent lag of 1.5 months.
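For reference, the exact single-time-constant response to a sinusoid can be sketched in a few lines of Python. Note that the exact phase lag is arctan(2*pi*tau/period), so the "add PI/8, then PI/16" doubling rule above is only an approximation (doubling the time constant from the 45-degree point gives arctan(2), about 63.4 degrees, rather than 67.5).

```python
import math

def first_order_response(tau, period):
    """Amplitude (as a fraction of the equilibrium response) and phase
    lag of a single-time-constant system driven by a sinusoid."""
    wt = 2 * math.pi * tau / period            # omega * tau
    amplitude = 1 / math.sqrt(1 + wt ** 2)     # -> 1 as tau -> 0, -> 0 as tau -> infinity
    phase_deg = math.degrees(math.atan(wt))    # -> 0 or 90 degrees at the extremes
    lag = phase_deg / 360 * period             # lag in the same units as the period
    return amplitude, phase_deg, lag

# tau = 12/(2*pi) ~ 1.9 months with an annual cycle:
# ~70.7% of the equilibrium amplitude, 45 degrees, 1.5 month lag
print(first_order_response(12 / (2 * math.pi), 12))
```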

    The measured time lag is about 2 months (60 degrees) and for the N hemisphere, the surface power is a sine wave of about 65 W/m^2 p-p, while seasonal, post albedo insolation is a sine wave of about 185 W/m^2 p-p. The expected gain is 1.6 times 250 W/m^2, or 296 W/m^2 p-p, while the apparent surface change is only 65 W/m^2 or 22% of the final equilibrium value, representing a time constant somewhat less than 4 times the radian period, or about 8 months. The Southern hemisphere has a time constant about twice as large and a phase delay only slightly longer. Note that if I used short term seasonal gain to predict the average response (one of your criticisms), the gain would only be 0.26 (65/250) and not the 1.6 (390/240) that I claim. Note that while the incremental gain is quite different per hemisphere (0.26 N, 0.14 S), the global gain of 1.6 is the same for both hemispheres..

    A point you are missing with the gain scatter diagram is that the gain converges at higher temperatures and that even if the gain transiently exceeds nominal, there is equal and opposite transient gain below nominal. Also, the reason the gains are high at the poles is because the post albedo incident power is close to zero and the post albedo incident power is the denominator of the gain equation (Ps/Pi).

    The diurnal response is somewhat different. We see the nominal phase delay when the Sun is out: insolation peaks at noon, while the hottest part of the day is about 4 PM (also 60 degrees, since 24 hours == 360 degrees!); however, the coldest part of the night is just before dawn. During the day, we see the sinusoidal response, while at night, we see the exponential decay.

    • >> For sinusoidal stimulus, when the time constant is equal to the period [blah blah blah lots of numbers ]

      You fail to give us your equations of the system. What you say only makes sense in the context of specific equations for the Earth system.

      I may assume you are using simple equations, and we’ve gone through this several times: I don’t believe those equations come close to describing the Earth system in any meaningful way.

      >> the reason the gains are high at the poles is because the post albedo incident power is close to zero and the post albedo incident power is the denominator of the gain equation (Ps/Pi).

      You present graphs without explaining the source code used or which data, specifically, it processed. This enables you to avoid close scrutiny and to keep moving the goal posts if you need to.

      Given you haven’t explained how to derive that data, I have to make assumptions.

      My assumptions are that the gain is the power as derived from S-B applied on the measured temperature at those regions divided by the incident sun power (which, I will assume, is based largely on the angle of incidence from the sun) post albedo.

      Next, I wonder what model you are using for the polar regions. Are you assuming the rest of the planet will warm that region (as obviously is the case)?

      I will assume, however, that you want to explain away convection, conduction, backscattering, and any other effect. So I will assume you are treating the polar regions as isolated from the rest of the earth.

      If those assumptions are incorrect, then explain more clearly what you are graphing and perhaps if you recognize or not that conduction, convection, and/or backscattering will warm the polar regions beyond the direct power from the sun (post albedo).

      If the definitions are as I assumed, then you are wrong when you stated that the high gain has to do with the “post albedo incident power” being small. From the definition, we are dividing by post albedo power. The S-B resulting power would at most match this net incident power. So the gain would be bounded above by 1.

      If we used the pre-albedo power as the base point, then we would end up with a smaller gain, because we would instead be dividing by a larger power (some of which would not even make it to heat the surface).

      However, the graph shows gain much greater than 1.

      Fact is that the polar regions are much warmer than would be due solely to sunlight hitting it.

      To cover some math notation.

      Po: S-B power
      Pi: incident sun flux with albedo effect
      Pii: incident sun flux without albedo effect
      gain: Po/Pi

      A gain >1 means there is some other source of heat besides the sun.

      It’s that simple.

      Ps/Pi is a red herring. That is just the ratio of the sunlight power that hits vs. what penetrates. Ps/Pi is nothing but the emissivity constant of that polar region. Gain divides by power not by emissivity constant.

      Again:

      Gain = Po/Pi where Po is from temp on surface and S-B and Pi is from sun (post albedo).

      Now, if you defined gain differently, then state so clearly. You are not being clear; you are not providing your detail. This is very convenient for avoiding peer scrutiny.

      AND, if you define gain as Po/Pii, aka, Po/Ps, then that would result in a smaller value of the gain (than 1), not larger values.

      To repeat the reason why the gain is large: It has nothing to do with albedo since that should reduce *both* Po and Pi. The gain cannot be greater than 1 using S-B analysis unless there is some other source of “incident” power besides the power making its way to the polar regions from the sun. These other source would be one or more of convection, conduction, and backscattering radiation.

      Put differently. If we consider the polar region to be independent from the rest of the planet (eg, so that conduction, convection, and radiation have no effect), then the gain based on post albedo should be 1 exactly and the gain based on pre albedo would be much less than 1.
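To make that bound concrete, here is a minimal Python sketch. The 250 K and 50 W/m^2 numbers are illustrative values I chose, not measured data: for an isolated region the gain as defined above cannot exceed 1, so a gain well above 1 implies power arriving from somewhere other than the direct sunlight.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def gain(temp_k, incident_post_albedo):
    """Po/Pi as defined above: S-B surface emission (emissivity ~1)
    divided by the post-albedo incident solar power."""
    Po = SIGMA * temp_k ** 4
    return Po / incident_post_albedo

# Hypothetical polar-average numbers: a 250 K surface receiving only
# 50 W/m^2 of absorbed sunlight shows a gain of ~4.4, far above 1,
# so the extra power must arrive by convection, ocean currents,
# or backscattered radiation from the rest of the planet.
print(gain(250.0, 50.0))
```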

      • >> Ps/Pi is a red herring. That is just the ratio of the sunlight power that hits vs. what penetrates. Ps/Pi is nothing but the emissivity constant of that polar region. Gain divides by power not by emissivity constant.

        My bad. I went back and reread what you said. Ps is what you call the power as calculated by the S-B equation, I think. Also, you did say the gain was Ps/Pi. I goofed and imagined you wanted to divide by this ratio.

        Anyway, so this agrees with my definition of gain. That is good. This means I still have the assumption that you might want to treat the poles as isolated.

        Which would mean that the gain should be bounded above by 1. The gain being higher means that there are other sources of power besides this sunlight.

        Perhaps you agree with all of this, but the explanation I read gave me the impression you might be trying to write off that power is surely getting into the polar regions besides from the direct sunlight.

        • Jose,

          Read http://www.palisad.com/co2/eb/eb.html for more info. Thus far, all I’ve presented is direct from the ISCCP data set and variables derived from combinations of this data. Gs is one of them and is the surface gain and defined as the power emitted by the surface divided by the power arriving into the system. It’s the efficiency of turning incident solar power into emitted surface power. It’s also equal to 1/e, where e is the emissivity of the planet. 1/1.62 = 0.62, which means that 62% of the power emitted by the surface makes it out into space and 38% is returned back to the surface. For an average surface temperature of 288K, Es = 390 W/m^2 (from SB), 62% of which is 242 W/m^2, corresponding to 255K, again from SB (the accepted equivalent temperature emitted by the planet). My value of 0.62 +/- .03 is well within the generally accepted global emissivity of the planet.

          You are right that it’s the less relevant gain value, although it is tied to the planet’s emissivity. A more relevant gain related to the quantification of feedback is the ratio between the power emitted by the surface and the power arriving AND reflected by clouds, because clouds are the control element of the system. That gain is only about 1.2.

          My model is one which corresponds to the radiative balance of the planet, integrated over time periods of 1 month. Of course, it becomes even more accurate when integrated over longer periods. I fully account for atmospheric absorption using HITRAN line based 3-d atmospheric simulations. The model is similar to the one used by Rossow to validate satellite data consistency (see the ISCCP documentation on their web site), except that I do full gridded calculations and use line based 3-d atmospheric simulations, rather than simple heuristics.

          Note that ONLY 38% of the power emitted by the surface is returned to the surface, and this includes the 2/3 of the planet covered by clouds. Less than half of this 38% comes from the atmospheric absorption by all GHG’s, and CO2 is only 1/3 of the total GHG effect. Doubling CO2 increases the total absorbed by CO2 and returned to the surface by less than 10% (do the MODTRAN simulations yourself), representing an increase in emissivity of less than 1%, for an increase in gain from 1.62 to 1.64, which is within experimental error of the currently measured value (1.62 +/- .05) and not detectable among natural variability.

          One more point is that the SB increase arising from 3.7 W/m^2 is only 0.68C, not 1.1C. The 1.1C comes from multiplying 3.7 by the gain (actually dividing by the emissivity), arriving at 6 W/m^2, which corresponds to a temperature increase of about 1.1C.
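The 0.68 C versus 1.1 C distinction in that last point follows from linearizing Stefan-Boltzmann (dP = 4*sigma*T^3 * dT). A quick sketch using the round numbers from this thread, including the 255 K emission temperature that yields the commonly quoted ~1 C no-feedback figure:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def dT_from_dP(dP, T):
    """Linearized Stefan-Boltzmann: dT = dP / (4*sigma*T^3)."""
    return dP / (4 * SIGMA * T ** 3)

print(dT_from_dP(3.7, 288.0))  # ~0.68 C: 3.7 W/m^2 applied directly at the 288 K surface
print(dT_from_dP(6.0, 288.0))  # ~1.1 C: after multiplying 3.7 by the 1.62 gain
print(dT_from_dP(3.7, 255.0))  # ~1.0 C: the same 3.7 W/m^2 at the 255 K emission temperature
```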

          There’s a small net flux from the equator to the poles, mostly in the form of precipitation. At the equator, there’s more evaporation than precipitation, while towards the poles, precipitation exceeds evaporation. The thermal gradient itself is also responsible for a net flux. However, relative to the flux coming from the Sun, both combined are relatively small, especially when averaged over a 12 month period.

          Can you see how this precipitation/evaporation gradient drives the thermohaline circulation? Excess rain/snow piles up at the poles, sinks, pushing warmer waters at the equator up to make up for the water lost by excess evaporation that would otherwise cause equatorial ocean levels to drop.

          To summarize my model, consider a simplified quantification of the power leaving the planet as Pe = Ps*(1-p) + Pc*p, where Ps is the power emitted by the surface, Pc is the power emitted by the clouds, and p is the fraction of the planet covered by clouds. The surface has a corresponding flux entering it, also equal to Ps in the steady state, and the clouds similarly have a flux entering them that is equal to the flux leaving. Lastly, Pe is constrained by the power entering the planet from the Sun that’s not being reflected (255K). Of course, the actual numerical model takes far more into account, including accurate line based 3-d atmospheric simulations of atmospheric absorption, the top 8 GHG’s, gridded water vapor and gridded topographic considerations.

          The top level equation is Pi = Po + dE/dt, as I introduced earlier. Pi is the measured power from the Sun, Po is Pe (calculated above) plus a*Pi, where a is the albedo, given by the cloud fraction weighted sum of the surface and cloud reflectivity. Every one of these variables, except dE/dt, is measured data with 3 hour samples and virtually 100% surface coverage. The resulting dE/dt corresponds linearly to the observed dT/dt (also calculated from delta Ps using SB), from which we can also extract corroborating time constants.
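Those two balance equations (Pe = Ps*(1-p) + Pc*p, and Pi = Po + dE/dt with Po = Pe + a*Pi) can be sketched directly. The numbers below are illustrative round values chosen to roughly close the budget, not the ISCCP-derived ones:

```python
def storage_rate(Pi, Ps, Pc, p, a):
    """dE/dt from the top-level balance Pi = Po + dE/dt,
    where Po = Pe + a*Pi and Pe = Ps*(1-p) + Pc*p."""
    Pe = Ps * (1 - p) + Pc * p   # cloud-fraction-weighted emission to space
    Po = Pe + a * Pi             # add the reflected (albedo) component
    return Pi - Po               # nonzero -> energy being stored or released

# Illustrative near-steady state: 342 W/m^2 incoming, albedo 0.30,
# 2/3 cloud cover, 390 W/m^2 surface emission; Pc ~ 164 W/m^2 roughly
# closes the budget (dE/dt ~ 0).
print(storage_rate(342.0, 390.0, 164.0, 2.0 / 3.0, 0.30))
```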

          BTW, my earlier discussion of the physical manifestation of time constants can be found in any undergrad EE text on circuit theory. My point was to provide a frame of reference for what time constants mean in a physical sense in the presence of sine wave (seasonal average) and step function stimulus (diurnal change).

  141. >> Generally, true (and draws to my attention that one hypothetical example I used before you joined the conversation was likely very unrealistic to first order, although, I used it to argue a different point related to graphs).

    I was referring to the comment that matches toa values to surface values.

    0 0
    50 40
    100 100
    150 …

    I forgot the details of that comment and misunderstood the gain vs temp graph. Looking at it again, I do think that example might be realistic (although with artificially chosen numbers intended to make the pattern simple to see).

    Now, here is what that graph http://www.palisad.com/co2/gf/st_ga.png says to me.

    1 — We currently (or at least on average over the last 3 decades) experience enough GHG effect or other delayed effect (eg, from oceans or somewhere else) so that effectively we have lots of power bouncing back and forth before leaving into space.

    It’s clear from that graph that we don’t just have S-B effects, or otherwise the gain would have to be eP/P (plus maybe internal Earth sources) and so would likely be less than 1. The fact that it is greater than one should kill any argument against GHG having a significant effect.

    2 — If that data were divided into years or decades, we also might notice that the gain at each temperature (or the average gain) would likely be larger with each new decade to match the increasing CO2 during those decades.

    3 — The overall average value could be 1.6, as can be calculated through other means. The points towards the higher temperatures are involved with greater fractions of the power, so they would be weighted more heavily than the low temperature ones.

  142. [RW] >> Also, care to offer an explanation of why the surface ‘gain’ isn’t highest in the tropics and lowest at the poles? This is the behavior that would seem to be consistent with an increased response on incremental warming, which is why I ask.

    Wrong, that behavior you stated would be consistent with “increased response on incremental warming” would NOT be consistent with very much. Whether we are cooling or heating up (except maybe at extreme cases where the temp gradient is very small), convection, backscattering, and other effects would draw heat into the polar regions from the warmer tropics.

    If we are warming up, then we have to look at time slices and this should show, when we look at numerous decades to factor out cyclical effects, that the 1.6 value would be going up. If we look at data from a century ago, we should have less than 1.6 (if I understand correctly). And in the decades to come, this 1.6 should go up.

    Looking at all the data points together on the same graph erases this effect.

  143. Barry, thanks for letting me comment still (early Wednesday morning). I almost feel like I got in a few low blows after the bell had rung. I wonder if more comments will be forthcoming.

    Also, I will read the wikipedia page to try and get a better bearing on these definitions, but let me quote now from here http://en.wikipedia.org/wiki/Climate_sensitivity

    ***
    CO2 climate sensitivity has a component directly due to radiative forcing by CO2 (or any other change in Earth’s radiative balance), and a further contribution arising from feedbacks, positive and negative. “Without any feedbacks, a doubling of CO2 (which amounts to a forcing of 3.7 W/m2) would result in 1 °C global warming, which is easy to calculate and is undisputed. The remaining uncertainty is due entirely to feedbacks in the system, namely, the water vapor feedback, the ice-albedo feedback, the cloud feedback, and the lapse rate feedback”;[6] addition of these feedbacks leads to a value of approximately 3 °C ± 1.5 °C.
    ***

    [I consider that above quote fair use; otherwise, the license would be CC-by-sa I think, which is a copyleft/share-alike license, meaning that potentially the quote should be removed from this page unless the whole page would be similarly licensed. IANAL, so don’t take this remark too seriously. Also, I have no problem with all of my comments being licensed cc-by-sa.]

  144. co2isnotevil,

    If I don’t accept your simplified model, then I am not going to accept all of these other conclusions you come to (eg, lag/time constant relationships) which are being argued based on such a model.

    And I am not going to accept your simple model if you can’t give me a specific program, formula, equations, etc, so that I can unambiguously ask for the average temp for the years, 10000BC, 0, 1700, and 1900 (among others) to see the resulting answer given by your model.

    If I can’t do the above, you are asking for me to believe you by faith without any way to verify your work and check for errors or even if the results make any sense.

    Your views (and mine) on paleoclimatic data, backscattering, CO2, time constants, etc, are immaterial to the above. The fact that you claim your model is one with the satellite data adds a nice touch but is immaterial as well.

    Show me the money.

  145. co2isnotevil, hopefully you will at some point develop the details sufficiently, get results you like, and submit your source code to peer-review. As Barry commented on Spencer (paraphrased): it’s good to have a hunch and try to see where it leads you. In the end, anyone who wants to make a convincing argument will come clean with the details of their research and analysis and present things in a way that will make it realistic that experts in the field will take the time to study the claims.

  146. I have a much better plan for introducing this analysis than by wasting time fighting through a horribly biased peer review process …

    • Sure, write a book or a comic strip, create a blimp and fly it around the world, compete against IBM’s Watson in a new Jeopardy like climate related competition… but honestly, you will lose credibility at the end, no matter who your primary audience is or what communications vehicle you use, if your ideas don’t hold up to critical analysis by those most able to perform it (whoever they are) and/or once predictions start failing.

      Also, a computer can be programmed to take a bunch of data and then produce some data based on a best guess heuristic programmed into it. For example, if you had lots of temp and related data for every square kilometer on the planet at numerous altitude levels for each day for many centuries (pretend it’s now the year 4000), then a good AI program might be able to do a heck of a job predicting many future values even without using any physics modelling. I expect future models will incorporate not just physical models and stochastic processes but interesting AI and many other non-exact approaches, and do a heck of a job.

  147. Jose,

    You’re not even close. BTW, I know how to use computers. Applying networks of networked computers to solve complex problems is my day job.

    If the ability to easily explore, compare, display and know the uncertainty of all kinds of scientific data and associated models became ubiquitous, you could make up your own mind. Such capabilities are inevitable, which will drive a scientific revolution, and in the end, the data will win. The best data supports an upper bound on the effect of doubling CO2 which is less than the lower bound suggested by the IPCC. I know this for a fact, because I’ve had these capabilities for many years and have examined data from many sources in more ways than you can imagine. There is only one valid conclusion.

    • >> You’re not even close

      Have no idea what you are referring to.

      >> The best data supports an upper bound on the effect of doubling CO2 which is less than the lower bound suggested by the IPCC I know this for a fact, because I’ve had these capabilities for many years and have examined data from many sources in more ways than you can imagine. There is only one valid conclusion.

      Yawn.

  148. co2isnotevil,

    “We are just past the peak of the current interglacial.”

    Out of curiosity, what data are you deriving this from? I was under the impression we were more near the end of this interglacial.

    If true, this would seem to imply we have several thousand more years before we start to plunge into the next glacial period.

    • Interglacials are relatively short, and to be past the peak means that we are close to exiting, although a thousand year timeline to exit is about right. The current interglacial period has been somewhat unusual in that it’s lasted longer than others and has not been as warm. The ice core record tells us that the peak of the last interglacial was 2-3 degrees warmer than the current one, although it was far shorter. We just happen to be at a coincidental point in time where one orbital effect is causing warming (precession of perihelion), while another is causing cooling (axial tilt), and the net is a relatively stable climate for thousands of years, albeit slightly cooler than prior interglacials. Many anthropologists attribute the rise of man to the unusually stable climate the planet has experienced during the last 12-15K years.

  149. […] an earlier model was held up in review, and then not given much attention immediately afterwards, he took it as evidence that his message was being censored and suppressed instead of any kind of issue over the paper’s validity). How many times have Dembski, Sewell, […]

  150. RW, I came across an old discussion on a different website on this topic you bring up about 3.7 W/m^2 forcing @2xCO2 turning into 16.6 (more or less).

    I remember my answer here was partly that there is no reason for you to assume linearity along the variables you mentioned.

    Let me try and give a new explanation since I have a better idea of the question today.

    I’ll give several related replies.

    1 — First you calculated 390/240 = 1.625 as an approximation. Keep in mind that these values were approximately true in some year, but this ratio can keep changing. The 2009 value would be 396/239=1.66. [1997 values give about the same]. I’ll use 1.625, however, since we are being approximate.

    If the greenhouse effect is stronger, then we’ll get a higher flux value on the ground for the same current flux value at the top of the troposphere, so we should expect this ratio to increase. Using it as a constant value applicable today as well as in say 50 years at 2xCO2 doesn’t really make sense. To repeat. This ratio will go up as the greenhouse effect gets stronger. This is related to Barry’s comment about the emissivity changing over time.

    However, below, I will pretend we can use this ratio calculated today. This means our answer for the effect 2xCO2 (without feedbacks) will create on the ground will likely undershoot the expected value.

    2 — Let’s say (for the moment) that 3.7 is the forcing for 2xCO2 with no feedbacks based on 1997’s CO2 level as 1xCO2 (rather than the 1xCO2 level used by the IPCC of around 280).

    This leads to 1.1 C, we’ll agree. We verify this: 3.7*(1.625) = 6.0 W/m^2 is the extra flux at the ground due only to CO2. From S-B, 390 W/m^2 corresponds to 288.9 K, and 390+6 = 396 W/m^2 corresponds to 290.0 K, giving the expected 1.1 C difference.

    Now, to add the IPCC +3 C: 390+16.6 = 406.6 W/m^2, which corresponds to 291.9 K.

    So we agree that the surface flux will increase by about 16.6 according to the IPCC’s +3.

    This is consistent with CO2 — by itself — leading to a gain of 6 on the ground since this merely implies that the 10.6 difference is what is attributed to the effect of all feedbacks taken together.

    See, there is no inconsistency.

    You might disagree that what is being attributed to the feedbacks (the approx 10.6) is accurate, but that is a different question than what you posed. It seems you wanted to attribute the full 16.6 to CO2 even though the IPCC doesn’t make that claim.

    So, your answer is that the feedbacks pick up the slack to reach the 16.6. [Also remember that we were conservative in a number of ways.]
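The arithmetic in points 1 and 2 can be checked by inverting Stefan-Boltzmann; a short sketch using the same round flux numbers:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def temp_from_flux(P):
    """Invert Stefan-Boltzmann (emissivity 1): T = (P/sigma)^(1/4)."""
    return (P / SIGMA) ** 0.25

base = temp_from_flux(390.0)                    # ~288 K: today's surface flux
co2_only = temp_from_flux(390.0 + 6.0)          # 3.7 W/m^2 * 1.625 gain added
with_feedbacks = temp_from_flux(390.0 + 16.6)   # the IPCC's ~+3 C total
print(co2_only - base)        # ~1.1 C from CO2 alone
print(with_feedbacks - base)  # ~3.0 C once the ~10.6 W/m^2 of feedbacks is added
```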

    3 — In a different website you considered cutting 4 in half (I assume by “4”, you meant the 3.7). However, that would be incorrect. Let’s see how the IPCC defines radiative forcing (RF): http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-2.html

    > The definition of RF from the TAR and earlier IPCC assessment reports is retained. Ramaswamy et al. (2001) define it as ‘the change in net (down minus up) irradiance (solar plus longwave; in W m–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values’

    So we see that this value of 3.7 is the net flux downward at the troposphere. It’s all 100% downward by definition.

    4 — The “equilibrium climate sensitivity” value is defined as the steady state if we hold the CO2 concentrations at exactly 2xCO2 once we get to 2xCO2. This means that if 2xCO2 is reached in say 2050, the temperature then is not supposed to be the +3 C IPCC value. If we could ideally hold CO2 at that 2xCO2 value for many years, then the stable value reached after a long time of waiting should be the 3C IPCC value. I think the complex climate models calculate “transient climate response” (an approximation to equilibrium climate sensitivity), which requires finding an average of a 20 year period centered at (in our example) 2050, under the further assumption that CO2 would increase during that time period at 1%/yr. In other words, to guesstimate the equilibrium climate sensitivity, the models look at 10 years before and after when CO2 concentrations are calculated to reach the 2xCO2 level, and then they average out the temperature over 20 model years assuming CO2 keeps increasing during that 20 year period. This computer calculation substitutes for holding CO2 steady “forever” in the models (since doing so in the models requires a lot more computation time than just doing 20 years).

    OK, so the point of this long confusing paragraph I just wrote is that the year we reach 2xCO2 is not when we should measure +3C. That +3C is the net effect after we reach 2xCO2 if we hold steady for a long time to allow the temperature lag to catch up to the 2xCO2 forcing and all subsequent side effects.

    With this in mind, let’s ask, is our actual atmosphere on pace to get close to +3 by the time 2xCO2 is expected? Remember, we don’t expect +3 at that time 2xCO2 is reached but expect something less than +3, if hopefully in the ballpark of +3.

    OK, for these calculations, let’s assume 1xCO2 was in 1900 (as was assumed in one of the conversations you had: http://www.skepticalscience.com/argument.php?p=2&t=483&&a=212 ). We will also use the (approximately) logarithmic dependence of forcing on concentration.

    In 1900 we had about 300 ppm CO2. In 2000 we had about 380 ppm. The change in temperature for the century was about 0.8 C (I am using numbers I think were used on that thread; we only need approximations).

    So 2xCO2 would be 600 ppm. Measured linearly, in 2000 we were (380 - 300)/(600 - 300) = 80/300 ≈ 27% of the way to doubling.

    A = ln(380/300) ≈ 0.236
    B = ln(600/300) = ln 2 ≈ 0.693
    C = A/B ≈ 34%

    Thus we expect about one-third of the final (equilibrium) warming to have been realized so far, assuming the temperature keeps pace with the forcing, which goes as ln(C_final/C_initial), i.e., assuming CO2 grows at a steady exponential rate from beginning to end so the forcing rises steadily.

    So what do we get when we divide 0.8 by 0.34 (i.e., multiply by about 3)? We get roughly 2.4. Notice how this is rather close to +3 C, pretty much as we expected (since +3 is the steady-state value reached only after holding 2xCO2 for a while).
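
    The arithmetic above can be checked in a few lines (using the same rough numbers assumed in this comment: 300 ppm in 1900, 380 ppm in 2000, 0.8 C of warming):

```python
import math

dT_observed = 0.8   # rough warming over the century, C
c_1900, c_2000, c_double = 300.0, 380.0, 600.0  # ppm, approximate

A = math.log(c_2000 / c_1900)    # forcing realized so far (log ratio)
B = math.log(c_double / c_1900)  # full-doubling forcing, = ln 2
fraction = A / B                 # fraction of the 2xCO2 forcing so far

# Warming implied at 2xCO2, if temperature tracks the forcing with no lag
implied_2x = dT_observed / fraction
print(f"A = {A:.3f}, B = {B:.3f}, fraction = {fraction:.0%}, "
      f"implied 2xCO2 warming = {implied_2x:.2f} C")
```

    This gives about 34% and roughly 2.35 C, which rounds to the 2.4 quoted above.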

    So everything makes sense. We appear to be on track to meet the IPCC +3 C estimate, but note that the implied figure might actually be less than 2.4. Let’s see why:

    The CO2 growth rate has not been a steady exponential; it has been faster than exponential (see this graphic if you are not sure what “faster than exponential” means: http://www.frc.ri.cmu.edu/~hpm/book98/fig.ch3/p060.html , or google it). If the percentage rate of growth had been constant, we would have exponential growth. If the rate of growth itself keeps increasing, we have faster-than-exponential growth. An example of a faster-than-exponential function is y = x^x; an exponential function is y = k^x for a constant k. [Note that polynomial growth, y = x^k, is slower than exponential. Don’t confuse these.]
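
    A quick way to see the distinction (my own illustration, using the example functions above): an exponential function has a constant year-over-year ratio, a faster-than-exponential one has a ratio that itself keeps growing, and a polynomial’s ratio shrinks toward 1.

```python
# Successive ratios f(x+1)/f(x) for the three growth classes (illustration).
exp_ratios = [2.0 ** (x + 1) / 2.0 ** x for x in range(1, 6)]      # k^x: constant
super_ratios = [(x + 1) ** (x + 1) / x ** x for x in range(1, 6)]  # x^x: increasing
poly_ratios = [(x + 1) ** 3 / x ** 3 for x in range(1, 6)]         # x^k: shrinking toward 1

print(exp_ratios)    # constant at 2.0
print(super_ratios)  # each ratio larger than the last
print(poly_ratios)   # each ratio smaller than the last
```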

    This faster-than-exponential growth (which is obviously unsustainable in the long term) means the temperature has been lagging the CO2 concentration even more than it would if CO2 were growing at a “merely” steady exponential rate. I don’t know what the temperature response time constant is, but overall I think it is safe to say that our current trajectory, implying (very approximately) a +2.4 C gain by 2xCO2, is rather close to the IPCC estimate of +3 C steady state at 2xCO2. If the faster-than-exponential growth continues, we may come in below that +2.4 mark when 2xCO2 is hit, because the temperature will be lagging even more than it lagged (at 0.8 C) this past century.

    Of course, there are many variables. For example, we don’t understand clouds as well as we would like, and there are other pollutant sources we can’t anticipate.

    As an aside, the other important related question is the timing of when 2xCO2 is reached. This depends on many factors. The growth of CO2 has been faster than exponential (e.g., the last 50 years saw greater percentage growth than the 50-year period before that). We also don’t know about volcanoes and various other factors that could end up cooling.

    Overall, we might be setting a path for a 4 or 5 C gain some time in the next century, even if we stop speeding up CO2 growth within a few decades, hold the growth at a steady exponential for a few more decades, and then slow it to below linear and eventually to a flat constant level sometime in the next century. Even ignoring the effects of +4 or 5 C, we have to keep dealing with fresh water shortages arising from population growth. Add the melting glaciers and continued population explosion and things are looking ugly indeed. And don’t forget that less ice means the oceans absorb heat faster, so air temperatures will rise faster, and the absorbed solar energy will increase because the albedo will go down (less reflective surface). We also have rising sea level.

  151. […] https://bbickmore.wordpress.com/2011/02/25/roy-spencers-great-blunder-part-1/ […]

  152. […] denier Roy Spencer has recently been demonstrated to contain large scientific errors (see here, here, here, here, and here, here, and here), while a recent paper by Texas A&M professor […]

