Posted by: Barry Bickmore | July 26, 2011

Just Put the Model Down, Roy

For the past few years, Roy Spencer has had a love affair, of sorts, with “simple climate models”.  After all, who needs some fancy-schmancy general circulation model (GCM) when you can boil down the main features (energy in and energy out) to a simple “1-box” or “zero-dimensional” model that you can run on a spreadsheet?

Spencer wasn’t the first one to use such a model, and every modeler knows that it is usually a good idea to use the simplest model you can get away with to represent complex physical processes.  The key here is to recognize that the simpler the model, the more phenomena are glossed over, so simpler models are only going to be good for particular, specialized purposes.

In this case, Spencer wants to use simple climate models to estimate equilibrium climate sensitivity for a doubling of CO2.  Let’s look back and see how he’s done with that so far.

The Model

The basic model is shown in Eqn. 1 below.

Equation 1:  d(∆T)/dt = (Forcing – Feedback)/Cp

Here, ∆T is the difference between the temperature at time t and the temperature at equilibrium.  (That is, ∆T is the “temperature anomaly” with respect to equilibrium.)  Cp is the total heat capacity of a column of ocean water 1 m^2 on top and h meters deep.  The Forcing term tells us the rate at which extra energy is coming in, while the Feedback term tells us how the climate system responds to the push, by either enhancing the forcing or hitting the brakes.  Another way of putting it is that (Forcing – Feedback) gives you the net rate at which energy accumulates in the ocean, and Cp controls how quickly the ocean temperature can change in response.

The Feedback term in Eqn. 1 can be broken down as in Eqn. 2 below.  This means that the hotter the ocean becomes, the more energy it radiates back into space, and the alpha term determines the degree to which this is the case.  The alpha term also determines the equilibrium climate sensitivity.

Equation 2:  Feedback = alpha * ∆T

The Forcing term can be divided up into contributions from different sources–changes in solar output, greenhouse gases, aerosols, and so on–or lumped into one.  In some versions of his model, Spencer uses the GISS forcing history, where they are all lumped together.  In others, he multiplies the index for some natural mode of climate variability (like ENSO or PDO) by a scaling factor to obtain a hypothetical forcing, as in Eqn. 3, where beta is a scaling factor and Vi is the natural variability index being used.  In still other incarnations, he combines the GISS forcing with the “internal” forcing provided by Eqn. 3.

Equation 3:  Forcing = beta * Vi
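To make this concrete, here is a minimal sketch of Eqns. 1-3 as a forward-Euler time-stepping loop.  This is my own illustration in Python, not Spencer’s actual code (he worked in a spreadsheet), and the index and parameter values are placeholders, not real data.

```python
import numpy as np

def run_one_box(forcing, dt, alpha, h):
    """Integrate Eqn. 1, d(dT)/dt = (Forcing - alpha*dT)/Cp, with forward Euler.

    forcing : net forcing (W/m^2) at each time step
    dt      : time step (s)
    alpha   : feedback parameter (W/m^2/K); 2xCO2 sensitivity is roughly 3.7/alpha (K)
    h       : ocean mixed-layer depth (m)
    """
    cp = 4.18e6 * h                    # heat capacity of a 1 m^2 x h m water column (J/K)
    dT = np.zeros(len(forcing) + 1)    # temperature anomaly, starting at equilibrium
    for i, f in enumerate(forcing):
        dT[i + 1] = dT[i] + dt * (f - alpha * dT[i]) / cp   # Eqns. 1-2
    return dT

# Eqn. 3: a hypothetical "internal" forcing built from a variability index.
index = np.sin(np.linspace(0, 12, 1200))   # placeholder index, NOT real PDO data
beta = 1.5                                 # scaling factor (W/m^2 per index unit)
anomaly = run_one_box(beta * index, dt=30 * 86400, alpha=3.0, h=100.0)
```

Driving the same loop with the GISS forcing history instead of beta times an index gives the other variants Spencer describes.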

Finally, in the latest versions of his model, Spencer has begun adding more layers to the ocean.  Whereas the original version only had one homogeneous ocean layer of depth h, the latest ones have 30-40 ocean layers, each 50 m deep.  Eqn. 1 governs the net energy input into the top layer, but Spencer also adds a “diffusion” term so heat can escape into the next layer down.  The second through 30th layers all have diffusion terms for heat coming in the top and heat going out the bottom.  Eqn. 4 shows what one of these diffusion terms looks like for heat going out the bottom of a layer, where ∆T is the temperature anomaly of the layer in question, ∆Tnl is the temperature anomaly of the next layer down, D is a diffusion coefficient, and Cp is the heat capacity of a 1 x 1 x 50 m column of water.

Equation 4:  -d(∆T)/dt = D * (∆T – ∆Tnl)/Cp
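Again purely as an illustration (my reconstruction of the scheme just described, not Spencer’s spreadsheet; a single diffusion coefficient D is used here for brevity, whereas Spencer allowed one per layer interface):

```python
import numpy as np

def run_layered(forcing, dt, alpha, D, n_layers=30, layer_depth=50.0):
    """Top layer obeys Eqns. 1-2; Eqn. 4 moves heat between adjacent layers."""
    cp = 4.18e6 * layer_depth    # heat capacity of a 1 x 1 x 50 m water column (J/K)
    dT = np.zeros(n_layers)      # all layers start at zero anomaly, as Spencer set it up
    history = []
    for f in forcing:
        down = D * (dT[:-1] - dT[1:])    # Eqn. 4: flux out the bottom of each layer
        dTdt = np.zeros(n_layers)
        dTdt[0] = (f - alpha * dT[0] - down[0]) / cp     # Eqns. 1-2 plus Eqn. 4
        dTdt[1:-1] = (down[:-1] - down[1:]) / cp         # in the top, out the bottom
        dTdt[-1] = down[-1] / cp                         # bottom layer only receives heat
        dT = dT + dt * dTdt
        history.append(dT.copy())
    return np.array(history)     # one row of layer anomalies per time step
```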

It’s the PDO!

In his first attempt, Spencer asked, what if it isn’t human greenhouse emissions that have been driving recent climate change, but rather natural, chaotic oscillations?  Spencer thinks that one such oscillation, the Pacific Decadal Oscillation (PDO), may have been the culprit.  He fit his simple climate model (Eqns. 1-3 with only one ocean layer) to temperature data for the 20th century, and found that he could explain most of the observed warming!  He tried to publish these results in a climate journal, but (he says) biased reviewers and editors maliciously quashed the manuscript.  So instead, he decided to take his message directly to the people by publishing this work in his book, The Great Global Warming Blunder.

I wrote about this modeling effort in Part 3 of my recent review of Spencer’s book.  I even went to the trouble of programming his model into MATLAB and fitting the parameters using least-squares regression.  I found that some of the parameters in the model were perfectly covariant, so that there were an infinite number of “best-fit” solutions with climate sensitivities ranging from really low to really high.  You see, if you fool around with alpha and beta, you can control how fast energy builds up in the ocean surface layer, and if you fool around with the depth of the surface layer (h), you can control how much water has to be heated up, which affects how quickly the temperature can approach a new equilibrium.  He also had a couple other fitting parameters (for a total of five) that I discussed in the review.
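It’s easy to see why the parameters are perfectly covariant: in Eqns. 1-3, multiplying alpha, beta, and h by the same factor leaves the model output exactly unchanged, while the implied climate sensitivity (roughly 3.7/alpha for 2x CO2) goes wherever you like.  Here is a toy demonstration with made-up numbers (mine, not data from Spencer’s book):

```python
import numpy as np

dt = 30 * 86400                          # one month (s)
rng = np.random.default_rng(0)
index = rng.standard_normal(1200)        # stand-in for a monthly variability index

def one_box(alpha, beta, h):
    cp = 4.18e6 * h
    dT, out = 0.0, []
    for v in index:
        dT += dt * (beta * v - alpha * dT) / cp    # Eqns. 1-3
        out.append(dT)
    return np.array(out)

low = one_box(alpha=3.0, beta=2.0, h=100.0)        # sensitivity ~ 1.2 K per 2xCO2
high = one_box(alpha=1.0, beta=2.0/3, h=100.0/3)   # sensitivity ~ 3.7 K per 2xCO2
print(np.max(np.abs(low - high)))    # ~ 0: the two fits are indistinguishable
```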

But even though he could have gotten exactly as good a fit to the data with low or high climate sensitivity, Roy Spencer claimed his modeling provided striking evidence for low sensitivity.  The secret was that he didn’t use a normal optimization routine to get his “best fit” parameter values.  Instead, he made up a bogus statistical technique that automagically allowed him to obtain a low sensitivity (about 1.3 °C for 2x CO2).  Furthermore, by manipulating the starting value of his model temperature series, Spencer was able to make his model fit the first half of the data without much influence from the PDO.  Finally, he used wildly unphysical model parameters, e.g., a 700 m ocean mixed layer.  When you are fitting a model with 4-5 completely unconstrained parameters, after all, it’s hardly surprising if you can explain some data, but it would be foolhardy to take the fitted parameter values too seriously.

To put it bluntly, I found that this work deserved to be rejected–with prejudice–from the scientific literature.

It turned out that my book review became rather popular, and many of Roy’s blog readers were asking him for a response.  He did respond…that he wasn’t going to respond.  Why?  Because he was working on a paper for the peer-reviewed literature, so he couldn’t be bothered to respond to a mere blog critique.  I thought that was kind of funny, given that my review was about work Spencer had published in a book because he claimed the peer-review process had been corrupted, but hey, people have to prioritize.  It’s been months, however, and Spencer still hasn’t gotten around to answering my initial criticisms.  He has, ironically, had time to publish four more blog posts in which he used variants of the same simple climate model to support his claim that climate sensitivity is low.

Ocean Heat Content

In the first of these posts, Spencer drove his model with GISS forcing estimates and fit it to ocean heat content (OHC) data since 1955.  Once again, I reviewed his methods and found he had made several elementary mistakes, ALL of which drove his model climate sensitivity lower.  When I corrected these mistakes, I got a higher climate sensitivity, within the IPCC’s most probable range of 2-4.5 °C (2x CO2).  I also mentioned that it isn’t clear such a simple climate model is really suited for estimating climate sensitivity, especially when only constrained by about 50 years of data.  Isaac Held, for instance, had fit a simple climate model just like Roy’s to the 20th century output of a GCM with a climate sensitivity of 3.4 °C, but the sensitivity of the simple model (which was tuned to give the same output!!!) was only about 1.5 °C.  No response from Roy so far.

Ocean Temperature Change With Depth

Now we come to Spencer’s second and third blog posts of this type (with a follow-up on the third post here), in which he used somewhat more complicated versions of the model.  As I mentioned above, these new versions of the model are different in that they represent the ocean with 30-40 layers, each 50 m deep.  The heat flux into the top layer was determined by the same old simple climate model (Eqns. 1-2), but then for every ocean layer Spencer added another term to represent “diffusion” of heat from that layer into the next one down (Eqn. 4).  In his second blog post, he drove his model with the GISS forcings and fit the output to match a profile of temperature change with ocean depth for the last 40 years, published by the IPCC (2007, WG1, Fig. 9.15).  In his third post, he drove the model using both the GISS forcings and “internal” forcing caused by ENSO (Eqn. 3), and fit his model to the temperature evolution over 1955-present or 1880-present.  He also compared the temperature change from 1955-present in the different ocean layers to the IPCC curve.

The curve fits look pretty impressive, especially when the ENSO forcing is added in.  Even many of the little squiggles in the surface temperature data are matched quite well by the model!  What’s more, the model climate sensitivities were only around 1 °C, much lower than the IPCC estimate of 2-4.5 °C!!!  Indeed, Spencer began one of these blog posts with the following bold proclamation.

The evidence for anthropogenic global warming being a false alarm does not get much more convincing than this, folks.

But wait–remember how I criticized Spencer for the wild and crazy curve-fitting adventures he chronicled in his book?  And how he decided he wasn’t going to respond?  Well, the fact is that he’s making the same kinds of errors again, and patting himself on the back for it.  Here are several reasons why nobody should take Roy’s pronouncements of victory seriously.

Remember how, in Spencer’s original model, he could tune the beta and h parameters to get ANY CLIMATE SENSITIVITY HE WANTED, with exactly the same quality of curve fit?  That was because manipulating beta and alpha (which determines climate sensitivity) changes the net rate of energy input into the top of the surface layer, while manipulating h changes how fast the ocean heats up in response.  In his latest posts, it’s much the same story.  He can still manipulate beta and alpha to change the net input into the top of the ocean, but now to change the temperature response, he just has to change the thermal diffusion rate out the bottom!  I dinked around with a spreadsheet he provided in one of his posts, and sure enough, I could fit the data just about as well with higher climate sensitivity (within the IPCC range).

That’s a big problem, because Spencer’s entire argument is statistical in nature, but he has made no attempt to find out how sensitive his model fits are to the different parameter values.  If the model fit is about equally good with low or high climate sensitivity, after all, then the modeling exercise has given us NO INFORMATION about the relative plausibility of either scenario.  It does not count as evidence for ANYTHING, in other words.
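The check I have in mind is standard: fix alpha at a series of values, re-optimize the remaining parameters at each one, and see whether the best achievable misfit actually changes.  Here is a sketch using the toy model from above with synthetic “observations” (scipy’s Nelder-Mead optimizer stands in for whatever Spencer’s spreadsheet solver does):

```python
import numpy as np
from scipy.optimize import minimize

dt = 30 * 86400
rng = np.random.default_rng(1)
index = rng.standard_normal(600)    # placeholder variability index

def one_box(alpha, beta, h):
    cp = 4.18e6 * h
    dT, out = 0.0, []
    for v in index:
        dT += dt * (beta * v - alpha * dT) / cp
        out.append(dT)
    return np.array(out)

obs = one_box(3.0, 2.0, 100.0)      # synthetic "observations"

# Profile the misfit over alpha, re-fitting beta and h each time.
for alpha in [0.5, 1.0, 2.0, 4.0, 8.0]:   # sensitivities from ~7.4 K down to ~0.5 K
    res = minimize(lambda p: np.sum((one_box(alpha, p[0], p[1]) - obs) ** 2),
                   x0=[1.0, 50.0], method="Nelder-Mead")
    print(f"alpha = {alpha}: best SSE = {res.fun:.2e}")
# Every alpha reaches essentially the same near-zero misfit, so the fit carries
# no information about which climate sensitivity is right.
```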

Suppose Spencer does go back and try to quantify parameter sensitivity.  Good luck with that, because his newer models all have MORE THAN 30 FULLY ADJUSTABLE PARAMETERS (alpha, beta, and diffusion coefficients for heat transfer between layers).  After Tim Lambert over at Deltoid read my review of Spencer’s book, he posted a quotation from the famous mathematician, John von Neumann.

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

Well, give me more than 30 parameters, and I can fit a trans-dimensional lizard-goat and make rainbow monkeys shoot out its rear end.

Some Spencer-boosters might complain that the GCMs used by IPCC scientists have many more parameters than that.  That’s a good point, except that those are typically not “fully adjustable.”  When a modeler is using a complicated model, instead of letting all the parameters ride in some kind of statistical free-for-all, he or she typically would constrain most of the parameters to physically reasonable values.  For instance, if in his original attempt Spencer had constrained the mixed-layer depth of the ocean (h) to a physically reasonable value (say about 100 m), he would have come up with much higher climate sensitivity.  Instead, he allowed a 700 m mixed layer depth (!!!!!) to get the answer he wanted.
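Constraining parameters is also trivial to do in practice; any decent fitting routine accepts bounds.  Here is a hedged sketch, continuing with the same toy model (the bounds and numbers are illustrative, not Spencer’s):

```python
import numpy as np
from scipy.optimize import least_squares

dt = 30 * 86400
rng = np.random.default_rng(2)
index = rng.standard_normal(600)    # placeholder variability index

def one_box(alpha, beta, h):
    cp = 4.18e6 * h
    dT, out = 0.0, []
    for v in index:
        dT += dt * (beta * v - alpha * dT) / cp
        out.append(dT)
    return np.array(out)

obs = one_box(3.0, 2.0, 100.0) + 0.02 * rng.standard_normal(600)

# Bound h to the measured mixed-layer range instead of letting it drift to 700 m.
fit = least_squares(lambda p: one_box(*p) - obs, x0=[1.0, 1.0, 120.0],
                    bounds=([0.1, 0.0, 50.0],      # lower bounds: alpha, beta, h
                            [10.0, 10.0, 150.0]))  # upper bound keeps h near ~100 m
print(fit.x)
# With h pinned to physical values, the degenerate ridge in (alpha, beta, h)
# is cut down, and the fitted sensitivity can no longer wander freely.
```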

In the later versions of his model, Spencer would have to constrain his diffusion coefficients to physically reasonable values, but there’s a problem with that.  In the real world, “diffusion” is governed by random molecular motions, and can be described by expressions like Eqn. 4, but it’s typically very, very slow.  Since heat transfer in the ocean doesn’t happen so slowly, it’s apparent that much of the heat transfer is due to “advection,” rather than diffusion.  Thermal advection is essentially movement of heat with the medium, e.g., in currents, and it isn’t necessarily linearly proportional to the temperature difference between layers of the ocean, as in Eqn. 4.  That being the case, I don’t have a clue what “physically reasonable” values for the model’s diffusion coefficients would be.
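To put a number on “very, very slow”: using the textbook molecular thermal diffusivity of water (my back-of-envelope figures, not from Spencer’s posts), the timescale for heat to diffuse across a single 50 m layer is measured in centuries.

```python
# Back-of-envelope: time for molecular diffusion to carry heat across one layer.
kappa = 1.4e-7               # molecular thermal diffusivity of water (m^2/s)
L = 50.0                     # model layer thickness (m)
t = L ** 2 / kappa           # characteristic diffusion time, t ~ L^2 / kappa
print(t / 3.15e7, "years")   # roughly 570 years per 50 m layer
```

Observed ocean heat uptake is orders of magnitude faster than that, which is why the “diffusion” coefficients in such a model are really standing in for advection and turbulent mixing.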

Let’s ignore that last objection about the form of the model for a moment, and bring up a nitpick about how Spencer set up his model.  That is, he set the initial temperature anomalies to zero for ALL the ocean layers.  Since heat diffusion is dependent on the temperature difference between layers (Eqn. 4), that means Spencer set his model up so there would be NO HEAT TRANSFER among layers at the beginning of the simulation, and it gradually builds up over time.

The Challenge

I could go on with more nitpicks, but I’m going to stop here, because it should be clear that, once again, Spencer has made a big deal out of something that doesn’t have any evidentiary value.  So if, as Spencer claims, “[t]he evidence for anthropogenic global warming being a false alarm does not get much more convincing than this,” then can we please move on?  Can Roy PLEASE put his toy model down?

I doubt he will, but maybe he will accept this challenge.  Instead of complaining about how biased and awful the peer review system has gotten, he should (at the very least) get a statistician to work with him and do the modeling right, and then submit it for publication in a reputable journal.  Personally, I don’t think the work can be saved, even then.  However, I think the exercise of working with someone who knows how to properly make statistical inferences would be enlightening for Roy Spencer.


Responses

  1. Great summary Barry. Would you mind if we re-post it on SkS later this week? It would make for a perfect lead-in to a post on Loehle and Scafetta’s paper, which uses the same simple model curve fitting tricks as Spencer, and makes similar grandiose claims about them.

    Perhaps Spencer should have tried publishing in a Bentham Open journal as L&S did. It appears that they tend to publish ‘controversial’ papers, with a rather suspect review process.
    http://www.earlham.edu/~peters/fos/2009/09/criticism-of-oa-publisher-bentham.html

    • No problem with cross-posting to SkS. Did you see on another part of the site you linked that another journal at Bentham accepted a paper that was generated by a computer program?

      http://www.earlham.edu/~peters/fos/2009/06/hoax-exposes-incompetence-or-worse-at.html

      • Thanks. Yeah I saw that too. Seems like a set of journals with some very dubious practices. Which is no doubt why they were willing to publish L&S’s exceptionally poor paper (but I’m sure they’d publish Spencer’s too!).

    • The review process you refer to is certainly significantly better than the AGU which has been captured by a cabal of “scientists” at CRU as is clearly stated in the emails from that institution’s Phil Jones.

  2. Bickmore, where is all the missing heat? Explain that, which you will not be able to, because you are clueless.

    This will be my only post, don’t worry.

    • Even if I can’t, the point of this post is that Roy Spencer HASN’T. I never claimed to.

    • Denialist hit and run!

      According to Trenberth, it’s probably in the deep oceans. According to Hansen, we’re probably underestimating aerosol dimming. Bottom line, we don’t know where it is yet, because we don’t have perfect measurements of the entire climate system. I’m not sure how that’s the slightest bit relevant to this post though. Somebody is trying to change the subject.

      • probably in the deep oceans, probably underestimated aerosol dimming.. you can dream on! or just realize that hansen has no clue what really affects climate and that the agw theory is falling apart.

      • Geez Barry, how do these guys find your blog?

      • dana1981, do you understand this is not about if roy’s right or wrong? it’s about why hansen’s model doesn’t fit reality. hansen has been telling us for years he’s able to predict temperature rise. when he fails to he should abandon his wrong model. that would make sense.

      • Actually “this” (as in this post) is about whether Spencer is right or wrong. But Hansen’s model does in fact “fit reality” quite well, and has for several decades.
        http://www.skepticalscience.com/Hansen-1988-prediction-advanced.htm

      • if it does, why do you talk about underestimated aerosol dimming?

      • No model is perfect. If you’re waiting for a perfect model, you’re going to be waiting for a long time. That doesn’t mean they can’t be useful or make good predictions, which Hansen’s has.

      • again, hansen’s model predicted more warming than we observe. now he puts blame on the aerosols and deep ocean waters. so far okay. but the cooling effect of aerosols on global temperature has never been accurately quantified, and the same for the deep water layers. so the only thing hansen can do is to ‘fool around’ with these parameters in his model to make a fit with observations. the same as what barry said about roy.

      • Then it’s just as well that Hansen’s priorities for figuring out climate are:

        1) Paleoclimate
        2) Observations
        3) Models.

        Models, by the way, are wrong. Spencer or Hansen (even Airfix), it doesn’t matter. Even a scientific theory is, at its crudest level, a model, or maybe you’d like to take that up with Stephen Hawking?

      • No Steiger, Spencer just lets his parameters fly to make his curve fit the data. That’s why he gets ridiculous parameter values like 700 meter mixed layer depth. Real modelers like Hansen apply physical constraints to their model parameters.

        There’s a difference between modeling (what scientists like Hansen do) and curve fitting (what jokers like Spencer, Loehle, and Scafetta do).

        • Curve fitting is a standard procedure to extract information statistically from coupled parameters. Spencer’s use of these techniques allows the full range of possibilities to be exposed and monitored. What Hansen and others have been doing for years is to apply constraints which do not allow presentation of the data or the results from the data, but make a model which fits pre-conceived ideas. The most well known of these “fixes” of course was the infamous Hockey Stick of Mann which Hansen also applauded.

          • Hi John,

            You are partly right, at least. Curve-fitting is standard procedure, and it should be done in such a way as to expose all possibilities (via sensitivity analysis). The problem is that Spencer didn’t do that. Out of an infinite number of solutions that give exactly the same results, he reported ONE. Yes, ONE. And that ONE solution happened to involve low climate sensitivity, even though solutions with high climate sensitivity would have done equally well.

            Also, the idea that curve-fitting in science can’t be done with any “preconceived ideas” is nonsense. Especially when there are an infinite number of STATISTICALLY acceptable solutions, you MUST constrain the model to have parameter values that are PHYSICALLY plausible. Roy lets his models have wildly different ocean mixed layer depths, for instance, depending on what he needs to get a low climate sensitivity. The thing is that you can go out and measure the mixed layer depth, and it’s maybe 100 m, on average, not 700 m.

          • Your comments about Mann and Hansen are also totally nonsensical.

          • >> What Hansen and others have been doing for years is to apply constraints which do not allow presentation of the data or the results from the data, but make a model which fits pre-conceived ideas.

            Pre-conceived ideas can create serious problems. Heisenberg’s great insight came from looking outside the box. Einstein looked outside the box.

            Spencer certainly didn’t bound his surface temperature to the box. His equation exploded out of the 0 K temperature box http://arthur.shumwaysmith.com/life/content/roy_spencers_six_trillion_degree_warming went into overdrive and never looked back.

      • well dana1981, i’m not spencer’s advocate, yet applying physical constraints to parameters which have never been accurately measured sounds like a joke to me too. is this a standard scientific approach in climate science?

      • Most climate model parameters have been accurately measured.

      • including those i’ve asked about earlier? if not, what method hansen used to determine the value of those physical constraints?

      • I just posted a comment at the wrong nesting level. It’s right below this one and was addressed at Steigner (“if not, what method hansen used to determine the value of those physical constraints”).

    • What is most important is that we get the most important effects modeled well. We should focus on variables that appear to most strongly impact temperature. All else being similar, using lots of data (especially if spaced widely in time) to “train” the model increases the chances of getting the model to accurately predict the future. We sacrifice the greatest accuracy in detail in order to be in the ballpark further out in time.

      This article was mentioned elsewhere: http://arthur.shumwaysmith.com/life/content/roy_spencers_six_trillion_degree_warming . Judging from a quick glance, I think it gives an example of this effect of how trying to fit some dataset very tightly (as Spencer appears to have done) decreases the predictive power of that model (no matter the complexity of the model) away from that dataset.

  3. Great work once again, Barry. I sometimes find people using Spencer’s ‘model’ (often without attribution, which probably doesn’t please Spencer) and your website is the first one I point them to.

    (Since posters often don’t even understand what they’ve posted, there is no guarantee they will ‘get it’ – but other readers will.)

  4. So is it safe to say that Spencer’s model wasn’t very intelligently designed?

  5. Have you coded this current model?

    • No, but Roy provided an Excel spreadsheet in one of the posts I linked, and I played around with it using the Solver add-in on the sum of squared error. Making a computer do the fitting is problematic, though, because there are so many stinking adjustable parameters that are NOT independent. I think Spencer just played around with the values by hand, which I would say is worse.

  6. Thanks Dr. Bickmore, excellent job as always.

    Rumour has it that Spencer and Braswell have a new Trojan paper out.

    Do you know about it?

    • I haven’t read the new paper, yet. I’ll probably leave that one to the specialists, since Spencer got it published in a reasonable journal.

      • Barry, I wouldn’t have thought the pay-to-publish journal could be described as a ‘reasonable journal’. I don’t imagine anyone will pay to get any comment on Spencer’s paper published there.

        If you get the time it would be great to see a specific article on his paper – or the parts where he discusses his model.

        • Open access journals aren’t necessarily bad, so I didn’t want to insult the journal without having any dirt on it. I looked at the TOC, and it seems like most of the papers are, well, not fake, at least. However, over at RealClimate they point out that it isn’t a journal that publishes a lot of climate science.

  7. […] […]

  8. Has it ever occurred to you Barry that if you put garbage in (i.e. if you constrain the model in an inappropriate manner or if you leave an important physical factor out) then you get garbage out? The trick of the trade is to include all of the significant physical factors and to provide realistic constraints on the parameters involved.

    The main problem with even the most sophisticated models that you have developed is that they have missed out one very important physical factor.

    A small group of climatologists know about the factor involved. You will discover what that factor is over the coming years. I cannot wait to hear the “pop” as the scientific arrogance that inflates your model-based balloon comes rushing out.

    • Apparently it has occurred to me, since that’s what this entire post was about. Can’t wait to see what your new factor is, and I hope you’re right.

    • Good lord, ninderthing, didn’t you realise that, in constructing his model, Spencer ignored all your sage advice in paragraph one?

  9. […] […]

  10. Do you ever tire of the muckraking? Perhaps not since that is the purpose of this blog…

    A bit of an olive branch this round…

    I understand your position, while I don’t agree with how it has come about we can agree on a political philosophy that is the only feasible technical solution to the alleged problem. And that is nuclear power. No diffuse source can meet our energy needs and we can only save so much energy from low hanging fruit. “Renewables” and savings are a dead end for the goals being spouted, and anyone who crunches the numbers understands this and anyone who has tried it has seen the writing on the wall.

    Find me a politician willing to streamline the nuclear regulations, lift the effective build bans, and stop preventing those who want to burn up the “spent fuel”, so we can get meaningful quantities of CO2-free energy with minimal drawbacks at non-exorbitant pricing.

    Cap and trade is garbage, just start advocating a nuclear solution and this skeptic will help you do it.

    • I’m pro-nuke, too, and I’ve even done some geochemical work on leaked nuclear waste.

      • Sigh. This site is about *advocacy*, perhaps you could spend a little time advocating the nuclear approach for displacing coal instead of just bemoaning critics of the status quo beliefs? If your goal was truly to reduce our carbon footprint it would seem far more effective than seeing whose scientific cock is larger.

      • Correction: This site is about whatever the heck I want it to be about. And given the Fukushima accident, I don’t think people will want to go nuclear until they are convinced there is a problem with AGW.

      • Seems like you’re splitting hairs about the nature of your blog. Obviously it is whatever you want it to be, currently you use it for AGW advocacy through the means you like (through muckraking the personalities you dislike…). My argument remains, though you are fully capable of ignoring it.

        If you insist on AGW beliefs first. Well, you have an uphill battle convincing people. And I believe an unwinnable one with your current strategy. The public is less and less inclined to believe as the polls show the trend is away from AGW. Some places that have made government mandated strides are now backpedaling or facing even stiffer opposition. Your arguments here are doing zero to alleviate that and really only serve as a cheer leading rally for the already convinced.

        I keep trying to tell people like you that incurring costs in fundamental economic systems is going to inconvenience a LOT of people. You may well slip it through, but they will feel it eventually and the backlash will ensue from there. Isn’t it better to lump problem+solution together from the get go? You ignore or propose terrible solutions at your own peril… It’s like you’re a used car salesman who doesn’t want to tell me what the lump of metal behind the curtain really costs or even if it works. But HEY, YOU’RE SURE ITS A CAR!

        Nuclear is pretty hard to argue against for any thinking people. Just calculate the casualties per MWh for all power generation types, it’s incredibly safe and modern reactors would be even safer. Then show the cost per MWh of production and what that translates to in their home electric costs. Obviously it doesn’t work for everyone but people don’t understand these things because we never even discuss it, nuclear is taboo and that needs to change if you’re serious about AGW.

      • The pro-nuke crowd is so often bizarrely single-minded. I’ve never understood that. They’re so convinced that nuclear power is *the* silver bullet solution to all of our problems. I don’t get it. I think in some cases it’s because they perceive that “environmentalists” oppose nuclear power, and that’s the reason they adopt it as *the* solution.

        For the record, you’re living in the past, Brandon. Many “environmentalists” no longer oppose nuclear power. Nukes are also incapable of being *the* solution, although they’ll be part of the solution.

      • It is “the” solution because no other power form is capable of producing the quantity required, let alone a baseline load capacity for current consumption levels at competitive pricing. If you want cap-n-trade you’re asking grandma to freeze or pay a fortune she doesn’t have for the privilege of winter heat. It’s not the “silver” bullet, it’s the *only* affordable bullet you can load in the non-CO2 gun. Prove me wrong instead of whining about my “bizarrely single-minded” response. Why are they incapable? What limits them? PV is not even close to competitive. Wind? Don’t make me laugh, it’s among the worst renewable techs. Solar Thermal is the closest thing to competitive at around 3-4x coal costs and they can also actually be designed for 24/7 power generation. That’s the only renewable that’s even close to being able to actually replace coal plants that’s within an order of magnitude in price.

        I’m not pro-nuclear per se, I don’t mind coal or natural gas, this is my olive branch compromise to satisfy your AGW whims. I am opposed to making people pay 3-4x more for their power for no good reason.

        If the environmentalists no longer oppose nuclear power, how come we aren’t building plants en masse? We got enough thorium to power the country for centuries and we can do it with a remarkably clean fuel cycle and it can be done with a minimum of land use (acreage for the plant, and the mines for materials). It was “environmental concerns” that were behind the moratorium on SOLAR POWER PLANTS (which eventually got reversed). Yeah, explain that one, it’s patently absurd what gets done under the guise of environmental concerns sometimes.

        Perhaps the breed you refer to doesn’t oppose them. And I’ll buy that! But this thinking breed isn’t as common as you allege in my experience. Many environmentalists are, in reality, anti-developmentalists and will oppose anything that so much as moves a mound of dirt as soon as someone plants the flag to call them to arms.

      • Well I’ll say this – it’s certainly fascinating to see how other people view the world.

        As for proving you wrong, I suspect you’ll never admit that you’re wrong, but here’s a place to start.

        http://www.skepticalscience.com/renewable-energy-baseload-power-advanced.htm

        How come we’re not building nuclear en masse? Because it’s too expensive, mainly. Do you really think the big bad greenies have enough clout to prevent nuclear power plants from being built? You might consider the possibility that it has more to do with the fact that virtually all nuclear construction projects in recent years have gone way over schedule and over budget, and been immensely expensive as a result. There’s nothing stopping these nuclear plants from being built, if they could compete in the marketplace. Fact is they can’t compete with wind and natural gas, and solar PV will very soon be cheaper as well.

      • I don’t appreciate your “I’m intellectually superior to your close minded stupor” innuendo you’re running. It’s unnecessary and makes me want to lash back at you with insulting jibes, which I’ve tried to resist and edit out.

        So easy to pick apart the logic in that link… Such garbage.

        “Firstly, we currently do not use our energy very efficiently”.

        True enough, but the low hanging fruit of “efficiency” is vanishingly small. Replacing light bulbs and getting more efficient appliances only goes so far. Industry has already priced in what savings it can do at the power prices when they built out their floor spaces and machinery, and many companies in recent years have been pricing in much higher power premiums even if they aren’t there. There really isn’t much more to do here without a dramatic requirement for capital and/or some major lifestyle or process changes.

        For baseline load, they list burning food and solar thermal (which I listed) as the only two options. Nothing new there. Burning food is dumb, and always will be – which is why I didn’t list it. The principal reason that our species might have to fear AGW is reduced food production, and we will… what? Burn it to solve AGW? Logical fail! Their solar thermal example is depressingly small and limited, I didn’t realize it was that primitive. Only 19 MW and they can’t even keep it running for an entire day? So fail all around… If this is the best they have to offer, I revise my previous statement to “No realistic or logical baseline load available with renewables”.

        “Approximately half of the goal is met through increased energy efficiency to first reduce energy demands”

        Yeah, that’s what I thought. You’re hoping for what has never happened. Better lifestyle without increasing energy consumption has never happened. Or just come out and say it, in order to reach our goals we have to spend money like drunken sailors and make everyone poorer so they stop using stuff. Good luck on that sell.

        Garbage.

        As for nuclear… I did have a pent up question about what the actual costs are as many reports are seemingly contradictory. I did a bit more in-depth research into costs and you are right, the “greens” are not to blame directly per se, but the NIMBY (Not In My Back Yard – my own acronym) attitude demonstrated by so many is a much larger factor. Excessive regulation through the politicized fear of the industry has caused nuclear power to be priced (or red-taped) out of the market. Its current cost woes are entirely self induced and are not a permanent fixture of the landscape. So my point remains though who I initially blamed wasn’t entirely accurate.

      • Brandon said of Dana:

        “I don’t appreciate your “I’m intellectually superior to your close minded stupor” innuendo you’re running.”

        Brandon, I don’t think that Dana was engaging in any innuendo; he was simply explaining something to you, and rather patiently, I thought.

        However, I have no qualms in saying it – Dana is your “intellectual… superior to your close minded stupor”.

        And about the “muckraking”… If Spencer hadn’t produced such an execrable paper, Barry would not have had cause to rake over the muck.

      • A problem with nuclear is… nuclear threats to human DNA (increased probability of DNA destruction as radiation levels rise) if the radiation is not sealed sufficiently. Research should continue here, but we still have much potential in using the Sun as a steady “free” driver for forming some more usable energy fuel.

        For example, there have been interesting results recently in improving sun+water to hydrogen conversions using probably cheap semiconductor manufacturing http://www.physorg.com/news/2011-08-alloy-hydrogen-fuel-sunlight.html

        Here we see early stages to produce hydrocarbon fuels from bacteria+sunlight and CO2 http://www.gizmag.com/bacteria-sunlight-co2-renewable-petroleum/18223/ thus the “oil” would be “renewable” allowing released CO2 to later be recaptured.

        Here we have a thin spray on film that aims to make windows “tinted” with the absorbed light energy able to be stored for energy use later on http://www.gizmag.com/thin-film-turns-windows-into-solar-panels/16058/

        http://www.sciencedaily.com/releases/2011/05/110510134110.htm
        http://www.scienceknowledge.org/2011/05/25/a-new-technique-of-photosynthesis-artificial/

        hydrogen storage research:
        http://www.alternative-energy-news.info/new-advances-in-hydrogen-fuel-catalysts/
        http://www.alternative-energy-news.info/hydrogen-generation-storage-nano-technology/
        http://www.alternative-energy-news.info/new-hydrogen-storage-method/

      • Bernard, I’m done with insults. Go away.

        Jose, Radiation based DNA destruction as opposed to… Organ destruction through toxic chemicals that we already readily accept with current production means? I suggest you research the casualty rates of nuclear power vs other power types. It’s pretty well understood even if all the precise mechanisms of what exactly happens inside of cells is not.

        As for all the other stuff. Those links may well provide interesting results, but anything that hasn’t yet been on the market doesn’t provide me with any basis for measurement or evaluation.

      • Brandon, I know reactors have generally not led to problems, but concerns I have include our current inability in many cases to reverse damage or even the future effects after simple accidental exposure (maybe at some point the use of stem cells or something else could work as a countermeasure); how quickly it can lead to organ failure and death for high doses; how it can poison a very wide area and possibly last for a while. The good news is that these things don’t reproduce by themselves, but as the level of accumulation grows, our ability to run from it goes down. [Also, I do not have a feel for what sort of scenario and in what quantity would likely lead to what sort of cell damage.]

        I found the idea of using the sun to recapture carbon and recreate hydrocarbons to be rather interesting.

    • Thanks to Roy Spencer, who I had never heard of until yesterday, I found your website so you can thank him for gaining another fan.

  11. Barry & others, re “Instead of complaining about how biased and awful the peer review system has gotten, [Spencer should…]..”

    I’m having trouble googling up a recent, definitive “Spencer on vile peer review” post; where has he done this?
    (I do see 2009’s “I believe the day is approaching when it will be time to make public the evidence of biased peer review”, but did that day arrive, somewhere over on his blog? I’ll ask him too, but if you know…?)

    • From his latest blog post:

      “Given the history of the IPCC gatekeepers in trying to kill journal papers that don’t agree with their politically-skewed interpretations of science (also see here, here, here, here), I hope you will forgive me holding off for on giving the name of the journal until it is actually published.

      But I did want to give them plenty of time to work on ignoring our published research as they write the next IPCC report. ”

      No soup for you Anna!

  12. The new Spencer paper is out:

    http://www.mdpi.com/2072-4292/3/8/1603/pdf

    (The reason the trolls have appeared is that this post is mentioned in the comments at Spencer’s blog)

  13. Anna:

    Here’s a recent example from Dr. Spencer’s blog (www.drroyspencer.com) posted on July 15, 2011. It’s still on the main page:

    “Given the history of the IPCC gatekeepers in trying to kill journal papers that don’t agree with their politically-skewed interpretations of science (also see here, here, here, here), I hope you will forgive me holding off for on giving the name of the journal until it is actually published.”

  14. I think you should rework your post and submit to a “real” journal. You did an excellent job of taking Spencer’s paper apart.

    As for the trolls, thanks for the heads up. I was really wondering how in the world they understood enough of your post to comment on it!

  15. “It is to be hoped that by now the naive idea has been dispelled that problems like the atmospheric general circulation can be solved at one fell swoop through the lucky manipulations of the proper equations during a day’s work of some genius in fluid mechanics. Solutions to problems involving systems of such complexity are not born full grown like Athena from the head of Zeus. …”

    Originally written by Victor P. Starr, and reprinted in the Dedication of “Physics of Climate” by José Peixoto and Abraham Oort.

  16. “I cannot believe it got published.” — Kevin Trenberth

  17. […] as possible but not simpler): well this has gone way beyond being too simple (see for instance this post by Barry Bickmore). The model has no realistic ocean, no El Niño, and no hydrological cycle, and […]

  18. Blah, blah scientific priesthood, blah blah world gubbiment, blah blah IPCC, blah blah communists, blah blah Al Gore, blah blah no consensus, blah blah tax dollars, blah blah emails, blah blah Al Gore, blah blah 1998, blah blah Al Gore, blah blah global conspiracy, blah blah Al Gore etc.

    (ad nauseam)

    😦

    ………………………………………………………….

    It’s incredible that middle-aged white conservatives STILL take Spencer seriously. Why are they always the main target demographic that laps this oogity-boogity nonsense up? (Climate denialism, evolution denialism, DDT denialism, cigarette cancer denialism.) Say what you like about Spencer and the Marshall Institute clones, they know how to promote a slick disinformation campaign.

    The American Denial of Global Warming

  19. This article is a fail. I wish Spencer had not introduced his simple model, which was used to do sensitivity analysis. THE MAIN POINT of his paper is that the satellite data does not match what has been predicted by the models, which models have been used to determine global economic policy. In short, the satellite data shows that the “official” models are erroneous. Talk about Spencer’s simple model is a red herring. Deal with the MAIN POINT. The “official” models are wrong.

    • This article is about Roy’s blog posts, not his latest paper.

  20. I had never heard of Roy Spencer until yesterday’s news about publication of his “gaping hole” article. I searched the internet and found enough information about Roy Spencer to seriously doubt his objectivity. One good thing did come out of Roy Spencer’s article. Mr. Bickmore you have gained a new fan. After coming to the conclusion that Roy Spencer is obviously biased I read several of your other blogs and letters, etc. Your blogs and articles are both easy to understand for the non-scientist but with enough technical detail to be highly educational. In that regard, you have struck a perfect balance IMO. Keep up the good work and I will be back often and recommend this website to others.

  21. I wonder if you might update your critique with regard specifically to Spencer’s recent article in “Remote Sensing,” here: http://www.mdpi.com/2072-4292/3/8/1603/pdf. A Michelle Malkin rave-up is here: http://www.facebook.com/notes/michelle-malkin/analysis-of-nasa-satellite-data-suggests-un-climate-models-are-full-of-hot-air/10150244525745677

    Thank you.

  22. […] Just Put the Model Down, Roy (Barry Bickmore) […]

  23. Spencer,
    Stop. You’re killing us. Really.
    Besides, the only “gaping hole” in all of the data would appear to be you, Roy.

  24. […] change (see a critique of Spencer’s article here, and a general critique of his work  here)? This is what happens when you have a presupposition (e.g.- God said he’d never flood the […]

  25. Re-posted this on Skeptical Science today:
    http://www.skepticalscience.com/just-put-the-model-down-roy.html

    Tomorrow we’re re-posting Trenberth and Fasullo’s critique of Spencer and Braswell’s paper. Then on Wednesday we’re posting my critique of Loehle and Scafetta’s paper, which I think is pretty damning. Three consecutive posts on “skeptic” curve fitting exercises. It seems to be becoming their favorite pastime.

  26. […] on 2 August 2011 by bbickmoreThis is a re-post of an entry on Dr. Barry Bickmore's blog, Anti-Climate Change Extremism in […]

  27. Barry, thought you might like this. And he’s not a fruitcake either. He echoes my sentiment exactly. And he’s the author of two climate books. Hmmmmmmmmmmm.

    http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/new_research_warmth_produces_these_carbon_dioxide_concentrations#87593

    Well, I’ve got to get back to the real world. Have a good day.

  28. A breathless post from Andrew Bolt on climate science – certain to be no bias there. I’d doubt if Bolt even got the facts of Salby’s presentation correct, let alone actually understood it. Salby is listed as a Professor in the Department of Environment and Geography at Macquarie University in Sydney. Bolt has him down as the chair of climate. If that’s the case then the Macquarie Univ website needs serious updating.

    http://www.envirogeog.mq.edu.au/directory/person.htm?id=msalby

  29. […] Just Put the Model Down, Roy […]

  30. […] Foremost is that Spencer has disagreed that flaws have been found in the article. This fits an unfortunate pattern with his research where flaws have been found and not acknowledged. It is common knowledge in the climate science community that Spencer’s work is often wrong. Some of these errors show a fundamental lack of understanding of some of the most elementary concepts used when comparing models and data. See for example https://bbickmore.wordpress.com/2011/07/26/just-put-the-model-down-roy/. […]

  31. Dr. Bickmore,

    You are probably sick of playing whack-a-mole by now, but if you did not already know, here is the latest BS from Spencer:

    http://www.drroyspencer.com/2011/08/is-gores-missing-heat-really-hiding-in-the-deep-ocean/.

  32. […] e rossi dentro mai, ieri il ten. col. Guidi era avvinto come l’edera al modello semplice, e notoriamente ridicolo, di Roy Spencer dell’università dell’Alabama. Il ten. col. crede a Spencer […]

  33. […] […]

  34. […] […]

  35. […] wrote about how the model had been simplified to the point of being useless (one of the more detailed examples comes from BYU geochemist Barry Bickmore). These criticisms, however, haven’t generally made […]

  36. […] victime ou coupables? Aujourd’hui, Wolfgang Wagner explique qu’après avoir étudié les arguments des uns et des autres au sujet du papier de Spencer, il est parvenu à la conclusion que ces travaux n’auraient pas […]

  37. […] curve-fitting without real physical merit. Despite several deep criticisms of his approach, he continued to develop the model in all the wrong ways. (When a paper based on an earlier model was held up in review, and then not given much attention […]

  38. One of the problems with the GCM approach as indicated quite clearly in the scientific chapters of the IPCC Report AR4 2007, is that they include a vast number of parameters which are not understood – a feature ranging from “uncertain” to “do not know anything”. But because of the attempts to be all encompassing, the parameters must be included as best they can.

    Roy Spencer’s simple model looks at one aspect of the climate, the most important aspect in fact, dealing with one of the IPCC.s “strongest drivers” but one which they admit cannot be modelled accurately.

    To proceed as they do, in pumping out results from these huge models rather than looking at individual components is not at all helpful. The climate MUST be examined in its individual parts until each of those parts is fully understood. Roy Spencer has started that process and no amount of hand wringing by the GCM modelers that Roy’s model is one dimensional should be allowed to deter people from continuing with this most sensible approach.

    Roy’s first attempt may be wrong, but it should be developed just as the behaviour of carbon dioxide in a column of air should be studied and developed before being put into a GCM as an assumed forcing, based on what is still an unproven hypothesis put forward by Arrhenius in 1896.

    Very accurate measurements of the spectra of CO2 by Jack Barrett in 1985, at a high resolution NOT available to Arrhenius 115 years ago, showed that this very approximate hypothesis is wrong. Yet the IPCC will not re-examine Arrhenius’ and Barrett’s work and compare the difference. Barrett’s work would destroy their dream about the forcing just as Roy Spencer’s work threatens to destroy their assumptions about the effects of the PDO not being important.

    • When modelers have to use parameters that are poorly constrained, they do a sensitivity analysis to see how much it matters. Sometimes it has very little effect. Sometimes it has a big effect.

      Roy’s problem was that he didn’t constrain his parameters AT ALL.

      • Barry, I do not think that is strictly correct.

        First, it is actually the number of degrees of freedom in any model which allows for the opportunity for error since pairs of parameters can often “trade off” their values no matter what constraints may be applied, so that both may be wrong, yet the result still fits the measurements. The need to set limits on a model is usually an indication of instability which does not augur well for accuracy, particularly when the range over which the values should be allowed, is unknown as is the case for many parameters in the GCMs.

        Importantly: Roy Spencer used just four parameters which were the only ones necessary to satisfy the demands of his very simple model which was looking at a subset of the whole global influences on climate. These were used over a wide range of set values – not “unrestricted” as is claimed here. All other interactions and variables were “restricted” in some sense by not being included.

        Secondly, in more detail, it is not correct to say that Roy allowed his parameters to be totally unrestricted, and it is difficult to see how this could be the interpretation and criticism. What he did was to SET the values of the four parameters, p1, p2, p3, p4 = pi (i = 1 to 4) over huge ranges of different combinations and different values for each parameter, i.e. if each parameter was varied over N steps then he would have had N^4 different runs to perform (N x N x N x N). He then compared the results with measurements to study the values of combinations of pi and determined those values which provided the best fit in a consistent representation of the PDO index and the global temperature. So the allowance of the parameters to cover unrestricted values is correct, but the values are selected by proper comparison with experimental measurement as in a normal physics experiment. (A sketch of this grid-search procedure appears at the end of this comment.)

        On the other hand, the GCMs have many parameters included which, according to the IPCC reports, are almost unknown but have best guess values. In addition, the 23 models used are selected on the basis that they provide what are referred to as “plausible” results – which is that they all “show” a positive warming signal. Those which show cooling are arbitrarily eliminated on these “plausibility” arguments according to Chapter 9-10 of AR4. This does not strike me as being good science, in spite of their confidence in the average value obtained from 23 models, no two of which give the same answer, and which have a spread in values of four times that of the lowest value included in their sample. This is also not a good look.

        I will be interested in your response to see how you have interpreted Roy’s results differently. And thanks for your original article and this comment. John
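        To make that procedure concrete, a brute-force grid search of this kind looks something like the sketch below (a reconstruction of the procedure as described, not Roy’s actual code; with N steps in each of four parameters, there are N^4 combinations to run):

        ```python
        import itertools
        import numpy as np

        def grid_search(model, obs, grids):
            """Brute-force search over every combination of parameter values.

            model : function of the four parameters, returning a model time series
            obs   : observed series to match
            grids : four arrays of candidate values, N each -> N**4 combinations
            """
            best_sse, best_params = np.inf, None
            for params in itertools.product(*grids):
                sse = np.sum((model(*params) - obs) ** 2)
                if sse < best_sse:
                    best_sse, best_params = sse, params
            return best_sse, best_params
        ```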

        • Every model has limits set on it. You can’t name a single model that isn’t full of limitations. As you implied, every model has an infinite number of variables restrained entirely; thus, “the need to set limits on a model is usually an indication of instability” doesn’t at all appear to be a statistically supportable statement.

          Too many parameters *can be* a problem. Too few *will be* a problem. Unlike in engineering, we cannot refine the constituents of the planet in order to get behavior we desire. We have to deal with the complexity there is and make sure we have enough parameters.

          Just looking at the number of parameters is a problem. You have to look at the analysis. A bad simple model is much worse than a good complex model. For example, using a broken counting model implying 1+1 = 3 will be a problem greater than using complex and approximate integration and numerical methods appropriately. Adding suitable complexity is why we migrated beyond the caves successfully. I am not saying current models are correct or that models shouldn’t all have their details entirely open (eg, open source code), but it certainly is possible to keep parameters you are unsure of to have marginal impact in the models (eg, leveraging sensitivity analysis to contain the error).

          Spencer is using a limited set of data. The other models deal with a very diverse set of data. More data over longer time spans while keeping decent resolution at a few decades in the near term requires parameters.

          We also are not interested in getting a temp reading +/- 10 trillion degrees C http://arthur.shumwaysmith.com/life/content/roy_spencers_six_trillion_degree_warming or even +/- 10 degrees C. Using a very small number of the allegedly “most important” parameters may not be nearly enough.

          As Barry has pointed out https://bbickmore.wordpress.com/2011/09/06/roy-spencer-persecuted-by-own-data/ , Spencer appears to have failed to provide evidence he had nearby which would have totally undermined his conclusions. This results in bad science and flawed analysis.

          Using limited data sets, bad analysis, and simplified models to deal with a very complex system is not the way to go.

          Every model has limits and scientists use all sorts of plausibility (“reality check”) arguments to keep improving the model’s range and scope. Can you quote a section from the IPCC WG1 report where “plausibility” troubles you? I found the following *good* scientific approach http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch9s9-6.html :

          > The idea underlying this approach is that the plausibility of a given combination of parameter settings can be determined from the agreement of the resulting simulation of historical climate with observations. <

        • On the “plausibility” worry, each model, as updated, might not have been scrutinized by a very wide bunch of people and might each have their selection of questionable assumptions. To help discard the riskier implementations (eg, identify the clear bad data points and prior unidentified theoretical or measurement flaws http://tamino.wordpress.com/2011/12/06/johnnys-growth/ ), plausibility checks must be passed.

          The IPCC plausibility checks, as I just mentioned a little earlier, appear to be very reasonable, eg, a requirement to match historical data (or possibly even to be in agreement with the scientific consensus on key points).

          It’s sensible to trust experts at large without necessarily trusting each “word” each one produces.

          [Note, I don’t know enough about the relevant science to have a strong opinion generally over any particulars chosen by the IPCC.]

    • Is there a link to Barrett’s work? [For such a potentially revolutionary work, I would think the author would be more than willing to make the details freely available.]

      • I am curious to know how one concludes that CO2 can’t have an effect.

        My understanding is that Hottel’s data http://dspace.mit.edu/bitstream/handle/1721.1/42950/02748698.pdf?sequence=1 page 29 (and as updated by others) shows that at surface temperatures, atmosphere pressure, and at over path lengths in the 1000 meter range and up, CO2 can definitely have a very significant absorbing effect (eg, around .2 emissivity). Does Barrett make emissivity claims that widely contradict this?

        We have (I believe but am not sure) IR instrument measurements that suggest that some effect is surely leading to very large IR levels near the surface of the planet from the up to down direction vs the much much lower levels at a few kilometers up (where the radius of the earth is still not a major contributor and where the down to up levels are still fairly large). These measurements seem very consistent with the long established theories on backscattering from CO2 and from other ghg’s.

        • Thanks Jose_X. Yes, he certainly presents his ideas very clearly. What he is talking about is mostly the Green House Effect, which is the contribution that CO2 makes to the extra 33 C of warming we get from having an atmosphere. The earth of course is also warmer because of the spreading around of tropical heat by all of the gases of the atmosphere; evaporation of the oceans is the biggest driver, followed by the warming of the air by contact with the heated earth, with the Green House Effect making about the same contribution as contact/wind cooling in the tropics. The more difficult analysis is of the Enhanced Green House Effect, which is the increase from now on of temperature as CO2 increases. This is where the analysis of CO2 by Barrett is important, but unfortunately his work was totally ignored by climate scientists who felt they knew more about the spectra of carbon dioxide than an infra red specialist who had spent a lifetime studying the spectra of such gases. More of that in answer to your other question. Cheers, John

        • Jose,

          First, I downloaded the PDF reference you gave me, but my Adobe reader claimed it was damaged and refused to open it. I would like to see it, so if you could email me a copy at jonicol18@bigpond.com I would appreciate that.

          It is of course a matter of the magnitude of the warming effect. A very rough indication is given in a sketch by Pierrehumbert in his recent APS magazine article, where he shows a crude diagram of the enhanced green house effect.

          The IPCC and its modellers use the results of very primitive measurements by Arrhenius in 1895, which claim that while the centres of the absorption bands of CO2 absorb ALL of the infra red radiation at the corresponding wavelengths (so that increasing CO2 has no effect in this middle region), in the wings of the lines the radiation still makes it through to outer space, and increased CO2 will cause this radiation to be absorbed into the atmosphere and cause further warming.

          The amount of warming therefore depends on the very detailed structure of the spectral lines which make up the bands. If one were to increase the CO2 100-fold, for instance, all of the absorption by the lines would be complete, but how much more heat energy would be retained in the atmosphere? The IPCC, using relatively crude spectra, claim a lot, as is indicated by Pierrehumbert’s rough diagram.

          Jack Barrett and others, including Heinz Hug from Germany, show from their much more refined measurements that a large increase in CO2 can only cause a very small amount of extra heat to be retained, so that the “climate sensitivity” to increased CO2 is very low indeed.

          Climatologists believe otherwise.

          There are other cogent scientific results which confirm Barrett’s findings from very different approaches, including the geological evidence of a complete lack of a consistent relationship between temperature and CO2 concentrations, and modern quantum mechanical, theoretical analyses of the absorption and radiation by CO2 in the atmospheric gases.

          The IPCC have thrown all of their faith behind their very complex models, for which they admit themselves in the latest major report, IPCC AR4 2007, that
          1. There are huge errors in all of their climate results, commensurate with a lack of knowledge of the values of the parameters they are using.
          2. The models used (23 in all) do NOT provide similar values for the expected warming from a doubling of carbon dioxide. The models are all chosen because they give what the IPCC refers to as “plausible” results, meaning all provide some warming; models which show a cooling from clouds and feedbacks are not considered plausible, but this is, by their own admission, an arbitrarily selected feature.
          3. The 23 models all provide different values; no two are alike, so at most one of the models can be correct. For this reason they simply average the values, which range from about 1 to 5 degrees C expected for CO2 doubling. The errors are therefore at least +- 4 C, and if one applies that error to the lowest value, it does in fact extend to negative values, which is again ignored.
          4. Other evidence from measurements in the atmosphere: the expected heating in the upper tropical regions from an Enhanced Green House Effect (EGHE), invariably presented by the models as a “signature” of the EGHE, has never been found in over 20 years of measurements by research groups around the world. This is further experimental evidence of the lack of an EGHE but is also ignored by the IPCC.

          There are other comments which could be made demonstrating the lack of evidence from the IPCC, which is totally consumed by the belief, as told to me by CSIRO head of climate Dr Penny Whetton:

          “We believe that most of the increase in global temperatures during the second half of the twentieth century was very likely due to increases in the concentration of carbon dioxide in the atmosphere.” Very likely indeed! John Nicol

          • >> my Adobe reader claimed it was damaged and refused to open it .. I would like to see it, so if you could email me a copy

            I’ll let you reconsider, since the file is 12 MB and that might present problems for your email handler (or not). It appears to be a PhD thesis supervised by Hoyt C. Hottel in 1976.
            “RADIATIVE HEAT TRANSMISSION FROM NON-LUMINOUS GASES. COMPUTATIONAL STUDY OF THE EMISSIVITIES OF WATER VAPOR AND CARBON DIOXIDE.
            by Ihab Hanna Farag”

            Pg 29 is just a long table of values for the emissivity of CO2 at various temperatures for a long list of (path length * pressure) values in (cm*atm). The relevant lines near the end of the table show values near .2. This data comes, I think, from interpolations and application of a basic formula, in order to capture the main data points from a set of several earlier experiments (a primary source being an earlier Hottel paper).

            The data is a bit outdated. You can get the essence of those data points, as updated over the years from other experiments (eg, by Leckner), in a textbook: Modest, Michael F. Radiative Heat Transfer, Second Edition. 2003. Elsevier Science, USA and Academic Press, UK. The book can be found online as a Google Book: http://books.google.com/books?id=lLT-aKLTxkQC&pg=PA826&lpg=PA826&dq=Modest,+Michael+F.+Radiative+Heat+Transfer-Second+Edition.+2003.+Elsevier+Science,+USA+and+Academic+Press,+UK.&source=bl&ots=7k4Sxqi87s&sig=DqLHm-TPB9S-a3ICO2YSyvX-Yko&hl=en&ei=nsfaTvGiFZDAgQf154GpBg&sa=X&oi=book_result&ct=result&resnum=3&ved=0CC0Q6AEwAg#v=onepage&q=paL&f=false

            Figure 10-25 on page 342 essentially captures that data (as updated).

            Further, equation 10.145 lets you take the value from that canonical chart and modify it to get values at different pressures. The equation is,

            > Ɛ_cd / Ɛ_cd,0 = 1 - [(a - 1)(1 - PE) / (a + b - 1 + PE)] * exp( -c * ( log10( (p_CO2 * L) / (p_CO2 * L)_0 ) )^2 )

            where Ɛ_cd,0 can be looked up from the chart and the other variables and constants are described on those pages (eg, see example 10.11). Ɛ_cd is the actual calculated value you want. The ratio essentially comes out to 1.0 if you try current-atmosphere values of p = total pressure in bar (1 bar is about 1 atm), p_CO2 = partial pressure of CO2 in bar (eg, .00038 bar), and T = 300 K or anywhere near that. The formula, thus, is (as stated in the text) not needed for our current atmosphere, and we can just use the chart or table of values directly. That figure shows values very near .2.
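
            In case it helps, here is a minimal Python sketch of how that correction might be evaluated. The constants a, b and c and the effective pressure parameter PE are lookups from the book’s tables, so the arguments below are placeholders, not real values:

            import math

            def emissivity_pressure_correction(eps_0, a, b, c, PE, pcl, pcl_0):
                # Modest Eq. 10.145-style pressure correction for gas emissivity.
                # eps_0: chart emissivity at the reference conditions
                # a, b, c, PE: correlation constants looked up in the book's tables
                # pcl, pcl_0: (partial pressure * path length), actual vs reference
                ratio = 1.0 - ((a - 1.0) * (1.0 - PE) / (a + b - 1.0 + PE)) \
                        * math.exp(-c * (math.log10(pcl / pcl_0)) ** 2)
                return eps_0 * ratio

            At roughly 1 atm total pressure the ratio comes out near 1.0, which is why the chart value (around .2 for long paths) can be used directly, as noted above.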

            I also found other references in papers that appear to use a value near .2.

            ..one such example is “Effective Atmospheric Emissivity under Clear Skies” ( http://www.patarnott.com/atms411/pdf/StaleyJuricaEffectiveEmissivity.pdf ) from the March 1972 Journal of Applied Meteorology. All 3 curves of Fig 1 on page 352 suggest a CO2 emissivity value approximately between 0.17 and 0.2 at about 1000 mbar (1 atm). Also, Table 1 on pdf page 6 is for the 1013 mbar case and has CO2 at .19.

            So all I am trying to get at is that the limit of CO2 emissivity appears to be perhaps around .2 and is easily reached within our own atmosphere. That limit comes from the size of the CO2 absorption bands relative to the full (blackbody) spectrum of radiation produced by our planet.

      • I can’t give you Barrett’s original paper just now, but if you search the net for Jack Barrett or Barrett and Hug you should get a wide range of options. There is less on the net now than a few years ago, and as far as I know, Barrett’s peer reviewed 1985 paper was never available on the net. Another good background to the development of the ideas of global warming is an historical account by John Coleman, the senior meteorologist and father of the American Weather Channel. He was a contemporary of Keeling (CO2 measurements at Mauna Loa) and Revelle (the “Father” of Global Warming). Coleman refers to “the great global warming scam”, as it all began because people had to present research funding applications with a claim of “national interest”, and what could be seen as more of national interest than the possibility that the world was going to fry!

        • Coleman doesn’t appear to be an expert or to have studied the issue very much at all.

          A rebuttal to one of his speeches is here http://uscentrist.org/about/issues/environment/john_coleman/the-amazing-story-behind-the-global-warming-scam . He doesn’t appeal to science too much (even making a number of basic mistakes) but appears to be hung up on the politics.

          You may want to see the video presentation I also linked to in a different comment https://bbickmore.wordpress.com/2011/11/11/how-to-avoid-the-truth-about-climate-change/ . Barry was once a skeptic, and in the presentation he analyzes deniers/skeptics a little bit.

          Why should I feel confident listening to Coleman describing something he doesn’t seem to understand, while ignoring the consensus view of many experts who, as a whole, do in fact present fairly compelling arguments (and feel free to attack those particular arguments as much as you want)?

          • I have had a look at the link you provided, which is interesting, but it really only refers to the “opinion” of people who do believe in global warming, with some references to statements by climatologists.

            I was very interested to note the comment:

            “Climate is studied mostly by climatologists, not meteorologists (see above).”

            What is not acknowledged here, nor by the community, is that “climatologists” usually, if not always, have backgrounds in geography, and do not have very much scientific training and absolutely no knowledge of the physics involved in both weather and climate, a fact that becomes clear as soon as one begins discussing the science of the green house effect with a climatologist. Most of the people in our Australian climate units are geographers – Andy Pitman, Matthew English, … Some are meteorologists, Will Steffen is a chemical engineer, and so on. There really is no such thing as a “climatologist”, as until very recently climatology was not a course on its own; climatology was based in geography departments as a subject, not a full course. It was mainly, and still is, a study of the distribution of climate and the environmental and economic differences between regions having different climates.

            Geographers and climatologists do not have the background in science, in atmospheric physics, or in any of the necessary mathematics to understand the climate. Meteorologists do, having studied to a major in physics and done a postgraduate course in meteorology, as in most meteorology training. Hence the attempt to denigrate Coleman over the “weather” vs “whether” point in that article is really a bit rich. The claim by climatologists that predicting 100 years out with their models is “different” from meteorologists trying to account for the weather next week is also complete nonsense. The atmospheric processes still involve winds and clouds and all those things which have to be modelled for both purposes. It is also this type of claim which helps to cripple faith in “climatologists”.

            Back to Coleman: it is not so much his meteorological analysis, although I do agree with him, and to say he has not done such research and is “not a climatologist” is really a bit rich. The main point in his original article, which I am not sure is still available, was that the idea of Revelle, an ex-naval oceanographer and “the father of Global Warming”, and Keeling was purely to get research money from the national funding bodies, and had very little if anything to do with an expectation that CO2 was a cause of future warming. Revelle, in fact, is believed to have refuted global warming later in life, and his original work was only carried on through the persistence of one of his early students. Of course, once they were locked into doing this research and received funding (in the US, research funding is also necessary to pay one’s own salary), there was not much incentive to say that they could not find a connection between CO2 and global warming. Luckily for them, the earth did warm from 1979 to 1998, and carbon dioxide increased at the same time. So that is largely why the IPCC now hang on to that correlation – the ONLY correlation there is.

            Yes, all of the discussion is about the degree. Sure, the earth warmed from 1979 onwards, but far less steeply than illustrated in Mann’s Hockey Stick, and what Mann did which was so very unforgivable was that he removed completely the known warming in earlier years of the millennium, which was larger than the most recent.

            The same with the Enhanced Green House Effect. It is a matter of degree and Barrett’s work shows that it will be very much smaller than claimed by untrained (in spectroscopy!) climatologists/geographers.

            I have just received your email regarding the rejected Hottel book, but the one with the attached paper is yet to arrive. Thank you for both and I expect the smaller paper will be OK. I think my server will accept a maximum of 4 MB.

            Regarding a calculation of the down flux from increased CO2, you can see that at http://www.ruralsoft.com.au/co2/downflux.pdf

            John Nicol

            • This is a reply to the entire comment that includes:
              >> What is not acknowledged here, nor by the community, is that “climatologists” ususally if not always, have backgrounds in geography ….

              I apologize for forgetting that Coleman is/was a meteorologist. The write-ups I skimmed (the one I linked and another) showed me that he was mostly ranting and then making accusations about “false” climatology without making any significant reference to relevant science. The critique pointed that much out.

              The climate models (or so I have read) have origins in weather forecasting models. They rely on similar physics, and I think “we” have had competent software programmers and physicists (and perhaps mathematicians and others as well) involved also with the climate aspects of it over the years.

              That you know a few climatologists who aren’t experts in atmosphere science is far from justifying a claim that all or most who contribute to that field are in a similar position. You should re-evaluate that view that all or most climatologists don’t know science. Certainly, NASA folks dealing with various atmosphere measurements and others designing the satellites have a good idea about much atmosphere science.

              As a bystander, I believe we don’t have to be near human-era all-time global temperature highs for global warming to be real. The sun is a key driver of temperature changes, and it’s very unlikely that our human society, both reaching significant CO2 release levels and achieving milestones in technology and science (computers, satellites, modern physics and related sciences, etc), would happen to coincide with the hotter part of the sun’s cycle across centuries. However, we should take note of whether man-made global warming is a real issue, because someday (if we assume not today) the sun will be near highs in radiance, and we don’t want to multiply that effect significantly by having neglected the lessons of CO2 and of other enhanced warming effects.

              Coleman seems to discard climate science perhaps almost solely because he knew someone who may have become a poster-boy for the science and whose intentions or knowledge Coleman doesn’t respect. Gore could be an idiot, and it wouldn’t make the science any more or less real. Many kooks have been right about aspects of their theories on occasion, sometimes even losing interest in the theories as others adopted facets of them. I think some “deniers” are not looking at the science and instead, after assuming it must be fake, are amazed a hoax at such a grand scale could have been perpetrated (by evil organizations and by others jealous of the US).

              Barry can speak better about relevant expertise.

              I think you should pick papers that were used by the IPCC and criticize those papers, rather than trying to wave them all off without reading them because of a bad feeling based on the extremely limited experience you have had with “climatologists”. Your story doesn’t convince me one iota. Pick a paper, as Barry has done with Spencer, and do a good critique; then you have a much better chance of convincing me. I appreciate the heads-up, but I don’t consider Barry Bickmore (to name one person relevant here) someone who doesn’t know science, yet he is someone who takes climatology seriously.

              I don’t want to be disrespectful, but I haven’t exactly seen stellar logic on your part in discussing the effects of adding CO2. Despite this you claim that “climatologists” can’t hold a conversation on the physics of the atmosphere. Are you one to judge? Or whose word are you taking for this?

              In terms of the degree of warming, we have to analyze the science. If you can’t give me more to go on from Barrett or on the unseen high-atmosphere temp rises, then we should move on to something else. It would be a shame, but I am not about to take your word on something whose details you aren’t showing me and which disagrees with accepted science. If I only had a nickel for every claim I ended up disagreeing with once I learned the details….

              The “hockey stick” paper appears to be a case where the graphs were open to exaggerations because a particular made up and (later shown to be) bogus statistic was used to justify the graphs. This doesn’t mean we haven’t seen significant rises in the last few decades, but it does mean that paper (which was on the extreme side in claiming current temps are clearly at all time highs and rose *much* faster than ever before) does not pass muster as is. An adjustment would likely fix the problems (and very likely not suggest impending doom). Mann’s paper does not make climate science. If the best some “deniers” can do is fixate on that one flaw and perhaps a few select others, then what are we to think of the thousands of other papers supporting AGW?

              Anyway, until you provide convincing refutation, your claim the IPCC only has correlation on its side is rather weak and almost a useless belief to me and to many others. FWIW (and I am not judging and have not read the paper), this is a new paper that factors out cyclical weather effects to show a steady rise in global average temperatures: http://tamino.wordpress.com/2011/12/06/the-real-global-warming-signal/ It might be bogus, but please provide evidence to show that. Otherwise, I see a graph of continued and steady temperature increases all throughout this past decade, much as was the case in earlier decades.

            • Thank you again Jose,

              I have to say that I do agree with most of what you have said in regard to the possibility of some climate scientists having knowledge of the physics, particularly those in the NASA team. However, a number of NASA scientists do not subscribe to the warming promoted by the IPCC, including for instance the man we are discussing, Roy Spencer, who is a NASA scientist and runs the section on remote sensing of temperature and sea surface heights.

              A number of others from NASA have in fact written papers disagreeing with the IPCC, the best known being Ferenc Miskolczi, as well as IPCC reviewers such as John Christy, Richard Lindzen and Bryan Leyland, to name but a few. The main role played by NASA is to do measurements and collect data. Others analyse that data in many different groups, including those I have mentioned in Australia, among whom I do not know of any physicist per se, though several are meteorologists.

              Most of the papers cited in the IPCC report are related to environmental problems following expected climate change. These are included of course to show how detrimental 1, 2, 3 degrees of warming will be in various environments and for the various types of flora and fauna. If you go through the 5,300 citations only about 100 have anything to say about carbon dioxide and these pay homage to Arrhenius and do not go much further than making a few moderate changes to his hypothesis.

              As a physicist who spent 30 years studying the spectroscopy of gases, I find all of these papers sadly lacking in the necessary analysis, such as that presented by Jack Barrett. They simply regurgitate the Arrhenius hypothesis and show how the increase of CO2 COULD increase global warming, but none actually shows that it will. It is this fact that makes me so concerned about the lack of understanding of the most important physical feature behind the global warming hypothesis, the behaviour of carbon dioxide.

              They do not understand the spectroscopic science, nor the processes of radiation transfer, in the required detail to make the claims they do. This statement may sound arrogant, but I do understand the physics of radiation and have taught that subject at university for most of my life. Reading any of the cited papers on the IPCC list regarding carbon dioxide clearly shows a gross lack of basic understanding. Reading the material presented by Barrett, Hug and others, which is available to the IPCC but which they ignore, allows me to make a comparison. Barrett has it right; the IPCC has it wrong. It is not a matter of “well, perhaps there is another explanation”. The claims by the IPCC relate directly to the work that Barrett has done. But they make use of an assumption which is directly in opposition to the demonstrable facts, both theoretical and experimental.

              If the IPCC can show me that their models are capable of starting with the known climate in 1930 and reproducing the climate in 2010 to within about 10% (the usual criterion), though I will allow up to 20% error, I will then accept that there is a case for a carbon tax. Since, on their own admission, they are not able to reproduce the currently known climates to any accuracy at all, I prefer to remain skeptical.

              I can’t get the whole of the paper by Grant Foster and Stefan Rahmstorf you linked me to, only the abstract. I have no reason to believe it is bogus, but it is interesting to note that again they only use the period after 1979, when everyone accepts that the climate warmed without a clear reason, while everyone also accepts that the temperature has now flattened and is slightly falling. Anyone of consequence, that is, such as Phil Jones at CRU, who accepted in 2008 that there was no statistical indication of an increase in temperature since 1995. A group of climatologists who believe in global warming – Foster, Michael Mann (not the hockey-stick Mann) and a couple of others whom I have forgotten – recently wrote a paper attempting to show WHY there had been no warming between 2001 and 2008, blaming additional aerosols produced by the Chinese!! So they totally accepted the measurements which show no warming at all over that period, and of course 2009 and 2011 have been cooler, with a slight increase in 2010.
              Cheers, John

            • This is another long comment. Umm… oops? Anyway, the more interesting bits are (a) I want to research the Arrhenius logarithmic relationship and (b) I want to see details of the model leading to 1/3 back scattering.

              >> http … ruralsoft.com.au/co2/downflux.pdf

              That link leads to a “not found on this server” error message. It did when I tried yesterday and also when I tried this morning. [US Florida time]

              >> [send email of link]

              Let me clarify since one of the key comments I posted here was not displayed for a while.

              One link was to a large 12 megabyte PDF of a paper by Hottel. I eventually sent that paper by email, but it was rejected, apparently because of its large size. You stated you had a limit of 4 MB, so this appears to be a dead end.

              Another link was to a physics textbook hosted online by Google. This is the link you may not have seen for a while. If you scan down the page, it is the very, very large link (on my screen it takes up some 10 lines) and comes from the books.google.com domain. I mentioned the author is Michael F. Modest. I think that Google book does not require a PDF viewer.

              Then I recently posted a link to a new paper by Foster & Rahmstorf, which your email to me suggested you recognized (but you did not get the actual paper, which I also don’t have).

              > I don’t want to be disrespectful, but

              Next, I want to state that I was somewhat uncomfortable with the tone of my last email, specifically the section where I said you didn’t do a “stellar” job explaining CO2 in the atmosphere. The unwritten disclaimer to that little section would have been “in my opinion and as I have not been convinced, for whatever that is worth”.

              >> pay homage to Arrhenius .. regurgitate Arrhenius Hypothesis

              I had recently seen a reference to the Arrhenius equation and was surprised this was the same person who postulated the logarithmic relationship between temperature and CO2 concentration. It’s interesting that this equation is that old and has largely not been explained through a more elaborate mechanism; at least I think that is your position, and I don’t think Wikipedia contradicted it.

              It continues to be my hope that the physics behind the equations used in the models have a solid foundation. I am definitely curious to research this more.

              [Quoting from your email to me of a comment that may be awaiting moderation] >> In effect, the increased absorption of the upward radiation from the earth, capturing heat energy at a lower level, is compensated for by the corresponding increase in the absorption of the downward radiation, so that, as shown in the calculation, a maximum and consistent 1/3 of the upward radiation can be returned. This is of course for a simple stationary body of air where the density is independent of height. A more careful analysis using digital techniques which allows for these things would show that there is likely to be a reduction in the downward intensity as CO2 increases, because of the increased rate of upward convection when the air is heated at lower levels over a smaller volume. However, the static case gives the maximum one can expect, not the 333 Wm^-2 shown in Hansen’s diagram as used by the IPCC.

              OK. This sounds different from what you were saying last time. So the max backscattering would be 1/3 of the upward. “Cancellation” then would be “partial cancellation”.

              I would still like to see the calculation and/or reference, since I think this is a very important feature of the “greenhouse effect” to understand, and, naturally, I am skeptical of things sold as simple derivations. I like to understand the limits of simple models. [And yes, I have to research the Arrhenius relation.]

              You do mention this uses a simplified model, but it might be even more simplified than you recognize. We have clouds and, generally, more than just CO2 to deal with. As Barry (and others) mention, CO2 appears to serve as a trigger for other potentially powerful “positive feedbacks”.

              The advantage of the climate scientists is that they are also reacting (and you would say likely over-reacting.. while others would say, not reacting enough) to facts on the (paleo) ground, sure, amid theories that appear not yet to be .. shall we say.. globally cooked.

              >> Cheers, John

              I finally couldn’t resist googling your name (by the way, I am a nobody in most professional regards). Are you the chairman of the Australian Climate Science Coalition (as listed on the auscsc about page)?

              > a group of professional people interested in encouraging continued scientific research into the world’s climate and in particular into the effects of increased concentrations of carbon dioxide in the atmosphere. We do not believe that past and current climates are sufficiently well understood to enable projections of future climate changes to be accurately predicted. Our purpose is to exchange scientific ideas and to encourage proper political and social debate on this intriguing subject.

              Certainly sounds good on the surface.

              Hopefully, within 20 years there will be much less resistance from a wide array of scientists to the dominant views of the experts (“experts” meaning those who have most studied the problem).

              >> but I will allow up to 20% error

              Is this for “surface” temperature (only), and do you mean within 20% of the actual temperatures in Kelvin? Or do you mean within 20% (K or C) of the delta changes from some reference point? This last would be a much tighter bound, and perhaps too tight when you consider that 100 years from now, 3.8 degrees or 4.5 or 2.9 are all things we likely should take seriously.
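
              For a sense of how different those two readings of “20%” are, with purely illustrative numbers:

              T_abs = 288.0        # a representative global mean surface temp, K
              delta = 1.0          # a representative anomaly, K
              print(0.20 * T_abs)  # 20% of the absolute temperature: 57.6 K
              print(0.20 * delta)  # 20% of the anomaly: 0.2 K, a far tighter bound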

              Where do the models stand now? I ask in part to see which results you are guiding yourself by, to see what you mean by 20%, and because I don’t know the answer.

              It’s important to keep in mind that a failure to be “very precise” doesn’t mean failure if the approximation says something important.

              >> a case for a carbon tax

              [OK, here I go off my rocker a little. Science discussion, take a break.]

              I think it is smart to consider placing limits on industries of various sorts. It’s a question of how much. I don’t believe for a minute (and who does) that the environment is an infinite reservoir. I also don’t agree that individuals should be able to buy out limited natural resources and use them as they wish (at least not to the degree popular vote would be against it, perhaps even requiring a supermajority to bypass a constitutional amendment protecting individual “access” rights). In fact, a quasi “free market” approach to ownership of resources would have the government (or another group, with the government-of-people providing oversight and refereeing) collect fees to redistribute proportionally to all of the nation’s “stockholders”: its citizens (with further restrictions at the international level). The wealthy lease to the poor today, yet the people as a collective should be leasing to the wealthy much more often. Those who are competent succeed, but should not then have leverage raised on top of leverage (a feed-forward system where wealth begets more wealth to at least the partial detriment of everyone else). Everyone gets a sort of social safety net from this “dividend”. We also get better opportunity regardless of birth limitations. After all, why should the government-of-people’s military might be used to support the exclusive monopolies of a few people, many of whom inherited their wealth, for what amounts in some cases to a token fee in light of the power the “ownership” bestows upon the principal and in light of the limitations forced on everyone else? There is hardly a real wealth tax being collected (eg, on all sorts of wealth which provide market leverage and power). Heck, the US “once” even took land just because it could or it appeared to be available at the time. We wouldn’t play most games and contests if the rules were that you had to start each game just as the last one ended, with one player clearly in the lead. We believe in a reset button of sorts: in giving new opportunity to new players. We should apply that principle a little bit more (not fully, of course) in real life. I think if voters really had the power to vote directly on bills, we would have a much fairer system. In fact, the system would apply to everyone: for example, a progressive tax would apply to everyone.

              A carbon tax does appear to be a (regressive) sales tax. I am a supporter of progressive income and wealth taxes because they create the fairest fields of competition (opportunities for more people who aren’t currently major victors, and fairer recognition of the work of laborers) and keep greed and threats by past economic winners in check. I am also a huge fan of voluntary cooperation in opening up trade secrets, but also of penalties (eg, taxes) on those who benefit from ordered commerce and society yet keep those secrets from the public (ideally, the public would vote with their dollar). Some secrets aren’t that important, but others are the foundation of monopolies and of significant market levers, leading to reduced competition and opportunity.

              >> .. aerosols produced by the Chinese!! So they totally accepted the measurements which show no warming at all over that period and of course 2009 and 2011 have been cooler with a slight increase in 2010.

              Careful when using “they”. Anyone can write a paper. And it is reasonable to consider a wide range of possibilities until something is better “understood”.

              Anyway, I *am* appreciative of this little discussion (as well as this website), and please stop saying thank you Jose in every other comment. You aren’t going to soften me that way 😛

            • >> as shown in the calculation, a maximum and consistent 1/3 of the upward radiation can be returned

              BTW, this sounds very reasonable under a “simple” model. It states radiation increases, which itself would promote a temperature rise. I am curious about the details (for my sake, although I may eventually research more and pull out a new back-of-envelope).

              One problem we have is that the PDEs don’t have general solutions and adding any sort of complexity to a system requires computers.

              A second major advantage climate scientists have over skeptics is that the former use computers to solve a problem that requires computers. The skeptics are toying around with little formulas and maybe digging a little into some mathematical solution package that surely isn’t by itself designed to tackle the entirety of a climate science problem. As long as sophisticated computer programming is avoided, they will keep resorting to simplified models, being tempted into inevitable simplifications, and getting burned.

            • That is true, Jose, that the modellers have an advantage in using models. But what skeptics are concerned about is how those models are used and the individual pieces of information fed into them in regard to the assumed greenhouse forcing. Their use of correct circulation, ocean/air coupling, cloud formation etc may be correct, even though the IPCC report advises that there are very large uncertainties in these and other variables, and that in general they are not able to claim accurate results for precipitation, for instance (IPCC AR4). However, as far as the green house effect and global warming are concerned, you do not need a computer to check whether they have the physics right. They are prepared to use the hypothesis of Arrhenius and the heat transfer model of Kiehl and Trenberth (not Hansen, as I think I mistakenly quoted earlier), who did not use computers either to formulate their theory of warming. It is at this basic level that the skeptics are so concerned. The rest of the analysis by GCMs is attacked because of their inability to demonstrate meaningful results which represent the real climate in, say, 2008, 2009, 2010. We do not need computers to see that they are unable to do that, and until they can, we will not accept that they can tell us what will happen in 2100!! So there are two distinct arguments which skeptics follow. The first is the incorrect use of a basic physical forcing, which is not necessarily produced by a computer; it is a matter of experiment and calculation, or perhaps some simple modelling on a PC, which is what I spend a lot of my time doing. The second is the total inability of the models to demonstrate “truthing”: using projections from past years to provide correct simulations of recent years over the same time span as they claim they can project into the future. If you know of any work in which this has been achieved, I would be very grateful if you could let me know. If they can do it often enough, I may even change my mind. Cheers, John

            • A few more notes:

              If the form of Arrhenius’ logarithmic relationship accurately describes some aspect of CO2 growth to first approximation, then resolving boundary conditions and testing different constant values is the sort of work that can lead to decent model improvement. It need not be totally correct in order to help patch up models, although the lack of more solid theoretical and experimental backing for the formula is not a long-term solution.
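
              For reference, the commonly cited modern fit of that logarithmic relationship is F = 5.35 * ln(C/C0) W/m^2 (Myhre et al., 1998), which a couple of lines of Python can illustrate:

              import math

              def co2_forcing(C, C0=280.0):
                  # Logarithmic forcing fit (Myhre et al., 1998): F = 5.35 ln(C/C0)
                  return 5.35 * math.log(C / C0)

              print(co2_forcing(560.0))  # doubling CO2 gives ~3.7 W/m^2

              Any constant climate sensitivity then just scales that forcing into a temperature change, which is where the real argument lies.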

              Support for CO2 models comes from explaining deviations from what solar irradiance alone would have suggested in the past at low CO2 levels. Whether the measured temperature actually goes up or down in the future has much to do with solar irradiance levels then and with weather oscillation cycles. CO2 modelling gains credibility from filling gaps in predictions. To this end, if the MWP was indeed hotter and the sun had something to do with it, that suggests the future, alongside much extra CO2, surely stands to be hotter than anything yet experienced by human society. And, unfortunately, we have an awful lot of people to feed, many of whom might be displaced. We also already add lots of other pollutants and pressures on the environment. [It may or may not be a popular topic, but we can’t continue to ignore the human population explosion.]

              Very high precision is not a key point of a model that tries to approximate climate values at levels believed to have existed over many millions of years.

            • > but it might be even more simplified than you recognize.

              What I meant was that there appear to be a fair number of complexities to account for beyond the approximation of a “simple stationary body of air where the density is independent of height”.

              Of course, you might understand this very well and other issues that some models may not consider.

            • >> They are prepared to use the hypothesis of Arrhenius and the heat transfer model of Kiehl and Trenberth (not Hansen, as I think I mistakenly quoted earlier), who did not use computers either to formulate their theory of warming. It is at this basic level that the skeptics are so concerned.

              I agree with the principle that the physics needs to be right.

              One: Does the public actually know what is in those models? Papers may refer to Arrhenius, but how is it implemented? Do the best models actually use the simple logarithmic relationship or do they fall back to lower level modelling or other equations? I do have a problem with lack of transparency. I certainly don’t blame people for being skeptical about missing details and wanting transparency (assuming this is a problem). I also, for example, am critical of journals which would not provide the papers for free access (although presumably such a journal would still be much more accessible than a closed source computer model).

              Two: Whatever they use, it need not be perfect. Does the skeptic side have derivations or evidence that the logarithmic or any other formula is incorrect and significantly so? If that model is a bit crude, it might still work fine for the range of values involved (assuming the models leverage this relationship).

              Three: In all of these years, has no one honestly taken a reasonably accurate model of the lower atmosphere (even if simplified) and managed to get a logarithmic relationship to fall out of the analysis?

              >> The rest of the analysis by GCMs is attacked because of their inability to demonstrate meaningful results which represent the real climate in, say, 2008, 2009, 2010. We … will not accept that they can tell us what will happen in 2100!!

              There is such a thing as a low resolution model that is valuable. I’ll repeat: you can trade off precision on a year-by-year basis in order to come usefully close on a longer-term basis.

              You may not see trees in a low resolution picture but still be able to usefully identify the basic forest boundary with high confidence. Such a low res picture would be very useful for forest based questions.

              So, to the extent the climate models come close in identifying long term trends, I am not at all sold on skeptic arguments that the models fail because they don’t capture the temps in any given set of years. If you want the weather, fall back to a weather model. We are asking for basic range info 100 years from now. *If* they appear to be able to get the latter within reason, there is no reason we can’t use weather modelling to find out details as we approach the dates. I want both low-res and high-res models, because each type makes trade-offs in order to address a different question more meaningfully.

            • >> the heat transfer model of Kiehl and Trenberth

              Someone on WUWT posted a table of radiation values for different altitudes at some location on some date/time. I don’t know the details, including whether it was made up, generated from a model known to match real values, or based on instruments. Two columns gave readings of downward-facing and upward-facing radiation values. If these are measurable quantities (ie, if we can orient a half-plane radiation sensor to cover the top or the lower half plane and actually measure radiation from that direction), then the upward-facing numbers throughout the first few kilometers of the lower atmosphere were on the order of the Trenberth diagram back-radiation number (ie, each was a reasonably large fraction of the surface radiation value). The upward-facing values dropped off quickly after a few kilometers, approaching 0 after tens of kilometers. At ground level, the up-facing values were, I think, almost identical in value to the downward-facing ones (iirc).

              So if this table was accurate, my interpretation of the Trenberth diagram is that these likely form fairly accurate boundary values that could be used when computer modelling the lower atmosphere. To actually judge the correctness or incorrectness of the implementation of that approach, we would need to have a better understanding of the model details.

              On the surface, I see no fundamental problem with using boundary values of that nature (the models use them a lot I think), and the Trenberth values appeared to be reasonable if based on what could be assumed to be measured atmosphere radiation values in the up and down directions.

            • Jose,

              Could you give me the reference to the WUWT table you mention? The measurements I have seen, from ground based and balloon mounted detectors, have indicated the expected intensity of both upward and downward radiation, which is, as one would expect, in equilibrium with the temperature of the surrounding air. Thus, as soon as the sun sets, the earth begins to cool, not being subjected to a K&T type of theoretical downflux. If you read carefully the analysis at the link I sent you, where the downflux is at maximum equal to 1/3 of the upward radiation from layers above the earth, you will, I am sure, understand that the K&T model is not correct. Cheers, John

            • I posted my analysis of upwelling and downwelling graph data from a single location https://bbickmore.wordpress.com/2011/07/26/just-put-the-model-down-roy/#comment-8132 . That comment has 12 links, including one at the end which is to the table from WUWT (which, btw, was generated using MODTRAN).

              The graphs probably show a number of interesting things which I interpret in numerous points. The main point is that back-radiation appears to be a very large fraction of the surface radiation. Note that I interpreted most of the downwelling values in the longwave range, taken with the instrument in the shade, as back-radiation.

  39. johnnicol, you may appreciate these two presentations Barry put together

    How to Avoid the Truth About Climate Change

    Climate Change: What We Know and How We Know It

  40. [Continuing from https://bbickmore.wordpress.com/2011/07/26/just-put-the-model-down-roy/#comment-8032 . Also note that my tone may appear a bit argumentative at times, but that is just me trying to understand and debate. I don’t mind if your tone gets just as feisty or worse (as long as you are reasonably open-minded).]

    >> The IPCC and its modellers use the results from very primitive measurements by Arrhenius in 1895

    Do you have reference to papers or documentation suggesting that is an accurate representation of the limit of the various complex models adopted by the IPCC?

    In the earlier comment, I gave an example of a textbook and a recent paper that appear to use approximate CO2 emissivity values of .2. Are you suggesting this is incorrect, or are you suggesting that this possibly correct value is not used within the IPCC models?

    Generally (and I’ll address a few more points below), can you provide more details as to the modelling that you think is inaccurately used by the IPCC and what would constitute more accurate modelling?

    >> in the wings of the lines the radiation still makes it through to outer space and increased CO2 will cause this radiation to be absorbed into the atmosphere and cause further warming.

    That might be a measured statement (ie, accounting for a minor amount of extra absorption) and may very well be accurate according to quantum mechanics probabilistic implications.

    However (next follows the key point I want to make) …

    >> If one were to increase the CO2 100 fold for instance, all of the absorption by the lines would be complete, but how much more heat energy would be retained in the atmosphere?

    …I think the key is not in how much more radiation gets through the entire atmosphere. We can assume zero extra gets absorbed. The key point is that more CO2 means the absorption of a given amount of radiation happens lower in the atmosphere. The overall result is that the total amount of re-radiation (backscatter) along the entire atmosphere is greater. This represents more of the sun’s energy residing in the atmosphere and being re-radiated at any given time. If you pack a larger amount of energy and radiation into our (now only very slightly more dense and voluminous) atmosphere, you can end up with more backscatter radiation to the planet and generally a greater average kinetic energy per gas molecule (other molecules are in LTE), aka a higher atmosphere temp.

    I have seen this argument before, that very little extra radiation otherwise going directly into space would get absorbed by the extra CO2, and so would lead to negligible warming. But those arguments seem not to consider that more CO2 would mean the absorption of a given amount of radiation happens lower in the atmosphere, leading to more retained energy throughout the atmosphere and closer to the ground. This results in extra backscatter radiation hitting the planet.

    The earth doesn’t care if the radiation supplying it with energy “originated” in the sun a few minutes earlier or instead a few days or years earlier and comes from an indirect source (like a reflective mirror in outer space or re-radiating ghgs or from a massive wool blanket wrapped around the earth). The longer the energy remains in the atmosphere and bouncing back to the earth, the more energized will be all molecules on the planet because they will be hit by more photons of a given energy (Stefan-Boltzmann).

    This idea of containing energy from escaping a certain volume (and for a fairly constant number of particles) while we continue to add more energy is what allows us to engineer many devices that reach very high temperatures here on earth.

    Now, having more energy but also more particles *may* mean that the average KE and temp remains the same or comparable, right? This is why I care about the details of the models. However, such an argument would be different than simply saying that hardly any extra radiation goes directly into outer space from the surface of the earth.

    Briefly, I think we would get “significantly” raised temps. More CO2 doesn’t add significantly to volume or pressure at our current levels (CO2 is a tiny fraction of the total mass of, and particle count in, our atmosphere). A doubling of CO2 essentially leads to nearly doubling the concentration of energy and radiation near the ground. This means we have, I believe, capture of a given quantity of earth radiation at near half the distance to the ground (compared with current levels). Re-radiation is isotropic. I think it’s reasonable to surmise (without doing too many calculations or further analysis) that overall backscatter radiation hitting the surface will at least go up noticeably, if not actually double.
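
    As a toy Beer-Lambert illustration of that height-halving point (the absorption coefficient below is a made-up placeholder; only the 1/concentration scaling matters):

    import math

    def height_for_fraction_absorbed(f, k, n):
        # Height at which a fraction f of an upward beam has been absorbed,
        # under I(h) = I0 * exp(-k * n * h); k is a hypothetical absorption
        # coefficient and n a relative concentration.
        return -math.log(1.0 - f) / (k * n)

    k = 1.0e-3
    for n in (1.0, 2.0):  # relative CO2 concentration: current vs doubled
        h = height_for_fraction_absorbed(0.9, k, n)
        print(f"concentration x{n:.0f}: 90% absorbed within {h:.0f} m")

    Doubling the concentration halves the height at which any given fraction of the upward radiation has been absorbed, which is the geometric core of the argument above.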

    >> Jack Barrett [and co] show from their much more refined measurements that a large increase in CO2 can only cause a very small amount of extra heat to be retained

    Yes, this appears to be the argument, although I would need to know what exactly is meant by retaining little extra. Do they mean few extra photons would go directly into space from the earth? If so, the argument I’m making applies. If, instead, they mean that backradiation experienced on the earth would be about the same, then that is a different story. If the latter is their conclusion, I would like to know the details of the study, since I get the impression that doubling the concentration of radiation captured by CO2 near the earth would lead to many more photons hitting the earth (perhaps near a doubling of the contribution from CO2 backscatter) and consequently higher earth surface temps, as dictated by SB.

    • JOSE_X,

      I am enjoying this very civil discussion with you and understand that to make points one sometimes sounds abrupt, but I am not finding that in your case at all. I possibly sound to be coming in very strongly at times but do not mean to be overbearing.
      I am not familiar with the use of “emissivity” for CO2, and I was unable to open the textbook reference you gave me – a pdf file which my Adobe reader claimed was “damaged”, so it would not load. If you could email me a copy at jonicol18@bigpond.com I would appreciate that. The emissivity is applied to a solid surface such as the land on earth, and sometimes to a liquid such as the sea. It refers to the rate at which heat is emitted from the surface at a given temperature T, compared with the rate emitted by a “Black Body” at the same temperature. The rate for a black body is 1 x sigma x T^4, and for a body of emissivity, say, 0.7, it is 0.7 x sigma x T^4, where sigma = 5.67 x 10^-8. One can define the emissivity of a “slab of gas”, I suppose, but in the case of a gas in thermal equilibrium with the radiation field, the intensity I at the edge of the slab is equal to the radiation density Rho (Greek letter) inside the slab, multiplied by c, the velocity of light, and divided by 6 (I W/m^2 = c x Rho / 6).
      You may not have wanted all that information!
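
      As a quick numerical check of that definition (a black body near 288 K radiates about 390 W/m^2):

      SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

      def radiated_intensity(T, emissivity=1.0):
          # Radiated intensity (W/m^2) of a surface at temperature T (K).
          return emissivity * SIGMA * T ** 4

      print(radiated_intensity(288.0))       # black body: ~390 W/m^2
      print(radiated_intensity(288.0, 0.7))  # emissivity 0.7: ~273 W/m^2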
      However, now to your key point. I understand what you are saying, and yes, it is true that the heat/radiation is absorbed more quickly and at lower levels. But, and this is a crucial point which again is overlooked, for reasons I cannot understand, by the IPCC: the increased downward radiation towards the earth is also captured by the carbon dioxide below at a higher rate, and it turns out that the two processes exactly cancel out. In fact, a reasonably simple calculation shows that the radiation returned to the earth is approximately 1/3 of the radiation emitted by the earth. So if the earth were to lose all of its heat by radiation, which it doesn’t, at an intensity Io, an intensity Io/3 would be returned by the Green House Gases. Of course this is changed by the movement of air, both upwards and downwards, by winds, turbulence and convection. Since most of the heat (80%) is transferred into the air by contact with the surface and by evaporation, again mysteriously ignored in any comments by the IPCC, much of the heat is taken up by convection and is eventually reradiated by the green house gases, because they acquire energy by collisions with the warm molecules of O2 and N2. In general, this energy will be reradiated by CO2 at heights above those where the free radiation has been absorbed by green house gases, and so increased CO2 will allow even less radiation to return to earth.
      These comments do contradict the IPCC. If you look at the IPCC report, AR4 2007, you will see a diagram which purports to show the radiation from the sun coming down, the IR from the earth, and most importantly a field of 333 W/m^2 bearing down on the earth’s surface from clouds and green house gases. This representation, due I think to Hansen a long time ago, is absolute nonsense. If you stand under a 100 Watt incandescent bulb, from which the light has spread out to more than a square meter, you will easily feel the warmth on your skin. But if you step outside from under the eaves of your house, remaining in the shade but nevertheless exposed to the sky and presumably the 333 W/m^2 shown in Hansen’s diagram, you will feel no warmer. The point is that the radiation going upwards, the temperature of the air around you, and the radiation coming downwards are all in equilibrium, and so whether you are under the eaves or under the sky, you do not receive significantly different heat.
      It is true, as you say, that the radiation is trapped by the green house gases. But at the same time, the only radiation from the atmosphere that escapes to space, thereby cooling the atmosphere and hence the earth, is via green house gases. The other major gases in the atmosphere, oxygen, nitrogen and argon, cannot radiate at the relatively low temperature of the atmosphere. In the highest parts of the atmosphere, where water vapour has been removed by condensation, the only one available is CO2. It radiates in fact, from satellite measurements, about 120 Watts per square metre on average from its main emission band. This is nearly half of all the radiation which leaves the earth in equilibrium. On the other hand, it is not responsible for absorbing more than about 20% at the absolute most of the radiation and heat energy leaving the earth. In fact I have overstated this by a large factor. (Taking the direct radiation from the earth to be 20% of the whole energy loss of 240 Watts per square metre on average, in equilibrium with the sun, the absorption band of CO2 absorbs about 25% of this, or 25% of 20%, or 5% of 240 Watts/m^2, which is 12 Watts per square metre.) This again is a bit misleading because of the role of the green house gases in being excited through collisions with heated molecules of their own and other gases. The green house gas molecules then radiate this energy, as you said, back towards the earth but also upwards towards the top of the atmosphere, and in both cases it is re-absorbed and trapped etc.
      So the process is complicated, and there are many people of course who will argue that I am wrong. However, it is difficult in the extreme to get people to say why they think I am wrong in the way that you have done, and I hope that you will now show me what you think I have said which is still not correct here. I could well be wrong and respect anyone who will tell me why. Most of what you have said is very reasonable and in general is correct. As I said before, the details of the Enhanced Green House Effect are very difficult to analyse, and I have done my best here to explain my view. I am not sure that I can easily explain in more detail why Jack Barrett’s analysis is different from the IPCC’s, whose citations in their reports do not include Barrett or Hug, but mainly Fourier (1826), Tyndall (1847), Arrhenius (1896), Callendar (1939) and Hansen (1983) …… All but Barrett use very crude spectral measurements and analysis which do not reflect enough accuracy to show why CO2 would be expected to increase warming.

      As I have found in discussions with the Climate Groups in Australia, Pitman and English at UNSW, Steffen at ANU and Dr Penny Whetton, Head of Climate at CSIRO, and others, their main conviction that CO2 is causing warming comes from belief. "We believe …" they say, NOT "We find that because…". I have often asked them why, but they are unable to give any specific answer beyond quoting results from models. They simply "believe", because the globe warmed constantly from 1979 to 1997 and CO2 increased constantly over the same period. While they acknowledge that the globe has not warmed, statistically or in fact, from 1995 to the present, they are also of the "belief" that natural causes have taken over from CO2 for the time being, but that warming due to CO2 will eventually return – though they have of course no proof of that.

      I have gone on, but I hope you will find what I have said at least a little bit interesting, and also that it answers some questions for you. As I said earlier, you will also find plenty of people around who will vehemently disagree with what I have said. Cheers, John

      • >> I am not familiar with the use of “emissivity” for CO2

        I wasn’t either. I believe it is the amount of the radiation that is absorbed on passing through a volume of gas (or liquid) through a path travel length. It behaves largely through a (negative) exponential decay relationship with path length. A zero path length gives 0 emissivity and the value “reverse decays” towards the upper limit (asymptote) that would represent the maximum. For CO2, the max would likely be the fraction that is the entire absorption/emission band of the gas relative to the weighted range of frequencies of the radiation. CO2 presumably can absorb up to near 20% of the radiation leaving the earth (or so I’d guess from the experiment results of Hottel.. although we could use other calculations based directly on absorption band and frequency profile of earth radiation).

        http://www.btinternet.com/~robertjtucker/gas_emissivity.htm
        http://en.wikipedia.org/wiki/Absorption_%28electromagnetic_radiation%29
        http://en.wikipedia.org/wiki/Beer-Lambert_law
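
        To make that saturating relationship concrete, here is a minimal Python sketch, assuming a purely illustrative absorption coefficient and asymptote (the real values depend on wavelength, pressure and temperature):

        import math

        def absorbed_fraction(path_m, k=0.05, f_max=0.20):
            # Beer-Lambert-style saturation: 0 at zero path length,
            # "reverse decaying" toward the asymptote f_max.
            # k (per metre) and f_max are placeholders, not measured values.
            return f_max * (1.0 - math.exp(-k * path_m))

        for L in (0, 10, 50, 100, 500):
            print(L, "m ->", round(absorbed_fraction(L), 3))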

        >> If you could email me a copy

        I don’t think I can email the google book, but I’ll email you the Hottel paper.

        … done.

        >> The emissivity .. refers to the rate at which heat is emitted

        Emission and absorption are closely related when it comes to photon emission/absorption. I think that explains the use of “emissivity” above.

        >> The increased downwards radiation towards the earth is also captured by the carbon dioxide below at a higher rate and it turns out that the two processes exactly cancel out.

        You will have to defend this comment.

        Here is what I see, at least near ground level (other effects cloud this simple picture if we go too high up in the atmosphere). A fraction f of the radiation from the planet going up through height h is captured by a fraction of the CO2. A very large fraction of these molecules will impart "some" of that acquired energy to other nearby gas molecules (in LTE). When spontaneous re-emission does occur isotropically, in those cases where this can happen despite the LTE interactions, it seems to me that essentially the same fraction f of the radiation heading downward ("down" would constitute about 50% of the emissions) would be absorbed by CO2 (or another ghg) below instead of hitting the ground. The result is clearly not cancellation, because 100% of the downward-emitted radiation does not get absorbed by ghg before reaching the ground. In fact, such a cancellation argument would otherwise have to apply at any level of CO2 (not just "extra" CO2 beyond current levels), leading to no significant backscattering ever.

        To continue: near enough to the ground, say in horizontal slab s1, a very small fraction of the CO2 would absorb. A small fraction of these molecules emit spontaneously, but most of that emission (1-f, approximately 1) will make it back to the ground. For slabs s2, s3, etc., we have correspondingly lower fractions absorbing within each slab. We can then look at the fraction that gets emitted spontaneously and the odds it will be captured in any other slab. Generally, though, we can see that if a fraction fn of the ground radiation made it to slab sn, then of the emissions from that horizontal slab layer, fn/2 will head back toward the ground in one shot. There is no cancellation because this result is not 0.
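
        To make the slab picture concrete, here is a toy Python sketch with made-up per-slab absorption and re-emission fractions (the real numbers depend on band strengths and on how fast LTE thermalizes the energy); it only illustrates that the one-shot return to the ground is nonzero, i.e. no cancellation:

        N_SLABS = 10
        ABSORB = 0.10   # fraction of incident radiation absorbed per slab (placeholder)
        REEMIT = 0.05   # fraction of absorbed energy re-emitted spontaneously
                        # (most is thermalized via collisions / LTE)

        flux_up = 1.0   # radiation leaving the ground
        returned = 0.0  # radiation arriving back at the ground in one shot

        for n in range(N_SLABS):
            absorbed = flux_up * ABSORB
            flux_up -= absorbed
            emitted_down = absorbed * REEMIT * 0.5   # isotropic: half goes down
            # crude survival estimate: the downward emission re-crosses n lower slabs
            returned += emitted_down * (1.0 - ABSORB) ** n

        print("escapes upward:", round(flux_up, 3))
        print("returned to ground:", round(returned, 4), "(nonzero, so no cancellation)")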

        But the story is not done.

        We repeat this analysis at different time intervals. We see that any given CO2 molecule has a chance of receiving energy at any given moment that is greater than merely the probability of absorbing a photon from the earth. It can receive energy from other molecules (as well as bounce off other molecules and perhaps acquire energy away from the average value).

        Even if most absorptions by CO2 don't lead to emissions (because LTE exchange takes place much faster than re-emission in the lower atmosphere), we have an awful lot of places from which an absorbed photon can originate (it's not just from the ground). And in each such case there is a real chance that the emission can reach the ground in one shot (it depends on where that CO2 molecule is).

        The total energy in the atmosphere at any given time can be modeled as having contributions that originated in the sun at an infinite number of points in the past (with a diminishing chance for an energy chunk that originated longer ago). This is why energy storage can lead to lots of internal radiation and correspondingly high temperatures: the radiating energy isn't limited solely to the energy that just got added recently.

        >> In fact a reasonably simple calculation shows that the radiation returned to the earth is approximately 1/3 of the radiation emitted by the earth.

        I would like to see that calculation; however, I think it contradicts the idea that, if we add "more" CO2, the potentially greater backscattering hitting the earth is completely cancelled. Why is it that with our current atmosphere we get some backscattering, yet if we add more it all cancels out? That makes no sense without perhaps the help of some sophisticated (Occam's-Razor-violating) theory. Assume we had less CO2, perhaps as little as a single molecule: then we reach a contradiction, since adding more to reach our current levels (or any step in between) would have to result in cancellation… unless that single CO2 molecule at the beginning already led to the full "1/3" backscattering, which really makes very little sense without some serious help.

        >> Since most of the heat (80%) is transferred into the air by contact with the surface and by evaporation

        Can you provide a citation for this result (and maybe some reading that more clearly explains exactly what you mean)?

        >> this energy will be reradiated by CO2 at heights above that where the free radiation has been absorbed by green house gases

        Can you explain why this is the case? Is emission not isotropic? Why are downward emissions (near enough to the ground, say within a few km) apparently being ignored?

        >> and so increased CO2 will allow even less radiation to return to earth.

        So now you don't just say that we get amazing full cancellation; you say we actually get less.

        How was it that we ended up with a habitable planet, again?

        So if we had a single CO2 molecule, what should we expect? And are you proposing some asymmetry with other ghgs? [If not, then I want to know how much back radiation we'd get from a single molecule of a single ghg. Would this be near infinite?!]

        >> If you stand under a 100 Watt incandescent bulb, from which the light has spread out to more than a square meter, you will easily feel the warmth on your skin. But if you step outside from under the eaves of your house, remaining in the shade but nevertheless exposed to the sky and presumably the 333 W/m^2 which is shown in Hansen's diagram, you will feel no warmer.

        First, be careful about relying on what you biologically feel. We should perhaps be using instruments. Our bodies generally do a good job of identifying large changes over short times; they are far less reliable for large changes over long times, or any other combination.

        Second, when you are under the light bulb, you are also exposed to the atmosphere (CO2 is everywhere… and all solids around you are also radiating as gray/blackbodies), so you get the radiation from the atmosphere and also, in addition, from the bulb. Were you suggesting putting the bulb and your arm into an isolated "0 K" environment?

        ..and if you were in the sun with a light bulb, you’d get the atmosphere plus direct sun plus bulb.

        >> But at the same time the only radiation from the atmosphere that escapes to space, thereby cooling the atmosphere and hence the earth, is via greenhouse gases. … In the highest parts of the atmosphere.. the only one available is CO2. This is nearly half of all the radiation which leaves the earth in equilibrium.

        OK, I’ll assume correct.

        >> On the other hand it is not responsible for absorbing more than about 20% at the absolute most of the radiation and heat energy leaving the earth.

        We can generally reconcile the idea that 50% of the outgoing radiation can be released via a band that absorbs only about 20%.

        Thermodynamic contact imparts energy onto CO2 from other types of molecules. Most absorbed photons lead to such interactions rather than to spontaneous emission. This means that regardless of which ghg served as the transducer channel, the energy imparted will be divided up among all gas molecules. Since CO2 dominates in numbers among ghgs for much of the middle/upper atmosphere [does it??], eventually spontaneous emissions in that range will be dominated by CO2 molecules. Sure, many of those molecules acquired their excess energy not from a photon absorption but from thermodynamic contact with another high-KE molecule. Note also that as we go up, collisions between molecules become far less frequent than near the ground.

        • >> Second, when you are under the light bulb, you are also exposed to the atmosphere (CO2 is everywhere… and all solids around you are also radiating as gray/blackbodies), so you get the radiation from the atmosphere and also, in addition, from the bulb. Were you suggesting putting the bulb and your arm into an isolated "0 K" environment?

          This question of light bulb radiation and outdoor radiation came up again below. "Upon further review", I reasoned that we may have less radiation indoors (the primary source would likely be solid objects, not the air) or we may not, but the key point is temperature. Where radiation doesn't cover the heat transfer, particle kinetic energy exchanges would. Our skin sensation, I believe, is based on temperature exposure and not radiation exposure.

  41. johnnicol, in reply to https://bbickmore.wordpress.com/2011/07/26/just-put-the-model-down-roy/#comment-8029

    >> the geological evidence of a complete lack of a consistent relationship between temperature and CO2

    Can you be a little more detailed?

    CO2 can both lag and lead.

    I think the evidence of geological CO2 lagging is very strong (so this would be a consistent relationship .. eg, providing evidence consistent with CO2 release from warming oceans and melting ice).

    The theoretical evidence of CO2 leading is what the argument is about.

    Barry’s “what we know and how we know it” presentation I linked earlier covers this a bit and many other things.

    >> The IPCC .. admit

    >> 1. There are huge errors in all of their climate results commensurate with a lack of knowledge of the values of parameters they are using.

    Can you quote? The IPCC report is online. I provided a link into chapter 9 a little earlier, so you can use that link as a starting point.

    You can have little understanding of a parameter or effect yet give it little weight, so that the model doesn't rely significantly on it. This would presumably be reflected in the confidence rating and error bars of the models and in the IPCC final analysis.

    >> 2. The models used (23 in all) do NOT provide similar values .. "plausible" … 3. The 23 models all provide different values, no two are alike, so only one of the models can be correct….

    I covered this above. See the comment including: “It’s sensible to trust experts at large without necessarily trusting each ‘word’ each one produces.”

    >> .. models which show a cooling from clouds and feedbacks are not considered plausible

    Can you provide a reference quote? I’m interested in the context, since it might be that they discard a particular model and not the idea itself.

    We should also consider how many papers support the view of negative feedbacks. Might not uncertainty be somewhat reflected in confidence level assessments? And if most experts think some particular feedback is not plausible, what do you want?

    Arguing over who is an expert is something we can do. Barry and others have hit that topic a lot (eg, see around minute 3 of his youtube presentation video, “what we know and how we know it”, linked in earlier comment).

    The video (“what we know..”, eg, around minute 26) also covers negative feedbacks and states that clouds are recognized to have negative feedback effects. The whole albedo effect is partly attributed to clouds, so I’m not sure what you mean by cloud cooling not being considered plausible by the IPCC.

    >> 3… The errors are therefore at least +- 4 C and if one applies that error to the lowest value, it does in fact extend to negative values, which is again ignored.

    You may want to pose this "question" directly to Barry in a separate comment. He addresses statistical details periodically and has some experience.

    Keep in mind that many scientists agree that it is possible we might be OK, but going by the odds and the potential seriousness implied, we should not be sitting back claiming that there is some chance everything might be absolutely fine.

    See around minute 38 of the video.

    Keep in mind that the outer bounds of the various error ranges of these model predictions carry low confidence. Statistically, you leverage that when dealing with averages of models and computing the overall confidence levels of those averages. They use standard-deviation calculations (which imply very low confidence outside certain ranges) and not just equally weighted +/- bars.

    The IPCC doesn't subtract that 4 from the very lowest value, and you shouldn't just assume that doing so would fall within a realistic range at the 95% confidence level.

    >> 4.. the expected heating in the upper tropical regions… has never been found

    Can you provide a quote of what the IPCC confesses in relation to this point?

    In terms of confessions, there is doubt, but that is not the point. It's a matter of how much doubt the people who have studied the evidence the most have, and what the consequences are if they are not that far off.

    • No, I cannot provide a quote from the IPCC, since the original expectation would have occurred in much earlier IPCC reports, but by 2007 the search for the signature had been given up by many of the teams who were searching for it. I do know Dr David Evans, who headed the CSIRO's efforts and who was a very keen experimentalist with a good team, and who believed they would find the heating with the high quality equipment they were using. David became skeptical about 2006 and is now a very ardent critic of the IPCC. There have been some papers trying to explain the fact that it was not found, but they were not very convincing, and as far as I am aware it has now been all but forgotten. But I will not forget it and neither will David Evans!

      You have made a number of good points also above, but I have just dropped in a fairly lengthy comment so will have to leave these other things till a little later. I will also revisit the video and try to pick up the points you are making from Barry's talk. Cheers, John

      • That prediction, whether a mistake or not, is the sort of thing that can and should end up improving the models and our understanding of the climate. It would be a shame if everyone simply forgot about it. If more people had access to the source code of the various models (at least Model E from GISS is open source), more people could hunt in there, try to figure out why those predictions were made, and work out what went wrong in order to fix it (assuming the models played a role in the prediction).

        From some recent reading, I assume Mann’s “hockey stick” went too far as called out by McIntyre. But as McIntyre stated on his blog (and maybe elsewhere), and somewhat in agreement with your point about this being mainly a question of “how much”:

        > I didn’t argue that it turned AGW theory upside down, but neither was it a nothing… I said very clearly that if I had been a manager or principal of the next IPCC report, I would have wanted to understand very clearly what, if anything, was wrong with it, and how we could avoid such mistakes in the future.

  42. To keep things clear, I have one comment that apparently is still in moderation from yesterday and which I have referenced in other comments https://bbickmore.wordpress.com/2011/07/26/just-put-the-model-down-roy/#comment-8032 . I also have a recent one from a few moments ago which is not yet up, and I don't know how long it might stay in purgatory https://bbickmore.wordpress.com/2011/07/26/just-put-the-model-down-roy/#comment-8066 .

  43. I just wrote a comment mentioning two other comments that appear to be “awaiting moderation” and that comment itself is now also awaiting moderation. In it I linked locally to those other two comments. I don’t think I used foul language. So I don’t know why it got trapped in limbo???

    I am crossing my fingers as I click Post Comment.

    • Sorry–comments with links automatically get moderated.

      • There is a comment "December 26, 2011 at 7:36 pm" (aka …#comment-8032) that still shows up on my browser as awaiting moderation. This comment has a hyperlink that is many lines long (I blame google for this). Maybe you missed it, or the size of that link is leading to problems.

        I think John is likely to be confused by what I am saying if he can't see that link and that comment. If that doesn't get fixed, I will repost it, trying to avoid link issues.

        Thanks. [the other “awaiting moderation” comments now appear to be ok, btw]

  44. John Nicol:

    This comment studies the upwelling and downwelling radiation at a point in the US (along the California coastline). I generated the graphs, I think, from actual data provided by the United States DoE's Atmospheric Radiation Measurement (ARM) Program. Details follow, but a main point that I think is suggested by the graphs (see the very bottom) is that atmospheric back-radiation levels are comparable to the earth surface radiation levels. This contradicts the 1/3 max back-radiation calculation you mentioned. [I welcome feedback on my interpretation of the data. You may even want to research a different location and set of graphs that you think may help you argue a different conclusion. Of course, feel free to instead agree that this evidence suggests back-radiation can in fact be very large.] As a related matter, this conclusion appears to support the usage by climate models of the Trenberth data (if we assume they use it), at least as limited boundary conditions and as concerns some if not all of the values in the diagram. Your main disagreement with climate models was in part that they relied on "the heat transfer model of Kiehl and Trenberth". You were referring to this http://bobfjones.files.wordpress.com/2011/09/trenberth-cartoon-ex-colose.jpg , right?

    This page explains what ARM is http://en.wikipedia.org/wiki/Atmospheric_Radiation_Measurement

    If you go to their website and sign up for a free account, you can access free data sets. Go to http://www.archive.arm.gov/armlogin/login.jsp and click on "Data Browser" in the "Get routine ARM data" section. After signing up, you can go back and pick data sets to browse.

    I picked a location in the US (Point Reyes, California http://en.wikipedia.org/wiki/Point_Reyes ) in August of 2005 (ie, northern hemisphere, summer, not too far from the equator… although I'm not sure about fog and cloud levels). I looked at 4 key graphs. [August 31 for upwelling and August 11 for downwelling.]

    Each of the graphs I'll mention comes from a pyranometer http://en.wikipedia.org/wiki/Pyranometer that measured irradiance in the up or down direction (180° hemispheric view). I think these were located at an altitude of 10 meters, although I don't remember where I read that info (and it may not apply to all 4 graphs).

    The first two are upwelling (shortwave and longwave): http://www.archive.arm.gov/quicklooks/2005/pye/pyegndrad60sM1.b1/pyegndrad60sM1.b1.200508/pyegndrad60sM1.b1.20050831.000000.png

    The other two of interest were downwelling (sw and lw) and are the right two graphs on this page: http://www.archive.arm.gov/quicklooks/2005/pye/pyeskyrad60sM1.b1/pyeskyrad60sM1.b1.200508/pyeskyrad60sM1.b1.20050811.000000.png . Note that there are two pyranometer measurements for lw overlapping each other, one in red and one in black. Also, these two longwave downwelling measurements were shaded (from the sun, I presume).

    The bottom numbers are the 24 hours of the day. On that clock scale, I guess that night time is roughly from 3 to 14.

    Notes (general notes and some that I think support the Trenberth diagram):

    — upwelling did not provide the shaded option because these sensors are already facing down and are shaded naturally.

    — upwelling shortwave appears to be 0 during the night, maxing around 150 during the day. I think this averages to 80 or so and is related to the 23 that Trenberth allocates to radiation reflected at the earth's surface. This sun radiation is mostly shortwave. The earth doesn't generate much shortwave at all (as suggested by the 0 values at night), which means the 100+ daytime values are likely that reflection of sunlight. Of course, the Trenberth diagram value is a global yearly average (I think), so it is noticeably lower to reflect the colder seasons of the year and colder parts of the planet (like the poles). The sun's radiation reflected off the earth's surface is largely shortwave, since less longwave arrives and most of that is probably absorbed by the atmosphere.

    — upwelling longwave has higher numbers during the day than at night, by some tens. Some of this might be reflection that occurs during the day, but most is perhaps higher IR emission during the hours the planet is exposed directly to the hot sun. Night and day, the range of this surface radiation is about 340-440 (at least for this given day in August).

    — upwelling longwave, of course, is not zero during the night, since the earth (thankfully) radiates in the longwave range 24×7 (perhaps, as suggested below, in balance with, but at a slightly lower level than, the actual back-radiation in the atmosphere).

    — downwelling shortwave did not provide the shaded option. I think this is because back-radiation (which is what I think shading isolates) in the shortwave range is probably near nil (and hence boring). There is already little sw from earth, and further, the ghgs don't really work much in that range (I think).

    — downwelling shortwave is 0 at night. The sun is not shining at night (and sw back-radiation is basically 0).

    — Looking at many days in August for downwelling longwave (the graph shown is for August 11th), I noticed these were essentially flat at the top level, possibly with dips such as are seen here around hours 18-24. I assume either of these extremes has to do with cloud cover. The flatness, day or night, is also notable. We can see examples from other days: http://www.archive.arm.gov/quicklooks/2005/pye/pyeskyrad60sM1.b1/pyeskyrad60sM1.b1.200508/pyeskyrad60sM1.b1.20050812.000000.png http://www.archive.arm.gov/quicklooks/2005/pye/pyeskyrad60sM1.b1/pyeskyrad60sM1.b1.200508/pyeskyrad60sM1.b1.20050820.000000.png and http://www.archive.arm.gov/quicklooks/2005/pye/pyeskyrad60sM1.b1/pyeskyrad60sM1.b1.200508/pyeskyrad60sM1.b1.20050829.000000.png . The flat characteristic indicates that back-radiation is nearly constant the entire 24 hours (varying, I assume, with cloud cover). GHG, how sweet you are! Combining this with the downwelling shortwave graph, I think it suggests this location in California had nearly constant night-time temperatures in August of 2005.

    — downwelling shortwave is high in the day. The max value approaches 1000, which is consistent with Stefan-Boltzmann calculations for the sun, after the radiation is diluted by a factor of roughly 46,000 as it spreads and thins on its way to the earth, and after accounting for some atmospheric attenuation over this location. Eg, see Wikipedia: http://en.wikipedia.org/wiki/Sunlight . (A quick back-of-envelope check of this dilution follows these notes.)

    — downwelling longwave (in the shade, remember), the top flat part, is fairly high, in the 370 range, and is comparable to (though smaller than) the daytime upwelling longwave (which also probably includes a little IR reflection from the sun).

    — downwelling longwave (at night) is a little higher than the night-time upwelling longwave. They are basically the same, but the small differences might be due to many things. Any ideas?

    — At night, note again, the upwelling and downwelling are in fairly tight equilibrium, with the atmosphere back-radiation (aka, downwelling) apparently taking the lead in keeping the earth surface warm at almost the same radiation level. In the longwave case, we have about 370 (350). In the shortwave, we get about 0 (0).
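
    For the shortwave maximum mentioned two notes up, here is a back-of-envelope Python check using standard round values (the dilution is just the inverse square of the sun's radius over the earth-sun distance):

    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    T_SUN = 5778.0    # effective photosphere temperature, K
    R_SUN = 6.957e8   # solar radius, m
    AU    = 1.496e11  # mean earth-sun distance, m

    photosphere = SIGMA * T_SUN**4   # ~6.3e7 W/m^2 at the sun's surface
    dilution = (R_SUN / AU)**2       # ~1/46,000 geometric spreading factor
    print(photosphere * dilution)    # ~1367 W/m^2 at the top of the atmosphere;
                                     # attenuation and sun angle bring the clear-sky
                                     # surface max down toward the ~1000 seen above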

    As a further “treat” and for comparison’s sake consider the table presented by “Spector” here http://wattsupwiththat.com/2011/10/26/does-the-trenberth-et-al-%E2%80%9Cearth%E2%80%99s-energy-budget-diagram%E2%80%9D-contain-a-paradox/ . It too suggests back-radiation is comparable to the surface radiation levels near to the ground. That table is generated using MODTRAN.

    OK, let's recap one of the key points, a point which contradicts the calculation for the max back-radiation value, which you stated was simple to do and comes out to 1/3 of surface radiation. It appears that back-radiation (which I largely attribute to the values of the shaded downwelling longwave) is definitely comparable to the earth surface upwelling radiation… at least at the location where these measurements were taken. At night, the atmospheric back-radiation appears to keep the surface warm (it is a little more intense than the surface radiation). In the day, the atmospheric back-radiation is about 85% of the surface radiation level (not 1/3). Also, the atmospheric back-radiation appears to be rather constant day and night, suggesting there is a lot of "stored" energy in there. [Of course, water was not mentioned and likely plays a role a bit different from both land and air. In particular, the Point Reyes, California measurements were taken right next to the ocean (I think).]

    • Small point: "I generated the graphs" is an inaccurate statement. The website allows you to look up data, and I went through the steps to retrieve that data. That is what I meant with the inaccurate wording.

      This is data researchers apparently use (I think they can pay for current data or custom data or something). The service has been available for a number of years. Accessing the graphs is as simple as clicking on the links. Looking for new specific data, as I did, requires you to register (or perhaps consult a very good fortune teller who can divine link names). Registering is relatively easy, although I had to improvise in the second part, where it asks for the organization you represent and other related data. It doesn't allow you simply to leave it blank or put in "other" and be done with it.

    • On the 1/3 back-radiation limit:

      It’s possible (??) the 1/3 limit could apply to a CO2-only environment (as a theoretical calculation, of course, and with the other simplifications you mentioned).

      Adding more types of ghg would increase that value, in part because a smaller fraction of the earth’s radiation would go directly into space vs. the CO2-only scenario.

      In the CO2 global warming theories, we already have the higher levels of back-radiation possible since other ghgs exist, but we do want to consider what effect increasing the CO2 would have amid the soup of existing ghg.

      • In other words, the graphs should support some aspect of the Trenberth diagrams and their use (and help explain what the numbers mean) but don’t necessarily contradict the 1/3 calculation if that calculation is, eg, a significant simplification and/or limited to CO2 only environment.

      • I agree that there will be a difference between the calculated 1/3 of upwards radiation coming down in total and the measurements. Added to that calculated value is the radiation from the air absolutely immediately in front of the apparatus doing the measurements, and reflections of shortwave radiation from clouds and sky. So, no, you don't expect the measurements to show 1/3 Io. On the other hand, they are not equal to the average 333 Wm^-2 suggested by K&T, but would be expected, plus some, at midday in the tropics (not looking at the sun, of course).

        The calculation of 1/3 Io is a demonstration that increased greenhouse gas of any type will not increase the downward flux radiated backwards from the greenhouse gases themselves, as claimed by the IPCC.

        So yes, I agree the measurements would be expected to differ from the 1/3. John

        • Do you understand why I would look at that one data point, with downwelling lw averages that are at least 333 (and fairly close to it) and at least 85% of the upwelling lw averages (which are not too far from the 396 either), and find (at least at first and maybe second glance) the IPCC position to be more rational and more based on evidence (perhaps significantly more so) than a position that claims back-radiation is limited in the absolute best case to 1/3 of surface radiation, and that further holds this model to be more applicable than what the IPCC uses?

          AND the skeptic position is based on a value that was claimed to be easy to derive yet no reference has been forthcoming!

          I will try to go look at more data points to see if I adjust my position, but right now the skeptic position you present appears to me to be grounded in something that feels a bit like quicksand, and it would not be a very useful position upon which to base science or policy.

          OK, let me add this: is it possible that the 1/3 figure you are looking at is relative to the 1000 or so W/m^2 from the sun rather than the 400 or so from the earth? 1/3 of 1000 is about 333.

          In any case, arguably a more important point is whether more CO2 can lead to significantly greater back-radiation, today and tomorrow. The qualitative position I have presented, which suggests the answer is yes, appears more solid than alleged claims made on behalf of "experts" (like Barrett) who are not providing me (the public) with the details. [This isn't an attack on Barrett's credentials; it's an attack on "faith science".]

          I want in future comments (to the extent I persist here) to keep getting back to this point of modelling the atmosphere to calculate rough back-radiation values, or of making convincing qualitative claims on back-radiation (one way or the other).

    • Point Reyes Winter data points:

      Downwelling:

      http://www.archive.arm.gov/quicklooks/2005/pye/pyeskyrad60sM1.b1/pyeskyrad60sM1.b1.200502/pyeskyrad60sM1.b1.20050219.000000.png and

      Upwelling:

      In comparison to the Summer values, the peak and average shortwave values are noticeably lower, but there is very little difference generally in the longwave.

      So longwave (shade) back-radiation year-round averages for Point Reyes appear to stay comfortably (maybe something like 15-30% higher) above the 333 value. The surface upward radiation (essentially the upwelling longwave) year-round averages appear rather close to the 396.

      North Slope Alaska (nsa) Summer and Winter for the upwelling case:
      http://www.archive.arm.gov/quicklooks/2005/nsa/nsagndrad60sC2.b1/nsagndrad60sC2.b1.200508/nsagndrad60sC2.b1.20050811.000000.png and

      The downwelling longwave case was not available online ("quick look image") for this location on the dates I picked. Unfortunately, this is the key case I want to look at. I did "order" a few files and will report back if I get the data.

      FWIW, the upwelling longwave (ground radiation) in Alaska was similar to the California location in the Summer (it was actually a little higher in some cases, maybe because of some extra reflection from the ground or being away from the ocean??). Meanwhile, the Winter values were clearly lower. Also, the shortwave values were lower in both Summer and Winter (relative to Point Reyes, California).

      I searched for other cold regions and the same type of instrument and was not able to find any more quicklooks (or not able to find the same instrument). I may go and research other data set types and instruments (or not) to see if I find other online images.

      FWIW, I came across this project: http://data.eol.ucar.edu/codiac/projs?SHEBA
      > SHEBA is motivated by the large discrepancies among simulations by global climate models (GCMs) of the present and future climate in the arctic and by uncertainty about the impact of the arctic on climate change. These problems arise from an incomplete understanding of the physics of vertical energy exchange within the ocean/ice/atmosphere system.

    • > Each of these graphs I’ll mention comes from a Pyranometer http://en.wikipedia.org/wiki/Pyranometer

      This is wrong.

      The shortwave graphs I think use a pyranometer.

      The longwave measurements came from this http://en.wikipedia.org/wiki/Pyrgeometer , which “measures the atmospheric infra-red radiation spectrum that extends approximately from 4.5 µm to 100 µm.”

    • > As a further “treat” and for comparison’s sake consider the table presented by “Spector” here http://wattsupwiththat.com/2011/10/26/does-the-trenberth-et-al-%E2%80%9Cearth%E2%80%99s-energy-budget-diagram%E2%80%9D-contain-a-paradox/

      Oops, the link should have been directly to the comment showing the table.

      http://wattsupwiththat.com/2011/10/26/does-the-trenberth-et-al-%E2%80%9Cearth%E2%80%99s-energy-budget-diagram%E2%80%9D-contain-a-paradox/#comment-795457

  45. johnnicol and Jose_X,

    This is an interesting discussion. Might I ask each of you if you know that the 333 W/m^2 designated as 'back radiation' in K&T 2009 is not the amount of surface LW flux absorbed by the atmosphere that comes back to the surface, but rather just the total downward LW flux entering the surface from the atmosphere?

    I’ve noticed in numerous discussions this has been a major source of confusion. By deduction, only 157 W/m^2 of the 333 W/m^2 is ‘back radiation’ as defined as that which last originated from the surface emitted LW flux (396 – 70 – 169 = 157).

    In reality, only about half of what’s absorbed by the atmosphere returns to the surface. The other half escapes to space from the atmosphere as part of the 239 W/m^2 leaving at the TOA (169 + 70 + 239). Using Trenberth’s direct surface to space transmittance of 70 W/m^2, actually a little more than half of what’s absorbed by the atmosphere ends up emitted to space.
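
    For anyone following the arithmetic, here is a quick Python sketch of that bookkeeping using the K&T 2009 numbers cited in this thread (it checks the sums only, not the interpretation):

    surface_lw = 396   # emitted upward by the surface, W/m^2
    window     = 70    # surface emission passing directly to space
    toa_lw     = 239   # total longwave leaving at the top of atmosphere
    dlr        = 333   # downward longwave measured at the surface

    absorbed_by_atm  = surface_lw - window                 # 326
    escapes_from_atm = toa_lw - window                     # 169
    back_per_rw      = absorbed_by_atm - escapes_from_atm  # 157, RW's 'back radiation'

    print(absorbed_by_atm, escapes_from_atm, back_per_rw, dlr - back_per_rw)
    # -> 326 169 157 176: on RW's reading, the remaining 176 W/m^2 of downward LW
    #    comes from absorbed solar plus the non-radiative (latent/sensible) flux.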

    • RW, different parts of the atmosphere engage in different levels of back-radiation. This is one reason I provided the table from WUWT, to show that the 333 makes sense only if we look at the space that is rather close to the ground (going up at most 1 or 2 km, I think).

      This point is one reason why I want the public to have access to source code of the models.. to see where in the model (eg, at what altitude) the 333 value is being used.

      I suspect that the Trenberth values are only useful in some simpler models. More complex models likely use data from sources like what I used to get the graphs above. They use different sets of numbers depending on what height of the atmosphere they are modelling and how. Even if they compute averages, like the Trenberth diagram, the more complex models likely use a range of values and not just one average value if they are modelling a large swath of the atmosphere .. I do want the public to gain access to those details in order to verify this sort of thing.

      Think of back-radiation this way: the low atmosphere (say, meters above the ground) is bombarded from both below and above. As you move higher up, some of what was hitting lower gas molecules from below does not easily reach you (at the same power rate). Consequently, you send back down a smaller fraction of radiation. You also get less radiation from above, inductively. In other words, as you extend this analysis upwards, you see that it is inductively logical (I think): the higher-up pieces of atmosphere get less radiation from above (and below), in part because those above send less radiation down, which itself happens in part because those above them also send less radiation their way, and so on.

      Another way to think about this: you are inside fog, and a bright light shines from one point. If you stand 10 meters away, you basically see the light strongly. If you stand 100 meters away, you see it weakly. At 1000 meters, you basically don't see it. Now, because there is fog all around you, note that even if you don't look directly at the light source, you still see reflections of light off the particles all around you. At 10 meters, you are immersed in light. At 100 you see a very faint glow around you. At 1000 you see basically nothing around you. This surrounding light is analogous to the forward and back radiation. Essentially, near the source there is lots more radiation coming and going from nearby particles, in contrast to what takes place far from the source. Close to the source (the ground), lots of back radiation is experienced. Far from the source, little back radiation is experienced.
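
      The fog analogy can be put in rough numbers with a simple exponential extinction law (the mean free path below is invented purely for illustration):

      import math

      MEAN_FREE_PATH = 30.0   # metres before a photon is scattered/absorbed (made up)

      for d in (10, 100, 1000):
          print(d, "m from the lamp:", "%.2e" % math.exp(-d / MEAN_FREE_PATH))
      # 10 m -> ~0.7 (bright), 100 m -> ~0.04 (faint glow), 1000 m -> ~3e-15 (nothing),
      # matching the intuition that radiation exchange is intense only near the source.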

      Note that the Trenberth diagram has the back radiation arrow downward originating from layers that include the label “Greenhouse Gases”. Since ghg are everywhere, we can argue that the diagram might be referring to back radiation as measured at ground level (ie, “underneath” the atmosphere). Similarly the 169 might be radiation as seen by satellites “above” the atmosphere.

      There is ambiguity in that diagram. To properly criticize it, we need to know how the experts interpret those numbers. Again: why (except in a few cases, I think) does the public not have access to the climate model computer source code?

  46. From http://scienceofdoom.com/2010/07/17/the-amazing-case-of-back-radiation/

    > First of all, what is “back-radiation” ? It’s the radiation emitted by the atmosphere which is incident on the earth’s surface. It is also more correctly known as downward longwave radiation – or DLR

    So “back radiation” is in fact defined as the longwave radiation at the ground level but coming from the sky.

    This definition is consistent with the WUWT table values, the 10 m instrument altitude I thought I read for the graphs above, and does suggest that the Trenberth back radiation down arrow does not refer to cloud level but covers the entire greenhouse effect right above the ground.

    • ..further, looking at the Trenberth diagram http://bobfjones.files.wordpress.com/2011/09/trenberth-cartoon-ex-colose.jpg , which can also be seen in black/white here http://climateclash.com/files/2011/01/Petb21.gif , we see that underneath the “333 Back Radiation” is “333 Absorbed by Surface”.

      The meaning now seems clearer. The back radiation measured at the surface is 333.

    • That scienceofdoom article gives interesting info on back radiation.

      For example, from this chart, we see averages year round for locations near the equator, at mid latitudes, and near poles: http://scienceofdoom.files.wordpress.com/2010/07/dlr-many-stations-wild-2001-499px.png?w=500

      This next table compares measurements with theory. The point I want to make here is that the values are for April/May 1999, and most stations report in the upper half of the 200 range and in the 300s. There are some values in the low 400s and as low as 100 (probably near a pole). [Not sure if these are round-the-clock averages or mostly day or night or what.]

      All of these numbers and graphs suggest that a low 300 global average value for back radiation is reasonable (aka, within the ballpark).

      And we really should be looking at the Trenberth diagram from the 1990s. Here is one http://climateclash.com/files/2011/01/PetB11.gif . It also has a low-300s value. Specifically, it appears the average back radiation has gone up about 9 W/m^2 in the last decade.

      Look at what the scienceofdoom article says generally about back radiation measurements:
      > However, if you want to look at the surface, the values are much “thinner on the ground” because satellites can’t measure these values (see note 1). There are lots of thermometers around the world taking hourly and daily measurements of temperature but instruments to measure radiation accurately are much more expensive. So this parameter has the least number of measurements.

      So do few reported measurements mean the theory is in doubt? Read what the article says. [Short answer provided: no, using the analogy that we believe the ocean is salty even if few people take and publish such measurements today. It’s boring to publish such measurements.]

      They list resources where such back radiation data has been aggregated (eg, from bits in published papers).

      For example, this resource summarizes a large number of existing back radiation data taken in the 20th century: http://journals.ametsoc.org/doi/abs/10.1175/1520-0477%281999%29080%3C0831%3ATGEBA%3E2.0.CO%3B2 . It is “The Global Energy Balance Archive” published in 1999. A link to the actual pdf is http://journals.ametsoc.org/doi/pdf/10.1175/1520-0477%281999%29080%3C0831%3ATGEBA%3E2.0.CO%3B2 .

      This scienceofdoom article overviews that pdf a little bit and shows various graph plots from it.

    • I was very interested to see again these measurements, with which I am familiar, and which suggest that back radiation is somehow distinguished as coming from greenhouse gases in the model of K&T.

      However, it is, as I pointed out previously, a matter of common experience that when moving from inside a room to outside under the sky (but not in line with the sun), or from under the eaves of a house into the open, both in the daytime and at night, one is not placed immediately in a new hot field of, by the measurements, up to 450 W/m^2. Perhaps your experience is different from mine in this regard?

      This is why the measurements over the arctic, for example, correspond to a range of temperatures from 199 K to 225 K. Nothing new there. The same goes for the other latitudes.

      Remember of course that the K&T diagram is an average over a flat receptive disc: half the globe averaged over 24 hours, with an influx from the sun of 240 Wm^-2 after taking account of the earth's albedo of 0.3 (estimated).

      What you are experiencing in the room, under the eaves and under the sky is the same field, which comes from radiation and scattered radiation from all over the place and, above all, is in thermal equilibrium with the LW radiation from the ground.

      There is no way, by pointing a detector towards the sky, that one can detect the actual position of the source of the radiation measured. It may have come from 10 km or 10 mm or 10 microns away; all you know is that you are measuring radiation, and as I said before, I accept that. In all of this, it is the change which is claimed for increased CO2 which I do not accept, and the calculations show mathematically, and I believe quite correctly, that the component of that downward radiation, as measured, which is the result of reradiation from distributed greenhouse gases, is fixed. What happens when the instrument is pointed downwards or sideways? It would measure almost exactly the same intensity at any given time.

      I accept that the measured intensities will be correct and have no reason whatsoever to doubt their accuracy. It is just that, in my view, they do not measure what Hansen and K&T are claiming they measure. Nor do they in any way show that, whatever the measured intensities happen to be, these values will increase because of greenhouse gases. The calculation at ruralsoft demonstrates, I believe, that this reradiation by a greenhouse gas at any frequency, for a well mixed GHG such as carbon dioxide, is fixed. The condensation of water vapour and its cut-off height, rather than an exponential decay right to the top of the atmosphere, may well alter that, but we are not talking about water vapour; we are concerned with carbon dioxide. Much of the back radiation measured, in fact most of it, will come from water vapour. However, the net radiation just above the ground in both directions will be in thermodynamic equilibrium and will be the same. My little personal experiment from under the eaves substantiates that also.

      I would be interested to hear your interpretation of the measurements and whether you could explain to me how the measurements can be tied directly to greenhouse gas emissions from the sky alone. Also, could you explain why I and others cannot feel the downward heat at, say, 350 Wm^-2, when I can feel the heat from a 100 Watt bulb which is about a metre away?
      John

      • >> explain to me how the measurements can be tied directly to green house gas emissions

        I have a comment to RW in moderation (sorry, I like to include links) that addresses this.

        The main point is that the greenhouse gas description in the middle part of the diagram doesn’t mean that the back radiation only comes from greenhouse gases. It says the greenhouse gases are in that region. [Do note that I have been evolving my understanding of that diagram as I read up and get more exposed to the jargon, research, and instruments used by climate scientists.]

        The Trenberth diagrams focus on reception (destination) at a given point and don't attempt to guess whether a photon came from this gas or that gas, from condensation at some altitude or other, or from the sun. The diagram values are an expression of radiation at a given point, period. For the 333, that point is on the ground looking up, as would be done with an instrument. [It's an average of approximations… since some instruments are higher up above ground, and ground level isn't flat in the first place.]

        >> when moving from inside a room to outside under the sky (but not in line with the sun), or from under the eaves of a house into the open, both in the daytime and at night, one is not placed immediately in a new hot field of, by the measurements, up to 450 W/m^2.

        I think most of our skin "heat" sensation comes from temperature and not from radiation. Given similar temperatures, it would be consistent that we would experience little difference in sensation when going outside at the same exact temperature.

        Inside and outside there is a very large number of molecules bouncing off our bodies per second helping to maintain thermodynamic equilibrium (LTE), ie, helping to maintain our temperature in balance with the environment.

        As for radiation indoors: if we have material above us blocking the sky, it can be producing a radiation flux much like the planet does, as dictated by gray/blackbody radiation rules. Some items emit more radiation than others (emissivity value variations).

        We can gain insight specifically into IR radiation in certain ranges by taking an IR camera indoors and viewing the resulting capture.

        So the answer is that radiation is not what we feel, and in fact radiation levels would vary indoors and outdoors depending on the indoor walls, etc. (eg, inside a glass house which itself was inside some material opaque to visible light, we would perhaps be exposed to very little radiation from above/outside, and we would also get little radiation from the ghg in the air above us, which I expect would not be that high due to the small volume of a couple of meters max and because LTE interactions dominate).

        Is this explanation of sensations next to a light bulb indoors vs. outside adequate?

        I am trying to understand this better as I go along, so do feel free to point out inconsistencies in what I say as you see them.

  47. Jose_X,

    My point is simply that the 333 W/m^2 does not represent how much of the 396 W/m^2 surface LW flux comes back to the surface.

    The point being that not all the downward LW received at the surface is 'back radiation' as defined as that which last originated from the surface emitted LW flux. A portion of the downward LW is 'forward radiation' from the Sun absorbed in the atmosphere yet to reach the surface (key distinction), and the remaining portion is from the non-radiative flux moved from the surface into the atmosphere (evapotranspiration and thermals), which also radiates in the LW infrared, some of it back in the direction of the surface.

    What matters most in the system is net energy flow, specifically from the surface to the atmosphere, from the atmosphere back to the surface, from the atmosphere to the TOA, and from the surface directly to the TOA.

    The atmosphere is limited to about 50% opacity, because half of what's absorbed from the surface LW flux escapes to space as part of the flux leaving at the TOA. Trenberth's depiction obfuscates this, among other things. For example, the non-radiative flux moved from the surface into the atmosphere does not all come back to the surface as LW radiation. What then is the source of energy in the temperature component of precipitation? It's nowhere to be found in Trenberth's depiction.

    • Yes RW, I agree entirely with your summary. 80% of the heat entering the atmosphere is due to contact between air/wind and ground (20%) and evaporation over oceans (60%). The other 20% is by greenhouse gas absorption, of which water vapour absorbs about 70% and CO2 30% (at most). The K&T cartoon, as it is often described by climatologists because it is just a drawing, is very misleading in its perspective on the way in which the ground is heated in the first place. Secondly, it purports to show that increases in GHGs will increase this back-radiation, which is not correct because of the increased reabsorption below the level of the source.

      For instance, consider a major sample of air warmed by evaporation over the sea surface and rising by convection. When the air gets to say 3,000 m, it is cool enough for the water vapour to condense, warming the surrounding air. The warmer air loses some of its extra energy by exciting CO2 through collisions, and the CO2 then radiates this energy upwards and downwards. Because the total air mass above that level is about the same as that below it, the same amount of energy escapes upwards as downwards.

      If the density of greenhouse gases increases, there will be less direct radiation reaching the ground and less escaping. But in both cases the air well above the ground will be warmed and will be lifted by convection, so again reradiation will occur at a higher level, providing a further opportunity for radiation to escape, thanks to CO2, the only significant radiator above the cloud level. Without CO2 the higher air would just get warmer.
      John Nicol

      • >> 80% of the heat entering the atmosphere is due to contact between air/wind and ground (20%) and evaporation over oceans (60%). The other 20% is by greenhouse gas absorption, of which water vapour absorbs about 70% and CO2 30% (at most). … thanks to CO2, the only significant radiator above the cloud level

        There is an omission in this story.

        If it weren’t for the greenhouse effect — and let’s remove the water on the planet (but keep other gases and the ozone) — the radiation coming from the ground (if we average day and night and all the surface) would be near 200 instead of 400.

        So, yes, non-gh gases would get temperature from the ground, but the air temperature would be much less. We'd be dealing with 200 and not 400.
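
        Those round figures track a standard Stefan-Boltzmann estimate with the commonly used effective temperatures (about 255 K for a no-greenhouse earth at the same albedo, about 288 K for the actual mean surface); a minimal Python sketch:

        SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        for label, T in (("no greenhouse, ~255 K", 255.0), ("actual surface, ~288 K", 288.0)):
            print(label, "->", round(SIGMA * T**4), "W/m^2")
        # -> roughly 240 vs 390 W/m^2, the same ballpark as the 200-vs-400 contrast above.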

        Greenhouse effect is real. It does roughly work to the effect of a blanket.

        The back radiation is real. This extra radiation would otherwise just escape into space instead of being sent back at the planet again at some net rate.

        I have to run now, but in a new post I'll revisit back radiation (no, the higher concentration of ghg below doesn't absorb all of the new radiation from above), and I'll try to guess at what things might be like if we added the H2O back to the planet but kept out the other ghgs.

    • RW, you may want to skip quickly to the very last subreply at the bottom of this comment if you want a summary that might clear up some misunderstandings over the Trenberth diagram (and then come back and read the rest of the comment).

      >> The point being that not all the downward LW received at the surface is ‘back radiation’ as defined as that which last originated from the surface emitted LW flux.

      I agree. They don’t use that definition. The definition is what you measure on the surface when looking upwards.

      The Trenberth diagram does appear to suggest that and nothing more. [Yes, it took me a while to realize that.] For example, it draws the arrows coming down from the ghg section (which is the atmosphere, ie, which starts right about the ground surface, plain and simple) and also has a connected component below in the drawing which says, “333 Absorbed by Surface”. So Trenberth appears simply to have calculated a global average of radiation sensed at ground level when looking back towards the sky.

      If you read a comment I wrote about an hour earlier (currently, “awaiting moderation”) or read the scienceofdoom article, you should see examples of these back radiation values around the world, and 333 (or actually 324 or a bit higher) does sound like a plausible global average value.

      >> A portion of the downward LW is ‘forward radiation’ from the Sun absorbed in the atmosphere

      The graphs I produced were of measurements taken in the shade; however, I don’t think the back radiation definition cares about that.

      At the surface:

      Leaving the surface:
      396 (longwave radiation)
      + 80 (evapotranspiration)
      + 17 (thermals?)
      + 23 (reflected shortwave)
      + 1 (retained?)

      =

      Arriving from the air above the ground:
      333 (back radiation, longwave)
      + 161 (sun shortwave absorbed)
      + 23 (sun shortwave reflected)
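
      Totting up both sides as a sanity check (a quick sum, same K&T 2009 values as above):

      leaving  = 396 + 80 + 17 + 23 + 1   # radiation + evapotranspiration + thermals + reflected + retained
      arriving = 333 + 161 + 23           # back radiation + absorbed sun SW + reflected sun SW
      print(leaving, arriving)            # -> 517 517, so the surface budget closes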

      As the article indicates (and as can be calculated), the sun produces very little IR that makes it to the earth (and even less that makes it to the ground) in comparison to what the earth generates. This can be derived from Planck blackbody spectrum curve.

      If this sun IR is included in the 333, then it wouldn’t be in the 161.

      Note that measurements made with this instrument (such as the longwave graphs I showed) http://en.wikipedia.org/wiki/Pyrgeometer cover the “4.5 µm to 100 µm” region.

      >> ..the remaining portion is from ..(evapotranspiration and thermals), which also radiates in the LW infrared

      If your model uses radiation values to make calculations, then what matters is the radiation value at that point and not at other points. Photon radiation at a given frequency works the same way no matter where the photons came from. The important point, of course, is to make sure we don’t double count. The Trenberth diagrams appear to be on par with measured values.

      We can note that the diagram doesn’t claim or suggest the 333 specifically excludes photons that were just released by condensation above.

      >> What matters most in the system is net energy flow

      Yes, it appears Trenberth focuses on that.

      >> ..half of what’s absorbed from the surface LW flux escapes to space as part of the flux leaving at the TOA. Trenberth’s depiction obfuscates this

      Now that I think I understand this better, I would disagree. The 333 is (likely just) LW that hits the ground from above. If you don’t like the “Back Radiation” moniker, then use something else .. like DLR (downward longwave radiation). I think Trenberth is measuring DLR.

      >> What then is the source of energy in the temperature component of precipitation?

      So, the diagram focuses on radiation that exists at a given point. That point is the destination, and that value can be verified by an instrument located at that point; the diagram doesn’t attempt to guess the source of the radiation that falls upon a point.

      Representing sources, btw, would mean deriving from theory a continuous density distribution function in at least 4 variables (time plus the spatial coordinates of the source points). The source might be anything, and the diagram doesn’t attempt to guess it.

      This all makes much more sense to me now than it did when I first looked at those diagrams.

      • Jose_X,

        “So, the diagram focuses on radiation that exists at a given point. That point is the destination, and that value can be verified by an instrument located at that point; the diagram doesn’t attempt to guess the source of the radiation that falls upon a point.”

        Yeah, but the diagram shows non-radiative flux from the surface without showing non-radiative flux back to the surface, and instead lumps all those watts of non-radiative flux into the 333 W/m^2 radiative flux entering the surface from the atmosphere. This is not only incorrect, but highly misleading in many ways.

        • You might assume the 333 includes flux besides back radiation (despite the label on the diagram). If it did, that would be misleading.

          My guess is that “thermals” is the __net__ gain/loss at the surface of the planet via convection (and “evapotranspiration” is total latent heat loss into the atmosphere).

          So heat released by condensation of H2O up high essentially contributes to the temperature gradient, which still favors a net rate of convection drawing heat away from the surface of the planet and into the atmosphere.

          Back radiation __could__ be combined with surface radiation for a __net__ radiation flux, but that was not done. The diagram chose to highlight back radiation because it is highlighting the greenhouse effect.

          Let’s note some differences between radiation and thermals: absolute thermal transfers might be very hard to measure and quantify, while radiation is clearly directional and works at a distance, so it can be measured much more easily by an instrument intercepting it along the path. Also, in terms of defining temperatures, directional radiation fluxes (eg, “in” or “out”) have a clean relationship to temperature. I don’t think absolute thermal (convection) values enjoy a similar status or useful meaning.

          • Jose_X,

            The point is of the 326 W/m^2 absorbed by the atmosphere from the surface LW flux, only 157 W/m^2 comes back to the surface and/or is true ‘back radiation’ defined as that which last originated from the surface LW flux (239 + 157 = 396). The remaining 169 W/m^2 escapes to space as part of the 239 W/m^2 flux leaving at the TOA (169 + 70 = 239).

            The non-radiative fluxes (evapotranspiration and thermals) are just moving energy around within the thermal mass (part of which is the atmosphere), so the total atmospheric absorption from the surface LW flux is about 326 W/m^2, and the surface energy balance is what it is – a net of about 396 W/m^2 entering the surface from the atmosphere.

            • Moreover, you can see that half of what’s absorbed by the atmosphere from the surface LW flux escapes to space (actually more than half, using Trenberth’s ‘window’ transmittance of only 70 W/m^2).

            • Can you clarify where the 326 is coming from?

              Also, back radiation (DLR) is defined as longwave that you observe coming from above (possibly shaded, although I’m not sure how significant that would be). The 333 is at ground. If you go up one kilometer, the number is smaller. A few kilometers up, the number is very small (I think tens of W/m^2).

  48. Jose_X,

    “Can you clarify where the 326 is coming from?”

    The surface radiates 396 W/m^2. 70 W/m^2 of this passes straight into space (40 W/m^2 through the clear sky and 30 W/m^2 through the cloudy sky). The difference of 326 W/m^2 is the amount absorbed by the atmosphere (396 – 70 = 326).

    “Also, back radiation (DLR) is defined as longwave that you observe coming from above.”

    I know, but in reality this is not true ‘back radiation’ but rather just the total downward LW incident on the surface, which has a total of 3 sources – only one of which is ‘back radiation’ as defined as that which last originated from the surface LW.

    “The 333 is at ground”

    Yes.

    • 333 = 78 + 17 + 80 + (396 – 40 – 30 – 169)
      ..[within rounding error of 1].

      These numbers and the picture suggest that the back radiation is everything that doesn’t go into space (–239) out of everything that goes into the atmosphere (+396 +78 +17 +80). Note that the parenthesis above was used to highlight the 157 you used; adding the first 3 values above to that 157 gives 333 ±1.

      The meaning of going “into the atmosphere” would be all energy (of any form) that is absorbed at some point by a ghg. Any ghg molecule that radiates a photon reaching the ground would have contributed to back radiation.

      Yes, the 333 isn’t just from the power flux of the earth, but it is all radiation coming back at the earth from molecules in the atmosphere. Thus back radiation is the total greenhouse contribution heating the planet’s surface: all the radiation hitting the ground except that which came directly from the sun.
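      To make the bookkeeping explicit, here is a quick sketch (my own) that checks these identities with the numbers quoted in this thread:

```python
# Sketch: consistency check of the Trenberth-style numbers quoted above (W/m^2).
surface_lw  = 396  # LW emitted by the surface
window      = 70   # passes straight to space (40 clear sky + 30 cloudy sky)
evapo       = 80   # evapotranspiration (latent heat)
thermals    = 17   # sensible heat
solar_atmos = 78   # solar absorbed by the atmosphere
toa_lw_out  = 239  # total LW leaving at the top of the atmosphere

absorbed = surface_lw - window                 # 326, the atmospheric absorption
returned = surface_lw - (toa_lw_out - window)  # 157, RW's 'true' back radiation
back_rad = solar_atmos + thermals + evapo + returned

print(absorbed)  # 326
print(returned)  # 157
print(back_rad)  # 332, i.e. the diagram's 333 within rounding
```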

  49. Talking about oversimplified models, this comment http://www.skepticalscience.com/argument.php?p=9&t=446&&a=15#71061 was just posted and relates to this conversation. It draws attention to zero-order models.

    It also draws attention to a complex climate model http://www.cesm.ucar.edu/models/atm-cam/ that it says is often used by researchers. As that page shows, the model is open source software, so anyone is allowed to study all of its details.

    • >> Talking about oversimplified models, this comment http://www.skepticalscience.com/argument.php?p=9&t=446&&a=15#71061 was just posted and relates to this conversation.

      I worded that a bit oddly. That comment doesn’t relate to this particular discussion at all.

      That link is to a comment on a topic that relates to the recent discussion here (a subject line might read “IPCC climate predictions are *not* based on results from oversimplified climate models”).

  50. Following some links, I came across this: http://rabett.blogspot.com/2009/03/second-law-and-its-criminal-misuse-as.html

    The page uses a simple model and a simple bit of mathematics (Stefan-Boltzmann and a simple system of 2 independent algebraic equations) to show why simply adding a shell around a heated object leads to that object being hotter. The shell is an approximation, of course; its only interesting property is that it can absorb/emit radiation (eg, a blackbody or anything that even approximates one). I spell out the two equations just below.
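    For the record, here is my own summary of that two-equation derivation (my notation, not a quote from the page). With absorbed flux P at the heated surface and one blackbody shell at temperature Tshell:

    Shell balance: 2 * sigma * Tshell^4 = sigma * Tsurf^4 (the shell emits up and down what it absorbs from the surface)
    Surface balance: sigma * Tsurf^4 = P + sigma * Tshell^4 (the surface absorbs P plus the shell’s downward emission)

    Substituting the first into the second gives sigma * Tsurf^4 = 2P, so the enclosed surface runs a factor of 2^(1/4) (about 19%) hotter than it would bare.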

    I added a two-part comment at the bottom because I think the simple model could be improved a little more. I describe a modified model that I think is a bit more reflective of the earth system, and I also consider the results when we add more and more shells. Of course, as I state there, assuming the emissivity of every new shell is 1 is clearly not a good model for our atmosphere. However, the point of the webpage is to show with simple math how adding a shell over a radiating body leads to a hotter inner body. Treating a slice of the earth’s atmosphere as such a spherical shell (with a much more accurate emissivity value) would lead to the same warming effect, although much more subdued. Adding more such correctly quantified atmospheric shells should approximate the earth’s atmosphere; each shell added increases the “greenhouse effect”.
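    Generalizing the same balance to n perfect shells gives surface flux = (n+1) * P. Here is my own toy sketch of that result (real atmospheric layers have emissivity well below 1, so this overstates the effect):

```python
# Sketch: equilibrium surface temperature under n perfectly absorbing/emitting
# (blackbody) shells. Each added shell raises the surface flux by one unit of P.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def surface_temperature(absorbed_flux, n_shells):
    """Solve sigma*T^4 = (n_shells + 1) * absorbed_flux for T."""
    return ((n_shells + 1) * absorbed_flux / SIGMA) ** 0.25

P = 239.0  # W/m^2, roughly Earth's absorbed solar flux
for n in range(4):
    print(n, round(surface_temperature(P, n), 1))  # 0 shells: ~255 K, 1: ~303 K, ...
```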

    Also, if you are curious, that page considers a paper http://arxiv.org/pdf/0707.1161v4 published in 2009 that claims the greenhouse effect is bogus. Hopefully, I’ll find time to read over it. [Not because I think the paper is accurate, but it would be interesting to know what arguments its authors use .. and it is possible they make some accurate points in there.]
