Thursday, 26 February 2015

The Statistics of Climate Change

From left to right: Norman Fenton, Hannah Fry, David Spiegelhalter. Link to the Programme's BBC website
(This is a cross posting of the article here)

I had the pleasure of being one of the three presenters of the BBC documentary “Climate Change by Numbers”, first screened on BBC4 on 2 March 2015.

The motivation for the programme was to take a new look at the climate change debate by focusing on three key numbers that all come from the most recent IPCC report. The numbers were:
  • 0.85 degrees - the amount of warming the planet has undergone since 1880
  • 95% - the degree of certainty climate scientists have that at least half the warming in the last 60 years is man-made
  • one trillion tonnes - the cumulative amount of carbon that can be burnt, ever, if the planet is to stay below ‘dangerous levels’ of climate change
The idea was to get mathematicians/statisticians who had not been involved in the climate change debate to explain in lay terms how and why climate scientists had arrived at these three numbers. The other two presenters were Dr Hannah Fry (UCL) and Prof Sir David Spiegelhalter (Cambridge) and we were each assigned approximately 25 minutes on one of the numbers. My number was 95%.

Being neither a climate scientist nor a classical statistician (my research uses Bayesian probability rather than classical statistics to reason about uncertainty) I have to say that I found the complexity of the climate models and their underlying assumptions to be daunting. The relevant sections in the IPCC report are extremely difficult to understand and they use assumptions and techniques that are very different to the Bayesian approach I am used to. In our Bayesian approach we build causal models that combine prior expert knowledge with data. 
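For readers unfamiliar with the distinction, the simplest possible sketch of the Bayesian idea – updating a prior expert judgement with observed data – is a Beta-Binomial calculation. The numbers below are invented purely for illustration and have nothing to do with any climate or football model:

```python
# Minimal Beta-Binomial sketch of combining prior expert knowledge with data.
# All numbers are invented for illustration only.

prior_alpha, prior_beta = 8, 2   # expert prior: event expected ~80% of the time
successes, failures = 3, 7       # observed data pulling the other way (~30%)

# Bayesian updating for the Beta-Binomial model is just addition of counts
post_alpha = prior_alpha + successes
post_beta = prior_beta + failures
posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"posterior mean = {posterior_mean:.2f}")  # 0.55: prior and data combined
```

The posterior mean sits between the expert's prior (0.8) and the raw data (0.3), weighted by how much evidence each carries; real causal Bayesian network models are of course far richer, but the principle of combining prior judgement with data is the same.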

In attempting to understand and explain how the climate scientists had arrived at their 95% figure I used a football analogy – both because of my lifetime interest in football and because, along with my colleagues Anthony Constantinou and Martin Neil, I have worked extensively on models for football prediction. The climate scientists had performed what is called an “attribution study” to understand the extent to which different factors – such as human CO2 emissions – contributed to changing temperatures. The football analogy was to understand the extent to which different factors contributed to the changing success of premiership football teams, as measured by the total number of points they achieved season-by-season. In contrast to our normal Bayesian approach – but consistent with what the climate scientists did – we used data and classical statistical methods to generate a model of success in terms of the various factors. Unlike the climate models, which involve thousands of variables, we had to restrict ourselves to a very small number of variables (due to a combination of time limitations and lack of data). Specifically, for each team and each year we considered:
  • Wages (this was the single financial figure we used)
  • Total days of player injuries
  • Manager experience
  • Squad experience
  • Number of new players
The statistical model generated from these factors produced, for most teams, a good fit to success over the years for which we had the data. Our ‘attribution study’ showed that wages were by far the major influence. When wages were removed from the study, the resulting statistical model was not a good fit. This was analogous to what the climate scientists’ models were showing when the human CO2 emissions factor was removed from their models; the previously good fit to temperature was no longer evident. And, analogous to the climate scientists’ 95% derived from their models, we were able to conclude there was a 95% chance that an increase in wages of 10 per cent would result in at least one extra premiership point. (Update: note that this was a massive simplification made for the analogy. I am certainly not claiming that increasing wages causes an increase in points. If I had had the time I would have explained that in a proper model - like the Bayesian networks we have previously built - wages offered is one of the many factors influencing the quality of players that can be bought which, in turn, along with other factors, influences performance).
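To make the shape of that attribution exercise concrete, here is a minimal sketch in Python. The data and coefficients are entirely invented for illustration (this is not the model built for the programme): we fit an ordinary least-squares regression to synthetic team-season data with and without the wages factor, and compare the quality of fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical team-seasons; all data below is invented

# Invented explanatory factors, loosely mirroring those listed above
wages = rng.normal(50, 20, n)        # wage bill (arbitrary units)
injuries = rng.normal(300, 80, n)    # total days of player injuries
manager_exp = rng.normal(5, 3, n)    # manager experience (years)

# Invented "true" relationship in which wages dominate, plus noise
points = 0.8 * wages - 0.02 * injuries + 0.5 * manager_exp + rng.normal(0, 5, n)

def r_squared(X, y):
    """Fit y = a + X b by least squares and return the R^2 of the fit."""
    X = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - resid.var() / y.var()

full = r_squared(np.column_stack([wages, injuries, manager_exp]), points)
no_wages = r_squared(np.column_stack([injuries, manager_exp]), points)

print(f"R^2 with wages:    {full:.2f}")
print(f"R^2 without wages: {no_wages:.2f}")
```

Removing the dominant factor collapses the quality of fit, which is the basic logic of the attribution argument described above.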

Obviously there was no time in the programme to explain either the details or the limitations of my hastily put-together football attribution study and I will no doubt receive criticism for it (I am preparing a detailed analysis).  But the programme also did not have the time or scope to address the complexity of some of the broader statistical issues involved in the climate debate (including issues that lead some climate scientists to claim the 95% figure is underestimated and others to believe it is overestimated). In particular, the issues that were not covered were:
  • The real probabilistic meaning of the 95% figure. In fact it comes from a classical hypothesis test in which observed data is used to test the credibility of the ‘null hypothesis’. The null hypothesis is the ‘opposite’ of the statement believed to be true, i.e. ‘Less than half the warming in the last 60 years is man-made’. If, as in this case, there is only a 5% probability of observing the data if the null hypothesis is true, statisticians equate this figure (called a p-value) to a 95% confidence that we can reject the null hypothesis. But the probability here is a statement about the data given the hypothesis. It is not generally the same as the probability of the hypothesis given the data (in fact equating the two is often referred to as the ‘prosecutor's fallacy’, since it is an error often made by lawyers when interpreting statistical evidence). See here and here for more on the limitations of p-values and confidence intervals.
  • Any real details of the underlying statistical methods and assumptions. For example, there has been controversy about the way a method called principal component analysis was used to create the famous hockey stick graph that appeared in previous IPCC reports. Although the problems with that method were recognised it is not obvious how or if they have been avoided in the most recent analyses.
  • Assumptions about the accuracy of historical temperatures. Much of the climate debate (such as that concerning how exceptional the recent rate of temperature increase is) depends on assumptions about historical temperatures dating back thousands of years. There has been some debate about whether sufficiently large ranges were used.
  • Variety and choice of models. There are many common assumptions in all of the climate models used by the IPCC and it has been argued that there are alternative models not considered by the IPCC which provide an equally good fit to climate data, but which do not support the same conclusions.
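The point about the p-value in the first bullet can be made concrete with a toy Bayes calculation. All the numbers apart from the 5% p-value are invented assumptions: a 5% probability of the data given the null hypothesis does not, by itself, pin down the probability of the null hypothesis given the data.

```python
# P(data | H0) = 0.05 (the p-value) is NOT P(H0 | data).
# A toy Bayes calculation; all numbers other than 0.05 are assumed for illustration.

p_data_given_h0 = 0.05   # chance of data this extreme if the null hypothesis is true
p_data_given_h1 = 0.60   # chance of the data if the alternative is true (assumed)
prior_h0 = 0.50          # prior belief in the null hypothesis (assumed)

# Bayes' theorem: P(H0 | data) = P(data | H0) P(H0) / P(data)
posterior_h0 = (p_data_given_h0 * prior_h0) / (
    p_data_given_h0 * prior_h0 + p_data_given_h1 * (1 - prior_h0)
)
print(f"P(H0 | data) = {posterior_h0:.3f}")  # ~0.077, not 0.05
```

With these (assumed) inputs the posterior probability of the null hypothesis comes out at about 8%, not 5%; with different priors it could be far higher or lower, which is precisely why equating the two quantities is a fallacy.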
Although I obviously have a bias, my enduring impression from working on the programme is that the scientific discussion about the statistics of climate change would benefit from a more extensive Bayesian approach. Recently some researchers have started to do this, but it is an area where I feel causal Bayesian network models could shed further light and this is something that I would strongly recommend.

Acknowledgements: I would like to thank the BBC team (especially Jonathan Renouf, Alex Freeman, Eileen Inkson, and Gwenan Edwards) for their professionalism, support, encouragement, and training; and my colleagues Martin Neil and Anthony Constantinou for their technical support and advice. 

My fee for presenting the programme has been donated to the charity Magen David Adom.

Watching the programme as it is screened

64 comments:

  1. Do you have expertise in climate science, mathematics, or both?

    I honestly could not give two hoots about the team you support.

    Are you not embarrassed by this? Perhaps, like most of your colleagues, you see the trough as more important than the truth.

    Take care to avoid any sense of shame over the next few days.

  2. The trough is greater than the truth.

    I've taken the liberty to bring you more thoughts, this time from SimonW on BH:

    I so don't like academics dressing up blindingly obvious facts with hi-falutin' equations to sound clever.

    So the Premiership teams with the highest wage bill will finish nearer the top - who knew? And he is 95% sure (using mathematics, doncha know) that increasing the wage bill by 10% will increase a team's points total by at least ...wait for it...1 point.

    So employing significantly better players over a whole season will have the effect of at least, say, one more goal scored (to turn a loss into a draw). No Sh*t Sherlock. I doff my cap Sir, what brilliant insight. Large difference in proven most important input, tiny difference in output.

    From there he leaps to say that's how they are 95% sure that over half the warming is man made due to CO2 emissions. Err, did I miss several steps there. Difference in one small part of a chaotic system, large change in temperature.

    "It's as clear as taking the wage bill out of my football predictions". No it flipping isn't. You can observe the link between wages and football performance every year and it can be used to successfully predict into the future. How has that worked out for CO2?

  3. Hi

    Pity about the riff-raff.

    I think if you look more carefully at the attribution studies in the IPCC report re GHG you'll find that they are more Bayesian than classical (the 95% relates to an expert assessment rather than a null hypothesis, which, as you note, it can't be) and that the attribution statement relates to what happens in the CMIP5 models rather than the real world.

    This last statement might appear a bit churlish but it is an important distinction to reflect on. You may object and say that wages only work in your statistical model too, but the validation of GCMs is nowhere near as robust. For example they each model different worlds where, within sample, the absolute temperatures range over more than 2 degrees (what happens to the ice and steam?) and they aren't generating results out of sample that come close.

    HAS

    1. BTW (it is always bad form to reply to oneself, so forgive me) I see you have an interest in risk assessment, although perhaps not so much in physical processes from your brief bio.

      Climate science intentionally is pretty rudimentary when it comes to understanding risk, and over here in NZ we are having fun and games dealing with scientists who don't understand the difference between science's role in helping with likelihoods, and the subsequent risk management processes.

      A local economist has just published a brief piece (http://nzinitiative.org.nz/About+Us/Staff/Bryce_Wilkinson/Opinion+and+commentary+Dr+Bryce+Wilkinson.html?uid=833) that outlines the issue in respect of coastal hazards (although the coastal scientists involved were also international). The piece is kind to the IPCC, which is equally part of the problem. All this contrasts with NZ's much better management of seismic risks, but having said that we actually get earthquakes over here.

  4. "This was analogous to what the climate scientists’ models were showing when the human CO2 emissions factor was removed from their models; the previously good fit to temperature was no longer evident."

    I don't think the analogy is very good. Climate modeling is predicated on CO2 being the major driver of the dependent variable, temperature. Other variables are then suitably tuned to maintain this relationship. In the football case you can draw some conclusions from goodness of fit with and without the IV "Wages". To my mind it makes little sense to compare fit in a typical IPCC climate model with and without CO2. In fact it is quite misleading.

    1. "Climate modeling is predicated on CO2 being the major driver of the dependent variable, temperature. Other variables are then suitably tuned to maintain this relationship."

      False.

      Climate model sources, such as NASA GISS Model E, are available online. Even better, the published literature upon which it is based is documented, as well.

      I have no idea where you've gotten your notion from, but it's not from the model code itself or the literature upon which it is based.

  5. The increase to a 95% confidence level isn't a mathematical calculation and took place against a background of models running warmer than observations.

  6. Using a football analogy - your team is pushing for promotion - you're 90% confident of promotion, then they lose 10 games in a row, so you become 95% confident?

  7. Admit it, Norman, you've been rumbled!
    When will academe realise that not everyone outside the ivory towers is stupid? Many of us are quite well educated, and can spot BS from a long distance.
    SimonJ

  8. Dear Prof. Fenton -
    I like the football analogy, as both its strengths and weaknesses are instructive. It would be nice if we could run a controlled experiment for a few years, where half the premiership clubs doubled their wages, and the other half halved them, without changing anything else - same players, same managers. We would then be able to make a judgement on whether wages were the key driver for points. I think we might guess that the correlation would diminish, perhaps vanish, though we wouldn't know until we tried. Of course, we can't run that experiment, any more than we can do a similar job on the climate. Correlation, as we all know, isn't causation. I actually suspect that the IPCC's approach is TOO Bayesian, and that the models are far too well informed ("fitted") to prior data and to the prior CO2 hypothesis, so that the removal of CO2 from the runs is bound to work as a circular proof. Certainly, the models have shown no predictive skill worth writing home about.

    1. The nice thing about climate science is that in that case you do have that information. In an attribution study, multiple model runs are compared with each other. Runs with and without CO2. Runs with and without changes in the sun. Runs with and without volcanoes.

      As the programme showed, the changes in the 3-dimensional temperature field due to CO2 are different from changes due to the sun. When you increase CO2 you will see a warming at the surface, but a cooling in the stratosphere, and you see more warming near the poles than near the equator.

      That is the advantage of having a physical model, which was missing in the football example.

    2. This is the crux of the model issue, Victor Venema. Your "nice thing is that you do have such information" directly contradicts the comment of dhogaza (who seems to be in-the-know model-wise) earlier. Of course the models are "trained" on the information you mention, because the details are far too complex to be derived from basic physical principles (even the IPCC accepts this), so the weights of the various influences are varied until some agreement occurs with past observations. Any that don't, don't figure in further development. Trouble is, when the selected models are run forward they give the wrong answers, which in most scientific disciplines would cause a root-and-branch rethink.
      Your statements about the 3D temp profile might also be challenged for detail, and the relatively larger warming of the poles is to be expected in almost any scenario and is certainly not unique to CO2 driver.

    3. No, it is not in contradiction, and Victor didn't say that "Of course the models are "trained" on the information you mention". He says the models are run multiple times (without changing the models), only varying things like CO2 concentrations, solar output, with and without volcanic eruptions, etc in order to study the effects of changes on climate.

    4. Regrettably it isn't a physical model. They use physical relationships where possible, but need to estimate what happens below their resolutions that range from 0.5° to 4° for the atmospheric component in the CMIP5 coupled models.

      This means things below this level of resolution need to be estimated and this includes much of the interesting thermodynamics and has strange consequences like the height of the Andes (that defines part of the Pacific atmospheric basin) being reduced in the models.

      So where Victor says you can run multiple simulations of the climate, changing parameters, you too can tweak the parameters on your footy models and gain similar quality insights into the real world.

      HAS

  9. I would recommend that some respondents read Norman's blog posting again. He is pretty clear that the football example was hastily put together (my understanding is that tight production schedules can often mean that examples are very much invented on the hoof). Yes, in an ideal world the football attribution analysis would have passed peer review, for what it is worth, and have been published in its entirety but this just isn’t how TV works. In any case Norman promises to blog about it in future (yes we are all well aware that correlation is not causation, but this point would have ended up on the cutting room floor).
    Norman rightly points out that the programme fails to cover issues that are of more interest to those of us with a sceptical bent, but his role, as a presenter, was to explain the mathematics used, in as accessible a manner as possible, and NOT necessarily to justify the results or conclusions arrived at by the IPCC or anyone else. Also it is worth bearing in mind that there is a long chain of consultants, script writers and producers who put the material together before it ends up in a presenter's hands.
    Personally, I enjoyed the programme, overall, but was disappointed by the tone of the final segment which I felt stepped away from being educational toward being overtly instructional. Again, I suspect this isn't necessarily the view of the final presenter but perhaps of the consulting/production team behind the script.

  10. You have mis-identified the null hypothesis. Climate models are basically complex computer programs and they will inevitably have bugs and errors. The real null hypothesis is that the output of climate simulations is due to bugs and errors, rather than valid science.

    I don't see anything to suggest how you know the output of these computer calculations are 95% likely to be free of bugs and errors.

    BTW the fact that two computer programs may do the same task, allowing you to compare their output, does not make them error free. You can see this in, for example, web browsers which all do the same job but all have errors - including critical security errors.

    1. Peter (and others) regarding the IPCC hypothesis

      Here is what the IPCC report says at the start of the section that discusses the 95% figure:

      “when it is reported that the response to anthropogenic GHG increase is very likely greater than half the total observed warming, it means that the null hypothesis that the GHG-induced warming is less than half the total can be rejected with the data available at the 10% significance level.”

      I agree that expert judgement is also used (one of the points I make clear in my recent book is that frequentist statistics inevitably involves a lot of expert subjective judgment, which is why I find it weird that strict frequentists reject Bayes on the basis that subjective judgment should not be used). Indeed the very next bit in the IPCC report says this and also makes clear that this could be done in a Bayesian way but is not usually:

      “Expert judgment is required in frequentist attribution assessments, but its role is limited to the assessment of whether internal variability and potential confounding factors have been adequately accounted for, and to downgrade nominal significance levels to account for remaining uncertainties. Uncertainties may, in some cases, be further reduced if prior expectations regarding attribution results themselves are incorporated, using a Bayesian approach, but this not currently the usual practice.”

    2. It was how that assessment of likelihood was done, not the null, that makes it essentially Bayesian. It is a statement about expert views. However http://onlinelibrary.wiley.com/doi/10.1002/wcc.142/full gives some background to this debate, with http://doi.wiley.com/10.1002/wcc.141 and http://doi.wiley.com/10.1002/wcc.145 as follow-ups.

      HAS

    3. @-Pete Austin
      "You have mis-identified the null hypothesis. Climate models are basically complex computer programs and they will inevitably have bugs and errors. The real null hypothesis is that the output of climate simulations is due to bugs and errors, rather than valid science."

      That is an easy 'null hypothesis' to refute.
      Irreducibly simple energy equations can use the observed and measured forcings and energy flows to calculate the ongoing influence of CO2. They confirm the role of CO2 as the dominant changing component in the changing climate.

      Then there is the paleoclimate evidence. That shows that climate models can hindcast past observations. So you have two separate lines of evidence for the accuracy of complex climate models in two different contexts.


      @-"I don't see anything to suggest how you know the output of these computer calculations are 95% likely to be free of bugs and errors."

      That complex computer models have bugs and errors that manage to give the right answers both compared to simple formulae and past observation seems unlikely. A good a priori assumption would be that the models are accurately modelling the climate rather than that bugs and errors are magically making them correct.

    4. izen

      "Irreducibly simple energy equations can use the observed and measured forcings and energy flows to calculate the ongoing influence of CO2. They confirm the role of CO2 as the dominant changing component in the changing climate."

      If only the climate was that simple.

      The problems lie in the feedbacks and these are per medium of both radiation and thermodynamics and involve gases and liquids that go well beyond CO2. The models don't operate at a resolution that allows much of the interesting thermodynamic processes in the atmosphere to be directly modeled.

      "That shows that climate models can hindcast past observations. "

      Have a glance at http://www.clim-past.net/9/811/2013/cp-9-811-2013.pdf. Basically some skill for Last Glacial Maximum, hopeless for the mid-Holocene.

      HAS

    5. @-Anonymous/HAS
      "The models don't operate at a resolution that allows much of the interesting thermodynamic processes in the atmosphere to be directly modeled. "

      But they do model it sufficiently accurately to match the energy balance models which use a simple equation and the CO2 forcing to calculate the climate change in response to the CO2 change. Have a look at the Lewis and Curry paper for a simple mathematical confirmation that CO2 is dominant, all that is left to argue about is the price.

      http://judithcurry.com/2014/09/24/lewis-and-curry-climate-sensitivity-uncertainty/

    6. @-Anonymous/HAS
      "The models don't operate at a resolution that allows much of the interesting thermodynamic processes in the atmosphere to be directly modeled. "

      The same models are used for weather prediction. Also global weather prediction models cannot model convection and boundary layer turbulence directly and need to parametrise them.

      Models are by definition simplifications. Showing that they simplify something is not a sufficient argument, you need to show that that matters for the problem at hand, in this case for the climate sensitivity of the global climate model.

    7. "But they [GCMs] do model it sufficiently accurately to match the energy balance models .... Have a look at Lewis and Curry ..."

      A point of L&C is that the GCMs in the main don't match their simpler energy balance model.

      As to using the ECS estimates as a basis for claiming "CO2 is dominant", that strikes me as far too simple a model. Perhaps in a world where nothing changes, but that would be easy wouldn't it?

      HAS

    8. Victor Venema

      I'm sure you know that modelling regional and local weather over a week or so is a different problem compared with producing multi-decadal models of global climates.

      The problem is the models aren't modelling climate particularly well. They don't do the sensitivity well compared with observations, they don't do absolute temperatures well compared to observations, and they haven't done short-term (multi-year) predictions of global temperatures well compared with observations .....

      Apart from that I'm sure they are fine, and as you say their cousins have great utility in weather forecasting.

      HAS

    9. @-Anonymous/HAS

      Yes, I know that weather and climate are different. But it was a response to your claim that thermodynamics was wrong:

      "The models don't operate at a resolution that allows much of the interesting thermodynamic processes in the atmosphere to be directly modeled."

      Your claim was thus not about the parts of the models that are different, but about the parts that are tested every day in weather predictions.

      The rest of your comment is unfortunately vague and just a matter of taste about which one cannot argue. It is always nice when models perform better, feel free to think they do not do something "particularly well". To repeat myself:

      Models are by definition simplifications. Showing that they simplify something is not a sufficient argument, you need to show that that matters for the problem at hand, in this case for the climate sensitivity of the global climate model.

    10. Victor Venema

      Actually I didn't say that "weather and climate are different". I said that modelling them are different problems. One of the big problems we face with climates is that there is no natural dividing line between climate and weather. It isn't like other phenomena (eg atoms to molecules) where there are structural discontinuities that allow obvious simplification within the reductionist approach being used.

      So when weather models run out of puff (the thermodynamics come unstuck) after a week or so, there is no reason to suspect that they will suddenly "come right" (i.e. become tractable) at a longer time scale. At least not using the GCM (weather modelling derived) approach.

      You are correct to say the validation of GCMs is a matter of taste. This again reflects a weakness in the discipline. They are basically unverifiable, but not for the reason that is most obvious - it takes time - but because the industry doesn't design that in.

      But even in the absence of criteria, and keeping an eye on the policy purposes these models are being put to, do you think the bias in climate sensitivity compared to observation based estimates is acceptable? Do you think an over 2 degree range in absolute surface temps in the CMIP5 models over the forecast base period is acceptable (what's that saying about the physical behaviour of the different worlds being modelled)? And do you regard it as acceptable that the global temp observations are falling outside the near term (out to about 2035) forecast levels when we're about half way through that period?

      They aren't just simplifying things, they are getting them wrong, including sensitivity to CO2.

      HAS


    11. HAS: "Actually I didn't say that "weather and climate are different". I said that modelling them are different problems."

      Yes, they are different problems and also their modelling are different problems. For climate modelling you additionally need to model the ocean, sea ice, vegetation, stratospheric chemistry. But weather and climate models both do the thermodynamics in the same way. Temperature changes with pressure in the same way, clouds are created and evaporate in the same way. And so on.

      Your claim was that there was a problem with thermodynamics. I cannot help it.

      HAS: "So when weather models run out of puff (the thermodynamics come unstuck) after a week or so, "

      Weather models do not run out of puff because of thermodynamics, but because of the chaotic nature of circulation.

      HAS: "there is no reason to suspect that they will suddenly "come right" (i.e. become tractable) at a longer time scale. At least not using the GCM (weather modelling derived) approach."

      They do not suddenly become right again. In climate you ask different questions. In weather you need to be able to, for example, predict whether a high pressure area is above you at a certain moment. In climate you study the averages over longer periods.

      Even if we cannot predict the weather for much longer than a week, we do know that next summer it will be warmer than the last week.

      While both are examples of the forcing (the amount of energy available) changing, I am not claiming that modelling climate change is just as easy as modelling the seasonal cycle, but this example does show that your argument is wrong. It is possible to say something about the atmosphere after a few weeks.

    12. HAS: "But even in the absence of criteria and keeping an eye on the policy purposes these models are being out to,

      Climate models, like all models, are built to understand the system being modelled. Thank you for showing your tendency to think in conspiracies.

      HAS: "do you think the bias in climate sensitivity compared to observation based estimates is acceptable?

      The climate sensitivity of climate models fits well to estimates from past climatic changes in paleo data, estimates from volcanoes and climatological constraints (being able to get out of a snowball Earth).

      There are some authors working on highly simplified climate models tuned by measurements. That is what mitigation sceptics like to call "observation based estimates" because they like the smaller numbers the simplified climate models produce. Normally they would complain about tuning and about simplifications, like you do above about the resolution, well the resolution of these simplified models is about as bad as you can get them.

      HAS: "do you think an over 2 degree range in absolute surface temps in the CMIP5 models over the forecast base period is acceptable (what's that saying about the physical behavior of the different worlds being modeled)?

      Given that you ask the question and do not answer it yourself, I guess you already know that there is no indication that this is a serious problem that would change the climate sensitivity of the models.

      Compared to the daily cycle, the seasonal cycle, strong vertical temperature changes and horizontal temperature changes, two degrees is not that much and no reason to a priori assume that there is a problem. If you have evidence, please show it to us.

      HAS: "and do you regard the global temp observations falling outside the near term (out to about 2035) forecast levels when we're about half way through that period?"

      I am sure you already know the answers to that standard "question". I will keep it short. Part of that was already discussed in the programme: the lack of observations in the Arctic and Africa explains part of it. The rest is perfectly well explained by natural variability and a little less forcing than assumed in the models. And I guess people have already told you often enough that there is no statistically significant change in the rate of warming.

      If you were a true sceptic, you would naturally mention that also the observations could be wrong. Mitigation sceptics often make that claim, just not when observations show a little less warming than models.

      Like they also often claim that global warming is not man-made, but largely natural: the 1 degree Celsius since 1880. But then, when it comes to the last decade and a tenth of a degree, they suddenly know nothing about natural variability any more; then any fluctuation is immediately proof that the models are wrong. Amazing.

      Even if the model predictions were off, I could not care less. I like observations, that is my field. If you want man-made global warming to be wrong, the temperature would have to go down by one degree again and stay there. Just model imperfections will not do.

      Also without global climate models there would be a strong case for mitigation. We do need them for the regional detail needed to make adaptation cheaper. If you have no idea what happens regionally, you have to adapt for any possible change. That is expensive.

      People who are against mitigation and favour adaptation are somewhat inconsistent when they simultaneously claim to trust climate models less than others.

      Delete
    13. What a curious response.

      Because I question the performance of GCMs in their ability to model future climates (I think they perform a useful function in allowing virtual experiments in sample), I apparently must have particular views about adaptation versus mitigation (I do, but they are grounded in risk management and public policy).

      Here we are talking about GCMs, and while I haven't stated this explicitly, I am implicitly posing the question of whether there might be better uses of scarce scientific resources than having multiple teams worldwide continue to extend modelling techniques that were designed to do weather and to understand the internal dynamics of the atmosphere.

      Now, the techniques being used in GCMs may be fine for academic studies of past climates (but do read http://www.clim-past.net/9/811/2013/cp-9-811-2013.pdf for an alternative view to your own); nothing much hangs on those results.

      But here we are dealing with models that are being offered as serious input to significant decisions. The standard of proof goes up accordingly, and perhaps more significantly the need to be explicit about the uncertainty.

      What I take from your comment above is that you are very certain that everything is OK with GCMs (the resolution is sufficient, the results are in line with observations, no other form of modelling can compete, etc.) and that any criticism of them is somehow seen as a conspiracy (in my world your opening remark would be called a non sequitur).

      My concern, which would be yours too should you pause to give it mature reflection, is that the over-reliance on GCMs by the IPCC is causing problems for a sensible, well-considered policy response. Better understanding of the observations and the uncertainty is what is required.

      Delete
    14. What a curious response. I will just note for the record that HAS has not responded to the clear errors in his arguments about thermodynamics, and about climate modelling being impossible because weather prediction is impossible. That is a clear indication that an honest discussion, like one would have in the scientific community, is not possible with HAS.

      I will also note for the record that HAS has completely misrepresented my arguments in his second last paragraph.

      Delete
    15. Just to be clear.

      I used the vernacular when saying the weather models "run out of puff (the thermodynamics come unstuck) after a week or so", and I acknowledge this could well have been read as suggesting there was something wrong with the underlying thermodynamic relationships, rather than with the perturbations in the thermodynamic system and the approximations needed to describe it, which lead to the model ceasing to perform.

      Also, to be clear, I didn't say climate modelling was impossible because weather prediction was impossible. The point was somewhat more subtle than that and had to do with the problems of modelling complex systems.

      And finally, my second paragraph should have been better qualified, but I was attempting to get some acknowledgement that you weren't defending GCMs willy-nilly, and that you too understood their limitations. It seems you do.

      You may be unused to scientific discussions where protagonists come from a different frame of reference from yours, but the issues here are multidisciplinary, and progress is going to require some understanding of views from outside your normal group of mates.

      HAS

      Delete
    16. @-HAS
      "A point of L&C is that the GCMs in the main don't match their simpler energy balance model.
      As to using the ECS estimates as a basis for claiming "CO2 is dominant" strikes me as far too simple a model."

      Have you considered that there is an inconsistency in claiming that the 10% lower estimates provided by energy balance models using observational data invalidate GCMs, while those same models are 'far too simple' to validate the dominant role of CO2?

      Delete
    17. That the NWP models "run out of puff (the thermodynamics come unstuck) after a week or so" is incorrect. The underlying thermodynamics is correct. What you see in forecasts beyond around T+120 hrs is the swamping of the signal by chaos. This is why forecasts beyond about that time use ensemble prediction, which attributes sensitivity to starting conditions and hence the chaos that ensues within the system. An atmospheric state that is overly sensitive to starting conditions can be identified in this way. This is done with GCMs also. The current "pause" has thus been averaged out by this ensemble process. The driver has been overwhelmingly the persistent negative PDO/ENSO cycle in the Pacific. Where this was correctly assimilated into the starting conditions of some runs, the "pause" was flagged up:

      http://www.reportingclimatescience.com/news-stories/article/climate-models-simulate-global-warming-pause.html

      It is as a result of GCMs that we have learned that the underlying thermodynamics is correct, and that the movement of heat within the Earth's climate system (~93% of which resides in the oceans) is where the models cannot yet simulate short-term (<30 year) temperature variations.
      This does not make the "thermodynamics" wrong. All models are wrong, but equally all models are useful.
      Where sceptics go wrong is to assume the thermodynamics is incorrect (read: CO2 as a driver), where in fact the models are highlighting the internal variability within the climate system.
      Very, very different.
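      The ensemble logic described above can be sketched with a toy chaotic system. This is a minimal illustration only, not any operational NWP or GCM code; the Lorenz '63 equations, the forward-Euler integrator, the perturbation size and the member count are all illustrative assumptions:

      ```python
      # Why ensemble prediction is used: in a chaotic system, tiny
      # perturbations to the starting conditions grow until individual
      # runs diverge, so beyond some lead time only ensemble statistics
      # (mean, spread) carry useful information.
      import numpy as np

      def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          """One forward-Euler step of the Lorenz '63 system."""
          x, y, z = state
          return state + dt * np.array([sigma * (y - x),
                                        x * (rho - z) - y,
                                        x * y - beta * z])

      def run(state, n_steps):
          for _ in range(n_steps):
              state = lorenz_step(state)
          return state

      rng = np.random.default_rng(0)
      base = np.array([1.0, 1.0, 1.0])
      # Ensemble: the same model started from slightly perturbed analyses.
      members = [base + 1e-6 * rng.standard_normal(3) for _ in range(20)]

      short = np.array([run(m, 200) for m in members])   # short lead time
      long_ = np.array([run(m, 5000) for m in members])  # long lead time

      print("spread at short lead:", short.std(axis=0).max())
      print("spread at long lead: ", long_.std(axis=0).max())
      ```

      The point of the sketch is only that near-identical starting states stay together at short leads and diverge at long leads, which is why long-range output is compared against the ensemble spread rather than any single run.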

      Delete
    18. Toneb

      I had clarified the "run out of puff" phrase above. I'd just add that it doesn't mean the lungs have failed, just that they've reached the limit of their practical application.

      Meehl et al. does show that for very short periods (3-7 yrs) models initialised with information from the early part of the pause forecast the pause better than the alternative simple model of persistence. I'd be rather surprised if it didn't.

      I also agree that the GCMs are useful for studying the internal dynamics, although we don't have data that confirms your hypothesis that the missing heat is in the oceans.

      Unfortunately the models are being used well outside this limited scope. They are being presented as sufficiently robust to make multi-decadal forecasts for use in very significant policy decisions, with the uncertainty and limitations not being well explained.

      Just one small point in this regard that comes out of Meehl et al. What does the 2020 forecast/projection look like (we have a chance for a live test of the methodology) and should we be using this subset of initialised model runs for our 2030 forecast/projection or revert to the ensemble?

      Delete
    19. Izen

      "Have you considered that there is an inconsistency in claiming that the 10% lower estimates provided from energy balance models using observational data invalidates GCMs, but they are 'far too simple' to validate the dominant role of CO2?"

      There are two separate ideas in play here.

      The first is simply the observation that CO2 sensitivities from GCMs are biased upwards compared to the simpler energy balance models.

      The second is that the ECS doesn't (necessarily) tell us what dominates in determining observed global temps.

      HAS (also the previous one)

      Delete
  11. My only question to you Norman is why you would allow yourself to be exposed as one of the BBC's useful idiots.

    Surely your reputation is more important than a few minutes' 'fame' on a minor TV channel. Or did you simply underestimate the public's knowledge of climate science and modelling?

    ReplyDelete
  12. Richard Mallett, 3 March 2015 at 13:45

    The global temperature has increased (according to HadCRUT4) by 0.94 C since 1850 (which seems to be the start point that Mann and others use when they talk about the 2.0 C increase) or by 0.89 C (according to GISS and NCDC) since 1880.

    The average of HadCRUT4, NCDC and GISS has increased by 0.83 C since 1880.

    Therefore, if something terrible is going to happen when global temperatures rise by 2.0 C above the 1850 level (which is yet to be demonstrated) then we have until 134 * 2.0 / 0.83 = 323 years from 1850 = AD 2173 to find a solution.

    There is no reason to talk about the anthropogenic effect since 1950, when Mann and others talk of 1850 as being the start of the industrial era. Since 1880 we have had cooling to 1911, warming to 1944, cooling to 1976, warming to 1998, and steady temperatures since then.

    If the 2.0 C increase starts in 1950 (when we were in a cooling phase) instead of 1850, then we have even longer to find a solution.

    ReplyDelete
    Replies
    1. Richard Mallett, you could repeat your silly computation by starting in the year 1000 and conclude that we need at least another 1000 years until the temperature doubles again.

      Thoughtless extrapolation, like you do, is dangerous. You need a physical understanding of the problem to do it right.

      Delete
    2. Richard Mallett, 4 March 2015 at 04:59

      The point was that I started in 1850 at the start of the industrial era. Since then, the temperature has increased, decreased, or stayed the same for long periods of time, in step with the PDO, suggesting that the PDO (or what drives the PDO) is the major factor. What's left is a trend of 0.65 C per century during the industrial era, which is likely to be anthropogenic.

      Delete
  13. You ask

    "there has been controversy about the way a method called principal component analysis was used to create the famous hockey stick graph that appeared in previous IPCC reports. Although the problems with that method were recognised it is not obvious how or if they have been avoided in the most recent analyses."

    Mostly because the calculations have been done with the agreed best methods, and redone with different or more data, and it pretty much comes out the same. You could google that. That blogs continue to debate a 17-year-old paper should tell you something. You are mistaking the noise for the signal.

    ReplyDelete
    Replies
    1. When I do a Google search, the most recent paper I turn up is http://content.csbs.utah.edu/~mli/Economics%207004/Marcott_Global%20Temperature%20Reconstructed.pdf which has the uptick, and yes, as you say, it comes out much the same as Mann. But the uptick was an artifact of incorrect use of core tops as modern proxies (see http://business.financialpost.com/2013/04/01/were-not-screwed/).

      HAS

      Delete
    2. The [fake] controversy about the way a method called principal component analysis (PCA) was used to create the famous hockey stick graph of 1999 dates from 2005, by which time the authors of the original graph had already changed to a regularized expectation–maximization (RegEM) method which did not require this PCA step.

      The original graph was featured in the IPCC 2001 report, the next report in 2007 showed it in the context of graphs using other methods and data, which all came to essentially the same result. A robust outcome.

      Anonymous HAS, you really should try looking at credible scientific sources, not the notorious Financial Post. The Marcott reconstruction and the various Mann reconstructions all show proxy temperatures in relation to the uptick of modern thermometer records. Which is right.

      The Marcott paper explicitly highlights the uncertainties in trying to use long term proxies [the core tops] for recent temperatures, much as the original Mann, Bradley and Hughes 1999 "hockey stick" study strongly emphasised the "uncertainties, and limitations" of their proxy reconstruction for earlier periods.
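      To make the PCA step under discussion concrete, here is a minimal sketch on synthetic data. It is not a temperature reconstruction: the "proxies" are fabricated series sharing one trend, and the full-record centring used here is the conventional choice (the short centring over the calibration period was what the 2005 criticism of the 1999 graph targeted):

      ```python
      # PCA on synthetic "proxy" series: many noisy records sharing one
      # common signal. After centring each series, the leading principal
      # component recovers that shared signal.
      import numpy as np

      rng = np.random.default_rng(42)
      n_years, n_proxies = 500, 30
      signal = np.linspace(0.0, 1.0, n_years)  # the shared trend

      # Each proxy = its own loading on the signal, plus independent noise.
      loadings = rng.uniform(0.5, 1.5, n_proxies)
      proxies = (signal[:, None] * loadings
                 + 0.3 * rng.standard_normal((n_years, n_proxies)))

      # Centre each proxy over the FULL record, then take the SVD;
      # the leading left singular vector (scaled) is PC1.
      centred = proxies - proxies.mean(axis=0)
      u, s, vt = np.linalg.svd(centred, full_matrices=False)
      pc1 = u[:, 0] * s[0]

      # PC1 should track the common signal closely (sign is arbitrary,
      # so compare absolute correlation).
      r = abs(np.corrcoef(pc1, signal)[0, 1])
      print(f"|correlation(PC1, signal)| = {r:.2f}")
      ```

      The choice of centring window matters precisely because PC1 is defined relative to the mean that is subtracted, which is why that step attracted the scrutiny described above.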

      Delete
    3. dave

      You did understand what McKitrick said about Marcott, and what Marcott et al. subsequently admitted? In terms you might understand, they say the uptick isn't robust.

      HAS

      Delete
  14. Norman Fenton, you may be interested in the work of Andreas Hense and Seung-Ki Min, who work on attribution in a Bayesian framework.

    ReplyDelete
  15. AN EXTRAORDINARY SERIES of postings at www.climateaudit.org, the deservedly well-trafficked website of the courageous and tenacious Canadian statistician Steve McIntyre, is a remarkable indictment of the corruption and cynicism that is rife among the alarmist climate scientists favored by the UN's discredited climate panel, the IPCC.
    In layman's language, the present paper respectfully summarizes Steve McIntyre's account of the systematically dishonest manner in which the "hockey-stick" graph, falsely showing that today's temperatures are warmer than those that prevailed during the medieval climate optimum, was fabricated in 1998/9, adopted as the poster-child of climate panic by the IPCC in its 2001 climate assessment, and then retained in its 2007 assessment report despite having been demolished in the scientific literature.
    It is a long tale, but well worth following. No one who reads it will ever again trust the IPCC or the "scientists" and environmental extremists who author its climate assessments.
    At some time or another, most people will have seen the hockey stick: the iconic graph which purports to show that, after centuries of stable temperatures, the second half of the 20th century saw a sudden and unprecedented warming of the northern hemisphere, a warming caused, we were told, by humankind burning fossil fuels and releasing carbon dioxide into the atmosphere.
    http://scienceandpublicpolicy.org/images/stories/papers/monckton/monckton_what_hockey_stick.pdf

    ReplyDelete
    Replies
    1. @-McGlock9
      Any claim that the medieval warm period was warmer than the present has serious problems with the fact that retreating ice is now exposing buried plants, and Otzi the ice-man, who have not seen the sun since the Holocene optimum or even the Eemian.

      If the MWP was warmer than now, Otzi would have rotted away then and the dead moss and trees found under retreating ice-caps would not be from times long before the claimed MWP.

      Delete
  16. Frequentists may find that ClimateBall is a better analogy:

    climateball.wordpress.com

    Objectively speaking, it goes without saying.

    W

    ReplyDelete
    Replies
    1. Norman,
      Are you sure the 95% was produced by genuine statistical techniques? Judith Curry seemed to think otherwise.

      Yesterday, a reporter asked me (Judith Curry) how the IPCC came up with the 95% number. Here is the exchange that I had with him:

      Reporter: I’m hoping you can answer a question about the upcoming IPCC report. When the report states that scientists are “95 percent certain” that human activities are largely the cause of global warming, what does that mean? How is 95 percent calculated? What is the basis for it? And if the certainty rate has risen from 90 in 2007 to 95 percent now, does that mean that the likelihood of something is greater? Or that scientists are just more certain? And is there a difference?

      JC: The 95% is basically expert judgment, it is a negotiated figure among the authors. The increase from 90-95% means that they are more certain. How they can justify this is beyond me.

      Reporter: You mean they sit around and say, “How certain are you?” “Oh, I feel about 95 percent certain. Michael over there at Penn State feels a little more certain. And Judy at Georgia Tech feels a little less. So, yeah, overall I’d say we’re about 95 percent certain.” Please tell me it’s more rigorous than that.

      JC: Well I wasn’t in the room, but last report they said 90%, and perhaps they felt it was appropriate or politic that they show progress and up it to 95%.

      Reporter: So it really is as subjective as that?

      JC: As far as I know, this is what goes on. All this has never been documented.


      Delete
  17. I thought you presented the segment well, overall.

    I would have liked to see the qualifications and caveats emphasised a bit more. You had many teams' histories to test your models on; there is only one climate record: if you had developed your model based only on one team's results over one decade, could you have had confidence in its results? - that kind of thing. But that's the nature of the TV beast, I guess.

    Your punch line, that the model doesn't work without the most critical factor of salaries, could have had more bite. A little more time could have been spent on "now if I remove the manager's time at the club, I get this effect, and ...." and a little less on the Spurs-flavoured interviews and sitting in a cafe.

    (I was also pestered by the mischievous thought that if your model has skill superior to an informed observer, you ought to be filthy rich from cleaning out the bookies.)

    The fundamental weakness of the segment was in the number itself: 95%. This number is not observed. It is not calculated from stated premises to be 0.9537 and rounded. It is, in the broad sense of the word, based on a political decision. Some guys in a room said "how confident are we?" and came up, in AR5 WG1 Ch10 Exec. Summ. with the label "very likely", which the IPCC standardises as 95%+, in the same way I might say to my partner that I'm "95% confident" I unplugged the iron, just as we arrive at our holiday destination!

    Yes, I know there was a long and complicated road that led to that room, but in the story of getting to your 95%, the lack of a mathematical route to the number caused a bit of a disconnect for me.

    ReplyDelete
    Replies
    1. God, I hope you're not a scientist in receipt of taxpayers funding.

      There was no mathematical route to the 95% number. That's why it was not discussed, it's purely subjective.

      Delete
    2. See comment above for an explanation of what goes on. AR4 90% certain, all models fail over next 5 years, climate alarmist scientists 95% certain AR5. Go figure.

      Delete
    3. John, no I am not a scientist in receipt of taxpayers funding. I might have gone that way, but instead became a corporal of commerce whose only interaction with the public trough is to be ritually bled into it every month. You may label me a Computer/Math geek with slightly cynical lukewarmer tendencies if you like. I am a regular reader and very occasional commenter at His Grace's, whence I was directed here.

      My general sense of the programme was that the payoff graph in the first segment needed error bars, and the summary graph in the third, from which we read off 1 Trillion, needed ... something else - Unicorns, Sea Serpents, Astrological symbols might be too much, but something in that direction. But since our host has graciously allowed free comment here, it seems fair to confine my observations to his section.

      But both of the other segments had a deterministic process that led to a number, or a small range of numbers.

      I think what I'm really interested in is why 95% was chosen as one of the three numbers for the programme, since I don't see how one can justify an algorithm that derives it, and WG1 offers none. (My preferred number would have been 3: coincidentally both the central, though no longer IPCC-approved likeliest, value for ECS and the factor by which their likely (66%+) estimates differ: "medium confidence that the ECS is likely between 1.5°C and 4.5°C".)

      And on looking at Chap 10 again, I see I misquoted in my original. The actual statement is "It is extremely likely that human activities caused more than half of the observed increase in GMST from 1951 to 2010." with the footnote "Extremely likely: 95–100%". So in fact, "95%" is not the number used by the IPCC: they offer instead the range 95%-100%. I never noticed that before. http://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter10_FINAL.pdf page 869.
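      For reference, the AR5 calibrated likelihood scale referred to above maps each term to a probability range. The lookup helper and its formatting below are my own illustration, but the ranges are the IPCC's own (from the AR5 uncertainty guidance):

      ```python
      # IPCC AR5 calibrated likelihood language as (lower, upper)
      # probability bounds. "Extremely likely" (95-100%) is the term
      # used in the AR5 attribution statement.
      LIKELIHOOD = {
          "virtually certain":       (0.99, 1.00),
          "extremely likely":        (0.95, 1.00),
          "very likely":             (0.90, 1.00),
          "likely":                  (0.66, 1.00),
          "about as likely as not":  (0.33, 0.66),
          "unlikely":                (0.00, 0.33),
          "very unlikely":           (0.00, 0.10),
          "extremely unlikely":      (0.00, 0.05),
          "exceptionally unlikely":  (0.00, 0.01),
      }

      def bounds(phrase):
          """Render a calibrated term as its probability range."""
          lo, hi = LIKELIHOOD[phrase.lower()]
          return f"{phrase}: {lo:.0%}-{hi:.0%}"

      print(bounds("extremely likely"))
      ```

      This makes the disconnect explicit: the term is a label for a range, not a number computed by any stated algorithm.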

      Delete
  18. Norman, you use the mid-troposphere hot spot as proof that climate models match reality. I was wondering who it was that led you to believe they match, as many experts in the field would dispute that pretty fundamental point. http://wattsupwiththat.com/2013/07/16/about-that-missing-hot-spot/

    Bloke down the pub

    ReplyDelete
    Replies
    1. Norman did not, see my response to Ann Ceely below.

      Delete
  19. Watching the programme, I assumed you were looking at data for confirmation. From your blog above, you were only looking at the models' output - which is a big criticism from sceptics.

    When you put up the model output of the CO2 'fingerprint', you should have said that the data doesn't support this. It's known as the 'Missing Hotspot'. Four refs for you:

    www.climatedialogue.org/the-missing-tropical-hot-spot/ (Oct 2014)
    "Climate models show amplified warming high in the tropical troposphere due to greenhouse forcing. However data from satellites and weather balloons don’t show much amplification."
    http://www.nzz.ch/wissen/wissenschaft/der-fehlende-hotspot-in-der-hoehe-1.18056931
    "... the view that the climate models have a real problem in the region near the equator appears to becoming accepted"
    Reexamining the warming in the tropical upper troposphere: Models versus radiosonde observations (Seidel et al Oct 2012)
    HAC-Robust Trend Comparisons Among Climate Series With Possible Level Shifts.
    (McKitrick and Vogelsang, Jul 2014)

    ReplyDelete
    Replies
    1. Ann Ceely, the "tropical hotspot" is seen in models for any kind of warming, not just warming by greenhouse gases. Judith Curry also just made this wrong claim and quickly corrected herself in the comments. (The post itself is still wrong, so gullible people are still misled and will keep on repeating this mistake that is common among mitigation sceptics, which shows how much they care about getting things right.)

      What is a signature of greenhouse gasses is that the temperature in the stratosphere drops while the temperature at the surface increases. Exactly as we see.

      I can also highly recommend the introductory statements made at Climate Dialogue on the tropical hotspot. Did you read it? It also makes the case that the hotspot is associated with any warming.

      The text of Sherwood mentions many model deficiencies, including many the mitigation sceptics do not like to talk about because they cannot be spun into a story about models running hot. Quite the contrary.

      Delete
    2. If the hotspot is seen in models for any kind of warming, then its absence indicates either that the models are wrong, or that there is no warming, in which case the models are still wrong.
      https://wattsupwiththat.files.wordpress.com/2015/03/aps_figure-page352.png

      Bloke down the pub

      Delete
  20. @ Victor Venema 3 March 2015 at 18:53

    These are computer models, and the thermodynamics equations are such that they cannot be modeled by computers without using approximations, running the risk of missing vitally important "fixed points", i.e. abrupt changes.

    ReplyDelete
    Replies
    1. How often do I have to say that I know that models are not perfect? It is a fairy tale of the mitigation sceptics that scientists would claim that.

      Models are simplifications by definition. Models are tools to understand the problem you are studying.

      Yes, it is very well possible that the models will miss something, whether gradual or abrupt. We are taking the climate system outside of the region for which we have observations. I am sure that there will be many surprises. If we could predict climate change perfectly, I would not worry too much about it, then it would just be costly. I worry about the uncertainty monster.

      Delete
    2. Ann Ceely:

      "running the risk of missing vitally important "fixed points" i.e. abrupt changes."

      Not necessarily but, if true, those abrupt changes could be extremely unpleasant, and their possible existence provides no comfort to a rational person.

      Delete
    3. Victor

      "How often do I have to say that I know that models are not perfect?'

      But I think we could be forgiven for thinking that, in your world, they seem to be.

      One commentator said:

      "It would be nice if we could run a controlled experiment for a few years, where half the premiership clubs doubled their wages, and the other half halved them, without changing anything else - same players, same managers. "

      And you responded:

      "The nice thing of climate science is that in that case you do have that information."

      and went on to extol the virtues of physical climate models.

      The uncertainty monster doesn't seem to be something you have been trained to worry about.

      HAS

      Delete
    4. HAS:

      "The uncertainty monster doesn't seem to be something you have been trained to worry about. "

      What is odd is the perception that the Uncertainty Monster implies there is nothing to worry about. Denialists like yourself pretend that uncertainty cuts one way only, justifying inaction.

      Delete
    5. We weren't discussing action or lack of it, we were discussing the view that models can on the one hand be imperfect and the other be substituted for information about the real world.

      The dig about the uncertainty monster may have been a bit unkind; however, Victor did rather set himself up by suggesting there was one set of uncertainties keeping him awake at night (models failing to warn about completely unknown risks) while completely denying the existence of the well-known risk in modelling (particularly with GCMs) of thinking the model output is real.

      HAS

      Delete
  21. The politically motivated selection of '95%' does not seem to be based on computations, Bayesian or otherwise. How could it be? We do not have an adequate model of historical temperatures with which to make such computations. On the other hand, '95%' is bigger than the previously deployed '90%', and so conveys what is politically required while at the same time sounding 'statistical'. You missed an opportunity to expose all of this to public ridicule. Is that why you were chosen to take part?

    ReplyDelete