Well-estimated global warming by climate models

By Stephan Lewandowsky
Professor, School of Experimental Psychology and Cabot Institute, University of Bristol
Posted on 20 July 2014

Has global warming “stopped”? Do models “over-predict” warming? There has been much recent talk in the media about those two questions. The answer to the first question is a fairly clear “no.” Global warming continues unabated.

To illustrate, for the coverage-bias-corrected data published by Cowtan and Way last year, the temperature trend for the last 15 years up to and including 2013 is significant—in the same way that the trend was significant for the last 15 years in 2000 and in 1990. So from any of those vantage points, the Earth has warmed significantly during the preceding 15 years.
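For readers who want to see what such a significance test involves, here is a minimal sketch in Python, using ordinary least squares on synthetic annual anomalies. It is illustrative only: the published analyses use the real datasets and typically also correct for autocorrelation.

```python
# Minimal sketch of a 15-year trend significance test (illustrative only;
# the published analyses use real data and correct for autocorrelation).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1999, 2014)          # the 15 years up to and including 2013
# Synthetic anomalies: ~0.17 °C/decade trend plus interannual noise
anomalies = 0.017 * (years - years[0]) + rng.normal(0, 0.08, years.size)

result = stats.linregress(years, anomalies)
print(f"trend = {result.slope * 10:.3f} °C/decade, p = {result.pvalue:.3f}")
```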

One thing that has changed since 2000 is that more heat is now going into the oceans—rather than the atmosphere—and at an accelerating pace. Or as Dana Nuccitelli put it recently:

“The rate of heat building up on Earth over the past decade is equivalent to detonating about 4 Hiroshima atomic bombs per second. Take a moment to visualize 4 atomic bomb detonations happening every single second. That's the global warming that we're frequently told isn't happening.”
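A rough back-of-the-envelope check shows where a figure of that magnitude comes from. The bomb yield and energy-imbalance values below are commonly quoted approximations, not numbers taken from Dana's post:

```python
# Back-of-the-envelope check of the "4 Hiroshimas per second" comparison.
# Both constants are commonly quoted approximations.
HIROSHIMA_JOULES = 6.3e13      # ~15 kilotons of TNT
EARTH_SURFACE_M2 = 5.1e14      # total surface area of the Earth, m^2
IMBALANCE_W_M2 = 0.5           # approximate planetary energy imbalance, W/m^2

heat_rate = IMBALANCE_W_M2 * EARTH_SURFACE_M2       # joules per second
print(f"{heat_rate / HIROSHIMA_JOULES:.1f} Hiroshima bombs per second")  # ~4
```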

Let’s turn to the second question: Have models over-estimated the rate of warming? This question has a more nuanced but quite fascinating answer.

We begin by noting that the observed global temperature increase remains comfortably within the 95% envelope of model runs, as shown in the figure below, which is taken from a recent Nature Climate Change paper by Doug Smith.

Now, arguably, the observed temperatures for the last decade or so are tending towards the lower end of the model envelope (note, though, that this figure does not plot the coverage-bias-corrected data from Cowtan and Way, which would raise the final observed temperatures and trends slightly).

Does this then mean that the models “over-predict” warming?

Not exactly.

To understand why the answer is no, we need to consider three issues.

First, it will be noted that occasional brief excursions of observed temperatures outside the 95% model envelope are not unusual; indeed, the most recent excursion occurred when the Earth warmed faster than the models. This is the result of natural variability and represents short-term disturbances that do not affect the underlying long-term trend.

Second, we need to consider the expected relationship between the models’ output and the observed data. This is a profound issue that is routinely overlooked by media commentators, and it pertains to the common confusion between climate projections and climate forecasts. Climate forecasts seek to predict the climate over a certain range by modeling, much like a weather forecast, the evolution of the climate from a known starting point, taking future internal variability into account. For example, the UK Met Office publishes decadal forecasts, which are explained very nicely here.

Climate projections, by contrast, seek to describe the evolution of the climate in the long run, irrespective of its current state and without seeking to predict internal variability. The figure above, like all figures that show model output to the end of the century, plots projections rather than predictions. Because projections have no information about the phase (sequence and timing) of internal climate variability, there is no expectation that any particular projection would align with what the Earth is actually doing. In fact, it would be highly surprising if global temperatures always tracked the center of the model projections—we expect temperatures to jiggle up and down within the envelope. To buttress this point, recent work by Mike Mann and colleagues has shown that warming during the most recent decade is well within the spread of a model ensemble.
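A toy simulation makes the point concrete: generate many runs that share the same forced trend but have independent internal variability, treat a fresh run as the "observations", and note that it stays within the envelope without ever tracking the ensemble mean. This is purely illustrative and does not represent any particular model ensemble:

```python
# Toy ensemble: a shared forced trend plus independent AR(1) internal
# variability per run. Illustrative only; not any real model ensemble.
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_years = 100, 50
forced = 0.02 * np.arange(n_years)                  # common forced trend, °C

runs = np.zeros((n_runs, n_years))
for t in range(1, n_years):
    runs[:, t] = 0.6 * runs[:, t - 1] + rng.normal(0, 0.1, n_runs)
runs += forced

lo, hi = np.percentile(runs, [2.5, 97.5], axis=0)   # 95% envelope

# A fresh realization stands in for the real Earth
earth = np.zeros(n_years)
for t in range(1, n_years):
    earth[t] = 0.6 * earth[t - 1] + rng.normal(0, 0.1)
earth += forced

inside = np.mean((earth >= lo) & (earth <= hi))
print(f"years inside the 95% envelope: {inside:.0%}")
```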

Finally, we need to consider the reasons underlying natural variability, both in the models and in the planet’s warming trend. One of the major drivers of this variability is the El Niño – La Niña oscillation in the Pacific, which determines how much heat is taken up by the oceans rather than the atmosphere. La Niña conditions favour cooler temperatures whereas El Niño leads to warmer temperatures. The animated figure below from Skeptical Science illustrates this nicely:


The figure clarifies that internal climate variability over short time scales of a decade to 15 years is at least as important as the forced climate changes arising from greenhouse gas emissions.

Those three issues converge on the conclusion that in order to meaningfully compare model projections against observed trends, the models must be brought into phase with the oceans. In particular, the models must be synchronized with El Niño – La Niña.

The evidence has been mounting during the last few years that when this synchronization is achieved, the models capture recent temperature trends very well.

At least four different approaches have been pursued to achieve synchronization.

One approach relied on specifying some observed fields in the climate models while leaving them free to evolve on their own everywhere else. For example, Kosaka and Xie showed that when the El Niño-related changes in Pacific ocean temperature are entered into a model, it not only reproduced the global surface warming over the past 15 years but also accurately reproduced regional and seasonal changes in surface temperatures. Similarly, Matthew England and colleagues reproduced observed temperature trends by providing the model with the pronounced and unprecedented strengthening in Pacific trade winds over the past two decades, winds that in turn led to increased heat uptake by the oceans.
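The logic of such "pacemaker" experiments can be sketched with a toy two-box model in which one box is relaxed towards a prescribed history while the rest evolves freely. The boxes, coupling strengths, and forcing below are illustrative assumptions, not the configuration used by Kosaka and Xie:

```python
# Toy "pacemaker" run: one box (the tropical Pacific) is relaxed toward a
# prescribed, observed-like history while the other box evolves freely.
# Purely illustrative; not the Kosaka and Xie configuration.
import numpy as np

rng = np.random.default_rng(4)
n_years = 30
forced = 0.02                                   # forced warming, °C per year
k_nudge, k_couple = 0.8, 0.3                    # relaxation and coupling strengths
pacific_obs = np.cumsum(rng.normal(0, 0.1, n_years))  # prescribed Pacific history

pacific = np.zeros(n_years)
rest = np.zeros(n_years)
for t in range(1, n_years):
    # Relax the Pacific box toward the prescribed history
    pacific[t] = pacific[t - 1] + forced + k_nudge * (pacific_obs[t] - pacific[t - 1])
    # The rest of the globe feels the forcing plus the Pacific's influence
    rest[t] = rest[t - 1] + forced + k_couple * (pacific[t - 1] - rest[t - 1])

global_mean = 0.3 * pacific + 0.7 * rest        # area-weighted toy global mean
print(f"simulated warming over {n_years} years: {global_mean[-1]:.2f} °C")
```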

A second approach involved initialization of the model to the observed state of the planet at the beginning of a period of interest. Meehl and Teng recently showed that when this is done, thereby turning a model projection into a hindcast, the models reproduced the observed trends—accelerated warming in the 1970s and reduced rate of surface warming during the last 15 years—quite well.
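Again purely as an illustration of why initialization helps, here is a toy contrast between runs started from arbitrary states and runs started from the "observed" state. The AR(1) noise model and all parameters are assumptions for the sketch, not the Meehl and Teng setup:

```python
# Toy contrast between uninitialized projections and an initialized
# hindcast: members started from the observed state track the next few
# years better. Illustrative only; not the Meehl and Teng method.
import numpy as np

rng = np.random.default_rng(5)
phi, sigma, n_years, n_runs = 0.7, 0.1, 10, 200

def ar1(x0):
    """AR(1) internal variability starting from anomaly x0."""
    x = np.empty(n_years)
    x[0] = x0
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

truth = ar1(0.3)                                          # the "observed" path
free = np.array([ar1(rng.normal(0, 0.2)) for _ in range(n_runs)])
init = np.array([ar1(truth[0]) for _ in range(n_runs)])   # start from observed state

for name, ens in (("uninitialized", free), ("initialized", init)):
    err = np.abs(ens[:, 1:4].mean(axis=0) - truth[1:4]).mean()
    print(f"{name:14s} mean error, years 1-3: {err:.3f} °C")
```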

The third approach, by Gavin Schmidt and colleagues, statistically controlled for variables that are known to affect model output. This was found to largely reconcile model projections with global temperature observations.
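The regression idea can be sketched as follows. The predictor series (ENSO, volcanic, solar) are synthetic placeholders, and the method shown is a generic multiple-regression adjustment rather than the exact procedure of Schmidt and colleagues:

```python
# Schematic of adjusting a temperature series for known short-term factors
# (ENSO, volcanic aerosols, solar) via multiple regression. The real
# analysis is more careful; the predictor series here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 60                                        # five years of monthly data
enso = rng.normal(size=n)                     # placeholder ENSO index
volcanic = rng.normal(size=n)                 # placeholder aerosol forcing
solar = rng.normal(size=n)                    # placeholder solar index
trend = 0.015 / 12 * np.arange(n)             # underlying forced trend
temp = trend + 0.1 * enso - 0.05 * volcanic + 0.02 * solar + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), enso, volcanic, solar])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
adjusted = temp - X[:, 1:] @ coef[1:]         # remove the fitted short-term signals
print(np.polyfit(np.arange(n), adjusted, 1)[0] * 12 * 10)  # trend in °C/decade
```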

The fourth approach was used in a paper by James Risbey, myself, and colleagues from CSIRO in Australia and at Harvard, which appeared in Nature Climate Change today.

This new approach did not specify any of the observed outcomes and left the existing model projections from the CMIP5 ensemble untouched. Instead, we selected only those climate models (or model runs) that happened to be synchronized with the observed El Niño – La Niña phase in any given 15-year period. In other words, we selected those models whose projected internal natural variability happened to coincide with the state of the Earth’s oceans at any given point since the 1950s. We then looked at the models’ predicted global mean surface temperature for the same time period.

For comparison, we also looked at output from those models that were furthest from the observed El Niño - La Niña trends.
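In schematic form, the selection works roughly like this. The data below are synthetic stand-ins rather than CMIP5 output, and the window handling is simplified to a single 15-year period:

```python
# Sketch of the selection idea: within a 15-year window, rank model runs
# by how closely their Nino3.4 trend matches the observed one, then
# compare GMST trends of the closest and furthest runs. Synthetic data.
import numpy as np

def trend(y):
    """Linear trend (per year) of a 1-D series."""
    return np.polyfit(np.arange(y.size), y, 1)[0]

rng = np.random.default_rng(3)
n_runs, window = 40, 15
nino_obs = rng.normal(size=window)                      # observed Nino3.4 index
nino_models = rng.normal(size=(n_runs, window))         # per-run Nino3.4 index
gmst_models = 0.02 * np.arange(window) + 0.5 * nino_models  # toy GMST

obs_trend = trend(nino_obs)
distance = np.array([abs(trend(r) - obs_trend) for r in nino_models])
best = np.argsort(distance)[:4]                         # most in-phase runs
worst = np.argsort(distance)[-4:]                       # most out-of-phase runs
print("in-phase GMST trend:    ", np.mean([trend(g) for g in gmst_models[best]]))
print("out-of-phase GMST trend:", np.mean([trend(g) for g in gmst_models[worst]]))
```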

The results are shown in the figure below, which plots the Cowtan and Way data (in red) against model output (the results don't differ qualitatively for the other temperature data sets):

The data represent decadal trends within overlapping 15-year windows that are centered on the plotted year. The left panel shows the models (in blue) whose internal natural variability was maximally synchronized with the Earth’s oceans at any point, whereas the right panel shows the models (in gray) that were maximally out of phase with the Earth.

The conclusion is fairly obvious: When the models are synchronized with the oceans, they do a great job. Not only do they reproduce global warming trends during the last 50 years, as shown in the figure, but they also handle the spatial pattern of sea surface temperatures (the figure for that is available in the article).

In sum, we now have four converging lines of evidence that highlight the predictive power of climate models.

From a scientific perspective, this is a gratifying result, especially because the community has learned a lot about the models from those parallel efforts.

From another perspective, however, the models’ power is quite distressing. To understand why, just have a look at where the projections are heading.


Update 21/7/14, 9am: The date in the post was initially incorrect and should have read July rather than June.


20 Comments


Comments 1 to 20:

  1. Thanks Dr. Lewandowsky! Very informative and certainly explains the situation in very clear language.
  2. Paulthompson3131 at 10:12 AM on 21 July, 2014
    That last sentence is like the end of a horror movie. It is so refreshing to get information on climate change from scientists directly rather than the mainstream media. This is a well written and informative article. The conclusion is especially frightening because it is based on logic and evidence and a sound scientific methodology.
  3. Yet another patently alarmist article from somebody who is not a climate scientist or qualified to speak on this topic. Better to listen to somebody more rational and qualified, such as Professor Judith Curry.

    Better also to accept the conclusion in the IPCC report that there indeed exists a "pause" or "hiatus" which climate models have miserably failed to predict:

    "Models do not generally reproduce the observed reduction in surface warming trend over the last 10-15 years. - IPCC AR5"
  4. From respected climate scientist Professor Judith Curry: http://judithcurry.com/2013/10/30/implications-for-climate-models-of-their-disagreement-with-observations/
  5. @Backslider The lead author on this paper is Dr James Risbey, a climate scientist with the CSIRO. Professor Lewandowsky has demonstrated a more than casual acquaintance with statistics in a substantial body of peer-reviewed publications (and I believe he also lectures in statistics), and Professor Oreskes is Affiliated Professor of Earth and Planetary Sciences at Harvard University.

    Your claim of lack of expertise comes from serial climate clown Anthony Watts who notably has zero qualifications of any sort.

    The fact that you resort to an ad hominem attack without addressing the paper itself gives us all we need to know about you.

    And Judith Curry, like Roy Spencer, is happy to make big claims on her blog. Get back to us when she has the courage to publish her claims in a peer-reviewed journal where they can be subject to scrutiny from her peers rather than being used to rev up climate science deniers on her blog.
  6. @MikeH

    Get back to us when she has the courage to publish her claims in a peer reviewed journal where they can be subject to scrutiny from her peers

    I guess 151 peer-reviewed papers between 1983 and 2011 are not enough:
    http://curry.eas.gatech.edu/onlinepapers.html

    How many papers has Dr James Risbey authored / co-authored? 34* between 1996 and 2013. Dr Curry authored / co-authored 46 papers before Risbey authored his first.

    *Two of these are Op-Ed articles in The Conversation!*
    http://www.marine.csiro.au/~ris009/pubs.html
  7. BTW MikeH, did you know that yesterday (or today, depending on time-zones) July 20th is the 45th anniversary of Apollo 11? You know, when man first set foot on the moon... something that us sceptics also think is fake, according to Lewandowsky and Cook et al.

    "That's one small step for [a] man, one giant leap for mankind"
  8. Dana Nuccitelli via Stephan Lewandowsky:

    "The rate of heat building up on Earth over the past decade is equivalent to detonating about 4 Hiroshima atomic bombs per second. Take a moment to visualize 4 atomic bomb detonations happening every single second. That's the global warming that we're frequently told isn't happening."

    According to SkS’s own trend calculator, there has been ZERO warming so far this century (since Jan 1st, 2001). As of this post, that is 13 years and 6 months.

    (June 2014)
    GISTEMP – Trend: 0.022 ±0.157 °C/decade
    NOAA – Trend: -0.003 ±0.145 °C/decade
    HADCRUT4 – Trend: -0.009 ±0.141 °C/decade
    BEST – Trend: 0.064 ±0.384 °C/decade
    NOAA (land only) – Trend: 0.063 ±0.264 °C/decade
    RSS – Trend: -0.060 ±0.252 °C/decade
    UAH – Trend: 0.054 ±0.252 °C/decade
  9. Steve Metzler at 01:28 AM on 22 July, 2014
    And BruceC, the internet equivalent of a dumber-than-a-bag-of-hammers playground bully, weighs in with his take on the paper that he hasn't read. Nor would he understand it even if he read it, going by those last 3 comments.

    Here's a clue for you, BruceC: ENSO is a stochastic (i.e. *random*) process. GCMs can model it, but their model runs cannot be expected to conform to what is happening in the real world *except by chance*. And that is what Risbey et al. have found: model runs that include ENSO parameters that *happened* to match real world ENSO conditions after 1950 pretty closely. And when they match the ENSO conditions like that, the projected surface temps match the real world ones pretty closely too.
  10. StephenBialkowski at 05:42 AM on 22 July, 2014
    Congratulations on an excellent paper. The selection of model trajectories based on past El Nino oscillation was a brilliant application of chaos theory. Kudos.
  11. Dr. Lewandowsky, the last paragraph of the abstract reads:

    "These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns."

    As shown in figure 5 of the study, there is virtually no match between simulated and observed spatial patterns in the Pacific and the trends have the wrong sign in most other regions of the world.

    The models failed per your own criteria.

    Could you explain how it is then possible to conclude the models provide reasonable estimates of observed trends?
  12. Steve Metzler at 22:41 PM on 23 July, 2014
    Moru H., hi,

    I haven't read the paper yet either, but I have read some detailed analysis by those that have.

    There's a danger of jumping to conclusions based on the necessarily terse info in the abstract. In the full paper, it becomes clear that the Pacific spatial trend was only of interest to the researchers in a very small region, namely the Niño 3.4 region in the middle of the Pacific as depicted by this diagram:

    http://www.secondpagemedia.com/jadblog/wp-content/uploads/2012/07/el-nino.png

    It's a pretty small area of the globe defined as the intersection of the Niño 3 and 4 regions. If you now examine Figure 5 with that in mind, you can see there's a pretty good match between the 'best' models and observations. What's happening in the rest of the globe is irrelevant.
  13. ManicBeancounter at 22:33 PM on 24 July, 2014
    The Dana Nuccitelli link needs correcting.
    Moderator Response: thanks, fixed.
  14. Hi Steve Metzler,

    You said: "I haven't read the paper yet either, but I have read some detailed analysis by those that have."

    Same here. I think there are some general misconceptions about basic facts.

    The PDO index does not contain data from the tropical Pacific south of 20°N; it cannot (and does not) represent any changes or patterns in the Nino 3.4 region, and vice versa.

    The authors started from a clear premise: climate models can, when appropriately tested, simulate fundamental aspects of the observed climate system. In order to design such a test, they had to choose one fundamental aspect and a region against which the models can be compared. They chose the Pacific, based on the argument that the observed changes of spatial trend patterns there represent leading modes of natural variability. The conceptual design of the test is rock-solid and is unaffected by whether or not any subsequent choice is appropriate.

    The authors argued for ENSO as a leading mode in the Pacific region, represented by the Nino 3.4 index. The only workable approach to ensure the simulations were in phase with ENSO was to compare the indices computed from model output and observations.

    A defining feature of the climate system is the change in patterns over space and time. Figure 5 was the perfect choice to make that behavior visible and to compare the model outputs to real-world observations.

    For the recent period (1998-2012), the authors describe the spatial trend patterns of simulations in phase with ENSO as broadly consistent with observations. They point to the eastern Pacific as an example.

    The model simulations best in-phase with ENSO do not show a PDO-like pattern of cooling there, a distinct feature of a negative PDO. The pattern is clearly visible in the observations over the period, but it does not exist in the simulations.

    You can see that in the chart linked below.

    http://tinypic.com/r/dpa55i/8

    Their conclusion remains untrue, as long as the authors do not acknowledge and correct the mistake.

    Do you honestly find a convincing match in the Nino 3.4 region?
  15. Steve Metzler at 08:45 AM on 26 July, 2014
    Moru H., hi again,

    I must admit to being more than a little confused by what you wrote above in comment #14. You appear to be sending mixed signals. For instance, you said:

    The PDO index does not contain data from the tropical Pacific south of the 20N, it can/does not represent any changes or patterns in the Nino 3.4 region and vice versa.

    And yet, the Niño 3.4 region is the *only* region the authors looked at with regards to correlating observations with model runs, as selection criteria for their 'best' and 'worst' models.

    So why do you then proceed to drag the PDO into the equation? I agree that from 1998 - 2012, the PDO was in a textbook cold phase as evidenced by the chart you linked to. But the sphere of influence of the PDO is well to the north of the Niño 3.4 region. You just said so yourself! And, you can clearly see that the Niño 3.4 region isn't even covered by those two PDO examples you included. Have a look at the image I linked to from my comment #12. The Niño 3.4 region is directly to the west of Ecuador, and South America isn't even in your PDO charts.

    Do you honestly find a convincing match in the Nino 3.4 region?

    Yes. And apparently so do the authors of the paper, or else they wouldn't have bothered to publish.
  16. Hi Steve,

    And yet, the Niño 3.4 region is the *only* region the authors looked at with regards to correlating observations with model runs, as selection criteria for their 'best' and 'worst' models.

    That maybe explains it. You're confusing what was done with what the authors have said about the results. Have you seen a reference to the discussion of figure 5?

    From Risbey et al 2014:

    The composite pattern of spatial 15-year trends in the selection of models in/out of phase with ENSO regime is shown for the 1998-2012 period in Fig. 5.

    The models in phase with ENSO (Fig. 5a) exhibit a PDO-like pattern of cooling in the eastern Pacific, whereas the models least in phase (Fig. 5b) show more uniform El Niño-like warming in the Pacific. The set of models in phase with ENSO produce a spatial trend pattern broadly consistent with observations (Fig. 5c) over the period.

    This result is in contrast to the full CMIP5 multi-model ensemble spatial trends, which exhibit broad warming26 and cannot reveal the PDO-like structure of the in-phase model trend.


    It would hardly have been worth publishing that models can't simulate observed spatial trends or that the ENSO indices computed from models and data can be in phase.

    What is relevant is only what the simulations show when they are in phase with ENSO. The conclusion is that they provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.
  17. Steve Metzler at 19:32 PM on 26 July, 2014
    Moru H., hi,

    OK, now I see what you are getting at. Since only the Niño 3.4 region was considered by the authors, and the PDO influence is outside that region, nothing can really be said about the PDO. In other words, they are stretching their claims.

    *However*, nothing in a GCM happens in complete isolation. Cells in a region are coupled with each other and influence each other. The conditions that produce an El Niño depend on what's happening in both the ocean and atmosphere in the broad region surrounding the Niño 3.4 area. So as I read it, what they are trying to say is: "Hey, look. The models that were out of phase with ENSO made a mess of the whole eastern Pacific, rendering it as a mottled pink and red; whereas, the models in phase with ENSO *broadly* replicated the prevailing PDO conditions for the period".

    In that context, I think Figure 5 is useful, and bolsters the findings of the paper rather than detracting from them.
  18. Hi Steve,


    I'm amazed. You've warned of the dangers of jumping to conclusions on necessarily terse info, but over the course of this discussion you did basically just that.


    There is no argument that the authors couldn't possibly have made any statements about spatial trend patterns outside the Nino 3.4 region without some kind of basic spatial correlation analysis. The point is that they did, and they say they found a good visual match with a PDO-like pattern that is neither observed nor has anything in common with the accepted definition of a PDO in its negative phase.

    "(...)the models in phase with ENSO *broadly* replicated the prevailing PDO conditions for the period".


    It's clear the authors couldn't tell the difference between positive and negative PDO, but I thought you could.

    If the results were reported honestly, the authors would have made clear that models in phase with Nino 3.4 trends could not reproduce the observed state of the Pacific during the period.


    "In that context, I think Figure 5 is useful, and bolsters the findings of the paper rather than detracting from it."

    It's hard to follow your logical twist here. Not being able to tell one state of the Pacific from another bolsters their conclusions?
  19. StephenBialkowski at 01:10 AM on 4 August, 2014
    Moru H, your claim of author dishonesty is itself dishonest. The authors showed the data and presented their interpretation. It would have been dishonest to not show the data and make absolute claims.

    You used their data and presented your interpretation. Everyone is able to examine the data and assess for themselves.

    The authors do not claim perfect modeling and point out several influences that might affect forecasts. Model results are limited, and a model run that happens to match all ocean currents is unlikely. But what they show is that selecting model runs based on some coincidence with ocean current phase increased model forecast accuracy.

    I read a quote on an economics blog regarding anti-environmentalism: "Lies kill." It is a far simpler task to teach people to spot lies than it is to elevate everyone in the world to the intellectual level of graduate scientists. In this case, the lie is in implying that the authors were dishonest.
  20. A new opinion has surfaced, from someone in a field of psychology close to Lewandowsky's own.

    Social psychologist Jose Duarte thinks both of Lewandowsky's anti-sceptic papers are fraudulent and must be withdrawn by their authors.

    http://www.joseduarte.com/blog

    Perhaps Prof Lewandowsky could give us his considered opinion on this piece.