All posts by Stephan Lewandowsky

Cash for comments: The public has a right to know

“In 1954 the tobacco industry paid to publish the ‘Frank Statement to Cigarette Smokers’ in hundreds of U.S. newspapers. It stated that the public’s health was the industry’s concern above all others and promised a variety of good-faith changes. What followed were decades of deceit and actions that cost millions of lives”—so reads the opening paragraph of a recent peer-reviewed paper on the history of how Big Tobacco “played dirty” by injecting lavish amounts of money into a public-relations campaign aimed at undermining the scientific evidence linking tobacco smoke to adverse health impacts.

Science and vested interests

There is considerable evidence that a similar playbook is being followed by fossil fuel interests in their campaign against the well-established scientific fact that the Earth is warming from greenhouse gas emissions associated with human economic activity. 

An important element of the tobacco industry’s activities involved the recruitment of medical scientists who, in direct exchange for cash or through indirect funding, would downplay the harms from tobacco in public. Those activities are described in encyclopedic detail in Robert Proctor’s book Golden Holocaust.

The sources and targets of money that flows into the scientific process must therefore be of considerable public interest. It is not surprising that the recent events surrounding Dr. Willie Soon, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics, have caused such a stir.

In addition to astrophysics, Dr. Soon proclaims expertise in climate science, although his work has not withstood scientific scrutiny. He is also an “expert” on polar bears who has accused the U.S. Geological Survey of being “unscientific.” He also claims expertise on mercury poisoning, having used the Wall Street Journal as a platform to assuage fears about mercury-contaminated fish because, after all, “mercury has always existed naturally in Earth’s environment.”  

Is there a common thread behind astrophysics, polar bears, climate, and mercury in fish? The New York Times has revealed it to be funding from the fossil fuel industry. If you are wondering about the mercury in fish, remember that coal-fired power plants emit loads of mercury.

Dr. Soon has apparently failed to disclose this funding in many of his publications, in seeming violation of the journals’ policies. Dr. Soon also denied receiving funding from vested interests during U.S. Congressional testimony in 2003. 

I believe that this recent high-profile case, and the history of interventions by vested interests, illustrate the importance of complete funding disclosures by scientists when they publish, present, or testify about their research.

Declaring funding

This brings us to the recent requests for disclosure of funding that Congressman Grijalva issued to a number of scientists who have testified in front of Congress, including Dr. Pielke Jr. and Dr. Curry. 

I cannot find fault with this request. Seeking confirmation of funding is not a “witch hunt.” It is in the public’s interest to know who has funded the research underlying an expert’s testimony to Congress.

I was initially concerned that the Congressman’s requests for disclosure seemingly went beyond funding-related documents, thereby straying uncomfortably close to potential harassment, which I have opposed in the past and which I continue to oppose irrespective of the identity of the source and target.

I was therefore sympathetic to scientific voices who objected to Congressman Grijalva’s requests on those grounds. Fortunately, it has now become clear that the Congressman’s requests pertain to sources of funding only, and he has qualified his request for correspondence, which several scientific organizations had—rightly, in my view—considered to be an overreach.

Congressman Grijalva is to be commended for his responsiveness, which contrasts favourably with the unwillingness of other politicians to discontinue their witch hunts of climate scientists.

In my view, his request for funding sources is sound and very much in the public interest.

The EU Science Advisor: Greenpeace and Climate Denial

After rumors had circulated for some time, the E.U. has now abolished the post of Chief Scientific Adviser. I have been following this from a distance, and although there may be some nuances that I am unaware of, my first reaction is that I am in agreement with Mark Lynas, namely that

“This is a dark day for science in Europe. Instead of having scientific advice at the heart of European policymaking, the Juncker Commission clearly wants to remove any person who might bring inconvenient scientific truths to the top EU table. Sadly, this is all too consistent with European moves to back away from evidence-based policymaking – if you can’t change the science you muzzle the scientists or keep them out of the room when powerful people are taking decisions.”

It is also clear that this move was undertaken in response to a concerted lobbying effort by various groups who opposed the position by noting:

“…The post of Chief Scientific Adviser is fundamentally problematic as it concentrates too much influence in one person, and undermines in-depth scientific research and assessments carried out by or for the Commission directorates in the course of policy elaboration.” [Emphasis added]

The language may sound familiar: Opposition to a scientific position by claiming that the position undermines science. Straight out of the playbook of the Merchants of Doubt. Straight out of the tobacco industry’s strategy to call their opposition to medical research “sound science”—which of course it was not.

So who was doing the lobbying, and who claimed that a Chief Scientific Adviser undermines scientific research?

It was Greenpeace.

Greenpeace, together with a number of other environmental organizations, co-signed the letter to the President-elect of the European Commission, Mr. Jean-Claude Juncker.

Bob Ward succinctly summarized the implications of this:

Indeed.

Does this mean there can be no public debate about GMOs, and that “science”, however it is best defined in this instance, should have the final word?

No, far from it—in the same way that a disembodied appeal to “science” cannot resolve the risks from climate change. Ultimately, we have to make decisions about policies, and those decisions require debate with input from all stakeholders and the public—whether they are “skeptical” about climate change or GMOs or not.

The problem arises when the politics—and their offshoot, so-called “skepticism”—seek to influence or deny the science rather than addressing the policies that deal with the scientific evidence.

It’s one thing to argue against Monsanto or against a carbon trading scheme, but quite another to seek to muzzle inconvenient scientists or to get rid of an advisory position altogether.

It’s one thing to oppose corporate profits but quite another to destroy a field experiment involving GMOs, as Greenpeace has done in Australia.

Unfortunately, at least at first glance, in the GMO arena Greenpeace and its allies seem to have taken a page from the playbook of operators whom they would implacably oppose on other issues, such as climate change.

The irony is distressing, but it also underscores that, psychologically, cognitively, and politically, science denial is denial is denial is denial. Wherever it happens to be pointing.

Naomi Klein in Oxford

Naomi Klein spoke in Oxford a few days ago at the invitation of COIN, the Climate Outreach Information Network in the UK. Her talk was in the Sheldonian Theatre, the official ceremonial hall of the University of Oxford—an impressive venue that added much to the enjoyment of the evening. Naomi drew a large crowd—of more than 800, so I have been told—and the event was very interesting indeed.

Dr. Adam Corner, the COIN Research Director, has already offered his thoughts on the event and Naomi Klein’s latest book, This Changes Everything. He “found the ambition of the book (to radically curb the excesses of ‘extractivist’ growth-based capitalism) compelling, but the means by which these ends could be achieved disappointingly lacking in inspiration.”

I cannot comment on the book overall as I have not read it yet beyond the early chapters, but for me the evening in Oxford raised the following questions, issues, or resolutions:

  • Capitalism is a subject for discussion. The topic of a conversation can often tell us more about the world than the arguments that are launched for or against a particular position. For example, the fact that climate scientists now vigorously debate whether one extreme weather event or another might be attributable to climate change—as for example in a recent special issue of the Bulletin of the American Meteorological Society—tells us that climate change is no longer a distant threat but that it is happening all around us. Why? Because 10 or 20 years ago no one would have seen fit to have that debate. The fact that we have the debate tells us all we need to know—climate change is happening now. Likewise, the fact that words such as “capitalism,” “inequality,” and “social justice” have found their place in mainstream public discourse tells us all we need to know—we are at the cusp of a paradigm shift.
  • What’s unrealistic is to ignore reality. If the solution to climate change really involves the demise of the neoliberal paradigm, isn’t that an extremely unrealistic endeavor? Isn’t climate change bad enough without having to trade in one ideology for another? Klein’s rejoinder to those legitimate questions was quite insightful, I thought: What’s unrealistic isn’t the replacement of neoliberalism with something else; what’s unrealistic is to ignore the physical reality of this planet—and that ignorance has become a central pillar of neoliberalism.
  • How to move forward? Like Corner, I am ambivalent about what follows from Klein’s analysis or indeed what she has to recommend. Perhaps paradoxically, I can agree with several seemingly incompatible positions: On the one hand, I have enough faith in markets to believe that pricing of externalities, or Pigovian taxes, may be sufficient to yield a transformation to a low-carbon economy. It wasn’t that long ago (on a nearly geological time scale, at least) that I was a principal of a software company at a time when MS-DOS 2.0 and an 8087 numeric co-processor were considered a breakthrough—the comparison with today’s information technology is so breathtaking that it augurs well for the speed with which markets could decarbonize the modern economy if given the incentive to do so. On the other hand, I have little faith in the current assortment of political “leaders” and their ability to introduce the legislation and leadership required to make markets function for the benefit of future generations. Opposition to neoliberalism, and its ultimate demise, may therefore be the only way in which climate mitigation can be achieved. On that view, any action that nibbles away at the prevailing neoliberal paradigm and its underlying fundamentalist view of free markets may indeed be considered climate activism.
  • To Change Everything we Need Everyone. Yes.

 

The Australian’s Disappearing Commissar

Update 3 October 11:33: Apparently there are two versions of the AP story, the earlier of which contained no mention of the Australian heatwave. This information was revealed by Seth Borenstein in a Twitter exchange which is recorded here. If The Australian relied on the earlier version, they would not have removed anything, and the remainder of this post is therefore irrelevant to their reporting in this instance. We now look forward to The Australian updating their AP report to reflect the elements that are of such obvious importance to Australia.

 

Graham Readfearn reports on The Australian’s coverage of the recent research that examined the link between climate change and recent weather extremes. The Australian calls itself the “heart of the nation”, although it is not always clear what nation this is referring to. It is unlikely to be Australia given that The Australian reprinted an AP piece on the extreme-weather research. That is, they reprinted most of it, with the exception of the following:

The report seeks to find how much and how man-made warming has influenced the weather, said NOAA research meteorologist Martin Hoerling, an editor of the report.

The influence on Australia’s hottest year in more than a century is glaring, the report’s editors said.

“It’s almost impossible” to explain Australia’s hot 2013 without climate change, said Peter Stott of the United Kingdom’s meteorology office, another report editor.

Yes, they left out anything relating to Australia. This editorial strategy can be readily expressed in pictorial form:

The Arctic Sea Ice Bucket Challenge continues with Rich Pancost

Having been challenged by Shapingtomorrowsworld’s Stephan Lewandowsky, the Director of the Cabot Institute has risen to the occasion. Full details and the video are here, and the screenshot below provides an idea of the size of the event:

Rumour has it that Prof Mat England is working on his response to the challenge.

(Sorry, the Cabot video is not on YouTube and hence cannot be shared easily).

“Libertarian ideology is the natural enemy of science.” Always?

The Guardian carried an interesting and incisive piece yesterday under the headline “Libertarian ideology is the natural enemy of science.” From gun control to health care to climate change, there are indeed many arenas in which scientific evidence clashes with libertarian (and conservative) worldviews: To illustrate, even though the data show that if you are a victim of an assault, you are between 4 and 5 times more likely to be fatally shot if you had a gun available than if you didn’t have a gun, this evidence is generally dismissed by American libertarians and conservatives. They also dismiss the fact that after Australia introduced stringent gun control in 1996, accelerated declines in firearm deaths were observed.

So not only do libertarians dismiss the problem, they also ignore the solution.

Similarly, the relationship between right-wing politics and the rejection of the overwhelming scientific evidence on climate change is established beyond much doubt, and we all know that this often involves an element of conspiratorial thinking: a particularly colorful illustration of this tendency erupted this week in Australia.

Yesterday’s Guardian piece comes hard on the heels of other bad scientific news for conservative and libertarian ideology: During the past week, we learned that conservatives show higher levels of psychopathic personality traits than non-conservatives, and we also learned that lower intelligence in childhood is associated with increased prejudice later in life, with the mediating variable being greater endorsement of right-wing socially-conservative attitudes.

Ouch.

One likely reason that this relationship with intelligence is observed is that right-wing ideologies offer simplistic and well-structured views of society, which makes them particularly attractive to people who find uncertainty and ambiguity overwhelming. Or, as Wray Herbert put it, “smart people are more capable of grasping a world of nuance, fluidity and relativity.”

Let’s explore that nuance and relativity a bit more.

For people on the political left it may be tempting to use the recent results to equate conservatism with low intelligence, psychopathy, racism, anti-scientism, or nutty conspiratorial thinking. Although those links can be justifiably drawn on the basis of existing data, this package is a bit too neat and simple, and it itself smacks of the simplification purportedly associated with right-wing ideologies.

So let’s turn to the other side of the ledger and look at “the right” in the nuanced manner that is so cherished by the political left:

  • We need to differentiate between libertarianism and conservatism: Although the two are often lumped together (as in the Guardian piece and as I have thus far in this post), and even though the constructs are often highly correlated, recent research has begun to differentiate between libertarians and social conservatives. This differentiation can be crucial, as for example in one of my recent studies that examined attitudes towards vaccinations.
  • Although conservatism is typically associated with more dogmatism overall, the picture changes when people’s “belief superiority” is probed with respect to specific hot-button issues. Belief superiority refers to the belief that one’s position is more correct than someone else’s, and in a recent study it has been found to be associated with political extremity rather than one particular ideology. Specifically, very liberal people were as convinced of the superiority of their views on government help for the needy as highly conservative people were convinced of the superiority of their view regarding voter identification. Less politically committed individuals on both sides of the spectrum showed a more moderate preference for their own opinion.
  • There is some evidence that even racial prejudice might have more to do with an implicit presumption of attitudes than race per se. That is, the greater prejudice against African Americans that is routinely and reliably observed among conservatives might at least in part result from the attitudes that African Americans are presumed to hold—for example, African Americans are known to predominantly endorse affirmative action and welfare, two issues on which conservatives hold strongly opposing views. Thus, black skin does not trigger prejudice, but black skin signals likely attitudes that, in turn, trigger prejudice. To illustrate, in one recent study, when conservatives and liberals had to rate their impressions of an African American who either endorsed or rejected welfare, what was found to matter was the match of attitudes and not race: Liberals’ negative impressions of a conservative African American were indistinguishable from conservatives’ negative impressions of a liberal African American, and correspondingly, both groups’ enthusiasm for African Americans of their own conviction was also indistinguishable. Importantly, this symmetry can co-exist with greater overall prejudice among conservatives—as indeed it did in this study!—which is measured without providing information about a specific person’s attitude, thereby forcing people to rely on inferred attitudes of a target group.
  • Finally, and most relevant to the role of scientific evidence in society, there is the large body of work by Dan Kahan and colleagues which shows that liberals are as susceptible to cognitive shortcuts and biases as their conservative brethren—except that those biases are expressed in different directions. For example, liberals and conservatives will misinterpret hypothetical data on gun control to suit their own biases with equal flourish. (This work remains to be reconciled with the notion that conservatism is associated with lesser cognitive functioning. I am not aware of the existence of any reconciliation, and I consider this issue unresolved and in need of further research).

What, then, is the relationship between scientific evidence and political attitudes? Is libertarian ideology the natural enemy of science?

The answer has three parts: First, conservative and libertarian ideology is undoubtedly at odds with much scientific evidence. Large bodies of solid scientific evidence are being rejected or denied on the basis of ideology, arguably with considerable detriment to society. Second, there is little doubt that liberals and progressives are equally capable of rejecting scientific evidence that challenges their worldviews, using the same well-understood processes of motivated cognition as their conservative brethren. Third, one of the most wicked problems ever to have confronted humanity, climate change, is not being addressed at present because the solutions involve challenges to conservative and libertarian worldviews. Those worldviews are not natural enemies of science; they are enemies of science because of the particular historical context in which conservative cultural cognition expresses itself at the moment.

Responding and Adapting to Climate Change: A Meeting at the University of Bristol

“Uncertainty, uncertainty, uncertainty … so why should we bother to act?”

Who hasn’t heard politicians or media personalities appeal to uncertainty to argue against climate mitigation? And indeed, why should we interfere with the global economy when there is uncertainty about the severity of climate change?

Some 20 leading experts from around the world will be meeting in Bristol late in September to discuss the implications of scientific uncertainty for the proper response to climate change.

This is particularly crucial because, in contrast to the widespread public perception that uncertainty is an invitation to delay action on climate change, recent work suggests that scientific uncertainty actually provides an impetus to engage in mitigative action. Specifically, the greater the scientific uncertainty, the greater are the risks from climate change.
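To see the logic, consider a toy calculation—a minimal sketch with assumed numbers, not a result from the meeting. If damages grow convexly with warming (here simply damage = warming², an assumed functional form), then for the same expected warming, wider uncertainty implies greater expected damage:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed convex damage function, for illustration only.
def damage(warming_k):
    return warming_k ** 2

# Two hypothetical distributions of warming with the same mean (3 K)
# but different uncertainty (spread).
low_uncertainty = rng.normal(3.0, 0.5, 1_000_000)
high_uncertainty = rng.normal(3.0, 1.5, 1_000_000)

print(f"E[damage], low uncertainty:  {damage(low_uncertainty).mean():.2f}")
print(f"E[damage], high uncertainty: {damage(high_uncertainty).mean():.2f}")
# For a convex damage function, E[d(T)] >= d(E[T]) (Jensen's inequality),
# and the gap grows with the variance—greater uncertainty, greater risk.
```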

This conflict between people’s common perceptions of uncertainty and its actual implications is not altogether uncommon, and there are many situations in which people’s risk perception deviates from best scientific understanding.

The Bristol meeting brings together scientists and practitioners with the goal of (a) developing more effective means to communicate uncertainty and (b) exploring how decision making under uncertainty can be better informed by scientific constraints.

Addressing the scientific, cultural, health, and social issues arising from climate change requires an in-depth and cross-disciplinary analysis of the role of uncertainty in all three principal systems involved: the physical climate system; people’s cognitive system, which construes and potentially distorts the effects of uncertainty; and the social systems underlying the political and public debates surrounding climate change.

The results of the meeting will become publicly available through scientific publication channels, with the details to be announced closer to the time of the meeting. In addition, two attendees at the meeting will be presenting public lectures at the University of Bristol:

Friday 19 September, 6:00-7:30 pm. Dogma vs. consensus: Letting the evidence speak on climate change.

In this Cabot Institute public lecture, we are pleased to present John Cook, Global Change Institute, University of Queensland, and owner of the Skeptical Science blog, in what promises to be a fascinating talk.

In 2013, John Cook led the Consensus Project, a crowd-sourced effort to complete the most comprehensive analysis of climate research ever conducted. They found that among relevant scientific articles that expressed a position on climate change, 97% endorsed the consensus that humans were causing global warming. When this research was published, it was tweeted by President Obama and received media coverage all over the world, with the paper being awarded the “best article” prize by the journal Environmental Research Letters in 2013. However, the paper has also been the subject of intense criticism by people who reject the scientific consensus. Hundreds of blog posts have criticised the results and newspapers such as the Wall Street Journal and Boston Globe have published negative op-eds. Organisations who deny or reject current science on human-caused climate change, such as the Global Warming Policy Foundation in the UK and the Heartland Institute in the US, have published critical reports, and the Republican Party organised congressional testimony against the consensus research on Capitol Hill. This sustained campaign is merely the latest episode in over 20 years of attacks on the scientific consensus on human-caused global warming. John Cook will discuss his research, both on the 97% consensus and on the cognitive psychology of consensus. He will also look at the broader issue of scientific consensus and why it generates such intense opposition.

 

Tuesday 23 September 2014, 6:00-7:30 pm. The Hockey Stick and the climate wars—the battle continues…

In this special Cabot Institute lecture, in association with the Bristol Festival of Ideas, Professor Michael E Mann will discuss the science, politics, and ethical dimensions of global warming in the context of his own ongoing experiences as a figure in the centre of the debate over human-caused climate change.

Dr. Michael E Mann is Distinguished Professor of Meteorology at Penn State University, with joint appointments in the Department of Geosciences and the Earth and Environmental Systems Institute. He is also director of the Penn State Earth System Science Center. He is the author of more than 160 peer-reviewed and edited publications, and his books include Dire Predictions: Understanding Global Warming (2008) and The Hockey Stick and the Climate Wars: Dispatches from the Front Lines (2012). He is also a co-founder of and avid contributor to the award-winning science website RealClimate.org.

 

Readers interested in attending the talks should register for John Cook here and for Michael Mann here.

The Joys of Statistical Mapping

Statistical maps, which display the geographic distribution as well as the magnitude of a variable of interest, have become an increasingly common tool in data analysis. From crime rates to forest fires, the magnitude of a variable is now routinely represented by coloring a map in proportion to it. Many different ways to represent magnitude exist, and as I showed together with colleagues some 20 years ago, different plotting techniques can give rise to very different impressions of identical data. Issues such as granularity (e.g., the size of the areas used for display: states vs. counties) or choice of color (e.g., red vs. yellow or purple shading) can affect people’s perception and accuracy.

In the climate arena, maps are routinely used to display temperature anomalies. Typically, shades of red represent positive anomalies (i.e., above-average temperatures) whereas blue is used to represent negative anomalies (below average). This opposing-colors scheme arguably works very well in drawing the reader’s attention to particularly warm or cool areas on the globe.

James Risbey and I, together with colleagues, published a paper last Sunday that used a set of maps in one of its figures to show the modeled and observed decadal trends (Kelvin/decade) of Sea Surface Temperature (SST). The observations, from Figure 5c of our paper, are shown again below in a virtually identical format. The figure was created with MATLAB, using a white color around the zero trend and a high-resolution colormap with a low-resolution contour interval:

These data present an opportunity to explore the impact of subtle graphical choices on the observer’s perception of the data.

The next figure shows the same data, also plotted with MATLAB, but with no white zone in the red-blue color bar, and with a coarse colormap that matches the contour interval used.

It will be noted that this figure “runs hotter” than the one we published as Figure 5c: some very small long-term trends are forced into a pink band, since the white (“neutral”) choice is no longer available.

And one more figure drawn with MATLAB: this time with a white zone around (near-)zero trends, but with no contouring. This shows the raw trends better, but the white zone and the high-resolution colorbar start to change the look quite a bit.

Finally, let’s try different software. The figure below was plotted using Panoply, with the same contour interval and matching colorbar resolution as in the first figure above, which is nearly identical to Figure 5c in our paper.
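For readers who wish to experiment with such graphical choices themselves, here is a minimal matplotlib sketch—in Python rather than the MATLAB we used, and with a synthetic field standing in for the SST trends—that mimics the three treatments discussed above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a trend field (K/decade); illustrative only.
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 200), np.linspace(0, np.pi, 100))
field = 0.25 * np.sin(2 * x) * np.cos(3 * y)

levels = np.linspace(-0.3, 0.3, 7)  # 0.1 K/decade contour interval, 6 bands

fig, axes = plt.subplots(1, 3, figsize=(15, 3.5))

# (a) Discrete red-blue bands; the two centre bands of RdBu are
# near-white, so small trends vanish into a neutral zone.
axes[0].contourf(x, y, field, levels=levels, cmap="RdBu_r")
axes[0].set_title("(a) white zone, coarse bands")

# (b) Same bands, but hand-picked colours with no white: near-zero
# trends are forced into pale pink or pale blue, and the map "runs hotter".
no_white = ["#2166ac", "#67a9cf", "#d1e5f0", "#fddbc7", "#ef8a62", "#b2182b"]
axes[1].contourf(x, y, field, levels=levels, colors=no_white)
axes[1].set_title("(b) no white zone")

# (c) Continuous colormap with no contouring: the raw trends are
# visible, but the overall impression changes yet again.
im = axes[2].pcolormesh(x, y, field, cmap="RdBu_r", vmin=-0.3, vmax=0.3)
axes[2].set_title("(c) continuous, no contours")

fig.colorbar(im, ax=axes, shrink=0.8, label="K/decade")
plt.show()
```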

 

What conclusions can we draw from these comparisons?

The figures change the appearance of the data considerably. It follows that one should exercise caution when comparing figures between different publications or different research groups: visual differences may reflect not differences in the data but differences in the subtle ways in which the graphs were produced, not all of which can be identified by inspecting the figure alone.

A second conclusion we can draw is that, regardless of the specific map used, they all show warming during the last 15 years in the northern and central Pacific, accompanied by cooling in the western and eastern Pacific. As we argued in the paper, this spatial pattern is washed out when the full CMIP5 model ensemble is considered. When models are instead selected with respect to how well they are synchronized with the world’s oceans, those that are in phase with the Earth’s natural variability capture the spatial pattern of ocean heating better than the models that are maximally out of phase.

Well-estimated global warming by climate models

Has global warming “stopped”? Do models “over-predict” warming? There has been much recent talk in the media about those two questions. The answer to the first question is a fairly clear “no.” Global warming continues unabated.

To illustrate, for the coverage-bias-corrected data published by Cowtan and Way last year, the temperature trend for the last 15 years up to and including 2013 is significant—in the same way that the trend was significant for the last 15 years in 2000 and in 1990. So from any of those vantage points, the Earth has warmed significantly during the preceding 15 years.
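For the statistically minded, testing such a 15-year trend boils down to a simple linear regression. The sketch below uses a hypothetical anomaly series for illustration—substitute the actual Cowtan and Way data to reproduce the published result:

```python
import numpy as np
from scipy import stats

# Hypothetical annual global-mean temperature anomalies (K), 1999-2013;
# these numbers are made up for illustration.
years = np.arange(1999, 2014)
anoms = np.array([0.36, 0.45, 0.54, 0.60, 0.62, 0.58, 0.67,
                  0.61, 0.62, 0.54, 0.64, 0.70, 0.58, 0.62, 0.67])

res = stats.linregress(years, anoms)
print(f"trend: {res.slope * 10:.3f} K/decade, p = {res.pvalue:.4f}")

# Caveat: serial correlation in annual data inflates apparent significance;
# published analyses typically adjust the effective degrees of freedom.
```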

One thing that has changed since 2000 is that more heat is now going into the oceans—rather than the atmosphere—and at an accelerating pace. Or as Dana Nuccitelli put it recently:

“The rate of heat building up on Earth over the past decade is equivalent to detonating about 4 Hiroshima atomic bombs per second. Take a moment to visualize 4 atomic bomb detonations happening every single second. That’s the global warming that we’re frequently told isn’t happening.”

Let’s turn to the second question: Have models over-estimated the rate of warming? This question has a more nuanced but quite fascinating answer.

We begin by noting that the observed global temperature increase remains comfortably within the 95% envelope of model runs, as shown in the figure below, which is taken from a recent Nature Climate Change paper by Doug Smith.

Now, arguably, the observed temperatures for the last decade or so are tending towards the lower end of the model envelope (note, though, that this figure does not plot the coverage-bias-corrected data from Cowtan and Way, which would raise the final observed temperatures and trends slightly).

Does this then mean that the models “over-predict” warming?

Not exactly.

To understand why the answer is no, we need to consider three issues.

First, it will be noted that occasional brief excursions of observed temperatures outside the 95% model envelope are nothing new—and indeed the most recent excursion occurred when the Earth warmed faster than the models. Such excursions are the result of natural variability and represent short-term disturbances that do not affect the underlying long-term trend.

Second, we need to consider the expected relationship between the models’ output and the observed data. This is a profound issue that is routinely overlooked by media commentators, and it pertains to the common confusion between climate projections and climate forecasts. Climate forecasts seek to predict the climate over a certain range—similar to a weather forecast—by modeling the evolution of the climate from a known starting point and taking future internal variability into account. For example, the UK Met Office publishes decadal forecasts, which are explained very nicely here.

Climate projections, by contrast, seek to describe the evolution of the climate in the long run, irrespective of its current state and without seeking to predict internal variability. The figure above, like all figures that show model output to the end of the century, plots projections rather than predictions. Because projections have no information about the phase (sequence and timing) of internal climate variability, there is no expectation that any particular projection would align with what the Earth is actually doing. In fact, it would be highly surprising if global temperatures always tracked the center of the model projections—we expect temperatures to jiggle up and down within the envelope. To buttress this point, recent work by Mike Mann and colleagues has shown that warming during the most recent decade is well within the spread of a model ensemble.

Finally, we need to consider the reasons underlying natural variability, both in the models and in the planet’s warming trend. One of the major drivers of this variability is the El Niño – La Niña oscillation in the Pacific, which determines how much heat is taken up by the oceans rather than the atmosphere. La Niña conditions favour cooler temperatures whereas El Niño leads to warmer temperatures. The animated figure below from Skeptical Science illustrates this nicely:

 

The figure clarifies that internal climate variability over a short decadal or 15-year time scale is at least as important as the forced climate changes arising from greenhouse gas emissions.
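A quick simulation makes this concrete. The sketch below uses an assumed forced trend and assumed noise parameters—not actual model output—and shows that the spread of 15-year trends generated by internal variability alone is comparable to the forced trend itself:

```python
import numpy as np

rng = np.random.default_rng(42)

forced = 0.02   # assumed forced trend: 0.2 K/decade
years, n_runs = 15, 10_000

trends = np.empty(n_runs)
for i in range(n_runs):
    # ENSO-like interannual noise: AR(1) with persistence 0.5, sd ~0.1 K
    noise = np.empty(years)
    noise[0] = rng.normal(0, 0.1)
    for t in range(1, years):
        noise[t] = 0.5 * noise[t - 1] + rng.normal(0, 0.1)
    series = forced * np.arange(years) + noise
    trends[i] = np.polyfit(np.arange(years), series, 1)[0]

print(f"forced trend: {forced * 10:.2f} K/decade")
print(f"15-yr trends: {trends.mean() * 10:.2f} ± {trends.std() * 10:.2f} K/decade")
# The standard deviation of 15-year trends rivals the forced trend, so
# individual 15-year windows can run well above or below the long-term rate.
```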

Those three issues converge on the conclusion that in order to meaningfully compare model projections against observed trends, the models must be brought into phase with the oceans. In particular, the models must be synchronized with El Niño – La Niña.

The evidence has been mounting during the last few years that when this synchronization is achieved, the models capture recent temperature trends very well.

At least four different approaches have been pursued to achieve synchronization.

One approach relied on specifying some observed fields in the climate models while leaving them “free” to evolve on their own everywhere else. For example, Kosaka and Xie showed that when the El Niño-related changes in Pacific Ocean temperature are entered into a model, it not only reproduced the global surface warming over the past 15 years but also accurately reproduced regional and seasonal changes in surface temperatures. Similarly, Matthew England and colleagues reproduced observed temperature trends by providing the model with the pronounced and unprecedented strengthening of Pacific trade winds over the past two decades—winds that in turn led to increased heat uptake by the oceans.

A second approach involved initialization of the model to the observed state of the planet at the beginning of a period of interest. Meehl and Teng recently showed that when this is done, thereby turning a model projection into a hindcast, the models reproduced the observed trends—accelerated warming in the 1970s and reduced rate of surface warming during the last 15 years—quite well.

The third approach, by Gavin Schmidt and colleagues, statistically controlled for variables that are known to affect model output. This was found to largely reconcile model projections with global temperature observations.

The fourth approach was used in a paper by James Risbey, myself, and colleagues from CSIRO in Australia and at Harvard which appeared in Nature Climate Change today.

This new approach did not specify any of the observed outcomes and left the existing model projections from the CMIP5 ensemble untouched. Instead, we selected only those climate models (or model runs) that happened to be synchronized with the observed El Niño – La Niña preference in any given 15-year period. In other words, we selected those models whose projected internal natural variability happened to coincide with the state of the Earth’s oceans at any given point since the 1950s. We then looked at the models’ predicted global mean surface temperature for the same time period.

For comparison, we also looked at output from those models that were furthest from the observed El Niño – La Niña trends.
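In schematic terms, the selection logic can be sketched as follows. This is a toy re-implementation of the idea with made-up data—the paper’s actual criterion, windowing, and index handling differ in detail:

```python
import numpy as np

def nino34_trend(segment):
    """Linear trend of a 15-year Niño3.4-like index segment."""
    t = np.arange(len(segment))
    return np.polyfit(t, segment, 1)[0]

def select_runs(obs_nino34, model_nino34_runs, k=4):
    """Return the k model runs closest to (in phase with) and the k runs
    furthest from (out of phase with) the observed Niño3.4 trend."""
    obs = nino34_trend(obs_nino34)
    dist = np.array([abs(nino34_trend(run) - obs) for run in model_nino34_runs])
    order = np.argsort(dist)
    return order[:k], order[-k:]

# Usage with made-up data: 38 model runs, one 15-year window each.
rng = np.random.default_rng(1)
obs = rng.normal(0, 1, 15)
runs = rng.normal(0, 1, (38, 15))
in_phase, out_of_phase = select_runs(obs, runs)
print("in phase:", in_phase, "| out of phase:", out_of_phase)
```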

The results are shown in the figure below, which plots the Cowtan and Way data (in red) against model output (the results don’t differ qualitatively for the other temperature data sets):

The data represent decadal trends within overlapping 15-year windows that are centered on the plotted year. The left panel shows the models (in blue) whose internal natural variability was maximally synchronized with the Earth’s oceans at any point, whereas the right panel shows the models (in gray) that were maximally out of phase with the Earth.

The conclusion is fairly obvious: When the models are synchronized with the oceans, they do a great job. Not only do they reproduce global warming trends during the last 50 years, as shown in the figure, but they also handle the spatial pattern of sea surface temperatures (the figure for that is available in the article).

In sum, we now have four converging lines of evidence that highlight the predictive power of climate models.

From a scientific perspective, this is a gratifying result, especially because the community has learned a lot about the models from those parallel efforts.

From another perspective, however, the models’ power is quite distressing. To understand why, just have a look at where the projections are heading.

 

Update 21/7/14, 9am: The date in the post was initially misspelled and should have read July rather than June.