
The Inescapable Implication of Uncertainty

In a previous post, we saw that uncertainty is not your friend. In a nutshell, if there is uncertainty, things could be worse than anticipated as well as better.

The previous post encapsulated this in a figure, which I show again below because it serves as a departure point for the next installment of our uncertainty analysis:

The figure shows synthetic distributions of climate sensitivity (if you don’t know what that is, begin by reading the previous post). The means of the four distributions in the above figure are identical but their standard deviations differ considerably, from .49 in the top left to 2.6 in the bottom right. The spread of each distribution characterizes the extent of uncertainty surrounding the mean estimate of 3 degrees.

The consequences of this increase in uncertainty are as obvious as they are inescapable: As uncertainty increases, the probability of a truly catastrophic outcome (defined as climate sensitivity exceeding the highly conservative 5°C mark; indicated by red lines in the figure) increases from a small .07% in the top left panel to a concerning 14% in the bottom right. In other words, increasing the standard deviation of our distribution roughly fivefold, from .49 to 2.6, increases the likelihood of catastrophe by a factor of 200. (The word "catastrophe" has been over-used in the climate arena, but a 5°C increase would be truly catastrophic by most accounts, and most climate scientists are very concerned even about the possibility of considerably lower increases.) See update below.

The first conclusion about the climate system therefore has to be that the greater the uncertainty, the greater the potential for catastrophe.

That said, another aspect of the above figure appears to offer comfort at first glance. As uncertainty increases, from the top left to the bottom right panel, the proportion of the distribution towards the lower end of sensitivity also increases as a trivially necessary consequence of keeping the mean constant. The proportion of low estimates (< 2°C) reaches a whopping 42% when the uncertainty is greatest (bottom-right panel in the figure). This seemingly invites an alternative interpretation: With nearly half of all estimates of sensitivity below the ostensibly safe "guardrail" of 2°C, perhaps one could legitimately ignore the upper tail, however fat it gets with increasing uncertainty? See update below.

Whether this gamble is advisable might at first glance appear to be a matter of subjective preference—some people may legitimately think that a 40%+ chance of being safe outweighs a 14% likelihood of catastrophe. As it turns out, however, we can push our analysis further and show that the lower end of the climate sensitivity distribution does not offer the comfort it implies at first glance.

From Sensitivity to Cost

To understand the reasons for this, we must first relate climate sensitivity to the likely damage associated with climate change. Common sense dictates that greater sensitivity translates into greater cost: For example, if sensitivity is low and sea levels rise by only 2 cm, damage will be relatively minimal (i.e., we lose a few meters of beach). If sensitivity turns out to be higher, and sea levels rise by 60 cm, the damage is considerable (i.e., we need to build dams and levees or move people out of harm’s way, all at great cost).

What is even more important than the fact that damage cost increases with climate sensitivity is to ascertain the functional form of that increase: Will a .5°C increase of sensitivity from 2.5°C to 3°C incur the same additional cost as an increase from 3°C to 3.5°C? What about an increase from 4.5°C to 5°C?

It turns out that the precise form of this damage function is subject to debate. However, what does not appear to be subject to debate among economists is the fact that the damage function is convex (e.g., Nordhaus, 2010; Tol, 2011; Weitzman, 2010). “Convex” means that the rate at which damages are increasing with increasing climate sensitivity is itself increasing. This is illustrated in the figure below, using a highly convex (quadratic) function for illustrative purposes.
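For the mathematically inclined, the result that follows is an instance of Jensen's inequality: for any convex damage function $f$ and uncertain sensitivity $X$,

$$E[f(X)] \ge f(E[X]),$$

and for the quadratic function used in this illustration the gap is exactly the variance of the sensitivity:

$$E[X^2] = (E[X])^2 + \mathrm{Var}(X).$$

Holding the mean fixed while increasing the spread must therefore increase expected damage.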

Consider first the top panel of the figure (Panel A). The panel itself contains three smaller panels, and the largest one in the top-right quadrant displays the cost function used for this example. The horizontal panel at the bottom shows the climate-sensitivity distribution from before, with the mean (3°C) highlighted by a vertical blue line. The left vertical panel shows the distribution of expected damage costs.

The damage-cost distribution was obtained by taking each observation in the sensitivity distribution and “reflecting” it onto the vertical axis using the convex damage-cost function. Units on the damage-cost axis are omitted because the figure is not seeking to convey actual dollar values (although economists believe that they can compute those values for future anticipated warming).

The most important message from the above figure arises from comparison of panels A and B.

The only difference between the two panels is the degree of uncertainty associated with climate sensitivity: The mean sensitivity is identical, but the spread (standard deviation) of the sensitivity distribution is greater in Panel B (standard deviation 2.5) than in Panel A (standard deviation .5).

Now consider the consequences of increasing uncertainty on damage costs: Although mean sensitivity is the same across both panels, the average expected damage increases with uncertainty—the mean damage in Panel A is lower than in Panel B. The comparison is made easy by comparing the lengths of the two vertical double-headed arrows, which point to the mean damage in each panel. It is quite clear that the arrow is longer—representing greater expected cost—in Panel B than in Panel A.

In a nutshell, if we expect X°C warming, the expected damage cost associated with that degree of warming is a function not (only) of X but also of the uncertainty associated with our estimate of X—and the greater that uncertainty, the greater the expected damage.
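Readers who want to check this numerically can do so in a few lines. Below is a minimal sketch (Python; my own reconstruction, assuming lognormal sensitivity distributions and a quadratic damage function, so the exact values will not precisely match the figure) that pins mean sensitivity at 3°C, varies only the spread, and reports the mean and spread of the resulting damage:

```python
import numpy as np

rng = np.random.default_rng(1)

def lognormal_with(mean, sd, n=1_000_000):
    """Sample a lognormal distribution with a given arithmetic mean and SD."""
    sigma2 = np.log(1 + (sd / mean) ** 2)   # lognormal shape parameter
    mu = np.log(mean) - sigma2 / 2          # lognormal location parameter
    return rng.lognormal(mu, np.sqrt(sigma2), n)

for sd in (0.5, 2.5):                       # spreads of Panels A and B
    sens = lognormal_with(3.0, sd)          # sensitivity (deg C), mean pinned at 3
    damage = sens ** 2                      # illustrative convex (quadratic) damage
    print(f"SD={sd}: mean damage = {damage.mean():.1f}, "
          f"damage SD = {damage.std():.1f}")
```

With the spread at .5 the mean damage comes out around 9 (in arbitrary units); at 2.5 it rises to around 15, while the spread of the damage grows by more than an order of magnitude, a point taken up in the next section.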

Not only is uncertainty not your friend, but greater uncertainty translates into greater expected loss.

And There is More

And this is just the beginning, because there are a few more points worth making about the above figure: First, not only does increasing uncertainty about climate sensitivity increase the mean expected damage cost (i.e., best mean prediction), but it also increases the uncertainty around that expected damage cost—and that increase in uncertainty is particularly dramatic. Of course, uncertainty surrounding expected damages is highly relevant because it must be taken into account when assessing the total expected risk from climate change. To illustrate, whereas expected (mean) damage increases by “only” around 50% between the two panels, the associated standard deviation (uncertainty) of the damage increases 10-fold.

Another point worth making about the figure is that greater values of climate sensitivity likely translate into quicker evolution of the climate, all other things being equal (e.g., Bahn et al., 2011, Fig. 2). In other words, greater uncertainty about sensitivity (Panel B) not only translates into greater expected damage, but that damage is also likely to arrive sooner rather than later because the rate of temperature increase is greater with greater sensitivity. (This must not be confused with the fact that greater sensitivity may entail a longer asymptotic response time of the climate system; e.g. Hansen et al., 1985. That is, with greater sensitivity the climate system warms more quickly, but because it ultimately reaches a higher temperature, getting to that asymptote may take longer than with lesser sensitivity.)

This is an important point to bear in mind because if the greater damage were delayed, rather than accelerated, economists could claim that its absolute value should be temporally discounted (as all economic quantities typically are; see Anthoff et al., 2009). But if greater damage arrives sooner, then any discounting would only further exacerbate the basic message of the above figure: Greater uncertainty means greater real cost.

To sum up, uncertainty is no one’s friend. Greater uncertainty means things can be worse than you think. And greater uncertainty means you’ll pay more for the damages arising from climate change than if there were less uncertainty. In fact, you may end up paying much more than anticipated.

Uncertainty is no one’s friend.

In the next post in this series, we will examine how uncertainty affects the likely cost of mitigation.

Update 28/3/12: It has been drawn to my attention that the 5°C cutoff for an outcome to be labeled “catastrophic” was too conservative; that is, temperature increases considerably less than that would be associated with outcomes that most people would consider catastrophic. Conversely, limiting temperature increases to 2°C may not be “safe.”

I do not necessarily disagree, but those issues are not central to the point made here: Wherever one places a cutoff above or below the mean, the implications of the fat tails are identical, and it does not matter where exactly the “catastrophe” lurks. The crucial fact is that greater uncertainty translates into greater likelihood of catastrophic (or bad or terrible) outcomes, all other things being equal.

References

Anthoff, D., Tol, R. S. J., & Yohe, G. W. (2009). Discounting for Climate Change. Economics: The Open Access, Open Assessment E-Journal, 3.

Bahn, O., Edwards, N. R., Knutti, R., & Stocker, T. F. (2011). Energy policies avoiding a tipping point in the climate system. Energy Policy, 39, 334-348.

Hansen, J., Russell, G., Lacis, A., Fung, I., Rind, D., & Stone, P. (1985). Climate Response Times: Dependence on Climate Sensitivity and Ocean Mixing. Science, 229, 857-859.

Nordhaus, W. D. (2010). Economic aspects of global warming in a post-Copenhagen environment. Proceedings of the National Academy of Sciences, 107, 11721-11726.

Tol, R. S. J. (2011). The Social Cost of Carbon. Annual Review of Resource Economics, 3, 419-443.

Weitzman, M. L. (2010). What is the "Damages Function" for Global Warming – and What Difference Might it Make? Climate Change Economics, 1, 57-69.

 

Providing Context for GM Foods

The debate on the regulation of GM technology should be placed into a broader context.

The rate of food production needs to double by 2050 to feed the planet's growing population (UN: 2009). Factors such as a changing climate and expanding cities will diminish our arable land and fresh water supply (FAO: 2009). Key natural resources critical for growing the nutritious food we need are becoming ever scarcer (FAO: 2009). At the same time, many people in developed nations are demanding more specialised food types (Ipsos-Eureka: 2010). New markets for specialty foods calling for distinct production methods continue to grow and change, placing demands on the agriculture industry to adapt.

Regardless of your preference for low-GI sugar, organic apples, free-range chicken, cereal high in resistant starch, bio-dynamic bread, or whatever happens to be the best bargain at the time, it is clear that there is increasing pressure on our farmers not only to produce better food in greater quantities, but to produce that food using a broad range of agricultural production systems. It should therefore be evident that blundering on in this context without a clear strategy for Australia's agriculture industry is akin to a child making a debut attempt at soufflé without a recipe and with a rapidly decreasing supply of eggs.

I will argue that such a strategy, whatever its form, should aim for a regulatory system that facilitates the coexistence of different agricultural production systems. On a practical level, for the purposes of this post, coexistence requires the ability to maintain product integrity in the Australian seed and grain supply chain, so that consumers get what they pay for, be it GM, conventional or organic.

With this broader context in mind, I will use this post to explore the relationship between misinformation and regulation, focusing on the debate surrounding the adoption of GM technology in Australia and the overarching issue of coexistence of different agricultural systems. I will first outline the state of media discourse surrounding coexistence, using the example of a dispute in Western Australia where some key factors have come to a head. I will argue that the media coverage of this incident and the broader concepts involved has constructed false dichotomies.  Borrowing from Habermas’ discourse theory of law, I will contend that the illusion of a divided public constitutes a “social power” (Habermas: 1996) from which the process of law making should be kept free. Stated otherwise, false dichotomies skew understanding of actual public perceptions of the wider issues involved in the GM debate, and in turn skew the regulatory environment, taking us further from a solution to the complex problems discussed above.

Media Controversy

The public discourse surrounding regulation of different agricultural production systems has fallen easily into the simplistic characterisations of nature versus human, environmentalist versus corporation, organic versus modern agriculture. These are false dichotomies of the most dangerous kind. Fictitious oppositions like those above have contorted both the law and the discourse surrounding agricultural coexistence.

The threatened litigation between farmers Steve Marsh and Michael Baxter in Western Australia is the most recent opportunity Australians have had to hold a real discussion on the issue of agricultural coexistence. However, this is not how the incident has been constructed; as the sensationalists would have it, Marsh v Baxter is the latest battle in a war between organics and biotechnology.

While it is difficult to find two articles covering the case that contain the same facts, the essence of the situation is that Mr Marsh was decertified by his organic certifying body after the unintended presence of Roundup Ready canola, allegedly caused by wind-borne movement of plants from the Baxter farm, was confirmed on his property. Mr Baxter was found by the Western Australian Department of Agriculture to have complied with all relevant regulations.

An analysis of the coverage of this case in the media reveals, with some notable exceptions, the construction of a war (see e.g. “Organic farmer to sue over GM contamination” ABC News, 13 January 2010; Kate Matthews, “Redman digs in on GM canola”, The Countryman, 10 February 2011). It was reported (see e.g. Alan Dick, The Land, 27 January 2011) that anti-GM lobby groups, along with the organic certification body that removed Mr Marsh’s status, the National Association for Sustainable Agriculture Australia (NASAA), were backing Mr Marsh. Subsequently, the Pastoralists’ and Graziers’ Association (PGA) announced their support for Mr Baxter. Funds for donations were set up by backers to help with each party’s legal costs. According to the coverage, neighbours have been turned against each other, teams have been picked and the supremacy of one ideology over another rests on the outcome of this litigation (see e.g. Elizabeth Farrelly, “Danger lies on the GM Food Road” The Sydney Morning Herald, 10 February 2011).

While most of the media constructs the debate as a war between David organics and Goliath biotechnology, the landscape of public opinion in reality is far more varied and affected by an extensive range of contingencies (Cormick: 2011). According to a 2007 study conducted for the Department of Innovation, Industry, Science and Research on Australian public attitudes towards GM crops and foods, the percentage of Australians 'strongly opposed' to GM foods is only 7%, while those 'strongly for' stand at 9% (Ipsos-Eureka: 2010). These figures reveal a significant contrast between the polarised construction of the debate and the reality of public perception.

Policy and Media

A study conducted for the European Commission in 2001 found that policy decisions are often made according to lawmakers’ perceptions of public attitudes, rather than a comprehension of public attitudes in actuality (European Commission: 2001). Such an approach to policy can only skew outcomes, and in the case of GM technology, has resulted in a confused regulatory environment which not only fails to consider, but actually acts to undermine coexistence.

In the coverage of the Marsh and Baxter dispute, there hasn’t been one particular mainstream media outlet running a clear campaign for or against any single farming system (as was recently shown to be the case with the carbon pricing policy debate in the Australian Centre for Independent Journalism’s 2011 “A Sceptical Climate” report (Bacon: 2011)).

However, the portrayal of the difficulties of regulating the coexistence of different farming systems as a “war” between two distinct and incompatible ideologies has undoubtedly had an effect on the state of the regulatory environment. This influence, I would argue, is particularly evident in the implementation of respective state moratoria on GM crops in 2003, many of which are still in place today, and the current organic standards for tolerance of GM on organically certified farms. Both of these regulatory mechanisms undermine the capacity of different agricultural production systems to coexist. Future regulatory changes need to be developed in an environment free of the influence of a constructed polarised debate (see Habermas: 1996).

Solutions

If there is a solution to our food security woes and the debate on agricultural practices, it doesn't lie in an ideology that excludes all others, nor is it contained in a single farming system, chemical, or technology. We need to re-frame the discussion of GM technology in terms of a holistic approach to agriculture, with a view to formulating the best approach for Australian farmers and consumers. This means realistic regulations that allow farmers information about and access to the tools that will help them to meet the production demands they face, as well as a well-thought-out strategy for the facilitation of mutual acceptance of a range of agricultural systems. This will help us to avoid divisive conflict and threats of litigation like that between Steve Marsh and Michael Baxter. It will also allow Australia to take up the unique opportunity presented by our economic and geographic situation to lead the world in a holistic approach to the threat of food insecurity.

This is not a debate about having to choose one agricultural system over another. It is about recognising that farmers make choices in their production systems and identifying how we can work together to uphold those choices. The challenge of coexistence of different crops, production systems and pest management in agriculture and the supply chain is not new. Different agricultural production systems have been successfully practised around the world in proximity to each other for many years. Any debate on the issue needs to be science based and include all the farming practices that help provide the food, feed and fibre for our world.

Both legislators and the media have a role to play in facilitating this much-needed change. A focus on peer-reviewed studies of the complicated issues in science and law that lie behind a dispute like that of Marsh and Baxter will go a long way towards rationalising the discourse. This will decrease the likelihood that the regulatory system will continue to be unduly influenced by misinformation.

References

UN General Assembly Panel Discussion, “Food Production Must Double by 2050 to Meet Demand from World’s Growing Population” available at http://www.un.org/News/Press/docs/2009/gaef3242.doc.htm

Food and Agriculture Organisation, “How to Feed the World in 2050” (2009), available at http://www.fao.org/fileadmin/templates/wsfs/docs/expert_paper/How_to_Feed_the_World_in_2050.pdf

Dr Craig Cormick, "Understanding the Target Audience for Better Communication" in Mariechal J. Navarro and Randy A. Hautea (eds) "Communication Challenges and Convergence in Crop Biotechnology" (ISAAA: 2011).

European Commission (2001) “Public Perceptions of Agricultural Biotechnologies in Europe research project”.

Ipsos-Eureka Social Research Institute (2010) “Public Attitudes Towards Biotechnology”, available at http://www.innovation.gov.au/Industry/Nanotechnology/PublicAwarenessandEngagement/Documents/AustBioAttitude2010.pdf

Wendy Bacon, “A sceptical climate – media coverage of climate change in Australia”, Australian Centre for Independent Journalism 2011, available at http://imlweb04.itd.uts.edu.au/acijds/investigations/detail.cfm?ItemId=29219

Jürgen Habermas, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy (William Rehg trans., 1996), 21-22.

These views are the author’s own and do not reflect the views of her employer, CropLife Australia or the Australian National University.

Uncertainty is not your Friend

The Australian Future Fund is tasked with delivering high risk-adjusted returns on public funds, such as the Australian Government's budget surpluses, in order to cover the Government's unfunded superannuation liability arising from entitlements owed to public servants and defence personnel.

The Chairman of the Future Fund, David Murray, recently suggested on national TV with respect to climate change that “if we’re not certain that the problem’s there, then we don’t – we shouldn’t take actions which have a high severity the other way.”

This attitude towards uncertainty is not atypical: numerous news commentators have cited uncertainty about the severity of climate change in support of their stance against taking any mitigative action.

In a nutshell, the logic of this position can be condensed to “there is so much uncertainty that I am certain there isn’t a problem.” How logical is this position? Can we conclude from the existence of uncertainty that there certainly is no problem?

This conclusion appears inadvisable for a number of reasons that will be examined in this series of three posts. To foreshadow briefly, there are three reasons why uncertainty should not be taken as a reason for inaction on climate change:

  • Uncertainty should make us worry more than certainty, because uncertainty means that things can be worse than our best guess. Today’s post expands on this point below, by showing that in the case of climate change, uncertainty is asymmetrical and things are more likely to be worse, rather than better, than expected.
  • In the second post, I will show that it is a nearly inescapable mathematical constraint that greater uncertainty about the future evolution of the climate necessarily translates into greater expected damage cost.
  • Finally, the presence of uncertainty does not negate the urgency of mitigative action. There may be uncertainty about our carbon budget—that is, the amount of greenhouse gases we can emit before we are likely to exceed temperature increases that are deemed "safe"—but the implications of there being a budget are that delaying mitigative action will necessarily end up being more costly later.

Uncertainty means things could be worse than anticipated

What does uncertainty mean in the context of climate change? Although the word uncertainty has been used in many different contexts, a good definition is to equate uncertainty with imprecision in our best estimate of the future evolution of the climate.

In particular, I focus on the likely “sensitivity” of the climate, which refers to the expected increase in global temperatures in response to a doubling of CO2 from pre-industrial levels. According to best current estimates, climate sensitivity is anywhere between 2 degrees (Celsius) and 4.5 degrees, with a best estimate of about 3 degrees. Because an eventual doubling of CO2 from pre-industrial levels (~275 ppm) cannot be ruled out, given today’s concentration of ~392 ppm and the on-going increase of 2 ppm per year, this estimate of climate sensitivity is also a good estimate of the temperature increase from pre-industrial levels that we may experience towards the end of the century.
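A quick back-of-the-envelope check of that time frame (assuming, purely for illustration, that the current rate of increase simply continues):

$$\frac{2 \times 275\ \mathrm{ppm} - 392\ \mathrm{ppm}}{2\ \mathrm{ppm/yr}} = \frac{158\ \mathrm{ppm}}{2\ \mathrm{ppm/yr}} = 79\ \mathrm{years},$$

which places the doubling at roughly the end of this century.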

Without going any further, we can already draw one conclusion from this fact: If our best guess of climate sensitivity is 3 degrees, and the uncertainty range is 2-4.5, then things could be worse than expected. We expect 3 degrees but might get 4.5—of course, we could also get as “little” as 2, but we are ignoring the vast majority of possible outcomes if we assume (or hope) that we will “only” get 2 degrees.

So clearly, uncertainty means that things could be worse than anticipated.

But the problem does not end there. There are two additional aspects of uncertainty that we need to consider.

First, we must consider the distribution of climate sensitivity estimates. We know that there is a “best” (mean) estimate, and we know that there is a range of most likely values. But it turns out that climate scientists can do better than that: they can provide a distribution of possible values of climate sensitivity which attaches a probability of occurrence to a range of possible values.

The figure below shows the distribution of climate sensitivity, taken from a paper by Roe and Baker (2007).

For present purposes, the most crucial aspect of the figure is its asymmetry: It has a “fat” upper tail and a fairly skinny lower tail. That is, there is little likelihood that climate sensitivity will turn out to be more than 1 degree less than the best estimate of around 3-3.5 degrees (i.e. there is little chance of it being below 2), but there is a considerable chance of it being more than 1 degree above that value—note how it is not inconceivable that it might be as high as 5 or even 6 degrees.

There are some very good—and intellectually fascinating—reasons for why the climate sensitivity distribution is asymmetric and has a fat upper tail, but they are beyond the present scope. Interested readers may want to consult Roe and Baker (2007) or Roe and Armour (2011). (For a critique of the work by Roe and colleagues, see Zaliapin & Ghil, 2010, 2011. Their critique does not affect the presence of fat tails, only the generality of the reasons underlying their occurrence.)

So uncertainty doesn’t just mean that things could be worse than anticipated—in the case of climate, chances are that things will be worse rather than better than anticipated.

But remember, the problem does not end there. We have to consider one more aspect of uncertainty in climate sensitivity.

This final consideration concerns the effects of the magnitude of uncertainty. All other things being equal, should we be more worried by greater uncertainty or less worried? If scientists had really down-played uncertainty—as some commentators have insinuated—what would the effects be? What if uncertainty is actually greater than scientists think?

The answer is fairly straightforward, and it is illustrated in the figure below using a statistical Monte Carlo simulation.

The four panels in the figure contain synthetic (i.e., simulated) data that are sampled from a “lognormal” distribution—so if you took the log of the values and then plotted them, they would take on the familiar bell shape. I used a lognormal distribution because it has the fat-tail property that we know is an attribute of climate sensitivity.

To make my point, I ensured that the mean of the four distributions is identical (around 3, with a tiny amount of deviation introduced by the simulation). However, the standard deviations (spread) of the distributions differ considerably, from .49 in the top left to 2.6 in the bottom right. The spread of each distribution characterizes the extent of uncertainty surrounding the mean estimate of 3 degrees.

What are the consequences of increasing uncertainty? The consequences are indicated by the red line at the 5 degree mark in each panel. This represents a temperature increase that would indeed be nothing short of catastrophic in light of suggestions that any temperature rise beyond 2 degrees would be extremely difficult to cope with (Richardson et al., 2009).

It is obvious from the figure that the probability of a truly catastrophic outcome increases with uncertainty; from a small .07% in the top left to a concerning 14% in the bottom right. That’s a 200-fold increase in the likelihood of a catastrophic outcome when uncertainty increases from .49 to 2.6.
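For the curious, here is a minimal sketch of such a simulation (Python; my own reconstruction from the description above, with the two intermediate spreads guessed, since only the extremes of .49 and 2.6 are specified):

```python
import numpy as np

rng = np.random.default_rng(42)

def lognormal_with(mean, sd, n=1_000_000):
    """Sample a lognormal distribution with a given arithmetic mean and SD."""
    sigma2 = np.log(1 + (sd / mean) ** 2)
    mu = np.log(mean) - sigma2 / 2
    return rng.lognormal(mu, np.sqrt(sigma2), n)

for sd in (0.49, 1.0, 1.7, 2.6):    # spreads akin to the four panels
    sens = lognormal_with(3.0, sd)  # mean sensitivity pinned at 3 degrees
    # Probability mass beyond the 5-degree "catastrophe" line:
    print(f"SD={sd}: P(sensitivity > 5) = {np.mean(sens > 5.0):.2%}")
```

The extreme cases reproduce the figures cited above: roughly .07% of the mass lies beyond 5 degrees when the spread is .49, and roughly 14% when it is 2.6.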

So uncertainty means things could be worse than anticipated.

Uncertainty in climate evolution means things are likely to be worse, rather than better, than anticipated.

And the greater the uncertainty, the more likely we could be faced with some very serious consequences.

Unlike the Chairman of the Australian Future Fund, the real uncertainty professionals know this, which is why Munich Re, a reinsurance behemoth, has identified climate change as one of the greatest risks facing mankind.

References

Richardson, K. et al., (2009). Climate Change: Global Risks, Challenges & Decisions. Synthesis Report of the Copenhagen Climate Congress. (University of Copenhagen).

Roe, G. H. & Armour, K. C. (2011). How sensitive is climate sensitivity? Geophysical Research Letters, 38, L14708.

Roe, G. H. & Baker, M. B. (2007). Why Is Climate Sensitivity So Unpredictable? Science, 318, 629-632.

Zaliapin, I. & Ghil, M. (2010). Another look at climate sensitivity. Nonlinear Processes in Geophysics, 17, 113-122.

Zaliapin, I. & Ghil, M. (2011). Reply to Roe and Baker’s comment on “Another look at climate sensitivity” by Zaliapin and Ghil (2010). Nonlinear Processes in Geophysics, 18, 129-131.

Climate and Lent

A few years ago I was standing next to a colleague preparing for Sunday church. Someone had just come in and asked us, as the leaders of the service, to ‘pray for rain’ for farmers who, at that stage, were experiencing a protracted drought. Now while I generally encourage people to pray for whatever they want, my colleague was insightful when he later quipped to me, ‘rather than praying for rain we ought to be praying for repentance’. He’d hit the nail on the head in the sense that prayer is really, first and foremost, about changing the human mind and heart rather than trying to change the mind of God (as if God arbitrarily interferes with nature anyway).

Ash Wednesday, celebrated this week on 22nd Feb, is the formal beginning of the 40 day Christian season of Lent, with a focus on repentance, fasting and personal change of life. There’s a common misunderstanding that to repent is all about feeling terribly guilty and (metaphorically) beating one’s self up. Furthermore the nature of ‘sin’ gets reduced to personal/private morality (‘naughty’ things that I do, in the quaint language of the old English Book of Common Prayer). It’s easy to forget that the deeper understanding of ‘sin’ in Christian tradition (also clearly articulated in the Book of Common Prayer) is about a profound rift in relationship – in the fabric of relationship between humans, God and earth.  In Christian thought, all other ‘sins’, whether of commission or omission, can be tracked back to loss of relationship – a lack of caring about the source of our life, our relationships with other people, and our relationship as humans with the earth (out of the ‘stuff’ of which we are created).

To ‘repent’ is not so much to feel guilty and stop eating chocolate for a few weeks, but to have a radical change of mind – to turn around and orient ourselves correctly to the true nature of things. For Christians, to ‘turn to Christ’ is to turn to one who completely empties himself, in love, towards others – to have communal concern over self-concern.  Hence sin, and repentance, is a deeply communal and justice based process.  I believe that we in the church need to spend less time hammering on about issues of private morality (not that these are unimportant) and spend a lot more attention recalling ourselves to the far more challenging issues of communal justice and responsibility. This of course leads us on a direct course to, amongst other issues, how we care for the planet .

This is precisely the point being made by the many church leaders who made climate a key ‘repentance’ issue for Ash Wednesday (22/2/12) services in English churches. As Archbishop Rowan Williams has said, “For the Church of the 21st century, good ecology is not an optional extra, but a matter of justice. It is therefore central to what it means to be a Christian.”  To which is added, by the Bishop of London, Richard Chartres,  “In the 21st century in an interconnected world, practising love of neighbours means that we are committed to mitigate the effects of climate change which will fall disproportionately on the poor and vulnerable in the world and those least able to adapt to rising temperatures and sea levels.”

My hope is that Lenten reflections in churches might be raised to a higher level of conversation. Since repentance always starts with an honest facing up to reality we might begin by standing by our brothers and sisters in the scientific community who help us to see what’s actually going on. We must speak out against outrageous and typically uninformed attacks on scientists.  Having heard the honestly presented data, we must act. For example, a great Lenten ‘discipline’ would be to calculate our household (personal households as well as churches) carbon and environmental footprints and take some significant actions towards reducing our footprints – there are a number of on-line calculators to help with this. Being in Perth I have used the ‘Carbon Neutral’ web site to help me get a handle on our family’s energy consumption.

Taking this kind of action gives a new dimension to the practice of ‘fasting’. It means not just fasting from food (which is good for body and soul) but also fasting from our typically lavish, and immoral, consumption of energy.  Whilst this process is, more than likely, going to require personal financial investment (also good for the soul), the process doesn’t have to be about misery – in fact it can be a fun and energising household project for families. For example, simply taking to the bike and public transport instead of the car opens up all kinds of possibilities, from physical fitness to having the time to read new books. The possibilities are only limited by our own creativity. Happy repenting!

From the Debunking Handbook to "Widerlegen: aber richtig!": The German Translation

The "Debunking Handbook," which John Cook and Stephan Lewandowsky produced in English a few months ago, has since been downloaded more than 465,000 times. Among others, Richard Dawkins and Al Gore have recommended the "Handbook" on their websites.

The "Handbook" has now been translated into German, and we are making it available for download here. The German title is "Widerlegen: aber richtig!"

We are very grateful to our volunteer translators, Bärbel Winkler and Oliver Marchand, for their meticulous and thorough work.

We are delighted that the German version found an echo in the German-language media even before its release. For example, the Süddeutsche Zeitung reported on our work in considerable detail on 1 February.

We quote the opening of that article because it makes clear why debunking is so important today: "Some errors end in death. The misconception to which former South African president Thabo Mbeki fell prey cost, according to several studies, roughly 365,000 people their lives prematurely. His error meant that children, women, and men were advised to take garlic, beetroot, and vitamins to treat their HIV infection. Mbeki and his health minister Manto Tshabalala-Msimang rejected effective antiretroviral drugs. Those medicines could have extended many lives and prevented tens of thousands of pregnant women from passing the HI virus on to their babies. But Mbeki had put his faith in the websites of so-called AIDS denialists, whose adherents claim that AIDS is not caused by the HI virus. Criticism rained down. Scientists around the world were up in arms. At conferences and in publications, Mbeki was showered with arguments and facts. He would not be dissuaded. Every argument bounced off him."

That is precisely the point: Clinging to nonsensical claims can have weighty consequences. From climate change to AIDS, it is extremely risky to ignore the overwhelming scientific evidence.

We hope that "Widerlegen: aber richtig!" can help put public debate on a more fact-based footing.

An English post about the German Debunking Handbook has been posted on Skeptical Science.

Getting to the Truth: Faith or Evidence?

Peter Boghossian is an expert on critical thinking, and in this video lecture he explores the relative merits of faith and evidence as tools to understand ‘the truth’.

 

Jesus, the Easter Bunny, and Other Delusions: Just Say No! from Philosophy News on Vimeo.

Here is what others have said about the lecture:

“In this talk, sponsored by The Freethinkers of Portland State University and published by philosophynews.com, Dr. Boghossian argues that faith-based processes are unreliable and unlikely to lead one to the truth. Since our goal as knowers is to have more true beliefs than false ones, faith, as a process for getting to the truth, should be abandoned in favor of other, more reliable processes. The talk was followed by a question and answer session from the audience.” Paul Pardi, philosophynews

“Dr. Boghossian has constructed a fascinating critique of faith as a reasoning process, orthogonal to religion, and orthogonal to the arguments of Sam Harris who has emphasized the consequences of religious faith. Boghossian’s approach, building from the axiom that people seek to maximize true belief, and minimize false belief, is direct and blunt, and yet encourages contemplation of faith by those who rely upon it as a reasoning process — independent of specific ideas which may be deeply personal to religious practitioners. Approaching faith from this angle is an original contribution to the global discourse, and complementary to The End of Faith arguments from Sam Harris. Peter Boghossian also employs some humor, inviting his audience to jettison faith.” Gary Longsine

Unexpected connections: Income inequality and environmental degradation

Ensuring that natural resources are consumed and waste is produced at sustainable rates represents a major contemporary challenge. Recognition of this challenge resulted in the endorsement in 2000 of environmental sustainability as one of the Millennium Development Goals (MDGs) to be achieved by 2015. However, by 2003 global rates of consumption and waste production were estimated to be at least 25% higher than the capacity of the planet to provide resources and absorb waste (Kitzes et al., 2007), and this rate may have risen as high as 50% by 2007 (WWF, 2010). A vital aspect of achieving sustainability is widespread social change, yet current theoretical knowledge of societal transformation processes is limited. In order to improve nations' environmental performance, a better understanding of the socioeconomic and behavioural forces driving such unsustainable development is required.

There is general agreement that economic growth, as measured by per capita national income, affects environmental quality (Costantini & Martini, 2010). Panayotou (1994) has described this relationship by an inverted U-shaped curve (the Environmental Kuznets Curve, or EKC). From this perspective, as the economy expands, environmental damage increases up to a threshold, above which society demands and can afford environmental remediation and natural resource protection. However, research into the extent to which the EKC succeeds in describing the relationship between economic growth and environmental damage has produced conflicting results. Shafik (1994), for example, found that a number of variables vary monotonically with income, and in different directions; some, such as access to safe water and sanitation, show improvement, while others, such as CO2 emissions and waste generation, indicate deterioration. There is evidence that the richest countries' carbon emissions and resource consumption are continuing to increase beyond sustainable levels (Moran et al., 2008), and some developing countries appear to perform better, environmentally speaking, than developed ones (Emerson et al., 2010). It appears that economic growth may not be ultimately or necessarily beneficial (Lawn & Clarke, 2010; Magnani, 2000) and that other factors make a major contribution to environmental outcomes. In particular, it appears that income distribution, rather than average levels of income, may be a critical determinant of environmental degradation (Jun et al., 2011).
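For concreteness, the EKC is typically estimated in a reduced form along these lines (a generic specification, not the exact model of any study cited here):

$$E_{it} = \beta_0 + \beta_1 Y_{it} + \beta_2 Y_{it}^2 + \varepsilon_{it}, \qquad \beta_1 > 0,\ \beta_2 < 0,$$

where $E_{it}$ is an indicator of environmental degradation for country $i$ at time $t$ and $Y_{it}$ is per capita income. The inverted U peaks at the turning-point income $Y^{*} = -\beta_1 / (2\beta_2)$; the conflicting findings described above amount to disagreement over whether $\beta_2$ is reliably negative at all.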

Income inequality has been extensively correlated with health and social problems: life expectancy, obesity, mental health, drug use, educational performance, teenage births and violence to name a few (Wilkinson & Pickett, 2010). Environmental degradation appears to be another side effect of economic inequality and analyses show there is a negative correlation between income inequality and environmental sustainability (Andrich et al., 2010; Baland et al., 2007; Butler, 2002; Dorling, 2010a, 2010b; Holland et al., 2009; Mikkelson et al., 2007): the higher the income inequality the worse the environmental indicators such as waste production, meat and water consumption, biodiversity loss and environmental composite indices (e.g. ecological footprint).

While research on the relationship between inequality and various social and environmental indicators is extensive, the pathways and mechanisms underlying these linkages are yet to be established. It may be, as Coburn (2004) has argued, that income inequality is itself the result of the basic social, political and economic characteristics of a given society, which also affect the quality of the environment. Characteristics such as the prevailing economic ideologies, cultural values like individualism and materialism, and attitudes toward consumption, work and the importance of protecting the environment are all likely to influence national policies that affect both income distribution and environmental quality. Breznau's (2010) analysis of survey data from five countries showed that those with egalitarian values are the most likely to support government services designed to reduce inequality. Conversely, those who support "economic individualism"[1] and neo-liberal economic policies are the least likely to favour such a role for government (Arikan, 2011). Such attitudes also appear to affect support for pro-environmental behaviours and expenditures. Generally speaking, those who hold pro-market/individualist worldviews do not endorse pro-environmental values and behaviours; the reverse holds true for people who maintain pro-environmental values (Heath & Gifford, 2006; Steg & Sievers, 2000). Significantly for understanding the relationship between inequality and environmental degradation, these values appear to be reflected in actual government expenditure; countries with higher scores on "economic individualism" have lower levels of overall government spending per capita (Arikan, 2011).

Arguing from an economic perspective, Magnani (2000) has speculated that income inequality reduces pro-environmental public spending via a ‘relative income effect’ which causes shifts in the preferences of those with below average incomes in favour of greater consumption of private goods instead of public ones. Using data on public expenditure on environmental protection between 1980 and 1992 in 19 OECD countries, she showed that wider income inequalities were associated with lower environmental expenditure.

Alternatively, Wilkinson and Pickett (2010) argue that inequality heightens consumerism, which makes it harder to contain economic activity within sustainable levels, and present evidence from a number of sources (on savings, debt, bankruptcy rates, spending on advertising, and working hours) indicating that inequality does increase the pressure to consume. According to Wilkinson and Pickett, two mechanisms can be identified by which social inequality leads to greater consumerism: (1) people in more unequal societies are more stressed and anxious, and consumption provides a way to escape those pressures; and (2) consumption by the rich reduces everyone else's satisfaction with what they have, by showing it up as inferior, as less than the best; a great deal of what drives consumption is status competition, which, intuitively, is more intense in unequal societies.

Boyce et al (2007) have proposed that the unequal distribution of wealth and power within countries leads to greater environmental damage by undermining the collective action required for environmental protection. Likewise, it has been shown that more equal societies are more socially cohesive and have higher levels of trust which foster public-spiritedness (Wilkinson & Pickett, 2010). Similar findings have been reported on the relationship between social capital and environmental outcomes (Dulal et al., 2011), presumably because lack of trust and poorer social capital erode the capacity for collaborative action.

It is possible, then, that by inducing lower expenditure on environmental protection, increasing consumerism or reducing collective action, economic inequality affects nations' environmental performance. To date, however, there has been a lack of empirical analyses establishing the potential causal links between income inequality and environmental degradation. We are currently undertaking research which we hope will fill some of these gaps in our understanding.

References

Andrich, M. A., Imberger, J., & Oxburgh, E. R. (2010). Raising Utility and Lowering Risk through Adaptive Sustainability: Society and Wealth Inequity in Western Australia. Journal of Sustainable Development, 3(3), 14-35.

Arikan, G. (2011). Economic Individualism and Government Spending. World Values Research, 4(3), 73-95.

Baland, J.-M., Bardan, P., & Bowles, S. (Eds.). (2007). Inequality, cooperation, and environmental sustainability. Princeton: Princeton University Press.

Boyce, J. K. (1994). Inequality as a cause of environmental degradation. Ecological Economics, 11(1994), 169-178.

Boyce, J. K., Klemer, A. R., Templet, P. H., & Willis, C. E. (1999). Power distribution, the environment, and public health: A state-level analysis. Ecological Economics, 29(1999), 127–140.

Boyce, J. K., Narain, S., & Stanton, E. A. (Eds.). (2007). Reclaiming nature: environmental justice and ecological restoration. London: Anthem Press.

Breznau, N. (2010). Economic Equality and Social Welfare: Policy Preferences in Five Nations. International Journal of Public Opinion Research, 22(4), 458-484.

Butler, C. D. (2002). Inequality and Sustainability. Doctoral thesis, Australian National University, Canberra. Retrieved from https://digitalcollections.anu.edu.au/handle/1885/46233 

Coburn, D. (2004). Beyond the income inequality hypothesis: class, neo-liberalism, and health inequalities. Social Science & Medicine, 58(2004), 41–56.

Costantini, V., & Martini, C. (2010). A Modified Environmental Kuznets Curve for sustainable development assessment using panel data. International Journal of Global Environmental Issues, 10(1/2), 84-122.

Dorling, D. (2010a). Is more equal more green? London: University of Sheffield.

Dorling, D. (2010b). Social Inequality and Environmental Justice. Environmental Scientist, 19(3), 9-13.

Dulal, H. B., Foa, R., & Knowles, S. (2011). Social Capital and Cross-Country Environmental Performance. Journal of Environment & Development, 20(2), 121–144.

Emerson, J., Levy, M., Esty, D. C., Kim, C., Mara, V., & Srebotnjak, T. (2010). 2010 Environmental Performance Index. New Haven: Yale Center for Environmental Law and Policy.

Heath, Y., & Gifford, R. (2006). Free-Market Ideology and Environmental Degradation: The Case of Belief in Global Climate Change. Environment and Behavior, 38(1), 48-71.

Holland, T. G., Peterson, G. D., & Gonzalez, A. (2009). A Cross-National Analysis of How Economic Inequality Predicts Biodiversity Loss. Conservation Biology, 23(5), 1304-1313.

Jun, Y., Zhong-kui, Y., & Peng-fei, S. (2011). Income Distribution, Human Capital and Environmental Quality: Empirical Study in China. Energy Procedia, 5(2011), 1689–1696.

Lawn, P., & Clarke, M. (2010). The end of economic growth? A contracting threshold hypothesis. Ecological Economics, 69(2010), 2213–2223.

Magnani, E. (2000). The Environmental Kuznets Curve, environmental protection policy and income distribution. Ecological Economics, 32(2000), 431-443.

Moran, D. D., Wackernagel, M., Kitzes, J. A., Goldfinger, S. H., & Boutaud, A. (2008). Measuring sustainable development — Nation by nation. Ecological Economics, 64, 470-474.

Mikkelson, G. M., Gonzalez, A., & Peterson, G. D. (2007). Economic Inequality Predicts Biodiversity Loss. PLoS ONE, 2(5), e444. doi: 10.1371/journal.pone.0000444

Pandit, R., & Laband, D. N. (2009). Economic well-being, the distribution of income and species imperilment. Biodiversity and Conservation, 18(2009), 3219-3233.

Panayotou, T. (1994). Empirical tests and policy analysis of environmental degradation at different stages of economic development. Pacific and Asian Journal of Energy, 4(1), 23-42.

Shafik, N. (1994). Economic Development and Environmental Quality: An Econometric Analysis. Oxford Economic Papers, 46(1994), 757-773.

Steg, L., & Sievers, I. (2000). Cultural theory and individual perceptions of environmental risks. Environment and Behavior, 32(2), 250-269.

Wilkinson, R., & Pickett, K. (2010). The Spirit Level: Why Equality is Better for Everyone. New York: Penguin Books Ltd.

WWF International. (2010) Living planet report 2010. Gland: WWF International.  Retrieved September 29, 2011, from http://wwf.panda.org/about_our_earth/all_publications/living_planet_report/

 

 


[1] Economic individualism—a cultural orientation that emphasizes the values of individual autonomy, self-reliance, and achievement, and is associated with support for capitalism, laissez-faire market economics, and a preference for a limited role of government in the economy.

Disasters that Come and Go—But They Will be Back

Who hasn’t heard the phrase “in today’s dollars”? We all know that this refers to the price of goods being adjusted to reflect the passage of time.

For example, if we want to know whether cars have become more expensive since the 1950s, it would make very little sense to compare the sticker price of Holden's classic HR in 1966 ($2,167) to today's Commodore (approximately $36,000) without adjusting for inflation. In fact, the 1966 HR would have cost $24,091 in 2010 dollars, the latest year that the Reserve Bank Inflation Calculator will handle.
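The adjustment itself is just a ratio of price indices between the two years; in this example the calculator is applying a cumulative inflation factor of roughly eleven:

$$\$2{,}167 \times \frac{\mathrm{CPI}_{2010}}{\mathrm{CPI}_{1966}} \approx \$2{,}167 \times 11.1 \approx \$24{,}091.$$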

Still, that’s about $10,000 less than today’s Commodore, so have cars become more expensive?

Comparing prices and costs across time is therefore not entirely trivial and it is easy to be led astray, in particular because any such adjustment is affected by a number of considerations that are not always spelled out. The preceding example adjusted for inflation but it did not adjust for “how much car” you are getting now, as opposed to 45 years ago. And when you consider the evolution of the quality of products over time, even a low-end Barina at $13,990 probably offers more value now (USB and iPod compatibility, wow!) than an HR did in 1966.

But how would one quantify the value of that USB and iPod compatibility? How much “more car” can you purchase now compared to 1966? This is a tricky issue and there are some ingenious solutions but I do not believe anyone would claim that it has been resolved altogether.

And that was the easy part.

Now let’s consider an issue that has caused considerable public and scientific debate, namely whether there is a “signal” from climate change in the recent acceleration of extreme weather events.

Are we experiencing climate change and is the number of recent freak weather events a sign of this increasing climate instability?

This question has no easy answer at the moment.

Let's begin by counting the number of weather-related disasters around the globe during the last 30 years: They are represented by the increasing function in the figure below (taken from Neumayer & Barthel, 2011). There is no doubt that this number is increasing—in fact, the frequency of weather-related disasters has nearly tripled in the last 30 years.

The figure also contains a (virtually) flat line, which represents the number of geophysical disasters—that is, things like earthquakes and volcanic eruptions that are not affected by a changing climate. The fact that this line is nearly flat tells us that the increasing disaster count isn’t an uninteresting consequence of better accounting or reporting. If geophysical disasters had also been increasing dramatically, then that would suggest that the trend was likely due to factors other than climate change, such as better detection of events as technology improved or better reporting as more developing countries had the funds to keep accurate records.

Case closed? Climate change obviously led to an increasing number of weather disasters around the globe?

It is tempting to draw this conclusion but that would be premature. There is another way of looking at these data.

Partialing out Wealth: The Intriguing World of Normalization

Rather than merely counting events, insurance companies and other professional risk analysts are interested in finding out whether insured losses from disasters have increased over time. Considering losses has the additional advantage that, unlike counting disasters, there is no argument over what constitutes a "disaster": if you count events, you need to know what counts as an event (how many uprooted trees does it take for an event to be a bona fide disaster?).

However, when we turn to measuring losses, this is where the problem of “today’s dollars” and how to make comparisons between the qualities of a 1966 HR and today’s Commodore rears its ugly head in full complexity.

Not only do we have to use today's dollars to look at disaster-related damage over time, but we have to account for other factors that might increase insurance losses for reasons unrelated to climate change. For example, more people may move into attractive areas—such as floodplains or coastal strips—that are also disaster prone. Property values may rise faster than inflation, and as people get wealthier, more precious and costly objects are put in harm's way. Insured disaster losses may increase over time for those reasons alone, without any additional contribution from climate change.

The challenge is to account for those other factors by statistical means, thereby removing them from the analysis to permit identification of the net effect of climate change on insured losses. This process is known as normalization.
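In its simplest form, normalization rescales each year's nominal loss by the growth in prices, population, and per-capita wealth since that year. The sketch below (Python) is illustrative only: the three adjustment factors are the ones commonly discussed in this literature, but the function and the index values are my own assumptions, not any specific published method.

```python
def normalize_loss(nominal_loss, year, base_year, cpi, population, wealth_pc):
    """Express a historical disaster loss in base-year socioeconomic terms.

    Each argument after `year`/`base_year` is a mapping from year to an
    index value (consumer prices, exposed population, wealth per head).
    """
    return (nominal_loss
            * cpi[base_year] / cpi[year]                 # inflation: "today's dollars"
            * population[base_year] / population[year]   # more people in harm's way
            * wealth_pc[base_year] / wealth_pc[year])    # more wealth per person exposed

# Illustrative (made-up) index values: a 1980 loss of $1bn becomes ~$5.6bn
# once 2010 prices, exposure, and wealth are applied.
cpi = {1980: 100, 2010: 265}
population = {1980: 1.0, 2010: 1.4}   # relative population of exposed areas
wealth_pc = {1980: 1.0, 2010: 1.5}    # relative insured wealth per person
print(normalize_loss(1e9, 1980, 2010, cpi, population, wealth_pc))
```

Note that every factor in this adjustment runs one way: nothing credits improved warnings, building codes, or fire-fighting capability, which is precisely the omission discussed below.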

It turns out that after normalization, most of the time (though not always; see Barthel & Neumayer, 2012) there is no longer an increase in disaster-associated losses over time. If we adjust the earlier losses to today's figures, similar to the way in which we expressed the price of a 1966 Holden HR in today's dollars, then most researchers have thus far failed to find a climate-associated trend in the data (see Bouwer, 2011, for a review).

This is illustrated in the figure below, also taken from Neumayer and Barthel (2011), which shows temporal trends for global losses from non-geophysical disasters after normalization, using two different normalization techniques. Neither trend line is significantly different from zero.

This is good news, of course, because it means that thus far the insurance industry does not have to cope with additional losses from climate change. (And we don’t have to cope with higher premiums.)

But is it the whole story?

No. And this is where things get to be particularly interesting.

We Don’t Just Get Wealthier—We Also Advance Technologically

Because the moment one begins to “normalize” temporal trends by adjusting for relevant variables, intellectual rigor demands that one include all such relevant variables. That is, it is insufficient to include only variables related to increasing risk exposure (e.g., growing wealth, people moving into disaster-prone areas): One must also consider variables that decrease exposure to risk over time.

For example, today’s building standards render homes far more resistant to wind damage: Your grandfather’s fibro shed may have easily collapsed under a hefty breeze but your beach-front double brick structure is likely to withstand anything short of Hurricane Rambo. Likewise, your great-aunt couldn’t call in the cavalry when her farm was threatened by bush fire, whereas today you might get assistance from a pair of helicopters shortly after reporting a fire.

Most normalization research to date has not accounted for those variables because they are extremely difficult to quantify. (And most researchers have been at pains to point that out; e.g., Neumayer & Barthel, 2011, pp. 23-24.)

This failure to include relevant variables has drastic implications. As Nicholls (2011) noted, it means that the absence of a loss trend rests on the assumption that there have been no improvements (zero, none, zip) in our ability to issue advance warnings of weather hazards, in our building codes, in our firefighting abilities, and so on. (Anthes et al., 2006, and Trenberth, 2010, have made similar points; though see Pielke et al., 2006.)

In effect, normalization research to date largely rests on the oddly inconsistent pair of assumptions that (a) we have built up enormous wealth during the 20th century but (b) did so without any technological advance whatsoever.

So where does this leave us?

The issue of normalization of disaster losses is tricky and cannot be considered resolved. We urgently need research to quantify and account for the effects of better mitigation technology that has—so far—kept losses from disasters manageable compared to historical precedents.

Absent such quantification we can be sure of only one thing: If those variables had been accounted for during normalization, the observed loss trends would have been greater—how much greater is unknown at the present time, but greater for sure.
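A toy calculation illustrates why. Suppose, purely for the sake of argument, that better warnings, codes, and firefighting have trimmed losses by about 1% per year. Undoing that assumed improvement inflates recent losses more than older ones, so a series that looks flat after conventional normalization would in fact be rising:

```python
# Toy illustration with an assumed 1%-per-year mitigation improvement:
# a flat observed series conceals an underlying upward trend once the
# improvement is undone.
years = list(range(1980, 2011))
observed = [100.0] * len(years)   # flat after conventional normalization
improvement = 0.01                # assumed annual mitigation gain

underlying = [obs / (1 - improvement) ** (y - years[0])
              for y, obs in zip(years, observed)]
print(round(underlying[-1]))      # ~135: about a third higher by 2010
```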

We also know that the raw numbers show a dramatic increase over the last few decades, and although it would be inadvisable to attribute all of that to climate change (after all, a hidden variable such as the number of leprechauns that are born on 17 March every year might also be responsible), it seems equally inadvisable to dismiss that trend altogether.

Evaluating Risk and Thinking About the Future

As usual, therefore, we are left with some uncertainty. The normalized trend of disaster losses is certainly underestimating the true state of affairs, and equally, the raw number of disasters comes with its own set of problems that may, if anything, overestimate the trend.

In light of that uncertainty, what could (and should) we conclude?

First, we must acknowledge that risk judgments and tradeoffs are inherently subjective and subject to preferences. Some people don’t get on planes because the risk is too great (for them) while others not only get on planes but jump out of them because (for them) that’s a risk worth taking.

However, that doesn’t mean that evaluating risks is a free-for-all game without any rules: For example, just because there is uncertainty doesn’t entitle one to be certain that there isn’t a problem.

This is a crucial point, so let’s revisit it without the double negatives: uncertainty cannot imply that a problem is certainly absent. So appealing to uncertainty about trends in disasters to conclude that we don’t have a climate problem constitutes a fallacy of reasoning.

And in the context of weather-related disasters, a balanced assessment of risk would probably consider the following additional facts:

  • meteorological records have identified an increasing number of “fire risk” days in Australia. This trend is not affected by the normalization problem because it simply tracks the meteorological conditions that could give rise to disaster losses.
  • the most recent IPCC assessment regarding extreme events has judged it virtually certain that heat extremes will increase, and there is every expectation that heavy precipitation will occur more often in the future as well.
  • the same report predicts with high confidence that there will be further coastal inundation, and instability in mountain regions including landslides.
  • oh, and did I mention that the number of weather-related disasters has tripled in the last 30 years?

It would be unwise to conclude that we definitely don’t have a problem.

References

Anthes, R. B.; Corell, R. W.; Holland, G.; Hurrell, J. W.; MacCracken, M. C. & Trenberth, K. E. (2006). Hurricanes and Global Warming—Potential Linkages and Consequences. Bulletin of the American Meteorological Society, 87, 623-628.

Barthel, F. & Neumayer, E. (2012, in press). A trend analysis of normalized insured damage from natural disasters. Climatic Change.

Bouwer, L. M. (2011). Have Disaster Losses Increased Due To Anthropogenic Climate Change? Bulletin of the American Meteorological Society, 92, 39-46.

Neumayer, E. & Barthel, F. (2011). Normalizing economic loss from natural disasters: A global analysis. Global Environmental Change, 21, 13-24.

Nicholls, N. (2011). Comments on “Have disaster losses increased due to anthropogenic climate change?” Bulletin of the American Meteorological Society, 92, 791-793.

Pielke, R. A. J.; Landsea, C.; Mayfield, M.; Laver, J. & Pasch, R. (2006). Reply to “Hurricanes and Global Warming—Potential Linkages and Consequences.” Bulletin of the American Meteorological Society, 87, 628-631.

Trenberth, K. E. (2010). Fixing the Planet? Science, 330, 1178-1179.

Debunking Handbook: update and feedback

When we published the Debunking Handbook, I have to admit, we completely underestimated the impact it would make. A few days after the launch, it suddenly went viral, with over 150,000 downloads in a single day. This week, it ticked over 400,000 downloads. We always planned that the Handbook would be useful not just for climate myths but for communicators having to deal with any type of misinformation. Nevertheless, it was surprising to see the Handbook featured on websites as diverse as Richard Dawkins and Silobreaker. A website devoted to debunking MLM myths saw it as “useful when debating with brainwashed members of MLM organizations”. A Muslim forum speculated that it “Should be useful when engaging people who believe lies about Islam”. Currently, several educators are looking to integrate it into their curricula.

Here are some excerpts from reviews of the Debunking Handbook:

“I simply cannot believe that John Cook of Skeptical Science and psychologist Stephan Lewandowsky managed, in just 8 pages, to create something as magnificent as their new Debunking Handbook… As someone who teaches science communication, I’m going to recommend Cook’s and Lewandowsky’s handbook to as many folks as I can find.”
Chris Mooney, Desmogblog

“…a must-read summary of the scientific literature on how to extract pernicious myths from people’s minds and restore fact-based knowledge.”
Brad Johnson, Think Progress

“I have to say that Cook and Lewandowsky have done a great job of clearly and succinctly outlining the challenge(s) and providing actionable paths forward to deal with them… In short, consider the Debunking Handbook a must read and a must keep reference.”
A Siegel, Daily Kos

There were also a few criticisms. A science communicator from the University of Western Australia (a colleague of Stephan Lewandowsky) pointed out that in our case study debunking on Page 6, we weren’t practising what we preach by using a graphic that emphasised the myth rather than the core facts. Fair point. So we’ve updated our example debunking and also made a minor tweak to the text on The Overkill Backfire Effect. Click here to download the updated Debunking Handbook.

This blog post was cross-posted from Skeptical Science.

Estivation Renovations

Whereas most species of bears hibernate, Australians indulge in estivation instead, preferring slightly cooler beaches to their offices, especially when university air conditioning is turned down or off.

We at Shaping Tomorrow’s World have therefore not posted anything for about three weeks, having instead focused on an upgrade of the site, which is now ready for 2012.

Here is the list of new features:

  • The login field on the left has been moved up to facilitate leaving comments.
  • Authors’ names now link to their bio pages.
  • All posts by the same author are now accessible from their bio page.
  • The Our Authors button at the top (in the yellow ribbon) now allows instant access to a database of all our contributors, and from there to each author’s posts. We find this new feature particularly useful.

We look forward to resuming regular posts shortly. For overseas visitors, January 26 is Australia Day, and it marks the end of the semi-official estivation period. After that, the air conditioning is back on and academics gradually ramp up to their ~60-hour work weeks.

We wish all our readers a happy 2012.