Putting a price on carbon: Why not a carbon tax?

In Australia there has never really been a debate about the merits of particular policy instruments available to governments – price-based or quantity-based ones – to mitigate the effects of climate change.

Climate change mitigation involves reducing GHG emissions, thereby reducing the rate and magnitude of global warming. Many of the impacts of climate change can be reduced or delayed by mitigation. The main economic requirement for effective mitigation is to put a price on carbon.

At the moment we don’t generally pay for the current and future costs of our emissions. Thus, Lord Stern refers to a climate change ‘market failure’. As one economist puts it,

we need to correct this market failure by ensuring that all people, everywhere, and for the indefinite future face a market price for the use of carbon that reflects the social costs of their activities. Economic participants [governments, firms, people] … need to face realistic prices for the use of carbon if their decisions about consumption, investment, and innovation are to be appropriate.

Quantity- or price-based instruments?

The question is whether to rely on quantity-based or price-based instruments. A price-based instrument is a carbon tax. A tax sets a price on carbon, and emitters choose how much to emit; an emissions trading scheme (ETS) sets a total quota for emissions; emitters – the market – work out the price.

An ETS is a quantity-based instrument, the most common example of which is a cap-and-trade system. The Carbon Pollution Reduction Scheme (CPRS), a proposed Australian ETS which was to have begun on 1 July 2011, was never implemented. And national ETS legislation – the Clean Energy Act – has been repealed with effect from 1 July 2014.

Given the failure of emissions trading at the national level and its (now) near-absence at the state level in Australia, we set out here how a carbon tax could work and some of its advantages. New research confirms additional carbon tax advantages.

A carbon tax could begin at a relatively low level (so as to avoid disruption) and increase steadily, and predictably, over time, giving affected corporations an incentive to lower emissions, to use energy more efficiently, and to move to lower-emissions technology.

And while there are a number of points at which to impose a carbon tax, there is some agreement that the simplest and most efficient way is to introduce it as close to the source of the fuel as possible – that is, as far upstream in the energy supply chain as possible.

One result of an upstream approach is that increased costs would be passed along by suppliers and would be borne, ultimately, by consumers; they would be passed into downstream prices of electricity, for example.

It is argued by those on both the left and the right that a carbon tax would provide government revenue which could then be used to reduce or offset other forms of taxation, primarily corporate and personal income taxes, thus making a carbon tax ‘revenue neutral.’ Revenue from a carbon tax could also be used to subsidise alternative fuel industries and projects.

Arguments for a carbon tax

Arguments that can be made for the imposition of a carbon tax, both in isolation and as against an ETS, include the following:

1. Taxation is a proven instrument. Countries have used taxes for centuries, and their properties are well understood. For Yale University’s Nordhaus, such advantages are even clearer when compared to the operation of an international ETS. As he says,

tax systems are mature and universally applied instruments of policy … By contrast, there is no experience – as in zero – with international cap-and-trade systems [although that’s not quite true now] … [I]t would be … perilous for the international community to rely on an untested system like international cap-and-trade to prevent dangerous climate change …

2. Taxes capture revenue more easily than quantitative instruments, and are less costly: tax infrastructure and pre-existing collection mechanisms are already in place, so taxation has lower administrative and compliance costs than carbon trading.

3. Taxation is more direct and transparent than emissions trading (so it’s said), and affords less opportunity for corruption; money moves from polluters directly to the government. And a carbon tax provides price certainty and stability (as opposed to permit price volatility) and a fixed price for carbon emissions across all economic sectors and markets.

The argument for carbon taxation is concisely made by Harvard economist Richard Cooper:

Decisions to consume goods and services made with fossil fuels are made by over a billion households and firms in the world. The best and indeed only way to reach all these decision makers is through the prices they must pay. If we are to reduce CO2-emitting activities, we must raise the prices of those activities. Levying a [tax] … does that directly.

New research – why taxes are better

Recent research from Stanford Law School shows that vital information necessary for the design of cap-and-trade systems is unavailable to those making climate policy and that, while accurate emissions forecasts are needed to set the cap, energy models ‘are not up to the task.’

This is not the case with regard to carbon taxes. The Stanford research shows that such taxes ‘don’t require the same level of information about future emissions in order to create real policies.’ Further,

[c]arbon taxes are likely to produce emissions reductions relative to baseline emissions with greater certainty than a cap-and-trade because a real carbon tax will always create incentives to lower emissions while real cap-and-trade may not.

There is also news from the United States for adherents of emissions trading schemes. Representative Chris Van Hollen of Maryland plans to introduce legislation in the US House of Representatives under which coal, oil and natural gas corporations would purchase permits, at auction, for each tonne of carbon in the fuels they sell. All of the proceeds would be ‘returned straight to the American people as equal dividends for every woman, man and child.’

The New York Times asks whether the bill ‘stand[s] a snowball’s chance in the partisan hell of Washington.’

The answer is no.

The Arctic Sea Ice Bucket Challenge continues with Rich Pancost

Having been nominated by Shapingtomorrowsworld’s Stephan Lewandowsky, the Director of the Cabot Institute has risen to the challenge. Full details and the video are here, and the screenshot below provides an idea of the size of the event:

Rumour has it that Prof Matthew England is working on his response to the challenge.

(Sorry, the Cabot video is not on YouTube and hence cannot be shared easily).

“Libertarian ideology is the natural enemy of science” – always?

The Guardian carried an interesting and incisive piece yesterday under the headline “Libertarian ideology is the natural enemy of science.” From gun control to health care to climate change, there are indeed many arenas in which scientific evidence clashes with libertarian (and conservative) worldviews. To illustrate, the data show that victims of an assault who had a gun available were between 4 and 5 times more likely to be fatally shot than those who did not, yet this evidence is generally dismissed by American libertarians and conservatives. They also dismiss the fact that after Australia introduced stringent gun control in 1996, accelerated declines in firearm deaths were observed.

So not only do libertarians dismiss the problem, they also ignore the solution.

Similarly, the relationship between right-wing politics and the rejection of the overwhelming scientific evidence on climate change is established beyond much doubt, and we all know that this often involves an element of conspiratorial thinking: a particularly colorful illustration of this tendency erupted this week in Australia.

Yesterday’s Guardian piece comes hard on the heels of other bad scientific news for conservative and libertarian ideology: During the past week, we learned that conservatives show higher levels of psychopathic personality traits than non-conservatives, and we also learned that lower intelligence in childhood is associated with increased prejudice later in life, with the mediating variable being greater endorsement of right-wing socially-conservative attitudes.

Ouch.

One likely reason that this relationship with intelligence is observed is that right-wing ideologies offer simplistic and well-structured views of society, which makes them particularly attractive to people who find uncertainty and ambiguity overwhelming. Or, as Wray Herbert put it, “smart people are more capable of grasping a world of nuance, fluidity and relativity.”

Let’s explore that nuance and relativity a bit more.

For people on the political left it may be tempting to use the recent results to equate conservatism with low intelligence, psychopathy, racism, anti-scientism, or nutty conspiratorial thinking. Although those links can be justifiably drawn on the basis of existing data, this package is a bit too neat and simple and itself smacks of the simplification purportedly associated with right-wing ideologies.

So let’s look at the other side of the ledger and look at “the right” in the nuanced manner that is so cherished by the political left:

  • We need to differentiate between libertarianism and conservatism: Although the two are often lumped together (as in the Guardian piece and as I have thus far in this post), and even though the constructs are often highly correlated, recent research has begun to differentiate between libertarians and social conservatives. This differentiation can be crucial, as for example in one of my recent studies that examined attitudes towards vaccinations.
  • Although conservatism is typically associated with more dogmatism overall, the picture changes when people’s “belief superiority” is probed with respect to specific hot-button issues. Belief superiority refers to the belief that one’s position is more correct than someone else’s, and in a recent study it has been found to be associated with political extremity rather than one particular ideology. Specifically, very liberal people were as convinced of the superiority of their views on government help for the needy as highly conservative people were convinced of the superiority of their view regarding voter identification. Less politically committed individuals on both sides of the spectrum showed a more moderate preference for their own opinion.
  • There is some evidence that even racial prejudice might have more to do with an implicit presumption of attitudes than race per se. That is, the greater prejudice against African Americans that is routinely and reliably observed among conservatives might at least in part result from the attitudes that African Americans are presumed to hold—for example, African Americans are known to predominantly endorse affirmative action and welfare, two issues on which conservatives hold strongly opposing views. Thus, black skin does not trigger prejudice, but black skin signals likely attitudes that, in turn, trigger prejudice. To illustrate, in one recent study, when conservatives and liberals had to rate their impressions of an African American who either endorsed or rejected welfare, what was found to matter was the match of attitudes and not race: Liberals’ negative impressions of a conservative African American were indistinguishable from conservatives’ negative impressions of a liberal African American, and correspondingly, both groups’ enthusiasm for African Americans of their own conviction was also indistinguishable. Importantly, this symmetry can co-exist with greater overall prejudice among conservatives—as indeed it did in this study!—which is measured without providing information about a specific person’s attitude, thereby forcing people to rely on inferred attitudes of a target group.
  • Finally, and most relevant to the role of scientific evidence in society, there is the large body of work by Dan Kahan and colleagues which shows that liberals are as susceptible to cognitive shortcuts and biases as their conservative brethren—except that those biases are expressed in different directions. For example, liberals and conservatives will misinterpret hypothetical data on gun control to suit their own biases with equal flourish. (This work remains to be reconciled with the notion that conservatism is associated with lesser cognitive functioning. I am not aware of the existence of any reconciliation, and I consider this issue unresolved and in need of further research).

What, then, is the relationship between scientific evidence and political attitudes? Is libertarian ideology the natural enemy of science?

The answer has three parts: First, conservative and libertarian ideology is undoubtedly at odds with much scientific evidence. Large bodies of solid scientific evidence are being rejected or denied on the basis of ideology, arguably with considerable detriment to society. Second, there is little doubt that liberals and progressives are equally capable of rejecting scientific evidence that challenges their worldviews, using the same well-understood processes of motivated cognition as their conservative brethren. Third, one of the most wicked problems ever to have confronted humanity, climate change, is not being addressed at present because the solutions involve challenges to conservative and libertarian worldviews. Those worldviews are not natural enemies of science; they are enemies of science because of the particular historical context in which conservative cultural cognition expresses itself at the moment.

Responding and Adapting to Climate Change: A Meeting at the University of Bristol

“Uncertainty, uncertainty, uncertainty … so why should we bother to act?”

Who hasn’t heard politicians or media personalities appeal to uncertainty to argue against climate mitigation? And indeed, why should we interfere with the global economy when there is uncertainty about the severity of climate change?

Some 20 leading experts from around the world will be meeting in Bristol late in September to discuss the implications of scientific uncertainty for the proper response to climate change.

This is particularly crucial because in contrast to the widespread public perception that uncertainty is an invitation to delay action on climate change, recent work suggests that scientific uncertainty actually provides an impetus to engage in mitigative action. Specifically, the greater the scientific uncertainty, the greater are the risks from climate change.

This conflict between people’s common perceptions of uncertainty and its actual implications is not altogether uncommon, and there are many situations in which people’s risk perception deviates from best scientific understanding.

The Bristol meeting brings together scientists and practitioners with the goal of (a) developing more effective means to communicate uncertainty and (b) exploring how decision making under uncertainty can be better informed by scientific constraints.

To address the scientific, cultural, health, and social issues arising from climate change requires an in-depth and cross-disciplinary analysis of the role of uncertainty in all three principal systems involved: the physical climate system, people’s cognitive system and how it construes and potentially distorts the effects of uncertainty, and the social systems underlying the political and public debates surrounding climate change.

The results of the meeting will become publicly available through scientific publication channels, with the details to be announced closer to the time of the meeting. In addition, two attendees at the meeting will present public lectures at the University of Bristol:

Friday 19 September, 6:00-7:30 pm. Dogma vs. consensus: Letting the evidence speak on climate change.

In this Cabot Institute public lecture, we are pleased to present John Cook, Global Change Institute, University of Queensland, and owner of the Skeptical Science blog, in what promises to be a fascinating talk.

In 2013, John Cook led the Consensus Project, a crowd-sourced effort to complete the most comprehensive analysis of climate research ever conducted. The project found that among relevant scientific articles that expressed a position on climate change, 97% endorsed the consensus that humans were causing global warming. When this research was published, it was tweeted by President Obama and received media coverage all over the world, with the paper being awarded the “best article” prize by the journal Environmental Research Letters in 2013. However, the paper has also been the subject of intense criticism by people who reject the scientific consensus. Hundreds of blog posts have criticised the results and newspapers such as the Wall Street Journal and Boston Globe have published negative op-eds. Organisations that deny or reject current science on human-caused climate change, such as the Global Warming Policy Foundation in the UK and the Heartland Institute in the US, have published critical reports, and the Republican Party organised congressional testimony against the consensus research on Capitol Hill. This sustained campaign is merely the latest episode in over 20 years of attacks on the scientific consensus on human-caused global warming. John Cook will discuss his research, both on the 97% consensus and on the cognitive psychology of consensus. He will also look at the broader issue of scientific consensus and why it generates such intense opposition.


Tuesday 23 September 2014, 6 pm to 7.30 pm. The Hockey Stick and the climate wars—the battle continues…

In this special Cabot Institute lecture, in association with the Bristol Festival of Ideas, Professor Michael E Mann will discuss the science, politics, and ethical dimensions of global warming in the context of his own ongoing experiences as a figure in the centre of the debate over human-caused climate change.

Dr. Michael E Mann is Distinguished Professor of Meteorology at Penn State University, with joint appointments in the Department of Geosciences and the Earth and Environmental Systems Institute. He is also director of the Penn State Earth System Science Center. He is author of more than 160 peer-reviewed and edited publications, and his books include Dire Predictions: Understanding Global Warming (2008) and The Hockey Stick and the Climate Wars: Dispatches from the Front Lines (2012). He is also a co-founder and avid contributor to the award-winning science website RealClimate.org.


Readers interested in attending the talks should register for John Cook here and for Michael Mann here.

The Joys of Statistical Mapping

Statistical maps, which display the geographic distribution as well as the magnitude of a variable of interest, have become an increasingly common tool in data analysis. From crime rates to forest fires, the geographical distribution of a variable is now routinely represented by coloring a map in proportion to its magnitude. Many different ways to represent magnitude exist, and as I showed together with colleagues some 20 years ago, different plotting techniques can give rise to very different impressions for identical data. Issues such as granularity (i.e., the size of the area used for data display; states vs. counties) or choice of color (e.g., red vs. yellow or purple shading) can affect people’s perception and accuracy.

In the climate arena, maps are routinely used to display temperature anomalies. Typically, shades of red represent positive anomalies (i.e., above-average temperatures) whereas blue is used to represent negative anomalies (below average). This opposing-colors scheme arguably works very well in drawing the reader’s attention to particularly warm or cool areas on the globe.

James Risbey, colleagues, and I published a paper last Sunday that used a set of maps in one of its figures to show the modeled and observed decadal trends (Kelvin/decade) of Sea Surface Temperature (SST). The observations, shown in Figure 5c of our paper, are reproduced below in a virtually identical format. The figure was created with MATLAB, using white around the zero trend and a high-resolution colormap with a low-resolution contour interval:

These data present an opportunity to explore the impact of subtle graphical choices on the observer’s perception of the data.

The next figure shows the same data, also plotted with MATLAB, but with no white zone in the red-blue color bar, and with a coarse colormap that matches the contour interval used.

It will be noted that this figure “runs hotter” than the one we published as Figure 5c: some very small long-term trends are “forced” into a pink band once the white (“neutral”) choice is no longer available.

And one more figure drawn with MATLAB: this time with a white zone around (near-)zero trends, but with no contouring. This shows the raw trends better, but the white zone and the high-resolution colorbar start to change the look quite a bit.

Finally, let’s try different software. The figure below was plotted using Panoply, with the same contour interval and matching colorbar resolution as in the first figure above, which is nearly identical to Figure 5c in our paper.
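For readers who would like to experiment with these choices themselves, here is a minimal Python/matplotlib sketch. It is not the MATLAB or Panoply code used for the published figures: the synthetic trend field, the contour interval, and all other parameter values are placeholders chosen purely for illustration. It renders the same data twice, once with a coarse colormap that includes a white band around zero and once with a fine colormap that does not.

```python
# Minimal sketch (not the code behind the paper's figures): a synthetic SST
# trend field plotted two ways, to show how a white band around zero and
# colormap granularity change the visual impression of identical data.
import numpy as np
import matplotlib.pyplot as plt

# Synthetic "decadal trend" field (K/decade); real data would be read from file.
lon = np.linspace(0, 360, 181)
lat = np.linspace(-60, 60, 61)
LON, LAT = np.meshgrid(lon, lat)
rng = np.random.default_rng(0)
trend = 0.1 * np.sin(np.radians(LON)) * np.cos(np.radians(2 * LAT)) \
        + 0.01 * rng.standard_normal(LON.shape)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(11, 3.5))

# Panel 1: coarse diverging colors with an explicit white band straddling zero.
levels = np.arange(-0.165, 0.166, 0.03)               # 0.03 K/decade intervals
band_colors = [tuple(c) for c in plt.cm.RdBu_r(np.linspace(0, 1, len(levels) - 1))]
band_colors[(len(levels) - 1) // 2] = (1.0, 1.0, 1.0, 1.0)   # force the zero band to white
p1 = ax1.contourf(LON, LAT, trend, levels=levels, colors=band_colors)
fig.colorbar(p1, ax=ax1, label="K/decade")
ax1.set_title("Coarse colormap, white zero band")

# Panel 2: fine continuous colormap, no white band: tiny trends are coloured too.
p2 = ax2.pcolormesh(LON, LAT, trend, cmap="RdBu_r", vmin=-0.165, vmax=0.165)
fig.colorbar(p2, ax=ax2, label="K/decade")
ax2.set_title("Fine colormap, no white band")

plt.tight_layout()
plt.show()
```

In both panels the underlying numbers are identical; only the mapping from value to colour differs, which is exactly the point made by the figures above.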


What conclusions can we draw from these comparisons?

The figures change the appearance of the data considerably. It follows that one should apply considerable caution when comparing figures between different publications or different research groups: Visual differences may not reflect differences in the data but differences in the subtle ways in which the graphs were produced, not all of which can be identified from inspecting the figure alone.

A second conclusion we can draw is that regardless of the specific mapping choices, all of the figures show warming during the last 15 years in the northern and central Pacific, accompanied by cooling in the western and eastern Pacific. As we argued in the paper, this spatial pattern is washed out when the full CMIP5 model ensemble is considered. When models are instead selected with respect to how well they are synchronized with the world’s oceans, those that are in phase with the Earth’s natural variability capture the spatial pattern of ocean heating better than the models that are maximally out of phase.

Well-estimated global warming by climate models

Has global warming “stopped”? Do models “over-predict” warming? There has been much recent talk in the media about those two questions. The answer to the first question is a fairly clear “no.” Global warming continues unabated.

To illustrate, for the coverage-bias-corrected data published by Cowtan and Way last year, the temperature trend for the last 15 years up to and including 2013 is significant—in the same way that the trend was significant for the last 15 years in 2000 and in 1990. So from any of those vantage points, the Earth has warmed significantly during the preceding 15 years.
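For readers who want a feel for how such a windowed trend and its significance can be computed, here is a minimal sketch. The anomaly series below is a synthetic placeholder rather than the Cowtan and Way data, and a serious analysis would also account for autocorrelation in the residuals.

```python
# Minimal sketch: ordinary least-squares trend over a 15-year window and its
# two-sided p-value. The anomaly values are synthetic placeholders, not the
# Cowtan & Way data; a full analysis would also correct for autocorrelation.
import numpy as np
from scipy.stats import linregress

years = np.arange(1999, 2014)      # the 15 years up to and including 2013
rng = np.random.default_rng(1)
anoms = 0.015 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)  # placeholder series

fit = linregress(years, anoms)
print(f"trend = {fit.slope * 10:.3f} K/decade, two-sided p = {fit.pvalue:.3f}")
```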

One thing that has changed since 2000 is that more heat is now going into the oceans—rather than the atmosphere—and at an accelerating pace. Or as Dana Nuccitelli put it recently:

“The rate of heat building up on Earth over the past decade is equivalent to detonating about 4 Hiroshima atomic bombs per second. Take a moment to visualize 4 atomic bomb detonations happening every single second. That’s the global warming that we’re frequently told isn’t happening.”

Let’s turn to the second question: Have models over-estimated the rate of warming? This question has a more nuanced but quite fascinating answer.

We begin by noting that the observed global temperature increase remains comfortably within the 95% envelope of model runs, as shown in the figure below, which is taken from a recent Nature Climate Change paper by Doug Smith.

Now, arguably, the observed temperatures for the last decade or so are tending towards the lower end of the model envelope (note, though, that this figure does not plot the coverage-bias-corrected data from Cowtan and Way, which would raise the final observed temperatures and trends slightly).

Does this then mean that the models “over-predict” warming?

Not exactly.

To understand why the answer is no, we need to consider three issues.

First, it will be noted that occasional brief excursions of observed temperatures outside the 95% model envelope are nothing unusual—and indeed the most recent such excursion occurred when the Earth warmed faster than the models. This is the result of natural variability and represents short-term disturbances that do not affect the underlying long-term trend.

Second, we need to consider the expected relationship between the models’ output and the observed data. This is a profound issue that is routinely overlooked by media commentators, and it pertains to the common confusion between climate projections and climate forecasts. Climate forecasts seek to predict the climate over a certain range, modeling the evolution of the climate from a known starting point and—similar to a weather forecast—taking future internal variability into account. For example, the UK Met Office publishes decadal forecasts, which are explained very nicely here.

Climate projections, by contrast, seek to describe the evolution of the climate in the long run, irrespective of its current state and without seeking to predict internal variability. The figure above, like all figures that show model output to the end of the century, plots projections rather than predictions. Because projections have no information about the phase (sequence and timing) of internal climate variability, there is no expectation that any particular projection would align with what the Earth is actually doing. In fact, it would be highly surprising if global temperatures always tracked the center of the model projections—we expect temperatures to jiggle up and down within the envelope. To buttress this point, recent work by Mike Mann and colleagues has shown that warming during the most recent decade is well within the spread of a model ensemble.

Finally, we need to consider the reasons underlying natural variability, both in the models and in the planet’s warming trend. One of the major drivers of this variability involves the El Niño – La Niña oscillation in the Pacific, which determines how much heat is taken up by the oceans rather than the atmosphere. La Niña conditions favour cooler temperatures whereas El Niño leads to warmer temperatures. The animated figure below from Skepticalscience  illustrates this nicely:


The figure clarifies that internal climate variability over a short decadal or 15-year time scale is at least as important as the forced climate changes arising from greenhouse gas emissions.

Those three issues converge on the conclusion that in order to meaningfully compare model projections against observed trends, the models must be brought into phase with the oceans. In particular, the models must be synchronized with El Niño – La Niña.

The evidence has been mounting during the last few years that when this synchronization is achieved, the models capture recent temperature trends very well.

At least four different approaches have been pursued to achieve synchronization.

One approach relied on specifying some observed fields in the climate models while leaving them “free” to evolve on their own everywhere else. For example, Kosaka and Xie showed that when the El Niño-related changes in Pacific ocean temperature are entered into a model, it not only reproduced the global surface warming over the past 15 years but also accurately reproduced regional and seasonal changes in surface temperatures. Similarly, Matthew England and colleagues reproduced observed temperature trends by providing the model with the pronounced and unprecedented strengthening in Pacific trade winds over the past two decades—the winds in turn led to increased heat uptake by the oceans.

A second approach involved initialization of the model to the observed state of the planet at the beginning of a period of interest. Meehl and Teng recently showed that when this is done, thereby turning a model projection into a hindcast, the models reproduced the observed trends—accelerated warming in the 1970s and reduced rate of surface warming during the last 15 years—quite well.

The third approach, by Gavin Schmidt and colleagues, statistically controlled for variables that are known to affect model output. This was found to largely reconcile model projections with global temperature observations.

The fourth approach was used in a paper by James Risbey, myself, and colleagues from CSIRO in Australia and at Harvard which appeared in Nature Climate Change today.

This new approach did not specify any of the observed outcomes and left the existing model projections from the CMIP5 ensemble untouched. Instead, we selected only those climate models (or model runs) that happened to be synchronized with the observed El Niño – La Niña preference in any given 15-year period. In other words, we selected those models whose projected internal natural variability happened to coincide with the state of the Earth’s oceans at any given point since the 1950s. We then looked at the models’ predicted global mean surface temperature for the same time period.

For comparison, we also looked at output from those models that were furthest from the observed El Niño – La Niña trends.

The results are shown in the figure below, which plots the Cowtan and Way data (in red) against model output (the results do not differ qualitatively for the other temperature data sets):

The data represent decadal trends within overlapping 15-year windows that are centered on the plotted year. The left panel shows the models (in blue) whose internal natural variability was maximally synchronized with the Earth’s oceans at any point, whereas the right panel shows the models (in gray) that were maximally out of phase with the Earth.
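To illustrate the general logic of the selection step, here is a minimal Python sketch. It is not the code used in the paper: the placeholder data, the number of runs retained, and the use of a simple windowed correlation with an ENSO index as the phase-matching criterion are all assumptions for illustration only.

```python
# Minimal sketch of the phase-selection idea (not the paper's actual code):
# within each overlapping 15-year window, rank model runs by how well their
# internal ENSO-like variability correlates with an observed ENSO index, then
# compare the windowed temperature trends of the best- and worst-matched runs.
import numpy as np

WINDOW = 15

def windowed_trend(series, start):
    """OLS trend (per decade) over a 15-year window starting at index `start`."""
    y = series[start:start + WINDOW]
    return np.polyfit(np.arange(WINDOW), y, 1)[0] * 10.0

rng = np.random.default_rng(42)
n_years, n_runs = 60, 38                          # placeholders: 60 years, 38 model runs
obs_enso = rng.standard_normal(n_years)           # placeholder observed ENSO index
obs_gmst = 0.015 * np.arange(n_years) + 0.1 * obs_enso
model_enso = rng.standard_normal((n_runs, n_years))        # each run has its own ENSO phase
model_gmst = 0.015 * np.arange(n_years) + 0.1 * model_enso \
             + 0.05 * rng.standard_normal((n_runs, n_years))

for start in range(0, n_years - WINDOW + 1, 5):
    # Rank runs by the correlation of their ENSO variability with the observed
    # index inside this window (a stand-in for the paper's matching criterion).
    corrs = np.array([np.corrcoef(obs_enso[start:start + WINDOW],
                                  model_enso[r, start:start + WINDOW])[0, 1]
                      for r in range(n_runs)])
    best, worst = np.argsort(corrs)[-4:], np.argsort(corrs)[:4]
    print(f"window {start:2d}-{start + WINDOW - 1:2d}: "
          f"obs {windowed_trend(obs_gmst, start):+.3f}, "
          f"in-phase {np.mean([windowed_trend(model_gmst[r], start) for r in best]):+.3f}, "
          f"out-of-phase {np.mean([windowed_trend(model_gmst[r], start) for r in worst]):+.3f} K/decade")
```

The point of the sketch is only the selection logic: the trends of runs whose internal variability lines up with the observations are the ones that can meaningfully be compared with the observed trend.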

The conclusion is fairly obvious: When the models are synchronized with the oceans, they do a great job. Not only do they reproduce global warming trends during the last 50 years, as shown in the figure, but they also handle the spatial pattern of sea surface temperatures (the figure for that is available in the article).

In sum, we now have four converging lines of evidence that highlight the predictive power of climate models.

From a scientific perspective, this is a gratifying result, especially because the community has learned a lot about the models from those parallel efforts.

From another perspective, however, the models’ power is quite distressing. To understand why, just have a look at where the projections are heading.


Update 21/7/14, 9am: The date in the post was initially misspelled and should have read July rather than June.

The Frontiers Expert Panel

Updated below: 17 April 2014

When Frontiers retracted our paper “Recursive Fury” (available at uwa.edu.au/recursivefury) they were very clear that the journal “…did not identify any issues with the academic and ethical aspects of the study.”

The journal has since issued several conflicting positions, and their latest statement raised a concern about identification of ‘human subjects’ that can only be considered an ethical issue.  

Not only does this latest statement depart from the journal’s previous public stance and signed agreements, but it also deviates from the opinion of Frontiers’ own expert panel, appointed last year to examine the issues surrounding Recursive Fury.

Concerning the subject and consent issue, that expert panel concluded:

 Participant Status and Informed Consent

The question of participant status is an important and complex one.  It turns on the question of whether an individual’s (identifiable or not) postings to blogs comprise public information and therefore do not fall under the constraints typically imposed by ethics review boards.  The issue is currently under debate among researchers and publishers dealing with textual material used in scientific research. Advice was sought from the leading researcher on web-based psychological studies and his response was that “among psychological and linguistic researchers blog posts are regarded as public data and the individuals posting the data are not regarded as participants in the technical sense used by Research Ethics Committees or Institutional Review Boards.   This further entails that no consent is required for the use of such data.”  Although this view is held by many researchers and their ethics boards, it is by no means a unanimous judgment and it is to be expected that legitimate challenges, both on ethical and legal grounds, will be raised as web-based research expands in scope.  But to the charges that Fury was unethical in using blog posts as data for psychological analysis, the consensus among experts in this area sides with the authors of Fury. 
(Emphasis added.)

The consensus among experts is further reflected in the fact that the research was conducted with ethics approval by the University of Western Australia.

The consensus among experts in the area is that scholarly analysis of public speech can be conducted without requiring consent.

The University of Western Australia agreed with this consensus.

Frontiers publicly agreed with this consensus.


Update 17 April 2014:

Some commenters have, quite reasonably, asked me to release the entire expert report. I cannot do so because it is still strictly confidential.

I released the above section of the report because it spoke directly to an issue on which Frontiers made public statements that were irreconcilable with both an agreement they signed and their own expert report. This was done after extensive legal consultation and after inviting the journal to correct its latest public statements. I posted this unabridged relevant section only after the journal declined the invitation to set the record straight.

If it weren’t for these special and legally vetted circumstances, I would have honoured the confidentiality of this report as I have honoured all other agreements. The confidentiality of the remainder of this report remains in full force.

Clarifying a revisited retraction

Frontiers has issued a further statement on the retraction of our paper “Recursive Fury” (available at uwa.edu.au/recursivefury). This statement is signed by their editor in chief. It cannot be reconciled with the contractually agreed retraction statement signed by the journal and the authors on 20th March.

Whereas the agreed retraction statement clarified that the journal “…did not identify any issues with the academic and ethical aspects of the study”, the latest statement raises a concern about identification of ‘human subjects’ that escapes classification as anything other than an ethical issue.

This latest statement renders inescapable the following two conclusions:

  1. As detailed previously, Frontiers made no mention of their concern for human subjects throughout the past year during which they focused exclusively on the risk of defamation. It thus appears that the journal withheld its true concerns from us for a year or that they failed to discover those concerns until recently.
  2. The journal signed a retraction statement that they are now explicitly contradicting.

The latest statement furthermore claims that “all efforts were made to work with the authors to find a solution.” What that statement omits is the fact that we submitted another paper to Frontiers in January 2014 that was completely de-identified and that did not permit anyone to ascertain the identity of the people whose public statements were analyzed.

If Frontiers were concerned about identification of ‘human subjects’, why did they decline publication of a paper that was de-identified and written in compliance with their specific criteria to resolve this issue? The only grounds offered for this declination were continued concerns about defamation.

This declination sits uneasily with the journal’s current public focus on ‘human subjects.’

Whatever might have caused the journal to take multiple and conflicting public positions on their most widely-read paper, the evidence that they were at the receiving end of intimidation and bullying has become impossible to overlook, given the growing number of individuals who publicly claim to have applied that pressure.

The analysis of speech

What constitutes legitimate analysis of speech?

This question has been brought into sharp focus by the most recent position statement that the journal Frontiers put out last Friday. That statement claimed that our paper Recursive Fury (uwa.edu.au/recursivefury) had been retracted because it “did not sufficiently protect the rights of the studied subjects.” This seemingly stands in contrast to the contractually-agreed retraction statement, signed by legal representatives of the journal and the authors, that Frontiers “…did not identify any issues with the academic and ethical aspects of the study.”

It is helpful that the Frontiers affair involved two contrasting ways in which speech was being used by the various participants. Let us therefore analyze those two ways in turn.

The complainant(s).

Although we have destroyed all correspondence and documents involving the allegations against us at the request of Frontiers, and although now, a year later, our recollection of those events is minimal, Graham Readfearn has put something about the allegations into the public domain that has received little attention to date.

Readfearn states that the complaints against us alleged “malice” on the part of the authors in various ways. As far as I understand it, malice is a legal term meaning an improper motive in making a statement; if proved in court, it removes some defenses to charges of defamation.

In the present context, it is most relevant that the accusations of malice against John Cook, one of the authors of Recursive Fury, were based on his apparent sanctioning of “vile commentary” against the complainant and other bloggers.

Indeed, the material cited in support contains irate statements that none of the authors of Recursive Fury would countenance.

None of the authors made those statements.

One will fail to find anything like those comments on Cook’s blog, www.skepticalscience.com: None of the more than 88,000 public comments posted there to date contain anything that could be remotely construed as vitriolic or polemical—that’s because 7000 comments were deleted by moderators owing to their inflammatory content.

So where did the “vile commentary” come from and how did John Cook “sanction” it?

The vile commentary was made by third parties unconnected to Recursive Fury on a private forum that was password-protected, and whose purpose was to permit open and completely uncensored discussion among a small group of collaborators. Those comments were posted in the expectation of privacy, and they became public only through a criminal act—a hack attack on Skepticalscience that has been explored in great forensic depth.

John Cook neither wrote those comments, nor could he be reasonably expected to moderate them. They were made in private and became public by an illegal act by parties unknown.

What John did was to host a private forum on which other people vented their anger. If that is malice, then so would be inopportune comments by your friends at an illegally wire-tapped dinner party. You better censor what your guests say in case your next party is bugged, lest you be accused of malice.

The complainant’s conduct follows a common pattern in the Subterranean War on Science: Use of private correspondence obtained by an illegal act to construct allegations against scientists. Except that in this case, to allege malice against John Cook, hackers trolled through two years of his private conversations and found exactly nothing.

Zip. Zilch. Bupkis.

All the hackers and trolls could find were other parties expressing anger in the expectation of privacy. I cannot think of clearer evidence for the absence of malice in John Cook’s conduct.

I nonetheless think there might be evidence of malice here.

Maybe some readers can spot it.

The authors of Recursive Fury.

Recursive Fury was conducted with ethics approval (of course!) and Frontiers entered into a contractual agreement for the retraction that noted that their review “…did not identify any issues with the academic and ethical aspects of the study”.

And what did Recursive Fury do? It presented a narrative analysis of public discourse in the blogosphere in the aggregate. We did not categorize anyone into anything, we categorized statements.

That’s all.

This is the difference between saying “Joe is a racist” and saying “When Joe and Fred get together in a bar at night their discourse contains racist elements based on application of the following scholarly criteria.” Now, we could have withheld the sources of all those statements, thereby anonymizing the analysis and protecting the identity of those who feel that their public statements are too fragile to survive scholarly scrutiny.

However, we considered this unwise in light of the pervasive allegations against (climate) scientists that they are “hiding data.”

Folks, we did not hide the data.

We made them all available. And they are still here: uwa.edu.au/recursivefury.

By the way, there are ample precedents for this kind of work, including other hot-button issues such as anti-Semitism. Yes, there is a scholarly paper out there that analyzes the public speeches of contemporary Austrian politicians for their anti-Semitic undertones. (I am not linking to that study here, lest the researcher be caught up in the turmoil of requests for his/her data, or requests to destroy the data, or requests to provide ethics approval, or his/her entire email correspondence during the last 13 years.)

Here then is the crucial question about the analysis of speech that arises from the Frontiers affair:

Are public statements by people who knowingly made them in public subject to scholarly analysis? Or is it only stolen correspondence by third parties, made in the expectation of privacy, that can be used to allege malice on the part of someone who never said anything malicious himself?

In Whose Hands the Future?