One of the things I find particularly striking about the field of climate science is the way rigorous debate about so many diverse issues co-exists with a ubiquitous consensus on the fundamental fact that greenhouse gas emissions from our economic activities are warming the planet. Indeed, of all the silly things that have been said about the climate by political operatives and others who cannot accept the 150-year-old physics of greenhouse warming for ideological reasons, perhaps the silliest is the claim that scientists do not agree about those fundamental physics.
Anyone who believes that has obviously never been to a meeting of the American Geophysical Union (AGU). I have attended multiple times, and the idea that there is disagreement about greenhouse warming among domain experts is simply wrong. There is no kinder way of putting this: the consensus is not a matter of opinion, it’s a matter of fact. And the fact is that I have never heard anyone at an AGU meeting dispute that greenhouse gases are a major contributor to the observed global warming during the last 30-50 years. Nor are there any debates about greenhouse warming during those meetings—as is easily ascertained by perusing the conference program.
Given that recognition of the expert consensus is a gateway belief that determines the public’s attitudes toward climate policies, and given that informing people of the consensus demonstrably shifts their opinions, it is unsurprising that attempts continue to be made to deny the existence of this pervasive expert consensus.
Like other forms of disinformation, this denial of the expert consensus impinges on the public’s right to be adequately informed about the risks it is facing. It is therefore potentially ethically dubious. However, disinformation also provides an opportunity for agnotology—that is, learning from the analysis of mistakes and misrepresentations.
The essence of our article is encapsulated in the figure below, which shows the expert consensus—measured as the percentage agreement on the fundamental premise that the planet is warming from greenhouse gas emissions—across a large number of studies published during the last decade (for the coding of the observations, refer to the original article).
It is clear that as expertise increases, so does the consensus. And the greater the precision of the data (represented by smaller uncertainty bounds), the higher the consensus.
And if you want to know more, here is a concise four-minute summary of the results of our study:
@Mike H. Thanks :-). You are anticipating an argumentative strategy that is actually quite revealing: to first deny that there is a consensus, only to then claim that it does not prove anything, is incoherent. If there is no consensus, it does not matter what it would or would not show. This incoherence is not an isolated occurrence: https://www.opendemocracy.net/conspiracy/suspect-science/stephan-lewandowsky/alice-through-looking-glass-mechanics-rejection-of-climate-science.
The chart this post says the "essence of the article is encapsulated in" is terrible. There are fundamental questions about the data points: the Supplementary Material for this paper says the various "consensus estimates" were assigned an "expertise" value qualitatively, with no rubric provided and no explanation of how the values were chosen. Still, that pales in comparison to a much larger issue: the scaling of the chart is nonsensical. Only five "expertise" categories were used, and one of them (category 4) had no estimates assigned to it. That means if we add lines to show where the various categories fall, we get something like what you see in this image.
The highest category not only covers the entire "Higher" half of the chart; it extends into the "Lower" half as well. Then there is one category which isn't shown at all, so the scaling skips over it entirely. There is no sensible way to interpret that.
On top of that, the order in which the various "consensus estimates" appear within any particular category is completely arbitrary. You could shuffle them however you wanted, and the chart would be no more or less correct; the only difference would be how visually appealing the result looked.
In other words, this chart is meaningless and cannot be interpreted in any coherent way, and portraying it as showing a signal is misleading.
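The ordering point can be sketched in a few lines of code. The numbers below are made up for illustration (they are not the paper's data); the sketch just shows that permuting the points within an ordinal category leaves every category-level summary unchanged, so a within-category ordering carries no information:

```python
# Hypothetical consensus estimates (percent agreement), grouped by the
# five ordinal "expertise" categories discussed above. The values are
# invented for illustration; they are not the paper's actual data.
estimates = {
    1: [45, 60],
    2: [70, 78, 82],
    3: [88, 90],
    4: [],                    # the empty category the comment points to
    5: [91, 94, 97, 99],
}

def category_means(data):
    """Mean consensus per category; None where a category is empty."""
    return {c: (sum(v) / len(v) if v else None) for c, v in data.items()}

# Within a category the plotting order is arbitrary: any permutation of
# the points leaves every category-level summary unchanged.
shuffled = {c: list(reversed(v)) for c, v in estimates.items()}
assert category_means(shuffled) == category_means(estimates)
```

Any apparent trend produced by choosing a particular within-category ordering is therefore a presentation choice, not a feature of the data.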
On the one hand, I appreciate a professor publicly defending his PhD student, particularly one as troubled as Mr Cook. On the other hand, there is a point at which you have to admit that the student is incapable of producing methodologically sound research.
Curious that a purveyor of econometric foolishness worries about others. Richard's problem, based on long observation, is that he never subjects his mathturbation to thought. Common sense, as it were; but Richard enjoys dividing by zero, since you can get any result you want. A nice example of this, which led to his leaving Ireland, was an analysis of the benefits of being unemployed. That one was amusing because, among other things, he assigned costs of working to the poor which far exceeded their income.