A simple recipe for the manufacturing of doubt
Mr. McIntyre, a self-declared expert in statistics, recently posted an ostensibly unsuccessful attempt to replicate several exploratory factor analyses in our study on the motivated rejection of (climate) science. His wordy post creates the appearance of potential problems with our analysis.
There are no such problems, and it is instructive to examine how Mr. McIntyre manages to manufacture this erroneous impression.
Our explanation focuses on the factor analysis of the five “climate science” items as just one example, because this is the case where his re-“analysis” deviated most from our actual results.
The trick is simple when you know a bit about exploratory factor analysis (EFA). EFA serves to reduce the dimensionality in a data set. To this end, EFA represents the variance and covariance of a set of observed variables by a smaller number of latent variables (factors) that represent the variance shared among some or all observed variables.
EFA is a non-trivial analysis technique that requires considerable training to use competently, and a full explanation is far beyond the scope of a single blog post. Suffice it to say that EFA takes a bunch of variables, such as items on a questionnaire, and replaces the multitude of items with a small number of “factors” that represent the common information picked up by those items. In a nutshell, EFA permits you to go from 100 items on an IQ test to a single factor that one might call “intelligence.” (It’s more nuanced than that, but it captures the essential idea for now.)
One core aspect of EFA is that the researcher must decide on the number of factors to be extracted from a covariance matrix. There are several well-established criteria that guide this selection. In the case of our data, all acknowledged criteria yield the same conclusions.
For illustrative purposes we focus on the simplest and most straightforward criterion, which states one should extract factors with an eigenvalue > 1. (If you don’t know what an eigenvalue is, that’s not a problem—all you need to know is that this quantity should be >1 for a factor to be extracted). The reason is that factors with eigenvalues < 1 represent less variance than a single variable, which negates the entire purpose of EFA, namely to represent the most important dimensions of variation in the data in an economical way.
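To make the criterion concrete, here is a minimal sketch in Python with NumPy. The correlation matrix is hypothetical (all pairwise correlations set to 0.8, chosen so the numbers resemble the case of five highly intercorrelated items), not the study's actual data:

```python
import numpy as np

# Hypothetical correlation matrix for five highly intercorrelated items
# (all pairwise correlations 0.8; illustrative only, not the study's data).
R = np.full((5, 5), 0.8)
np.fill_diagonal(R, 1.0)

# Eigenvalues of the correlation matrix, largest first.
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
explained = eigvals / eigvals.sum()

print(eigvals)    # first eigenvalue ≈ 4.2; the remaining four ≈ 0.2 each
print(explained)  # ≈ 0.84: the first factor alone carries 84% of the variance

# Eigenvalue > 1 rule: retain only factors that carry more
# variance than a single standardised variable.
n_factors = int(np.sum(eigvals > 1))
print(n_factors)  # 1
```

With numbers like these (compare the 4.3 and 86% reported for the actual climate items), the eigenvalue rule points unambiguously to a single factor.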
Applied to the five “climate science” items, the first factor had an eigenvalue of 4.3, representing 86% of the variance. The second factor had an eigenvalue of only .30, representing a mere 6% of the variance. Factors are ordered by their eigenvalues, so all further factors represent even less variance.
Our EFA of the climate items thus provides clear evidence that a single factor is sufficient to represent the largest part of the variance in the five “climate science” items. Moreover, adding further factors with eigenvalues < 1 is counterproductive because they represent less information than the original individual items. (Remember that all acknowledged standard criteria yield the same conclusions.)
Practically, this means that people’s responses to the five questions regarding climate science were so highly correlated that they reflect, to the largest part, variability on a single dimension, namely the acceptance or rejection of climate science. The remaining variance in individual items is most likely mere measurement error.
How could Mr. McIntyre fail to reproduce our EFA?
Simple: In contravention of normal practice, he forced the analysis to extract two factors. This is obvious from the R command line shown in his post.
In this and all other EFAs posted on Mr. McIntyre’s blog, the number of factors to be extracted was chosen by fiat and without justification.
Remember, the second factor in our EFA for the climate items had an eigenvalue much below 1, and hence its extraction is nonsensical by this criterion (as it is by all other criteria as well).
But that’s not everything.
When more than one factor is extracted, researchers can rotate factors so that each factor represents a substantial, and approximately equal, part of the variance. In R, the default rotation method, which Mr. McIntyre did not override, is Varimax rotation, which forces the factors to be uncorrelated. As a result of the rotation, the variance is split about evenly among the factors extracted.
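The effect of Varimax rotation on a forced second factor can be sketched as follows. This is a standard SVD-based varimax implementation in Python, not the exact routine R's factanal calls, and the loading matrix is made up for illustration:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of a factor-loading matrix
    (standard SVD-based algorithm; a sketch of the idea only)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    crit_old = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion.
        grad = loadings.T @ (
            rotated**3 - rotated @ np.diag((rotated**2).sum(axis=0)) / p
        )
        u, s, vt = np.linalg.svd(grad)
        rotation = u @ vt
        crit_new = s.sum()
        if crit_new < crit_old * (1 + tol):
            break  # converged
        crit_old = crit_new
    return loadings @ rotation

# Hypothetical unrotated loadings: one strong general factor plus a
# weak second factor (eigenvalue well below 1), as in a forced
# two-factor solution.
L0 = np.array([
    [0.9,  0.2],
    [0.9,  0.2],
    [0.9, -0.1],
    [0.9, -0.1],
    [0.9, -0.2],
])
L1 = varimax(L0)

# Sum of squared loadings per column = variance attributed to each factor.
# An orthogonal rotation preserves the total but redistributes it,
# making the forced second factor look more substantial than it is.
print((L0**2).sum(axis=0))  # heavily lopsided before rotation (≈ 4.05 vs 0.14)
print((L1**2).sum(axis=0))  # spread far more evenly after rotation
```

The rotation cannot create variance; splitting it between a dominant factor and a junk factor merely manufactures the appearance of a meaningful second dimension.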
Of course, this analysis is nonsensical because there is no justification for extracting more than one factor from the set of “climate change” items.
There are two explanations for this obvious flaw in Mr. McIntyre’s re-“analysis”. Either he made a beginner’s mistake, in which case he should stop posing as an expert in statistics and take a refresher in Multivariate Analysis 101. Or else he intentionally rigged his re-“analysis” so that it deviated from our EFAs, in the hope that no one would see through his manufacture of doubt.
Comments 251 to 300 out of 568:
And, I might add, claiming they know better than me what connotations I mean when I use a term, connotations that are incompatible with the context and far better explained by ... you know, the actual context and current common usage within that specific context.
...it could mean denial of almost anything depending on the context
Almost right. As Lotharsson illustrated so plainly, denial of 'whatever' is context-dependent. It means denial of whatever is the subject. Not almost anything, but precisely whatever is being denied. (Eg denial of the definition of 'denial'.)
We observe the advancement of the postulate that people are using the term "denier" in the context of climate science, NOT as the obvious and commonly understood shorthand for something like "climate science denier", but rather deliberately trying to invoke an association with "Holocaust Denier" - perhaps even whilst retaining "plausible deniability".
So I wonder, does this postulate count as conspiratorial ideation? Or do we first have to inquire of those who advance this postulate whether they think that individuals who decided to adopt a specific idiom for ulterior motives that happen to match a whole bunch of other people's ulterior motives ONLY ever made the decision on their own or did some of them agree to make it in groups of two or more, before we can categorise it as a tentative case of conspiratorial ideation vs paranoid ideation vs perhaps some other form of suspicion (apparently based on mind-reading) with a different formal name?
"...denial of the definition of 'denial'."
And furthermore denying that the term is often used as an accurate descriptor, insisting instead that it is always and everywhere an insult (which is implied to be inaccurate).
Lotharsson - not really a conspiracy ideation. As you well know, using 'that word' was instruction no. 32A in S.XXXIV of Volume 345 of the handbook of 'How to Talk Like a Warmist' handed down at the seventy-sixth and a half One World Government convention in 1876. That particular instruction has not been changed since. (Looking at the list of attendees at that particular convention, there was a representative from Oxford who, legend has it, was informally instructed to ensure the proper definition got enshrined in a dictionary.)
Inability to contain an impulse to misconstrue (Eg through cherry picking, omitting critical explanations etc) could be seen as symptomatic of the pathology under discussion, referred by Kahan et al as 'motivated cognition' and by Lewandowsky et al as 'motivated rejection'.
Some people are more severely afflicted than others.
"What motivates you to reject this?"
I'm not seeing anyone reject that, apparently because they've understood the paper a whole lot better than you.
I'm seeing them reject any implication of causality from (their definition of) "scientific literacy" to assessing less risk from climate change - if only because that appears to be falsified by the paper. Or to put it another way, it looks like you are rejecting the authors' key finding - that cultural factors are far more predictive, and even appear to be sufficiently strong to account for the "overall" correlation that you cherry-pick.
Furthermore the paper does not claim to find causality for the key finding, and readers should note that it is a US-only study and the test of "scientific literacy" was pretty basic. For example, European responses and/or tests of deeper scientific literacy, actual scientific competence - or even professional levels of climate science competence - might produce different results.
Ian, your attitude toward admitting to denying that humans are causing climate change is overly defensive and morally and intellectually defective. As a bunny whose family lost many, and the lucky of whom survived the late unpleasantness, your attempt to couple your being called on a lack of understanding of science to a human-caused human tragedy is frankly offensive.
If you doubt Eli, of course, you may consult Micha Tomkiewicz, who has both direct experience and a much more judgemental view of your ilk.
@Brad - as you'll see from the link I gave my reference was to Kahan et al 2012: The polarizing impact of science literacy and numeracy on perceived climate change risks (Nature Climate Change) - your quote is not in the abstract of that paper.
You have quoted from the abstract of Kahan et al 2011: The Tragedy of the Risk-Perception Commons: Culture Conflict, Rationality Conflict, and Climate Change - which appears to be a preliminary version of same.
Here is the full abstract of the paper from which you quoted: I have bolded your quote and also bolded the section immediately after it. I appreciate that you may not have been able to see that you cherry-picked and therefore misrepresented the study, but an inherent inability to not misrepresent is still misrepresentation.
The conventional explanation for controversy over climate change emphasizes impediments to public understanding: limited popular knowledge of science, the inability of ordinary citizens to assess technical information, and the resulting widespread use of unreliable cognitive heuristics to assess risk. A large survey of U.S. adults (N = 1540) found little support for this account. On the whole, the most scientifically literate and numerate subjects were slightly less likely, not more, to see climate change as a serious threat than the least scientifically literate and numerate ones. More importantly, greater scientific literacy and numeracy were associated with greater cultural polarization: respondents predisposed by their values to dismiss climate change evidence became more dismissive, and those predisposed by their values to credit such evidence more concerned, as science literacy and numeracy increased. We suggest that this evidence reflects a conflict between two levels of rationality: the individual level, which is characterized by the citizens’ effective use of their knowledge and reasoning capacities to form risk perceptions that express their cultural commitments; and the collective level, which is characterized by citizens’ failure to converge on the best available scientific evidence on how to promote their common welfare.
Dispelling this “tragedy of the risk-perception commons,” we argue, should be understood as the central aim of the science of science communication.
@ Sou, this is in Kahan et al.'s abstract:
"On the whole, the most scientifically literate and numerate subjects were slightly less likely, not more, to see climate change as a serious threat than the least scientifically literate and numerate ones."
What motivates you to reject this?
It's not monofactorial, but this is what you're trying to twist it into.
Look at figure 2 of Kahan et al. Look at the second panel. Look at how increased education does different things for Hierarchy-Individualism people as opposed to Egalitarian-Communitarianism people.
The objective science indicates that global warming increases risk to humans and to the biosphere. Provide Egalitarian-Communitarianism people with a good education and they reflect the understanding of science in their own increased perception of risk. Provide Hierarchy-Individualism people with a good education and they use that education to twist the understanding of science in a manner that decreases their perception of risk.
And the latter group twist their perceptions of risk downward more than the former group raises theirs upward.
Kahan et al only says what you want it to say when you take factors out. You are in effect doing the same thing as those who engage in the logical fallacy of removing the warming trend from the temperature record and then examining the residuals for evidence of warming.
To phrase it bluntly, education enhances one's prejudice...
If one's prejudice is for rationality, a good education will increase an acceptance of the rational, scientific indication of the risks of climate change.
If one's prejudice is for individuality, a good education will increase the capacity to engage the cognitive contortion/self-delusion required to dismiss the scientific indication of the risks of climate change. In other words, such people can become better at fooling themselves when educated.
Get it? Kahan et al is reflecting the existence of a mechanism to avoid the conflict between the desire to maintain a selfish focus on the one hand, and acknowledging rational truth on the other. Education is giving those sorts of people a way to reorganise facts in their minds such that they rationalise the internal conflict in a direction away from what the science actually says, with the end result that they record a lower perceived risk.
As an addendum to my previous post, it would probably be useful to have said:
"In other words, such people can become better at fooling themselves when educated in circumstances where fact conflicts with ideology."
"Even when I'm so obviously right, the believalist reflex obliges you to disagree with me."
Er, no, you're not obviously right, which makes your other claim moot:
"You must have missed the bit where Sou calls my true statement a "misrepresentation" and some other hapless interloper says it's "wrong again.""
Wrong on that point too.
Firstly, Sou is correct, as is Bernard when he called you "wrong again" and proceeded to explain that you were wrong because you misrepresented what the paper found. Their earlier comments explain in some detail how you are cherry-picking and the gap between the false impression your cherry-pick gives as compared to the paper itself.
Secondly, having established your misrepresentation, apparently it is necessary to point out that calling out misrepresentation of a paper is not the same as rejecting a finding within the paper! Conflating the two is overly simplistic and fallacious. Perhaps that level of persistent miscomprehension or maybe merely illogic (on this and other issues) explains a significant portion of the disagreement you have with other commenters.
"I quoted one of its findings to make a point: that there is no basis for calling climate non-alarmism "rejection of science.""
Then you might want to consider how your conclusion does not follow from that study on even basic logical grounds. (Hint: try to consider what scenarios, if true, would falsify your "proof".)
And that's before one digs in to the definition of "science literacy" in the paper, which is nothing like competence to either perform or assess scientific work.
Then there are the other studies that show that those most competent in climate science are the most concerned...which rather undermine your point. Ignoring those studies whilst quoting the one you do is yet another form of cherry-picking.
It hasn't been referred to as 'motivated rejection' for nothing.
From what I see around the traps, the greater the motivation (to reject climate science) the more vocal (and less rational) the rejection.
Wrong again, Brad.
This finding helps explain why what you call 'non-alarmists' reject climate science.
As Kahan et al show and as Bernard and Lotharsson explained rather clearly, it's related to the 'world view' or personal level of comfort.
Some people are able to not just ignore or be blind to facts, but blatantly reject facts as has been amply demonstrated here, if the facts make them sufficiently uncomfortable. I figure it could be a protection mechanism.
I took a couple of basic stats courses in the mid 60's. I didn't get very good with stats but I still have Ruff's book.
And here we go again McIntyre is now claiming that the free market questions are also fraudulent. Steve, come back when you have something more than supposition. Comparing what Prof. Lewandowsky did and what Watts did is not appropriate since Watts let everyone know how they should answer in advance. This "pump-priming" automatically makes the Watts survey invalid.
> the best arguments of the skeptics are mutually contradictory
Lest it has somehow escaped people's attention, from at least post #293 onward Brad Keyes is displaying the very thing that Kahan et al report in their 2012 paper - that something he finds ideologically unpalatable is the same thing that forces him to rationalise the thing away.
You'd think that after LOG12 he'd have learned the lesson...
A.Scott, you'd better fire up the surveying again - you have another adverse finding to sweep under the carpet.
Yes, Elvis is in the building (#303) and pointing out the ramrod straight logic of LOG12 which, using the taxpayers' money, has shoved itself into the uncomfortable regions where denial of human caused climate change lives.
Good job Prof. L.
"This is the finding that makes it problematic for Sou et al. to refer to climate non-alarmism as "rejection of science."
Once more, no. Saying it again still does not make it so. As has been pointed out, and you ... er, reject, ... the paper you cite undermines your claim.
Firstly, you might want to consider whether you are conflating "rejection of [climate] science" (as is implied in this thread's context) with the "[non-climate] science literacy" of the paper, and if so whether that invalidates your claim that a relationship based on the latter poses difficulties for certain claims about the former.
Secondly, comments above provide a number of lines of evidence and logic that undermine your claim - including the fact that you're pushing a proposition that relies on ... er, rejecting ... the import of the strongest observed relationship in favour of a rather weak aggregate one. And as you have frequently done on this website you've largely rejected or ignored the points from those earlier comments without showing that they do not hold. It's an interesting phenomenon.
Thirdly, your argument appears to be presuming that which you seek to prove - that non-alarmism is justified by the scientific evidence, despite it being difficult to argue that the best inference from all the scientific evidence leads to lack of concern. Furthermore Kahan et al clearly do not tackle this question, so it's a little odd to be citing them as supporting your assertion.
Fourthly your argument appears to imply that Kahan et al found a causal relationship between "scientific literacy" and "non-alarmism". However Kahan et al do not make that claim - and no doubt would have, had they had the data to justify it.
Fifthly, do you really want to justify your "non-alarmism" on your "scientific literacy" as defined by Kahan et al? That is based on eight multiple choice questions, seven of which have true/false answers. You seem to think they are talking about college or professional level scientific understanding. This is not true.
Here are the eight questions (PDF):
The center of the Earth is very hot [true/false]
All radioactivity is man-made [true/false]
Lasers work by focusing sound waves [true/false]
Electrons are smaller than atoms [true/false]
Does the Earth go around the Sun, or does the Sun go around the Earth?
How long does it take for the Earth to go around the Sun? [one day, one month, one year]
It is the father’s gene that decides whether the baby is a boy or a girl [true/false]
Antibiotics kill viruses as well as bacteria [true/false]
Feel free to assert that these provide a good measure of one's ability to understand climate science and its case for concern! Whilst I think Kahan et al is interesting, it goes nowhere near assessing subjects' ability to comprehend the scientific case for concern.
Sixthly, given that simplistic level of "scientific literacy", and those other relationships I mentioned where those with professional level post-graduate scientific competence in the relevant fields tend to be more concerned, you might also consider what Dunning and Kruger might have to say about your apparent assertion that basic "scientific literacy" extends to understanding climate science well enough to validly derive a "climate non-alarmism" position as the best inference from all the evidence.
But don't let me stop you digging yourself in while you're enjoying it so much ;-)
"Bear in mind that Kahan et al.'s results are equally amenable to the opposite "explanation": i.e. that communitarians accept non-facts because they make them comfortable."
Your claim is false for any common understanding of "equally", and further misrepresents the paper.
Firstly, that would require climate "alarmism" to be non-factual - which IIRC Kahan et al don't presume, and based on your performance on these threads is a difficult proposition to support. Secondly, that relationship ("communitarians" to alleged "non-facts") is so weak in the case where it runs in the opposite direction to the relationship for "individualists" that you can't assert a positive relationship due to the strong overlap in confidence intervals. (Although that apparently hasn't stopped you asserting it.) And if you were referring to the other relationship, that one is much stronger for "individualists" than "communitarians", so you would be arguing that they are more likely to accept non-facts due to their value systems. I suspect you don't want to go there.
Thus "equally" is not at all supported by those observations - and neither is your proposition sans "equally", due to the aforementioned large confidence interval overlap.
Furthermore, you'd want to have strong support for a claim that a false serious concern makes a whole bunch of people "more comfortable", given that concern and comfort are generally considered to be at least orthogonal if not downright antagonistic.
"I'm not aware of another example in history that suggests that such a mechanism even exists in the normal population."
IIRC such a mechanism is an uncontroversial part of modern understanding of psychology. Heck, even "you're in denial" is common idiom outside of psychology. Wonder why?!
re: 300 rumleyfips
Most know journalist Darrell Huff's "How to Lie With Statistics" as a simple, classic book on the mis-use of statistics to fool people, and how to recognize it. I first read it in high school.
More complex statistics can be a dandy tool to cause confusion, because they rapidly take many readers beyond their expertise, and whether done well or not, can create confusion. In climate anti-science, there seems to have been movement away from physics-based arguments (which often run afoul of conservation laws and other sophomore physics) towards statistics-based arguments that have less obvious problems. For instance, in #203, I wrote:
'You don't have to go into statistical problems, as per DC's Replication and Due Diligence... or Nick Stokes on selection, i.e., wrong parameters and 1:100 cherry picks.'
The 1:100 cherry-pick ought to be understandable to most. The "wrong parameters" issue takes more work, and it surely helps to have some background in stochastic processes/time series or at least enough statistics background to learn it quickly.
But back to Huff. Sadly, he used his skill in explaining statistics as a shill for the tobacco industry. Hence, he and his book offer two kinds of lessons: the obvious ones in the book and the not-so-obvious one behind the scenes. Of course, climate anti-science inherited much from cigarette anti-science.
More on #203, or: false citation, flat-earth-maps, dog astrology journal
First, Errata: Willard points out 2 broken links, fixed:
See comments here and here.
and edited transcript version, p.6
Bill Connolley added new comments, the first since 2005, to The Significance of the Hockey Stick. McIntyre fixed the 1995/SAR problem.
So, that false citation is gone (although not from the various books where it is also found), but flat-earth-map and dog astrology journal remain, and as noted in #203, both were used often thereafter.
Thus far Brad Keyes has focused on the term "scientific literacy" in the Kahan et al paper. Readers of the paper will note that they rolled up their scientific literacy and numeracy responses into a single variable. Tom Curtis points out that the average respondent did not do very well on that variable:
- A full 28% wrongly thought the Sun goes around the Earth;
- Just 32.4% knew how long it takes the Earth to go around the Sun;
- Only 28% got a question of basic arithmetic derived from a given probability correct;
- A mere 21% figured out the value of double a given risk;
- Just 12% figured out a simple arithmetic question couched in English;
- A tiny 3% could do the Bayesian probability question.
Clearly most of the sample don't have the skills to reliably evaluate mainstream scientific and contrarian arguments about basic probabilities, and science, let alone statistics and climate science.
These fairly dismal results alone ought to give people pause should they be tempted to argue that "climate non-alarmism is based on the scientific evidence, as indicated by the relationship between this kind of scientific literacy/numeracy and non-alarmism". The sample clearly consists almost entirely of people who lack the competence to assess the kind of claims Brad Keyes is referring to when he cites this paper, and it seems odd that Brad would wish to associate himself with this sample via a shared assessment of climate risks.
@301 John Sully - has A Scott or McI provided the data yet?
Given that Watts' poll on his own website (just after the AScott survey IIRC) showed 99% of his readers reject climate science, this poll should reflect the same, all things being equal - or be even more skewed toward science rejectors (since the avowed purpose of the poll was to discredit the Lewandowsky et al paper, proportionally even more 'skeptics' would be expected to respond in their enthusiasm).
If the AScott survey is markedly dissimilar in 'denier demographics' to the WUWT readership poll then the difference needs to be explained.
How many other population tests has McI conducted? Is he assuming the WUWT survey population is flawed or is he alleging the Lewandowsky et al survey population is askew? If he is saying one is "right" and one is "wrong" - on what basis is he saying that? (eg has he selected a third party 'gold standard' comparison demographic/variable/factor?)
Also, does McI say how he compensated for having what I gather was a mangled Likert scale on the A Scott survey? (I believe someone said it was a five point scale with the neutral and don't know categories combined into one). How did he handle the extremes and how did he handle the mid-point option?
Looks as if McI says some of the differences are statistically significant but he hasn't tested it?!
Has he compared the results to findings in the broader literature - eg the recent favourite (Kahan et al)?
Finally, has McI done the EFA and SEM to determine if the difference in respondent demographics makes a difference to the analysis and results? (Remembering that this is not a poll to determine the relative proportions of 'rejectors' to 'acceptors' of science in any particular population - it's a study to test hypotheses relating to motivation/association of ideas within each group.)
Until AScott puts up the data, McI is just dangling a carrot to the Wattsians, trying to get more traffic to his blog and giving Watts something to write about (all of which are sorely in need of a boost at the moment - their denial is showing).
/joke - Has anyone started a campaign yet - to complain to their employer and funding body (eg about misusing IP, hiding the data and code), and to send McI/AScott FOI requests and emails and blog comments chanting "We want the data, release the code, show us every step of your workings, give us your emails!!!" /endjoke
That's 32.4% of 72%, which is even worse.
So basically the science questions should be answerable by anyone who made it through middle school (in US terms) and the numeracy questions should be answerable by anyone who made it through high school (again US terms) algebra. Not exactly a high standard, although the Bayesian one took a bit of reasoning to get right (90%, right?).
From my #310: If the AScott survey is markedly dissimilar in 'denier demographics' to the WUWT readership poll then the difference needs to be explained.
Explanations that are worth testing (albeit difficult to test) would include that many 'skeptics', particularly the more extreme ones, avoided answering the survey because they:
a) didn't trust it, didn't know AScott from Adam, thought it could be a plot :) (as indicated in some of the comments on the WUWT survey request thread)
b) wanted to avoid skewing the results in their direction, given their views on the subject.
/cont from #312
Other explanations for difference in demographics (if they exist) that could be hypothesised are:
c) a number of 'skeptics' posed as 'warmists' in the AScott survey
d) the WUWT poll does not reflect the readership distribution and WUWT has a higher proportion of 'warmist' readers than indicated by the poll.
Sou at #312.
e) responded by admitting their disbelief of climate science, but not admitting that they subscribed to the conspiracy theories. I know of at least two denialists who answered thusly. This is of course a derivative of (b) where the motivation is the same, but the action is different.
Which leads me to consider your point (c). Do the responding pseudo-'warmists' subscribe to conspiracy theories, or do they eschew them? If the former, then the survey is even more compromised/scammed; if the latter, the results will lend support to the original LOG12 results.
Whether people 'scamming' the survey makes a significant difference or not can be tested in a number of ways. The proportions of rejectors vs accepters matters little (although I maintain that it's worth comparing the profile of respondents with the WUWT poll conducted at the same time).
For example - do the findings (eg in terms of analysing the constructs re free market vs rejecting climate science) differ markedly from findings in other related surveys? If not, then there is no reason to think that 'scammers' have unduly influenced either survey. If so, then further investigations could proceed to see why the AScott survey differs from others that surveyed similar things. (We already know that the Lewandowsky survey is consistent with findings of related surveys.)
Surveys are not perfect beasts; neither are one-on-one interviews. 360-degree interviews/surveys were an attempt to overcome the tendency of people to misreport, whether intentionally or inadvertently. These are subject to bias as well, but the 'triangulation' does help to some extent. Lewandowsky et al described how they assessed the span of results. McI has not so far done so. His headline for his latest article shows that he is very biased himself, assuming on no grounds other than his own 'thoughts' that any differences between the two sets of responses mean that the one he doesn't like is 'wrong'. (McI uses the emotive term "deception" to play to his target audience.)
"That's 32.4% of 72%, which is even worse."
No, it's 45% of 72% = 32.4%. But still bad enough.
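A quick arithmetic check of that correction, using the figures quoted upthread:

```python
# 72% knew the Earth goes around the Sun; of those, 45% also knew
# that the trip takes one year. The product recovers the 32.4%
# whole-sample figure quoted upthread.
orbit_correct = 0.72
period_given_orbit = 0.45
both_correct = orbit_correct * period_given_orbit
print(round(both_correct, 3))  # 0.324, i.e. 32.4% of the whole sample
```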
Interesting comments on use of the term denier. Whatever you may or may not believe, the comment policy of this blog specifically states "Comments using labels like 'alarmist' and 'denier' are usually skating on thin ice." I rather think that supports my comment to Bill on his use of the term.
@- Brad Keyes
" On climate concern, the difference between the individualists and the communitarians is that the former changed their mind with more information/education, whereas the latter stuck to their politically-predicted views no matter how much math/science they were exposed to"
Kahan et al did not test whether views change when more information is supplied.
But there has been much research on this point, and the common finding is that those with a free-market, individualistic, authoritarian ideology are much less likely to change their opinions when faced with contradictory evidence than communitarian liberals.
But feel free to post a link to research that does indicate the opposite, if you can find any.
"...is that the former changed their mind with more information/education..."
Firstly unless I missed something huge, it's not a longitudinal study! It's a survey of a bunch of people at a single point in time.
Secondly, again unless I missed something huge, there was no survey of how much information participants had already consumed relevant to the risk questions.
Thirdly, the maximal amount of education that can be deduced from these questions is insufficient either to understand how complex science is done or to evaluate the work of scientists and of those claiming they are bullshitting the public. So you can't rely on that factor to deduce that sufficient information to actually understand the issues is driving the difference in beliefs.
Fourth, just as before, your explanation that "one group is immune to information" - apart from being exploded by the points above - presumes that which you seek to prove and have dismally failed to demonstrate.
Fifth, it is a logical fallacy to claim that "one group is responding to [measures of rather unrelated] 'education' by changing its views [on complex questions for which sufficiently advanced and relevant education was not measured]".
But I'm sure you still believe the study demonstrates that non-alarmism simply does not and cannot be based in rejection of climate science. (Kahan is on the phone and wants to talk to you ;-)
Sou (#315) Your comment about Steve McIntyre "assuming on no grounds other than his own 'thoughts', that any differences between the two sets of responses means that the one he doesn't like is 'wrong'" is perhaps a little unusual, as this seems to be a trait exhibited in many of your posts.
@ Ian #322 - Oh? Examples?
My personal opinions should be plain enough. When putting forward something as factual, or something that others contend, or something other than a personal opinion, I usually provide a reference to support what I write. (eg Oxford Dictionary.)
Happy to provide references if I've missed one - just let me know.
Sou in response to your invitation in #323 here are a couple of your recent comments.
#310 Until AScott puts up the data, McI is just dangling a carrot to the Wattsians, trying to get more traffic to his blog and giving Watts something to write about (all of which are sorely in need of a boost at the moment - their denial is showing). I'm surprised the moderator let this through as it is very close to slander.
#315 (McI uses the emotive term "deception" to play to his target audience.) How can you possibly know this? The comment attributes base motives to another person with no evidence to support the direct claim you make.
Re Ian's post #325 (in response to my invitation #323, which was in response to Ian's allegation at #322) and my post #310 - To clarify to the reader who might not recognise that this is my personal opinion, I didn't write this part: I'm surprised the moderator let this through as it is very close to slander. I'll speculate that this is Ian's personal opinion, but that's just my opinion (in case anyone misconstrues).
Re my post #315 - again my opinion (in case people aren't aware). Anyone who agrees with Ian or thinks that a headline "More Deception in the Lewandowsky Data" is not playing to an audience, given the body of the article doesn't support the headline, would be free to offer an alternative explanation.
Isn't this little sideshow a very pleasant distraction from the OP? Not unlike the semantic struggle on previous pages :)
I have to admit, Sou, I too find the sideshow pleasant. I think credit goes to this blog in that it facilitates comment and response. With regard to the semantic discussion, I think that the Comments Policy of this blog, which specifically identifies the term denier and indicates its use is not appropriate, is more telling than any semantic arguments about what that term does or does not mean.
Sou at #315.
Whether people 'scamming' the survey makes a significant difference or not can be tested in a number of ways.
And indeed I suspect that there will be cognitive psychologists rubbing their hands in glee and just waiting for this phenomenon to play out. There's a veritable smörgåsbord of data being trucked in by those who deny human-caused climate change: I can discern at least three projects arising, not the least of which is a more detailed investigation of this peculiar phenomenon where deniers of climate change seem determined to prove the conclusions of papers that focus on their cognitive processes, even as these same deniers imagine that they're doing exactly the opposite.
Speaking of which... Brad Keyes, you seem determined to dig your hole ever deeper. You might be convincing yourself, but everyone else here with a more rational grasp of facts can see the cognitive scotoma that afflicts you so. However, as I've said before, please do continue - it's absolutely fascinating for us to watch.
Need to read the Comments Policy properly in context Ian. The section of the Comments Policy that refers to the word "denier" is under the header "No ad hominem attacks". So "denier" should not be used as an ad hominem. That doesn't mean that one can't use the term in appropriate context.
That section also states "For example, comments containing the words 'religion' and 'conspiracy' tend to get deleted." However in discussing the Lewandowsky paper it's rather likely that the word "conspiracy" will be much used since "conspiracy theories" is an element of the subject of the paper. Despite the "Comments Policy" these posts haven't been deleted.
Context is everything. It's obvious that the numerous posts discussing the meaning of the term "denier" or otherwise using the term in a non ad hominem manner have not been skating on thin ice, since these haven't been deleted or moderated!
Bernard J #328, I thought the critics of L. et al (regardless of what they thought about climate change) were on the side of good science, and on pretty solid ground IMHO.
If this is what passes muster in cognitive psych I hope none of you are allowed to practice clinically.
I thought the critics of L. et al (regardless of what they thought about climate change) were on the side of good science, and on pretty solid ground IMHO.
As you say, that's your opinion.
It doesn't mean that you're right.
For that, you also need the weight of evidence, and the empirical evidence from many sources suggests that Lewandowsky et al have nothing to fear from your humble opinion.
Is there anything resembling evidence from those of the denialist persuasion? Nothing leaps out...
Bernard - some might speculate (and some blog commenters have - but in the style of the Rabett I'll add 'not I'!) that this paper was but the first stage of a larger study :D
Whether fanciful conspiracy theory or not, it does look like there is a wealth of material for further research here on this website and around the traps. Not just in regard to this particular research either (deflection, distraction, re-definition and other techniques of blog commenters - as old as the first newsgroups and bulletin boards).
" I thought the critics of L. et al were on the side of good science, and on pretty solid ground IMHO."
You might find it difficult to substantiate and support that humble opinion.
@- "If this is what passes muster in cognitive psych I hope none of you are allowed to practice clinically."
I would await with interest your justification and evidence for the clear implication that the authors would be a danger to patients if they apply their findings in clinical practice.
Bernard J #331, I think you are confusing two ideas here.
The first is testing L. et al against the standards of the scientific method, which is a broadly accepted canon. Among a number of deficiencies, L. et al fails this test simply because they draw universal conclusions based on a single case (-snip-).
Moderator Response: Inflammatory snipped.
izen #333, refer to my #334 for a basic failing.
On our second point, I don't think I'd want to be treated by someone who treated all clients based on something they'd seen in a particular, idiosyncratically selected sub-population.
Chris (#329) Here's the context of my comments on use of the term denier. In post #240 I commented on the H. pylori story, saying "Yes I am skeptical of those who blindly accept that what is published must be right. For many years duodenal ulcers were regarded as being due to excess gastric acid but are now, thanks to Barry Marshall and Robin Warren, known to be due more to Helicobacter pylori". In response I got this (#242): "Also, it's common for 'deniers' to bring up Helicobacter to 'prove' we're about to discover AGW isn't 'real'". That seems pretty much like an ad hom labelling me as a denier. This is what the policy actually says under the heading of No ad hominem attacks: "Comments using labels like 'alarmist' and 'denier' are usually skating on thin ice."
Ian, I've already pointed out that my observation was just that, an observation - and the internet is rife with examples. It was illustrative of my contention that Helicobacter research (a study of a specific organism) has little relevance to climate science (a vast area of science encompassing multiple disciplines, with a consilience of findings).
If you decided to adopt the label for yourself I can only say that's not how I intended it. If you are saying you accept climate science - just say so. I don't know what your views on climate science are. I only know your views on the definition of 'denier'.
HAS at #334.
It has been repeated ad infinitum on many threads here that LOG12 explicitly notes the limitations of the survey.
It staggers me that you and others persist in the meme that it attempts to apply its findings to all circumstances, that its authors draw "universal conclusions" - this is untrue. Permit me to remind you:
Our respondents were self-selected denizens of climate blogs. One potential objection against our results might therefore cite the selected nature of our sample. We acknowledge that our sample is self-selected and that the results may therefore not generalize to the population at large. However, this has no bearing on the importance of our results - we designed the study to investigate what motivates the rejection of science in individuals who choose to get involved in the ongoing debate about one scientific topic, climate change. As noted at the outset, this group of people has demonstrable impact on society and understanding their motivations and reasoning is therefore of considerable importance.
[Emboldened emphasis mine]
The only thing failing any test is your capacity to read.
As to the tone of my writing, it's difficult to be patient with a group of people who have demonstrated over years of public discourse that they are not prepared (or equipped) to engage rationally with the science that they claim to understand as fraudulent or incompetent - and especially so when they do so with little or no scientific training or experience of their own. And when they're as obtusely recalcitrant to understanding the facts, as you have above demonstrated yourself to be, it really is a bit rich to be expected to be treated with kid gloves.