Two articles in Psychological Science appeared online today: the first, by Ruth Dixon and Jonathan Jones, presents an alternative analysis of two papers that I published with colleagues in 2013 on the role of conspiracist ideation in the rejection of science. The second is a rejoinder to Dixon and Jones, authored by me together with Gilles Gignac and Klaus Oberauer.
We are very pleased that, more than two years after the first article in Psychological Science became available, a critical commentary on our work was submitted for peer review. This is the type of scientific debate that moves the field forward, unlike some of the other responses to our work, detailed here, that have only wasted our time and that of other university staff without anything to show for it.
So how did the two sets of articles that appeared today move the field forward? Opinions will likely be divided on this question, but in our view the alternative analyses of our data by Dixon and Jones underscored the robustness of our original results.
The case that we make in our rejoinder is somewhat nuanced, but in a nutshell Dixon and Jones showed that the correlation between endorsement of conspiracy theories and the rejection of climate science (they did not consider the other scientific propositions that were of interest in our research) becomes statistically non-significant if the data analyst makes particular choices. We do not question their result per se (we obtain the same results if we apply their technique), but we argue that the choices that are necessary for this correlation to “disappear” are inadvisable.
- Dixon and Jones (D&J from here on) dismiss one of our studies because the sample was skewed. We agree that the sample was skewed, but because we were aware of this aspect of our data, we used a data analytic technique—an ordinal rank-based structural equation model—that was unaffected by the presence of the skew. We therefore do not believe that a wholesale dismissal of the study is justified.
- D&J did not model the full variance-covariance structure of the data, as we did in both studies, but resorted to bivariate linear regression involving a single pairwise relationship between two variables while ignoring all others. Linear regression is known to be susceptible to attenuation through measurement error, and it is therefore not surprising that D&J report associations that are only 1/3 the magnitude of those in our measurement-error-free analysis. Our view is that if techniques exist for precise assessment of relationships, then those techniques should be preferred over tools that are known to reduce the signal-to-noise ratio.
- D&J further removed 35 respondents from our second study with a representative sample because they responded “neutral” on all items involving the two variables of interest. This has no effect on the association between endorsement of conspiracy theories and the rejection of climate science in the structural-equation model (that’s the tool that is known to be free of measurement error), but it does render the bivariate correlation non-significant. In our opinion, the removal of participants on ad hoc grounds (i.e., looking at the data to identify observations that are deemed unsuitable) represents one of the degrees of freedom available to researchers that has recently been identified as a potentially “questionable research practice”, and we believe that the elimination of observations must therefore be approached with great care. This is particularly true in the case of our study, whose sampling plan was “de facto” pre-registered: that is, we contracted with a professional survey company (Qualtrics.com) to obtain a representative sample of Americans of a pre-determined size, and to include only those respondents who completed all items and passed an attention filter question. Those were a priori constraints on our sampling plan, and once we obtained the data, we analyzed them all. Of course, one can now stipulate any number of criteria to eliminate observations, but any such ad hoc elimination can be critiqued as a potentially questionable research practice.
- D&J furthermore reverse the role of the dependent and independent variables in their bivariate analysis, whereupon the observed association disappears. To clarify, instead of predicting the attitude towards science from conspiracist ideation, they predict the latter based on the endorsement of climate science. Now, the fact that the results differ between those two statistical models is not surprising because the two models answer different questions. The question we asked is: Do people with a relatively stable disposition to endorse various conspiracy theories (we know that those cognitive attributes are quite stable) tend to reject established scientific propositions? We have a good theoretical reason for asking this question. A person prone to accept conspiratorial thinking may find it easier to explain away the scientific consensus as arising from a conspiracy among scientists. Indeed, this possibility is supported by a large body of research. The question asked by a statistical model reversing the direction of prediction is: Do people who reject established scientific propositions tend to endorse all sorts of conspiracy theories? We see no theoretical rationale for why one’s attitude towards specific scientific claims should influence one’s general tendency to endorse thematically unrelated conspiracy theories. D&J provide no theoretical rationale for their reversal of the roles of predictor and predicted variable either.
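The attenuation point above can be illustrated with a short simulation: when two latent variables are measured with error, the observed bivariate correlation shrinks by roughly the square root of the product of the two reliabilities (the classical attenuation formula). The specific numbers below (a true correlation of 0.35, scale reliabilities of 0.6, 100,000 simulated respondents) are illustrative assumptions, not values from either paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000          # simulated respondents (large, to keep sampling noise small)
r_true = 0.35        # assumed latent correlation (illustrative only)
reliability = 0.6    # assumed reliability of each observed scale

# Latent scores with the true correlation
cov = [[1.0, r_true], [r_true, 1.0]]
latent = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Observed scores = latent + noise, with noise variance chosen so that
# var(latent) / var(observed) equals the target reliability
noise_sd = np.sqrt((1 - reliability) / reliability)
observed = latent + rng.normal(0.0, noise_sd, size=latent.shape)

r_observed = np.corrcoef(observed[:, 0], observed[:, 1])[0, 1]

# Classical attenuation formula: r_obs ≈ r_true * sqrt(rel_x * rel_y)
r_predicted = r_true * np.sqrt(reliability * reliability)

print(f"true r = {r_true}, observed r ≈ {r_observed:.2f}, "
      f"formula predicts ≈ {r_predicted:.2f}")
```

With these assumptions the observed correlation lands near 0.21, i.e., well below the true 0.35, which is why a latent-variable model that corrects for measurement error will generally report larger associations than a raw bivariate regression on the same data.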
In summary, we are pleased that our work has finally been critiqued in the appropriate forum—namely, the peer-reviewed literature. We profited from this exchange because it forced us to consider our data in new and different ways, and in so doing we were able to show how robust our results are to a number of choices that data analysts might legitimately make. It is only under a fairly specific concatenation of such choices—all of which we believe are sub-optimal—that one of the many associations between variables reported in our paper becomes non-significant.
There is, moreover, one clear point of agreement between us and D&J: the association between conspiracist cognition and the rejection of climate science is relatively small in magnitude. Indeed, the effect explains only about 4% of the variance in our structural-equation models, compared to more than 60% that is explained by endorsement of “free-market” economics. However, just because an effect is small does not mean it is inconsequential: there are several well-established and highly consequential effects that are as small as, or smaller than, our reported association. For example, the correlations between combat exposure and post-traumatic stress disorder, and between lead exposure and children’s IQ, explain on the order of only 1% of the variance. They nonetheless have notable public-health implications once scaled up to society as a whole.
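For readers who prefer to think in correlation units, variance explained is simply the square of the correlation, so the percentages above convert directly (a quick sketch using the 4%, 1%, and 60% figures mentioned in the text):

```python
import math

# Convert variance explained (r squared) back to correlation magnitude (r)
for variance_explained in (0.04, 0.01, 0.60):
    r = math.sqrt(variance_explained)
    print(f"{variance_explained:.0%} of variance corresponds to |r| ≈ {r:.2f}")
```

So the “small” 4% effect corresponds to a correlation of about 0.2, and the 1% benchmark effects to correlations of about 0.1.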