A simple recipe for the manufacturing of doubt

By Klaus Oberauer and Stephan Lewandowsky (Professor, School of Experimental Psychology, University of Bristol)
Posted on 19 September 2012
Filed under Cognition

Mr. McIntyre, a self-declared expert in statistics, recently posted what he presents as an unsuccessful attempt to replicate several exploratory factor analyses in our study on the motivated rejection of (climate) science. His wordy post creates the appearance of potential problems with our analysis.

There are no such problems, and it is instructive to examine how Mr. McIntyre manages to manufacture this erroneous impression.

Our explanation focuses on the factor analysis of the five “climate science” items as just one example, because this is the case where his re-“analysis” deviated most from our actual results.

The trick is simple when you know a bit about exploratory factor analysis (EFA). EFA serves to reduce the dimensionality of a data set. To this end, EFA represents the variance and covariance of a set of observed variables by a smaller number of latent variables (factors) that capture the variance shared among some or all observed variables.

EFA is a non-trivial analysis technique that requires considerable training to use competently, and a full explanation is far beyond the scope of a single blog post. Suffice it to say that EFA takes a bunch of variables, such as items on a questionnaire, and replaces that multitude of items with a small number of “factors” that represent the common information picked up by those items. In a nutshell, EFA permits you to go from 100 items on an IQ test to a single factor that one might call “intelligence.” (It’s more nuanced than that, but that captures the essential idea for now.)
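To make this concrete, here is a minimal sketch in R using simulated data (not our survey data; all variable names here are ours for illustration):

set.seed(1)
latent <- rnorm(200)  # a single underlying trait shared by all items
# five observed items: the shared trait plus independent measurement error
items <- sapply(1:5, function(i) latent + rnorm(200, sd = 0.4))
factanal(items, factors = 1)  # EFA recovers one strong common factor

Because all five simulated items derive from the same latent variable, the one-factor solution captures nearly all of their shared variance.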

One core aspect of EFA is that the researcher must decide on the number of factors to be extracted from a covariance matrix. There are several well-established criteria that guide this selection. In the case of our data, all acknowledged criteria yield the same conclusions.

For illustrative purposes we focus on the simplest and most straightforward criterion, which states that one should extract only factors with an eigenvalue > 1. (If you don’t know what an eigenvalue is, that’s not a problem—all you need to know is that this quantity should be > 1 for a factor to be extracted.) The reason is that factors with eigenvalues < 1 represent less variance than a single variable, which negates the entire purpose of EFA, namely to represent the most important dimensions of variation in the data in an economical way.
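In R, this criterion can be checked directly. A sketch, not our analysis script, with items standing for a matrix or data frame of questionnaire responses as above:

ev <- eigen(cor(items))$values  # eigenvalues of the item correlation matrix
sum(ev > 1)      # number of factors suggested by the eigenvalue > 1 rule
ev / length(ev)  # proportion of total variance captured by each factor

(For standardized items the eigenvalues sum to the number of items, so dividing by that number gives each factor’s proportion of variance.)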

Applied to the five “climate science” items, the first factor had an eigenvalue of 4.3, representing 86% of the variance (with five standardized items the total variance is 5, so 4.3/5 = 86%). The second factor had an eigenvalue of only .30, representing a mere 6% of the variance. Factors are ordered by their eigenvalues, so all further factors represent even less variance.

Our EFA of the climate items thus provides clear evidence that a single factor is sufficient to represent the largest part of the variance in the five “climate science” items.  Moreover, adding further factors with eigenvalues < 1 is counterproductive because they represent less information than the original individual items. (Remember that all acknowledged standard criteria yield the same conclusions.)

Practically, this means that people’s responses to the five questions regarding climate science were so highly correlated that they largely reflect variability on a single dimension, namely the acceptance or rejection of climate science. The remaining variance in individual items is most likely mere measurement error.

How could Mr. McIntyre fail to reproduce our EFA?

Simple: In contravention of normal practice, he forced the analysis to extract two factors. This is obvious in his R command line:

pc=factanal(lew[,1:6],factors=2)

In this and all other EFAs posted on Mr. McIntyre’s blog, the number of factors to be extracted was chosen by fiat and without justification.

Remember, the second factor in our EFA for the climate items had an eigenvalue much below 1, and hence its extraction is nonsensical (as it is by all other criteria as well).
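By contrast, the command consistent with the eigenvalue criterion extracts a single factor. A minimal sketch, assuming the five “climate science” items occupy the first five columns of the same data frame:

fa1 <- factanal(lew[, 1:5], factors = 1)  # one-factor solution
fa1$loadings                              # all five items load on the single factor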

But that’s not everything.

When more than one factor is extracted, researchers can rotate factors so that each factor represents a substantial, and approximately equal, part of the variance. In R, the default rotation method, which Mr. McIntyre did not override, is Varimax rotation, which forces the factors to be uncorrelated. As a result of rotation, the variance is split about evenly among the factors extracted.
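For readers following along in R, the contrast can be sketched as follows (again assuming the climate items occupy the first five columns; factanal rotates with Varimax unless told otherwise):

factanal(lew[, 1:5], factors = 2)                     # Varimax rotation by default
factanal(lew[, 1:5], factors = 2, rotation = "none")  # unrotated solution for comparison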

Of course, this analysis is nonsensical because there is no justification for extracting more than one factor from the set of “climate science” items.

There are two explanations for this obvious flaw in Mr. McIntyre’s re-“analysis”. Either he made a beginner’s mistake, in which case he should stop posing as an expert in statistics and take a refresher course in Multivariate Analysis 101. Or else he intentionally rigged his re-“analysis” so that it deviated from our EFAs in the hope that no one would see through his manufacture of doubt.




Comments 151 to 200 out of 589:

  1. Brandon

    Help me here. L et al say in their paper:

    "Our results identify conspiracist ideation as a personality factor or cognitive style, with numerous theories being captured by a single latent construct ….., to our knowledge, our results are the first to provide empirical evidence for the correlation between a general construct of conspiracist ideation and the general tendency to reject well-founded science."

    I thought Factor Analysis was used to identify underlying structure, not PCA.

    Does this mean this conclusion is wrong?
  2. Brandon, from the OP:

    There are several well-established criteria that guide this selection. In the case of our data, all acknowledged criteria yield the same conclusions.

    The next paragraph starts "for illustrative purposes we focus on ..."
  3. Brandon Shollenberger at 17:00 PM on 21 September, 2012
    Nathan, if you want to believe factor analysis and principal component analysis are both subsets of... factor analysis, you can. And if that belief makes you think it is "pretty obvious" Steve McIntyre and others have no idea what they're doing, so be it.

    faustusnotes, I'd think the fact he discusses it might be a hint.

    HAS, there is some truth to that, but I don't believe that means PCA is incapable of identifying underlying structure. In most cases, PCA will give similar results to factor analysis. That means even if factor analysis is designed for that and PCA isn't (which is true), PCA could still be used to accomplish it.
  4. PCA is one of two methods by which factors are extracted for Factor Analysis (see Brandon's above confusion for exhaustive detail).

    Brandon, maybe he discusses it in order to mislead McIntyre into another fuming post that makes him look bad?
  5. It's just that I thought there was a problem of hypothesis testing (required if you are drawing conclusions as L. et al do) while analyzing all the variance as PCA does.

    faustusnotes I think you may have missed the extent of the claims being made by L et al. They aren't extracting factors they are claiming empirical evidence for general constructs.

    I know, I know, their sampling etc was never appropriate for the conclusions as claimed for any population other than those they got responses from, but PCA doesn't do the test they want.
  6. However, when I think about it, it is quite possible that in this branch of psych it's legit to do deductive analysis and then claim empirical results. Just don't think the FDA would let you do it on humans, although maybe on "climate skeptics" at a pinch.
  7. I know, I know, I meant "inductive analysis".
  8. HAS, they clearly use MLE estimation for factor analysis, and they have a method for choosing the number of factors (there are many). The PCA method they gave here was done just for illustrative purposes. It's perfectly legit to use PCA to determine the number of factors and then MLE to do the factor analysis itself. You're barking up the wrong tree.
  9. HAS - The next 24 hours should be quite interesting. I'm going to get popcorn.
  10. faustusnotes MLE estimation ain't a method for testing empirical fit to hypothesised constructs in this context, so pray tell how they got to their conclusions?
  11. @162: faustusnotes -- "Brandon, maybe he discusses it in order to mislead Mcintyre into another fuming post that makes him look bad?" -- (-snip-).
    Moderator Response: Inflammatory snipped.
  12. HAS at that point I can't give an opinion - my familiarity with Factor Analysis stops at the application, and I have no idea about those kinds of issues. Maybe Lewandowsky and Oberauer are better placed to answer that ... could you elaborate for the gallery?
  13. faustusnotes why do you use factor analysis?

    To look at a dataset to create hypotheses about what is going on, or to use data to empirically test a hypothesis about it (in this case about the underlying constructs)?

    L et al do the first and claim the second.
  14. I think I see what you mean HAS, and I think that might be very common in the psych literature. I don't have an opinion as to whether that's what L et al do, though (haven't read that much of their paper).
  15. (-snip-).
    Moderator Response: Off topic and inflammatory snipped.
  16. HAS@150
    Ah yes, who was it who referred above to McIntyre not stooping so low, or whatever? Oh, yes, the man with such an endearing "dry wit", that was it. Understated, scholarly, only interested in the truth etc
    His opening salvo is certainly most circumspect and hilariously witty, yes?
    "As CA readers are aware, Stephan Lewandowsky of the University of Western Australia recently published an article relying on fraudulent responses at stridently anti-skeptic blogs to yield fake results."
  17. Bluebottle I think it was Brad Keyes #116 who mentioned "dry wit", but I think it was in a sardonic way.
  18. Sardonic, you say? Right. Maybe so, but I'm sitting here wondering why McIntyre hasn't jumped all over Oberauer and Lewandowsky's other "in press" papers, particularly "Evidence against Decay in Verbal Working Memory" and "Modeling Working Memory: An Interference Model of Complex Span".
    Surely these two so-called scholarly papers also rely on the same flawed methodology, the same fake researchers, the same kind of fraudulent data, the same corrupt peer-review process? The evidence is piling up. Drum them out of the regiment, I say.
  19. Bluebottle - classic diversion - two point penalty.

    This discussion is about the present paper - why would they review other work?
  20. @- A.Scott
    "This discussion is about the present paper - why would they review other work?"

    Perhaps it would help confirm or refute the present conspiracy theory that you put forward earlier that this paper, its survey and analysis, are a front for the carefully designed {by the authors with their expertise in cognitive psychology} title, press statements and blog post that are intended to smear and discredit skeptics.
    It might indicate their adherence to the 'cause' you say motivates this duplicity if the other papers also support the 'cause' or indicate this speculation is false if the other research the authors have done is regular, mainstream cognitive research.
  21. Okay, I'll accept one squirrel point, but only one because I was making a serious point - really. Stand back and ask - if the scientific crimes these researchers are charged with are so heinous, why are their critics focussing only on this one minor paper? (Oh, where's Barry Woods when I need him?) E.g., really, have you actually looked at and analysed any of the Lew's other papers? Is there a consistent pattern here that might relieve the critics of their rather telling overkill of attention on this one? Indeed, why stop at Lew? Are other activist psychologists committing the same fakes and frauds as O & L?
  22. 179: Bluebottle, I'll bite.

    Maybe these papers do confirm that this paper is at the same high standard as other papers in this discipline, but can you tell us:
    1) Do these papers have deliberately provocative titles that emphasise a minor and weak part of the findings?
    2) Did they rely on survey responses drawn from blog audiences with both the means and motive to game their responses?
  23. Well, that's my point. Here's a sample from Lewandowsky's publications: go have a look. Don't focus only on an outlier - if that's what this paper is - establish a pattern (the title on the last one is a bit whimsical):

    Farrell, S., & Lewandowsky, S. (in press). Response suppression contributes to recency in serial recall. Memory and Cognition.
    Lewandowsky, S., Yang, L.-X., Newell, B. R., & Kalish, M. L. (in press). Working memory does not dissociate between different perceptual categorization tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition.
    Sewell, D. K., & Lewandowsky, S. (in press). Attention and working memory capacity: Insights from blocking, highlighting, and knowledge restructuring. Journal of Experimental Psychology: General.
    Lewandowsky, S., Ecker, U. K. H., Farrell, S., & Brown, G. D. A. (in press). Models of cognition and (unnecessary?) constraints from neuroscience: A case study involving consolidation. Australian Journal of Psychology.
    Craig, S., & Lewandowsky, S. (in press). Whichever Way you Choose to Categorize, Working Memory Helps you Learn. Quarterly Journal of Experimental Psychology.
    Ecker, U. K. H., Lewandowsky, S., Swire, B., & Chang, D. (2011). Misinformation in memory: Effects of the encoding strength and strength of retraction. Psychonomic Bulletin & Review, 18, 570-578.
    Craig, S., Lewandowsky, S., & Little, D. R. (2011). Error Discounting in Probabilistic Category Learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 673-687.
    Lewandowsky, S. (2011). Working memory capacity and categorization: Individual differences and modeling. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 720-738.
    Oberauer, K., & Lewandowsky, S. (2011). Modelling working memory: A computational implementation of the Time-Based Resource-Sharing theory. Psychonomic Bulletin & Review, 18, 10-45.
    Ecker, U. K. H., Lewandowsky, S., & Apai, J. (2011). Terrorists brought down the plane!—No, actually it was a technical fault: Processing corrections of emotive information. Quarterly Journal of Experimental Psychology, 64, 283-310.
  24. #181
    Thanks for the list. Based on this set, only one other paper has an attention-grabbing title. The current paper is the only paper with a provocative title, or one that is only barely justified.
    I think we can also safely say just based on the titles that none of them depended on dubious internet surveys.

    Can you tell us whether any of these papers (or associated SI) gave an unambiguous description of their methods?
  25. A journalistic view:
    (-snip-)
    Moderator Response: Link to inflammatory blog post snipped.
  26. AndyL.
    Why not read some of the papers yourself. Primary sources are always recommended.

    And before you ask, I am not about to read them. The academic matters here are peripheral to my interest. What is interesting is the sensitive nerve highlighted by the hurt outrage of denialists caught out and their anger that anyone should point out a McIntyre mistake.
  27. I read McI's article and the above article. McI is talking about eigenvectors and the researchers about eigenvalues. McI is talking about PCA while the researchers used EFA (and SEM).

    Everything I read about EFA makes the point right up front about options for determining the number of common factors (of which eigenvalue>1 is one of the ways). Yet McI says he didn't use any of them, using the excuse that L et al didn't tell him what to do or how to do it.

    The researchers say that in the example they used for illustrative purposes above, the single factor with the eigenvalue of 4.3 accounted for 86% of the variance among the items. McI talks about eigenvectors not eigenvalues.

    Now as I've said, I'm not even a novice in these matters. But I can't help wondering if McI fully appreciates the difference between the two methods (PCA and EFA). Nothing wrong with that - I gather lots of people find it hard to differentiate. But he does go on as if he's the world's expert on application of statistics to cognitive science.

    I'm hoping there is more to come from the researchers. On the other hand, McI doesn't deserve answers so I won't blame them if they let him stew - or give him a bit more rope.
  28. To clarify the issues, McIntyre posted: "Here's a brief description of the difference between principal components and factor analysis by Brian Ripley, a distinguished statistician." http://www.stats.ox.ac.uk/~ripley/MultAnal_HT2007/PC-FA.pdf
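    In R terms the contrast is easy to sketch (with a hypothetical data frame items of questionnaire responses):

    prcomp(items, scale. = TRUE)  # PCA: re-expresses all of the variance
    factanal(items, factors = 1)  # FA: models only the shared (common) variance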
  29. Sou @186
    You have it 100% backwards
    L et al said they used EFA but in fact used PCA. What you read about EFA is therefore irrelevant.
    Oh and McIntyre has published papers where selection of the numbers of factors was a key topic.
  30. I found this paper interesting.  Among other things, it discussed the two in the context of stats packages.
  31. 184 Rumleyfips
    Someone else brought up these papers without saying what argument they were making. I'm not clear whether they are being quoted to demonstrate that L sometimes uses unusual titles, or that these papers explain the method in more detail, or just that the quality of this particular paper is not unusual in this branch of science.
  32. Brad Keyes:

    I wrote those words in English, just in a different order than they ended up.

    Anyway try:
  33. It just did it again.

    The outrage from deniers shows me how sensitive a nerve has been touched. I am also surprised by how defensive some are about McIntyre's performance.
  34. I'm a skeptic by nature. I'm skeptical of new scientific findings that have yet to pass 4 simple tests
    1) It must be reproducible
    2) It must be explainable theoretically and the theory should be reasonable within the context of other theories (as applicable to the theory)
    3) The findings must be reproduced by others
    4) Reasonable objections to the theory must be answered

    Looking at this blog, I keep coming back to the fact that McIntyre's objection seemed reasonable. The response to McIntyre's objection did not seem reasonable.

    If McIntyre were wrong, then all they had to do was quickly publish enough detail that anyone skilled in statistics could follow the explanation and reproduce Stephan Lewandowsky's result. That would have shut McIntyre up very quickly and hurt his reputation. But that was not done.

    (-snip-). The survey itself with so little data and no way of verifying it cannot be taken seriously. No matter what statistical methods are used to analyze it, if the accuracy of the data can't be reliably quantified, then any subsequent statistical analysis of the data is useless.

    Stephan Lewandowsky should withdraw the paper until such time as he can get better data.
    Moderator Response: Inflammatory snipped.
  35. Brad:

    Maybe I mean DeNeros or denerios but I don't. I mean balls, noviates, bishops, discount viscounts, wotters and many more. I don't count lucia because Zeke makes her blog relevant.

    You asked Sou why the dislike of McIntyre. Here are my reasons. There is a line selecting for H (hockey sticks) unmentioned in the text. Also there is no mention that in the thousands of hidden runs some go up, some go down and some run level. Also that this is to be expected with random noise.

    In another incident, a big fuss was made about something that everyone views as making no difference.

    I am a Canadian and I am embarrassed by such behaviour.
  36. Replication in science is supposed to mean finding substantially the same results. Ideally, the "replicator" would be doing an independent study -- in this case, designing their own experiment and reanalyzing the data the way they think it should be done. Then you see if the two experiments are consistent.

    What McIntyre does is "auditing" -- assume the study is fraudulent, then try to nail down small bits of ambiguity. This is why he only audits mainstream climate science, and not the obviously flawed papers that litter the denialsphere. So when McIntyre audits this paper he finds using his methods that belief in conspiracy theories "only" explains 83% instead of 86% of someone's belief that AGW is wrong. Big deal. As far as I can see he (and people like Brandon) are arguing nits and do nothing to challenge the central results of the paper given the data used in the paper.
  37. faustusnotes,

    Please note that Brandon's #157:

    > faustusnotes, your response makes no sense.

    is a line he's been using a lot. WebHubTelescope calls it the Chewbacca Defense. I prefer to call it the Chewbacca Attack, since Brandon's use is mainly offensive. More background is provided here:

    http://neverendingaudit.tumblr.com/post/24188591829

    Please check your spam filter on your website, btw.
  38. Gator: "he only audits mainstream climate science".
    Clearly he's branched out into picking nits in cognitive psychology now. I really look forward to him submitting his critique to the journal. It will be nice to see his arguments and calculations laid out soberly without his emotive accusations of "scam" "fraud" etc (after surviving peer-review, of course).
    Historians of science will no doubt look back in awe on his lasting contributions to so many fields of scholarly literature.
  39. Bluebottle:
    Historians of science will no doubt look back in awe...

    The scope of awesomeness is almost unbounded. Leaving aside the particular matter at hand and the entire specific controversy over climate science, the landscape is entirely novel, a breathtaking collision of faults and virtues, some of which are simply translated from other domains, some previously impossible.

    Is it awesomely beautiful, or ugly?
  40. 197 rumleyfips

    You don't have to go into statistical problems, as per DC's Replication and Due Diligence... or Nick Stokes on selection, i.e., wrong parameters and 1:100 cherry picks.

    Here's a no-stats case: The Significance of the Hockey Stick, Mar 16, 2005 at Climate Audit plus a few related talks shortly thereafter.

    a) False Citation and Flat Earth Schematic

    McIntyre shows the ~1965 Lamb graph, used once by the 1990 IPCC (Figure 7.1.c, with caveats in the text, unclear whether ever read). It had disappeared by the 1992 Supplementary Report, and certainly was not in the 1995 IPCC, whose Figure 3.20, p.175, had an early reconstruction back to 1400 AD. Maybe someone should ask McIntyre where he actually got this schematic from, because it was not the 1995 IPCC. The 1990/1995 date matters, as seen later. See Jones et al., Appendix A, for the history of the schematic, which dates back to H.H. Lamb ~1965 on Central England, which is certainly not the NH. (Referenced IPCC reports are here.)

    Anyone citing this schematic as credible any time after 1992 might be likened to a flat-earther clinging to Anaximander. Think of the schematic as a flat-earth temperature sketch, which by 2005 got retroactively elevated (by a few people) and promoted as Absolute and Unchanging Truth being hidden by a cabal of climate scientists, just as NASA is hiding the absolute truths known by the Flat Earth Society. There we find The Conspiracy: the active faking of the whole space program, not just Apollo! It has a good FAQ. They show a newer map than Anaximander, still flat.

    b) Use of Unsubstantiated Claim Published in a "Dog Astrology Journal," but actually cited as a preprint at Fred Singer's Website, in a glowing review of a fiction book by a geologist of strong views, David Deming, author of Why I Deny Global Warming, which starts:
    ”I'm a denier for several reasons. There is no substantive evidence that the planet has warmed significantly or that any significant warming will occur in the future. If any warming does occur, it likely will be concentrated at higher latitudes and therefore be beneficial. Climate research has largely degenerated into pathological science, and the coverage of global warming in the media is tendentious to the point of being fraudulent. Anyone who is an honest and competent scientist must be a denier.”

    McIntyre took the key quote from Fred Singer's SEPP Website, where it appeared March 5, 2005, ~3 months before it was actually published in Journal of Scientific Exploration (JSE). This was a curious publishing practice and seems to have violated policy:
    “ The material must not appear anywhere else (including on an Internet website) until it has been published by the Journal (or rejected for publication).”

    Singer and McIntyre were in contact no later than 2003. Deming cited McIntyre and McKitrick (2003), and McI+McK made use of Deming’s citation of Huang, et al on boreholes (see below).

    JSE as “Dog Astrology Journal”
    JSE often publishes on UFOs, ESP, reincarnation, etc, but more interestingly on dog astrology. See commentary on Deming’s other articles in JSE and other articles in same issue. Occasionally, a serious debunk (as of crop circles) is seen, but sampling the articles may help the reader assess overall credibility. (Philosophically, I’m happy that someone looks at anomalies, but the reader can study articles here and estimate how many are by people with long-cherished ideas lamenting mainstream science’s unwillingness to accept them, publish their papers, etc. See abduction research, for example.)
    JSE has also published Joel Kauffman’s Climate Change Reexamined. See comments here and here. Kauffman:
    ”Because of the existence of a research cartel and media control in this field (Bauer, 2004), the readers' forbearance in my use of websites and non-refereed sources is requested.
    An example of non-scientific pressure in the climate field is the firing of six editors by the publisher of the journal Climate Research because they published a literature review on long-term temperature proxy studies (Soon & Baliunas, 2003). …”

    Kauffman was rather confused about the PALS case. The reader may assess reasons why emeritus-chemist Kauffman’s work on climate might be hard to get published and the reader might want to assess the quality of peer review on climate-related papers at JSE.

    Deming has not produced any evidence of the email he claimed to have gotten in 1995. His views are certainly clear.

    Back to McIntyre post and talk
    If McIntyre in 2005 had properly labeled the schematic as long since deprecated, gone by 1992, with modern work starting by 1995 IPCC, it would have been obvious that scientists had discarded it like a flat-earth map. That would have made Deming's 2005 claim of a 1995 email look silly:

    a) Nobody in 1995 would have been worried about “getting rid of the MWP,” although they might have wanted to dispel the idea the Lamb schematic was Truth. They had years before abandoned the schematic.

    b) Most reconstructions since (including MBH99) had a modest hemispheric MWP, in fact, MBH99 was higher than many. Perhaps readers understand why one would expect decreasing jiggles going from a) Central England or further North to b) half of Northern Hemisphere to c) entire NH to d) world.

    The themes of Lamb schematic=Truth and conspiracy by climate scientist cabal show up in the Wegman Report, whose "blueprint" was May 11 2005 presentation in Washington for Cooler Heads Coalition (CEI) and George Marshall Institute, and then somewhere on Capitol Hill. Hence, all this actually matters, since it was used to mislead Congress.

    McKitrick had gotten the 1990/1995 problem wrong in an April 4, 2005 talk, but fixed it ~July 2005; McIntyre did not go back and fix the blog post. McKitrick cited both Science and JSE correctly in April, but by May (p.12), McIntyre was attributing Deming’s 2005 JSE quote to Science 1995. That persisted in the George Marshall Institute’s later edited transcript version, p.6. Replacing JSE with one of the world’s top two general science journals is another false citation.

    The Huang, et al borehole discussion is another flat-earth graph, shown in SSWR, Appendix W.4.3, too long for here, but an example of clinging to an older article one likes, ignoring the same authors’ more recent work that contradicts it.

    To summarize, in one short McIntyre blog post, we find:
    false citation
    reliance on a flat-earth schematic, obsolete for 13+ years
    reliance on a dog astrology journal preprint via Singer

    And then we find a talk in which JSE becomes Science.

    I leave it to the reader to weigh the possible explanations for all this.
  41. Gator =>"What McIntyre does is "auditing" -- assume the study is fraudulent, "

    I just read McIntyre's "audit" of this and am not familiar with all his work. But assume you are right and McIntyre is assuming that the study is fraudulent - how is that a problem if Stephan Lewandowsky is right? There have always been people who thought some new scientific theory was fraudulent. Theories that are robust and correct easily survive such attacks.

    All Lewandowsky needed to do was to post his methods and data; he would quickly be vindicated if he is right, and McIntyre's reputation would go down the tubes. (-snip-).
    Moderator Response: Inflammatory snipped.
  42. People interested in the difference between PCA and EFA might find this MS Powerpoint presentation helpful.  

    (The paper I linked to previously is great.  The powerpoint presentation gives some simple worked examples and gives more clues as to where people can trip themselves up, particularly early in the analytical process.)
  43. @RobertG - you may not have seen it in the paper, but the paper does address the point about the possibility of scammed responses. The authors have no need to repeat it again and again for the lazy reader.

    McIntyre alleges fraud and fakery multiple times in a very colourful manner, but provides nothing to back up his allegations. He's pulled a number (20%) seemingly out of the air. That's all typical of fake skeptics. It suggests he is fearful of being seen either as a moon hoax conspiracy theorist or a (climate) science hoax conspiracy theorist. (See John Mashey's post.)
  44. Further to RobertG wanting step by step instructions on how to apply EFA and SEM - needless to say, the paper does describe and discuss the data and methodology. The fact is that not everyone will be experienced in the techniques used nor how they are applied in cognitive science and more generally in psychology. (Journal papers are rarely if ever intended to be a primer for the layperson.)

    I don't know why anyone should have an expectation that scientists would comply with hostile demands issued as tirades by fake skeptic bloggers. They might do so on occasion, but the normal route for critiques and comments and public exchange of ideas on a published paper is via the journal itself.

    I'm with Bluebottle...
  45. Anyone who ever reads a paper by Lewandowsky will remember...

    Well, yes, some will.

    Given the selective attention paid to certain individuals and the trivial advancement of understanding obtained at unnecessary cost in noise and irrelevancies, it's arguable the central objective of the maneuvers is to shift attention from science to personalities. While picking over factual trivialities, adjectives such as "fraud" and "scam" are applied with a trowel, with the correct assumption that those words will stick in at least a few memories.

    The tactic was a failure in the previous iteration. One reputation was made, another continued without any lasting or significant harm. The newly minted reputation is not portable, the other still is, amply.
  46. Blair (#50) is exactly right. It is the responsibility of the author and the editor of the journal to ensure the information provided is sufficient for others to replicate the results. This used to be the sine qua non of scientific publishing but is now a concept seldom adhered to particularly in the more contentious areas of science. (-snip-)
    Moderator Response: Inflammatory snipped.
  47. @Brad - I've said repeatedly that I've got no problem with discussion. (I'm enjoying learning about this.) You're ignoring the context of my comments (aka cherry-picking) - namely - why do fake skeptics have an expectation that scientists will drop everything to answer their questions? Particularly when the people demanding answers are rude, hostile and go on and on and on repeating unfounded allegations.

    I doubt the paper would have been released if the scientific basis hadn't been fit for inspection. Are you suggesting it isn't 'fit for inspection'? If so, on what grounds?

    (McI is certainly not the gold standard for deciding this. He rushes in making naive mistakes, isn't familiar with the field and even seems to be floundering when it comes to the stats the researchers used despite his supposed expertise. He's just another blogger making a lot of loud noise in a very small space.)
  48. Sou

    (-snip-). McI is perfectly within his rights to ask for the details of the procedures used by Lewandowsky to obtain his results. See my post (#211) for the reasons why.
    Moderator Response: Inflammatory snipped.
  49. @- Ian
    "It is the responsibility of the author and the editor of the journal to ensure the information provided is sufficient for others to replicate the results. "

    The crucial qualifier is "by others familiar with and skilled in the field."
    You would have to be very familiar and skilled in the field of population genetics and mutation rates to replicate the recent claims of the non-miscegenation of modern humans with Neanderthals for instance.

    @- " If McIntyre hasn't got the necessary detail from Lewandowsky's paper to enable replication of his results, the editor of the journal should refuse to publish the paper until the necessary details are provided. "

    Just suppose, as a hypothetical, that somebody else perhaps a little more familiar with the field of cognitive psychology and the analysis methods it uses DID claim they had replicated the results of the paper from the available data and description.

    Would you believe them and accept that the paper is methodologically robust, or is McIntyre the sole and only arbiter of scientific integrity?
  50. Ha ha - your reverence at the feet of McI is noted, Ian.

    McI can and has threatened, cajoled and demanded. Doesn't mean the scientists have to do his bidding.

    The authors have already provided all the survey data which everyone can analyse any way they see fit.

    In addition they've provided more than enough information for a competent person in the field to check the analysis using the same analytical tools and approach as the researchers.

    What other info do you want? (Apart from elementary and advanced texts on EFA and SEM and cognitive psychology.)

    (Brandon said it would take no more than a few hours to write the paper - I presume he meant the stats analysis only. McI is still struggling to get to first base in the EFA part of the analysis. Doesn't say much for his competence IMO.)
