A simple recipe for the manufacturing of doubt

By Klaus Oberauer and Stephan Lewandowsky
Professor, School of Experimental Psychology and Cabot Institute, University of Bristol
Posted on 19 September 2012
Filed under Cognition

Mr. McIntyre, a self-declared expert in statistics, recently posted an ostensibly unsuccessful attempt to replicate several exploratory factor analyses in our study on the motivated rejection of (climate) science. His wordy post creates the appearance of potential problems with our analysis.

There are no such problems, and it is illustrative to examine how Mr. McIntyre manages to manufacture this erroneous impression.

Our explanation focuses on the factor analysis of the five “climate science” items as just one example, because this is the case where his re-“analysis” deviated most from our actual results.

The trick is simple when you know a bit about exploratory factor analysis (EFA). EFA serves to reduce the dimensionality in a data set. To this end, EFA represents the variance and covariance of a set of observed variables by a smaller number of latent variables (factors) that represent the variance shared among some or all observed variables.

EFA is a non-trivial analysis technique that requires considerable training to be used competently, and a full explanation is far beyond the scope of a single blog post. Suffice it to say that EFA takes a bunch of variables, such as items on a questionnaire, and replaces the multitude of items with a small number of “factors” that represent the common information picked up by those items. In a nutshell, EFA permits you to go from 100 items on an IQ test to a single factor that one might call “intelligence.” (It’s more nuanced than that, but that captures the essential idea for now.)
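
To make the idea concrete, here is a minimal, self-contained simulation (in Python, for illustration only; the item count, loading, and seed are invented, not the survey data): when several questionnaire items are all driven by one latent trait plus noise, every pair of items ends up highly correlated, and that shared correlation is precisely the common information a single factor captures.

```python
import random

# Toy sketch of the idea behind factor analysis, NOT the authors' analysis:
# 2000 simulated respondents answer 4 items, each item = loading * latent
# trait + independent noise, scaled so every item has unit variance.
random.seed(42)
N, ITEMS, LOADING = 2000, 4, 0.9

latent = [random.gauss(0, 1) for _ in range(N)]
noise_sd = (1 - LOADING ** 2) ** 0.5      # keeps each item at variance 1
items = [[LOADING * g + random.gauss(0, noise_sd) for g in latent]
         for _ in range(ITEMS)]

def pearson(x, y):
    """Plain Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

# With a loading of 0.9, the expected inter-item correlation is 0.9 * 0.9 = 0.81.
corrs = [pearson(items[i], items[j])
         for i in range(ITEMS) for j in range(i + 1, ITEMS)]
print([round(c, 2) for c in corrs])       # all close to 0.81
```

Because the items share so much variance, one factor summarizes them well; that is the situation the post describes for the five “climate science” items.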

One core aspect of EFA is that the researcher must decide on the number of factors to be extracted from a covariance matrix. There are several well-established criteria that guide this selection. In the case of our data, all acknowledged criteria yield the same conclusions.

For illustrative purposes we focus on the simplest and most straightforward criterion, which states that one should extract only factors with an eigenvalue > 1. (If you don’t know what an eigenvalue is, that’s not a problem—all you need to know is that this quantity should be > 1 for a factor to be extracted.) The reason is that factors with eigenvalues < 1 represent less variance than a single variable, which negates the entire purpose of EFA, namely to represent the most important dimensions of variation in the data in an economical way.

Applied to the five “climate science” items, the first factor had an eigenvalue of 4.3, representing 86% of the variance. The second factor had an eigenvalue of only .30, representing a mere 6% of the variance. Factors are ordered by their eigenvalues, so all further factors represent even less variance. 
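
For readers who want to see the arithmetic, here is a small sketch (Python; the equicorrelation structure and the value r = 0.825 are hypothetical choices that happen to reproduce the reported leading eigenvalue of 4.3, not the actual survey data). Five standardized items carry total variance 5, so an eigenvalue of 4.3 accounts for 4.3/5 = 86% of the variance, and the eigenvalue > 1 rule retains exactly one factor.

```python
# Hypothetical illustration of the eigenvalue > 1 criterion using an
# "equicorrelation" matrix: 5 standardized items that all pairwise
# correlate at r = 0.825 (chosen so the leading eigenvalue is 4.3).
n, r = 5, 0.825
R = [[1.0 if i == j else r for j in range(n)] for i in range(n)]

def matvec(M, v):
    """Plain matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# For an equicorrelation matrix the eigenvalues have a closed form:
# one "general" eigenvalue 1 + (n-1)r, plus (n-1) copies of 1 - r.
lam_general = 1 + (n - 1) * r             # 4.3
lam_rest = 1 - r                          # 0.175

# Verify: the all-ones vector is the general eigenvector, and any
# contrast vector (item1 - item2) yields the small eigenvalue 1 - r.
ones = [1.0] * n
contrast = [1.0, -1.0, 0.0, 0.0, 0.0]
assert all(abs(x - lam_general * v) < 1e-12
           for x, v in zip(matvec(R, ones), ones))
assert all(abs(x - lam_rest * v) < 1e-12
           for x, v in zip(matvec(R, contrast), contrast))

# Each standardized item contributes variance 1, so total variance is n = 5.
share_first = lam_general / n             # 4.3 / 5 = 86% of the variance
retained = [lam for lam in [lam_general] + [lam_rest] * (n - 1) if lam > 1]
print(round(share_first, 2), len(retained))   # 0.86; only 1 factor survives
```

The point of the sketch: once one eigenvalue is this dominant, every remaining eigenvalue is far below 1, so the extraction criterion stops at a single factor.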

Our EFA of the climate items thus provides clear evidence that a single factor is sufficient to represent the largest part of the variance in the five “climate science” items.  Moreover, adding further factors with eigenvalues < 1 is counterproductive because they represent less information than the original individual items. (Remember that all acknowledged standard criteria yield the same conclusions.)

Practically, this means that people’s responses to the five questions regarding climate science were so highly correlated that they reflect, to the largest part, variability on a single dimension, namely the acceptance or rejection of climate science. The remaining variance in individual items is most likely mere measurement error.

How could Mr. McIntyre fail to reproduce our EFA?

Simple: In contravention of normal practice, he forced the analysis to extract two factors. This is obvious in his R command line:

pc=factanal(lew[,1:6],factors=2)

In this and all other EFAs posted on Mr. McIntyre’s blog, the number of factors to be extracted was chosen by fiat and without justification.

Remember, the second factor in our EFA for the climate items had an eigenvalue much below 1, and hence its extraction is nonsensical. (As it is by all other criteria as well.)

But that’s not everything.

When more than one factor is extracted, researchers can rotate factors so that each factor represents a substantial, and approximately equal, part of the variance. In R, the default rotation method, which Mr. McIntyre did not override, is Varimax rotation, which forces the factors to be uncorrelated. As a result of rotation, the variance is split about evenly among the factors extracted.
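
The effect of rotation on the variance split can be sketched with a toy loading matrix (Python; the loadings are hypothetical, chosen so that the unrotated factors carry sums of squared loadings 4.3 and 0.3, echoing the eigenvalues above). An orthogonal rotation by 45 degrees leaves the total variance untouched but divides it evenly between the two forced factors.

```python
import math

# Hypothetical two-factor loading matrix for 5 items: factor 1 carries
# almost everything (sum of squared loadings 4.3), factor 2 almost
# nothing (0.3). This is an illustration, not the survey data.
a = math.sqrt(4.3 / 5)                    # loading of each item on factor 1
b = math.sqrt(0.3 / 2)                    # loadings +/- b on factor 2
loadings = [[a, b], [a, -b], [a, 0.0], [a, 0.0], [a, 0.0]]

def rotate(L, theta):
    """Apply an orthogonal (rigid) rotation to a 2-factor loading matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[x * c + y * s, -x * s + y * c] for x, y in L]

def col_ss(L):
    """Variance attributed to each factor = column sums of squares."""
    return [sum(row[k] ** 2 for row in L) for k in range(2)]

before = col_ss(loadings)                       # [4.3, 0.3]: very lopsided
after = col_ss(rotate(loadings, math.pi / 4))   # roughly [2.3, 2.3]

# The rotation is orthogonal, so total variance is preserved...
assert abs(sum(before) - sum(after)) < 1e-9
# ...but it is now split evenly between the two factors.
assert abs(after[0] - after[1]) < 1e-6
```

This is why a forced two-factor solution with a default rotation can look "balanced" even when the unrotated solution says one factor does nearly all the work.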

Of course, this analysis is nonsensical because there is no justification for extracting more than one factor from the set of “climate change” items.

There are two explanations for this obvious flaw in Mr. McIntyre’s re-“analysis”. Either he made a beginner’s mistake, in which case he should stop posing as an expert in statistics and take a refresher of Multivariate Analysis 101. Or else, he intentionally rigged his re-“analysis” so that it deviated from our EFA’s in the hope that no one would see through his manufacture of doubt.


657 Comments



Comments 501 to 550 out of 589:

  1. A.Scott at 06:00 AM on 30 September, 2012:

    I'd be particularly interested in Bernard J.'s comments, as he has been one of the fiercest supporters


    As usual, you engage in that little twisting-of-the-facts thing where you either deliberately or otherwise miss the point.

    I fiercely support the principle that a paper produced by professionals and accepted by a high-ranking journal be assessed and critiqued by fellow professionals rather than by armchair experts who have never darkened the doorway of a psychology lab, especially one where statistical analyses are specifically conducted.

    I've said as much on STW previously about this subject. As a non-psychologist it's not up to me to refute it, especially when there is much evidence elsewhere to support it. It's certainly not up to non-scientists such as the audience at WUWT and similar blogs to pretend that they have the professional competence with which to refute the paper.

    If the content of the paper is to be refuted, it needs to be done with competence and with appropriate knowledge, neither of which have been evidenced in the blogosphere chatter. It also needs to be done with accounting for the previous publications which found similar correlations, and for the weight of empirical evidence, so the task becomes greater than the rebuttal of just one paper.

    I happily stand by my comments on Deltoid, and I'm delighted that you drew them to the thread's attention. I still think that my points are valid ones, although I will also note that the only one that might have a bearing on the results is my comment about question 3, and not being an expert in this type of survey I have no idea whether the researchers have a particular handle on the question's intent that escapes me.

    All in all it just goes to show how difficult it is to construct a robust survey... and how your effort to parrot it to a prejudiced audience, with potential shortcomings not even addressed, shows no sign of professionalism at all.
  2. Highlighting something can be risky to one's 'argument'.

    What AScott's link demonstrates is that, contrary to what McIntyre (arguably libellously) alleges (he used the word 'fabricated' ie lied), there is a diversity of views among readers of 'pro-science' blogs - as is also demonstrated in the survey results.
  3. "...there is a diversity of views among readers of 'pro-science' blogs..."

    Which, of course, has been crashingly obvious all along to anyone who hangs out on them - or to anyone who doesn't, but decided to spend a few minutes finding out...
  4. I doubt that there is any creative data topiary that can prune the scare quotes and turn 'skeptics' into skeptics.
    I would also be surprised if it is possible either in this survey or its attempted repeat to eliminate the other big correlation between the rejection of climate science and the belief that past environmental problems like acid rain and CFCs are fully 'solved'.

    This is a common feature of 'sceptics' that, now that the survey has pointed it out, becomes an obvious aspect of the mindset of many 'skeptics' to anybody who has some exposure to the blog postings of the rejectionists.
  5. Bernard J, I think your comment at 13:10 PM goes to the heart of some of the issues here. Who are the experts?

    The problem is that empirical scholarship in the social sciences is multidisciplinary. It involves expertise in experimental design and statistics along with subject content. This isn't much different from a range of other empirical disciplines but in the softer end of the social sciences the engagement has been weak.

    (-snip-).

    (-snip-).

    Constructing a robust survey (and experiment) isn't hard if you hire appropriate help. Look for papers in psych where the last author comes from the stats department IMHO. (-snip-).

    (-snip-).
    Moderator Response: Moderator trolling and multiple inflammatory statements snipped.
  6. @HAS - McI clearly states in a comment (update September 29) that Professor Lewandowsky "fabricated" the assertion that pro-skeptic blogs are diverse. Check it out.

    McI chose the word 'fabricated' and accused Prof Lewandowsky only - rather than saying he didn't agree with the authors of LOG12, or he thinks the authors of LOG12 were mistaken or in error.

    It occurs to me that if there were any skeptical 'skeptics' posting on CA, they would have been asking some hard questions after more than three weeks of unsupported and wild allegations (including the humorous one of having his IP address blocked - paranoia plus), lots of bluster but still no sign of any promised reanalysis*.

    I find it quite appalling that all the 'skeptics' appear to find nothing amiss with the behaviour. (That could be worth some research in its own right.)

    (*There's been nothing but a tentative first step in EFA of a small portion of data (maybe), and some very basic frequency charts of a couple of variables comparing them with the AScott survey.)

    It's probably all a show put on to distract other 'skeptics' from the record Arctic ice melt this year. Very poor form just the same.
  7. If you look at the author list there are no statisticians

    Look harder! Multivariate statistics, psychometrics, EFA and SEM are specialisms of (at least) one of the authors.
  8. Sou good catch, I misunderstood, didn't read properly. Can't find the SM quote but assume you've got it right.

    Having said that you'd have to agree it is a long bow for L. et al to say their sample, drawn from the readership of (or perhaps more accurately the respondents from) the blogs they selected, wouldn't be biased.

    As to the rest you'll see that over at CA I've been expressing the view (as I have here) that L. et al is (-snip-)
    Moderator Response: Inflammatory & ad hominem snipped.
  9. Sou, I see no one with qualifications in stats. (-snip-)?
    Moderator Response: You do understand that compliance with the Comments Policy is not optional? Inflammatory snipped.
  10. Yes.
  11. Sou, and how do you think the Genos brand feels about its association with this paper?
    Moderator Response: Off-topic struck out. Please adhere to the OP of this thread.
  12. It is not associated with this paper, is it?

    However having worked in not dissimilar environments I would think that should they hear of it, most would be surprised by the kerfuffle (results are in line with previous research), appalled at the abuse hurled at the researchers (very unprofessional and arguably unethical), not surprised that most bloggers aren't up to the stats - it's a specialised area, nor surprised that bloggers who aren't familiar with psych research go on as if they know what they are talking about. (They'd go to social gatherings where everyone's a shrink when they meet a psychologist, just like everyone's a doctor when they meet a doctor, or an expert educator when they meet a teacher. Nonna, on the other hand, knows a lot more about sucking eggs than her grand-bambinos do.)
  13. HAS at 18:02 PM on 30 September, 2012:

    Having said that you'd have to agree it is a long bow for L. et al to say their sample, drawn from the readership of (or perhaps more accurately the respondents from) the blogs they selected, wouldn't be biased.


    I've said it before, and I'll happily say it again.

    If the most extreme of the denialist blogs chose (either consciously or otherwise) not to participate, when those same denialist sites are demonstrably populated with conspiracy theorists, then the only way that LOG12 is biased is conservatively - that is, away from the results it found.

    It's past time that the squarkers here thought about that.
    Moderator Response: As this term is undefined (even by Google) it could be interpreted as inflammatory within the context used and is thus struck out.
  14. To clarify, my comment #510 'yes' means that I do know what 'qualifications' means. (The specialisms I refer to in #507 relate to specialisms based on work experience and qualifications - with a higher degree in the subject area).
  15. Sou at 13:31 PM on 30 September, 2012

    What AScott's link demonstrates is that, contrary to what McIntyre (arguably libellously) alleges (he used the word 'fabricated' ie lied), there is a diversity of views among readers of 'pro-science' blogs - as is also demonstrated in the survey results.


    So you would try to tell us Bernard J. is a skeptic?
  16. Sou at 19:03 PM on 30 September, 2012

    However having worked in not dissimilar environments I would think that should they hear of it, most would be surprised by the kerfuffle (results are in line with previous research)


    I'm curious. Please explain how results are "in line" with previous research, and exactly what research you are referring to.
  17. re: 509

    People should argue the merits of this particular paper, not try to generate the false meme that papers using statistics need a statistician author to be valid.

    Like any field, I'd guess that statistics skills are normally (or lognormally) distributed. Bell Labs had some of the world's top statisticians, but most papers with statistics didn't have them as coauthors. To have done that would have required about 10% of the total staff. Of course, people got help when needed, but usually the others could do well enough without bothering Tukey, for instance, and John Chambers' S made it a lot easier for more people to do better analyses.

    This meme is an especially silly argument, given that psychologists certainly can be experts in statistics. Maybe there is another cognitive-psych problem behind this meme.
  18. So you would try to tell us Bernard J. is a skeptic?


    It all boils down to the fact that I am highly skeptical of your bona fides, and of the process that you are employing to respond to LOG12.

    You have but to address these points properly (or at all) if you want to assuage my scepticism.
  19. JohnMashey at 07:33 AM on 1 October, 2012
    re: 509 ... People should argue the merits of this particular paper
    Like you did with your lengthy attack on Steve McIntyre?
    Moderator Response:

    You falsely characterize the linked comment as a personal attack on Mr. McIntyre. That is incorrect. It is a critique of blog posts and thus arguments, not the person himself.

    Given that this mischaracterization by you is part of a larger pattern of such comments you have made, further such comments constructed thusly by you will be deleted in their entirety.

  20. FWIW, at Rabett Run the first two referring blogs are Real Climate and Climate Audit. About 3 to 1 in favor of the first.
  21. Few have the professionalism and courage to say what they think is right, and correct, even when understanding the fallout.

    What I think is right and correct is that this is clearly hyperbole, and that what Tom actually did, and almost certainly what elicited this comment, is that he defended the person who wrote it. I do not think that it is right or correct that complimenting someone for defending oneself is courageous.

    There is a lot more that I think is right and correct that I would gladly say and suffer the consequences but which is against this site's policy.

    I also think that it is right and correct that many climate scientists have bravely said what they think is right and correct and what is right and correct about the "big climate problem" we face and, as a result, have been accused of being criminals, conspirators, frauds, self-aggrandizers, and so on and have even received death threats.

    I think it is right and correct that we are observing a tragedy unfold as numerous people have been misled into taking on views and making claims and taking actions (fighting against necessary changes in policy) that are against all of our interests, including their own. I think it is right and correct that work such as that of Lewandowksy et. al. is an attempt to mitigate this tragedy by trying to analyze the causes of this unfolding sociological disaster that is leading to a much greater ecosystemic disaster for human society.
  22. So you would try to tell us Bernard J. is a skeptic?

    Of course Bernard J. is a skeptic. He is not, however, a "skeptic" -- a climate-science rejecter erroneously self-identifying as a skeptic. I have pointed out at length the very different meanings of these terms. As long as you fail to distinguish them you will have trouble communicating with those who do grasp the difference (but will instead communicate something else).
  23. Willard, it's called salting.
  24. Please make a point to review the actual post Mr. Mashey was so critical of located here.

    It is important to gain perspective on Mashey's level of criticism, to see the original post.
  25. In addition, of course, Bernard J. was not the only commenter on the linked page. The very first post was from someone calling themselves "denier", and there were other non-accepters posting there. When the point of diversity was made A.Scott's response was to completely ignore that fact and instead pose a non sequitur question.

    Bernard J. wrote of the lack of professional competence of the critics at WUWT and similar blogs. I think it is right and correct that the non-accepters are generally (but not to a person) out of their depth in re the science of climate, or of the other subjects they discuss such as statistics and sociology. But it isn't only science where they are out of their depth ... I also observe a general failure in their treatment of evidence and in the application of logic, including the recognition of fallacies and the ability to deal with modal and set logic (even in such cases as distinguishing between {Bernard J.} and {Bernard J. and others}). This creates a gap in communication between them and the scientifically trained accepters of science (set clarification -- I'm not including those who aren't scientifically trained), and it's hard to even see how it can be breached.
  26. I responded above, noting it was my opinion that Mashey's post on McIntyre was an attack on McIntyre. I also intended that to refer to McIntyre's work, not him personally.

    However I agreed it was easily misconstrued and in order to not have my phrasing distract from the important point asked that the mods:
    "Please change my comment to the more appropriate "lengthy and extremely critical review of McIntyre's old work".
    I repeat that request.
  27. Mashey's long and energetic critical review of McIntyre's very old (2005) work, despite being wholly off topic

    But isn't it a review that you linked to? If it was indeed wholly off-topic (but I got the impression that it was a response to another commenter, over a week ago), how is discussion of it on-topic? What is its relevance here and now?

    It is important to gain perspective on Mashey's level of criticism, to see the original post.

    Is it likewise important to gain perspective on your level of criticism of Mashey's criticism?
  28. an attack on McIntyre. I also intended that to refer to McIntyre's work not personally.

    A criticism of someone's work simply is not an attack on them ... those are two very different things. OTOH, rather than pointing out some error in Mashey's post, you seem to be criticizing the act of writing it -- "long and energetic", "wholly off-topic", "disproportional", etc.
  29. Clicking on the previous page and seeing the last comment on it reminded me of something else that I have observed to be generally true of non-accepters ... when they make false claims and those claims are shown to be false, they rarely acknowledge the error or withdraw the claim.
  30. John Mashey wrote

    People should argue the merits of this particular paper

    A.Scott responded with a tu quoque fallacy that implicitly -- and falsely -- accused Mashey of failing to do that himself. This was one in a string of fallacies, false statements, and tendentious claims on this subject that deflect from the valid point Mashey made in the quoted statement.

    Please make a point to review the actual post Mr. Mashey was so critical of located here.

    I have done so, and find Mashey's criticisms to be valid.
  31. John Mashey, when authors, whether psych or otherwise, produce empirical papers with no basic experimental design or sampling methodology, they need a statistician to help them (I'm sure Bell Labs would have seen to that).

    This concept ain't anything as highfaluting as a meme, it's just called maintaining basic standards.
    Moderator Response:

    Please provide proof that the authors of the paper did NOT follow existing standards in their field. Or the entirety of your comment will be deleted as a sloganeering and inflammatory meme. Sloganeering struck out.

    Edit: HAS' reply here was insufficient to justify his comment above; thus, the remainder of his above comment was struck out.

  32. @- HAS
    " when authors whether psych or otherwise produce empirical papers with no basic experimental design or sampling methodology they need a statistician to help them...
    ... its just called maintaining basic standards."

    Can you substantiate the allegation that LOG12 has no experimental design or sampling methodology and fails to maintain basic standards ?
    Or is this just a claim of incompetence in others that reveals rather more about the accuser than the accused?
  33. 1) I have for years argued for "shadow threads" or similar mechanisms to allow moderators to move comments to a shadow thread, thus deprecating them, but leaving a link behind. This is somewhat akin to RC's "Borehole" and equivalents elsewhere. In the real world, people build track records, positive or negative, that is useful in assessing opinions and even doing research. After all, the reaction to this paper is at least as interesting as the paper itself.

    2) This thread suffers somewhat from the large-scale disappearance of comments, both by renumbering and by the removal of context/motivation for other comments, such as my long one.

    3) Thus, I hope that HAS' comment #531 remains. If the requested proof is not provided, perhaps the remaining words can just be struck?

    Then we can get back to the paper, or at least the topic of this specific thread:
    'A simple recipe for the manufacturing of doubt'
    This one turns out to be a specific example of a common confusion technique, more later, when time.
  34. (-snip-).

    I posted a fair critique of Mashey's earlier post here, using the standard specifically identified by the moderators. I have reviewed it and the claims and statements in it are accurate.

    I edited a couple small points to clarify and be sure they were clear, factual and non-inflammatory, and am re-posting below. (-snip-):
    _______________________

    Mashey's lengthy critical review, in an earlier post (190) here, of McIntyre's very old blog post was, in my opinion, highly disproportional to the importance of the issue he critiques.

    A fact that prompted a simple explanation and a highly detailed follow-on article from McIntyre.

    Please make a point to review the actual McIntyre post Mr. Mashey was so critical of located here. It is important to gain perspective on the extent of Mashey's criticism, compared to the original post he criticizes.

    Mashey accused McIntyre of a "false citation", yet what he really found was a simple error with a single citation, which McIntyre readily admitted he made in one blog post from early 2005.

    McIntyre showed he quickly learned of the error and from then forward correctly reported, but forgot to go back and correct the original single post with the error. Which he promptly corrected when made aware it was still there.

    Which pretty much eliminates Mashey's entire lengthy criticism of this simple error.

    But Mashey also made a second criticism, with relevance to the instant topic (key excerpts):
    Use of Unsubstantiated Claim Published in a "Dog Astrology Journal," but actually cited as a preprint at Fred Singer's Website

    McIntyre took the key quote from Fred Singer's SEPP Website, where it appeared March 5, 2005, ~3 months before it was actually published in Journal of Scientific Exploration (JSE). This was a curious publishing practice and seems to have violated policy:

    “The material must not appear anywhere else (including on an Internet website) until it has been published by the Journal (or rejected for publication).”
    First, it would seem a fair question; is Mashey's referring to the Journal of Scientific Exploration as the "Dog Astrology Journal" acceptable under the comment policy here?

    As to Mashey's claim ... you'll have to follow very closely here ...

    Mashey criticizes McIntyre's "Use of Unsubstantiated Claim Published in a 'Dog Astrology Journal'" because, in a short blog post in early 2005, he used:

    - a quote by David Deming,
    - from an article posted in early 2005 at Fred Singer's site,
    - that was clearly identified as a "pre-print" of Deming's upcoming publication,
    - and was noted as due to appear in the Journal of Scientific Exploration in June 2005

    Mashey criticized McIntyre because he simply used a comment from the author, Deming, posted by the author, Deming, on someone else's, Singer's, website - 3 months before it was published in JSE.

    McIntyre had nothing whatsoever to do with any of that - yet Mashey claims McIntyre was responsible for violating JSE's publishing policy, which Mashey alleges prevents any appearance prior to publication.

    Even if Mashey's claim about appearance prior to publication is true, he provides zero evidence or proof that links McIntyre as responsible in any way. Mashey asks us to believe McIntyre somehow had control over Deming and Singer's decision to publish a pre-print version of Deming's paper on Singer's site.

    To re-cap, according to Mashey, McIntyre's alleged misdeed was, he "took [a] key quote" from a paper he found posted online, and used in a blog post he wrote before the actual journal publication. Let me remind the reader this accusation by Mashey - about a simple, since corrected, and readily admitted error, occurred in 2005.

    What I really find more interesting though, is the relevance to this topic.

    Mashey criticizes because McIntyre used a single quote from a paper not his own, posted at another site, 3 months before the paper was published, despite McIntyre having no association with or control over the paper or the preprint posting.

    Yet here, with the present paper, the authors released the article to, and promoted in, the media, months prior to its Journal publication. They do not note it is a pre-print - they state it is "in press."

    I have been told by a peer of the authors, that this policy is encouraged by the journal here, so it would not technically be a violation, as Mashey alleges with McIntyre.

    (-snip-).
    Moderator Response: Moderation complaints and sloganeering snipped.
  35. In 517, John Mashey made a valid point about " the false meme that papers using statistics need a statistician author to be valid". Rather than address the point, A.Scott launched a personal attack in the form of a tu quoque fallacy -- a form of ad hominem fallacy, which is a type of fallacy of irrelevance -- starting with "Like you did with your ...". And so he continues, at great length and with great energy, but his efforts are wasted on the intellectually honest.
  36. (-snip-).

    Not aiming to speak for HAS, (-snip-).
    Moderator Response: Actually, you are (attempting to do so). The point belonged to HAS; the onus is on HAS to support it.
    See comments currently #155-7, 160, 296, 329, 334, 431, 449, 451, 453 (some info snipped around sample bias and inappropriate attribution of causality) & 484.

    (-snip-).
    Moderator Response:

    You were tasked to "provide proof that the authors of the paper did NOT follow existing standards in their field"; your comments you cite from earlier in the thread are all your own opinion. Thus you make a circular and evasive reply which fails the task to which you have set yourself.

    Moderation complaints snipped.

  38. Sorry, the Journal of Scientific Exploration is the sort of thing you ask for under the counter in a plain brown wrapper.

    A nice example is "Unexplained Weight Gain Transients at the Moment of Death" - In the Words of William the Sane, this guy weighed sheep while he was suffocating them with a plastic bag. He concluded...there's no way to tell what he concluded.

    That may float A. Scott's boat, but not Eli's
    Moderator Response: Fixed link.
  39. Oh my goodness, surely this thread has run its course about ten times over.

    To satisfy those without the time, means or inclination to do their own research, and in the hope that this will put an end to the multiple diversions and distractions and/or amuse and/or broaden a reader's education, here is the paper from which the dog astrology journal rightfully earned its nickname. (Warning: Put down your coffee before clicking either link.)

    (I believe AScott is barking up the wrong tree - the squirrel ran the other way!)
  40. Sorry Eli, our posts crossed. I nearly posted a link to one of your excellent articles on the same topic, too.
  41. JohnMashey at 01:56 AM on 2 October, 2012

    Regarding your comment (3) ...

    My opinion on this issue is that there have been a number of questions raised about this current paper. Right or wrong, whether they meet standards or not, no one can tell for sure, as the information necessary to answer that has not been provided to date.

    There is bipartisan agreement that the title was sensationalized and inflammatory. There is clear proof that the title claim - associating belief in the Moon Landing hoax with motivated rejection of climate science - is supported only by the thinnest of threads, based on literally a handful of affirmative responses (10) out of 1145 total.

    And even that thin association disappears when the suspect responses are removed.

    Further - the data selection process has been questioned. Again here, there is at least some bipartisan agreement that collecting responses meant to include views from "skeptic-leaning" respondents solely through sites that appear largely antagonistic to those with skeptic-leaning views raises a legitimate question. The authors themselves indicated their plan anticipated obtaining responses from skeptic-leaning sites, at least tacitly agreeing such responses were a necessary part of the survey design.

    There are other concerns raised as well.

    Whether these concerns are well founded cannot be answered until the authors are more forthcoming with information. But I think the level of concerns raised, and that there is some bipartisan support for them, makes them legitimate questions that do relate to standards and the like.
    Moderator Response: Non-sequitur sloganeering (an endless repetition of points already discussed that do not help clarify relevant points, tiresome to participants and a barrier to readers) struck out. "Proof" is a mathematical concept. Unqualified "agreement" is not established.
  42. [Eli's link is bad because it ends with an extraneous quote mark.]

    From Sou's link:

    The JSE is the quarterly, peer-reviewed journal of the SSE. Since 1987, the JSE has published original research on consciousness, quantum and biophysics, unexplained aerial phenomena

    I suppose it doesn't sound so much like quackery if you avoid referring to them as UFOs.
  43. First, it would seem a fair question: is Mashey's referring to the Journal of Scientific Exploration as the "Dog Astrology Journal" acceptable under the comment policy here?

    There are nine bullet points. Which of them do you think one could fairly say applies? I can't see it.
  44. Sou and Rabett

    The JSE states "The Society for Scientific Exploration (SSE) is a leading professional organization of scientists and scholars who study unusual and unexplained phenomena. Subjects often cross mainstream boundaries, such as consciousness, ufos, and alternative medicine, yet often have profound implications for human knowledge and technology."

    It appears the paper you linked - whether you or I think its silly or not - meets the papers stated purpose.

    I also reviewed its Editors and Editorial Board. They appear to be degreed professionals, including some from top institutions. Do you hold the same disdain and ridicule for them and their qualifications?

    Either there are policies and standards or there are not. JSE is a professional publication edited by professionals from some of the top institutions. Whether you personally agree with their publishing guidelines and purpose or not is immaterial.

    I have questions about the qualifications and competence right now of the Psychology Journal here - but I can assure you if I called it the "Dog ate my methods" Journal it would be dealt with swiftly. And rightfully so.

    You don't get to follow policies and standards only when you agree with them.
  45. I think "bipartisan" is an interesting word to use in the current context ... I think partisanship is indeed a major factor in the rejection of climate science, and in the great amount of energy invested in criticizing this particular paper -- as opposed to the many very poor papers presented at sites like WUWT -- which is seen by the non-accepters as an attack on "their side".

    As for "bipartisan agreement", intellectually honest persons will note that there is also "bipartisan disagreement" if at least one person from each side disagrees.
  46. It appears the paper you linked - whether you or I think its silly or not - meets the papers [sic] stated purpose.

    That doesn't make it not be a Dog Astrology paper. And since it is one and meets the journal's stated purpose, that establishes that the journal is a Dog Astrology journal (among other things).

    I also reviewed its Editors and Editorial Board. They appear to be degreed professionals including top institutions. Do you hold the same disdain and ridicule for them and their qualifications?

    Fallacies of irrelevance.
  47. Honestly, there's no argument that if the authors removed all the responses from people endorsing extreme free market ideology, and removed all the responses from people supporting conspiracy theories, then the conclusions they drew would have been different. (Why else are 'skeptics' calling some of the responses 'fake' and 'fraudulent'? It's so they can eliminate them because they don't like the actual findings. That's not science, that's called 'fiddling the data' and would earn more than a slap on the wrist in scientific circles.)

    As for 'bipartisan support' - is there an instance where an author put either the content or title of her or his paper to 'opposing parties' in the blogosphere and asked them to choose the title and the content and the data they liked? (These posts seem to be getting sillier by the day.)

    (The title is great IMO.)
  48. #554 - JSE is a professional publication edited by professionals from some of the top institutions

    After reading post #554 I'm now very puzzled as to what it is that AScott objects to in LOG12.

    (Not saying there is no difference between the paranormal and conspiracy theories. Maybe I should ask that lizard person chap to explain the subtleties.)
  49. It is a fact that the JSE publishes many atrociously, monstrously, comically bad papers such as the dog astrology paper and the sheep weighing paper ... I think that you would find widespread agreement about that among scientifically literate persons, regardless of "partisan" factors. This is a fact regardless of the credentials or qualifications of journal editors. And it's a fact that justifies John Mashey's comments about the journal. I find it bizarre that anyone would defend this journal, or think that defending it can be part of a cogent argument. On the contrary, defending it severely undermines one's argument and casts doubt on one's rhetorical methods ... even more so than was already the case.
  50. I have questions about the qualifications and competence right now of the Psychology Journal here - but I can assure you if I called it the "Dog ate my methods" Journal it would be dealt with swiftly. And rightfully so.

    But surely any intellectually honest and competent person can see the major difference. The JSE published a paper on dog astrology, and thus is objectively a dog astrology journal (among other things) ... this is not a matter of personal opinion or "questions". PJ, OTOH, did not publish a "Dog ate my methods" paper -- did it? If you want to call it the Motivated Belief Journal, that would be valid since it did publish a paper about that topic.
