A simple recipe for the manufacturing of doubt

By Klaus Oberauer and Stephan Lewandowsky (Professor, School of Experimental Psychology and Cabot Institute, University of Bristol)
Posted on 19 September 2012
Filed under Cognition

Mr. McIntyre, a self-declared expert in statistics, recently posted an ostensibly unsuccessful attempt to replicate several exploratory factor analyses in our study on the motivated rejection of (climate) science. His wordy post creates the appearance of potential problems with our analysis.

There are no such problems, and it is illustrative to examine how Mr. McIntyre manages to manufacture this erroneous impression.

Our explanation focuses on the factor analysis of the five “climate science” items as just one example, because this is the case where his re-“analysis” deviated most from our actual results.

The trick is simple when you know a bit about exploratory factor analysis (EFA). EFA serves to reduce the dimensionality of a data set. To this end, EFA represents the variance and covariance of a set of observed variables by a smaller number of latent variables (factors) that capture the variance shared among some or all of the observed variables.

EFA is a non-trivial analysis technique that requires considerable training to be used competently, and a full explanation is far beyond the scope of a single blog post. Suffice it to say that EFA takes a bunch of variables, such as items on a questionnaire, and replaces that multitude of items with a small number of "factors" that represent the common information picked up by those items. In a nutshell, EFA permits you to go from 100 items on an IQ test to a single factor that one might call "intelligence." (It's more nuanced than that, but that captures the essential idea for now.)
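
To make this concrete, here is a minimal R sketch (R being the language of the command quoted further below) using simulated data rather than our survey data: five hypothetical items, all driven by a single latent trait.

set.seed(1)                                        # reproducible simulated data
trait <- rnorm(200)                                # the latent factor (the "intelligence" of the IQ example)
items <- sapply(1:5, function(i) trait + rnorm(200, sd = 0.5))   # five noisy items tapping the same trait
factanal(items, factors = 1)                       # EFA recovers a single factor with uniformly high loadings

Because each simulated item is just the trait plus noise, the one-factor solution soaks up almost all of the shared variance.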

One core aspect of EFA is that the researcher must decide on the number of factors to be extracted from a covariance matrix. There are several well-established criteria that guide this selection. In the case of our data, all acknowledged criteria yield the same conclusions.

For illustrative purposes we focus on the simplest and most straightforward criterion, which states that one should extract only factors with an eigenvalue > 1. (If you don't know what an eigenvalue is, that's not a problem - all you need to know is that this quantity should be > 1 for a factor to be extracted.) The reason is that factors with eigenvalues < 1 represent less variance than a single variable, which negates the entire purpose of EFA, namely to represent the most important dimensions of variation in the data in an economical way.
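
In R, this criterion takes one line; items here stands for a matrix of questionnaire responses, such as the simulated data in the sketch above.

ev <- eigen(cor(items))$values   # eigenvalues of the item correlation matrix
sum(ev > 1)                      # how many factors the eigenvalue > 1 rule retains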

Applied to the five “climate science” items, the first factor had an eigenvalue of 4.3, representing 86% of the variance. The second factor had an eigenvalue of only .30, representing a mere 6% of the variance. Factors are ordered by their eigenvalues, so all further factors represent even less variance. 
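
The percentages follow directly from the eigenvalues: five standardized items carry a total variance of 5, so each factor's share is its eigenvalue divided by 5. A one-line check in R, using the two values just reported:

round(c(4.3, 0.30) / 5, 2)   # 0.86 and 0.06, i.e., the 86% and 6% quoted above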

Our EFA of the climate items thus provides clear evidence that a single factor is sufficient to represent the largest part of the variance in the five “climate science” items.  Moreover, adding further factors with eigenvalues < 1 is counterproductive because they represent less information than the original individual items. (Remember that all acknowledged standard criteria yield the same conclusions.)

Practically, this means that people’s responses to the five questions regarding climate science were so highly correlated that they reflect, to the largest part, variability on a single dimension, namely the acceptance or rejection of climate science. The remaining variance in individual items is most likely mere measurement error.

How could Mr. McIntyre fail to reproduce our EFA?

Simple: In contravention of normal practice, he forced the analysis to extract two factors. This is obvious in his R command line:

pc=factanal(lew[,1:6],factors=2)

In this and all other EFAs posted on Mr. McIntyre’s blog, the number of factors to be extracted was chosen by fiat and without justification.

Remember, the second factor in our EFA for the climate items had an eigenvalue far below 1, and hence its extraction is nonsensical. (As it is by all other criteria as well.)
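
For contrast, a defensible version of the same call would let the eigenvalues determine the number of factors rather than fixing it by fiat. A sketch, assuming (as in Mr. McIntyre's script) that lew holds the item responses:

ev <- eigen(cor(lew[, 1:6]))$values           # compute the eigenvalues first
factanal(lew[, 1:6], factors = sum(ev > 1))   # then extract only factors with eigenvalue > 1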

But that’s not everything.

When more than one factor is extracted, researchers can rotate factors so that each factor represents a substantial, and approximately equal, part of the variance. In R, the default rotation method, which Mr. McIntyre did not override, is Varimax rotation, which forces the factors to be uncorrelated. As a result of rotation, the variance is split about evenly among the factors extracted.
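
In R's factanal the rotation argument defaults to "varimax"; writing it out makes explicit what was silently applied, and an unrotated fit can be requested for comparison (again assuming Mr. McIntyre's lew data frame):

factanal(lew[, 1:6], factors = 2, rotation = "varimax")   # the default: variance split across both factors
factanal(lew[, 1:6], factors = 2, rotation = "none")      # unrotated: the first factor retains the bulk of the variance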

Of course, this analysis is nonsensical because there is no justification for extracting more than one factor from the set of “climate change” items.

There are two explanations for this obvious flaw in Mr. McIntyre's re-"analysis". Either he made a beginner's mistake, in which case he should stop posing as an expert in statistics and take a refresher in Multivariate Analysis 101. Or else, he intentionally rigged his re-"analysis" so that it deviated from our EFAs in the hope that no one would see through his manufacture of doubt.

Comments 451 to 500:

  1. Tom Curtis, my point was that you can't even generalise to "people who are active in internet discussions on climate change" given the way the sample was drawn. The best you can talk about is the "people that completed the survey".
  2. Tom Curtis at 22:05 PM on 28 September, 2012
    A Scott @445, in appendix A of Heath and Gifford, it states that "The response format for all of the items below is the same: strongly agree to strongly disagree." The items below include the climate change items and the free market items, which therefore use the same response format.
    Tom - I looked in Appendix A and couldn't see what you note - it took a running search, which led me to this buried within:
    Perception of causes:
    (The response format for all of the items below is the same: strongly agree to strongly disagree.)
    I suspect it is just very poor placement, but taken as written it would only apply to the "Perception of Causes" block. It should have been above that block.
    Earlier it is stated that, "The response format ranged from 1 (strongly disagree or very unlikely, depending on the wording of the question) to 5 (strongly agree or very likely) for all questions." That statement is made explicitly with reference to the global warming questions, but given the statement in the appendix it applies also to the free market questions.
    There you make a larger leap - and one purely of faith - that a comment which, as you note, was in direct reference to a specific group of questions would apply to the entirety of questions. I see nothing in the Appendix A comment, even if it was in its proper location and not within a subgroup of questions, that ties the two comments in any way.

    I do agree with you, it is likely the 5 point scale applies to the entire document.

    That said - there is simply no way to tell what scale the authors used for the FM Questions from the paper.

    This is a published and, I assume, peer reviewed paper, yet we cannot answer the simple question of what scale was used for many of the important questions.

    More importantly neither could Lewandowsky. And yet they made specific direct representations as to the Heath & Gifford paper.
    It is certainly possible that the switch from a 5 to a 4 point Likert scale by Lewandowsky is responsible for some of the increase in the CC/FM correlation.
    I agree. But we cannot know that with any certainty. Nor can the Lewandowsky authors. Yet again, they made specific representations as to the Heath & Gifford 2006 paper, which cannot be made from the paper itself.

    Heath & Gifford was sloppy. In more ways than this. The Heath & Gifford peer review apparently never caught this or asked the question - and the scale of a central part of the data is certainly important and relevant to the conclusions.

    Lewandowsky, it appears, was sloppy here as well, on several issues involving the Heath 2006 paper alone. Some potentially quite significant. And peer review has apparently missed those again as well.

    Please don't take my comments as critical of you - they are not intended to be - I am simply trying to get to the truth and the facts. And as you are clearly more experienced, I am laying this out so you can review it.

    (I'd like to exchange emails - please send me a note from my A. Scott Wordpress contact form if you want.)
  3. A. Scott and Sou, re the comment "And I appreciate that it could take you and your 'team' months to years to learn EFA and SEM if they are the techniques you plan to use."

    It is of course somewhat snide, but it does show something of a clash of cultures between the soft end of psych and the hard end of stats as disciplines.

    The soft end of psych sees techniques such as EFA and SEM as easily accessible techniques to run on data sets to eliminate noise in the data and divine causal relationships.

    The hard end of stats, on the other hand, actually understands what the technique is doing, the assumptions the data needs to meet, and knows that measurement error is ever present, likely ill behaved and not (-snip-).

    I'm reminded of an academic statistician complaining of others coming to him with a pile of print-outs saying "I've got this data - now tell me what it means".

    I think the problem here is that (-snip-).
    Moderator Response: Inflammatory snipped.
  4. Tom Curtis at 07:42 AM on 29 September, 2012
    HAS @449, while the results of Lewandowsky's paper re conspiracy theories are dubious at best, the same cannot be said for the results re the CC/FM correlation, which are very robust.
    Tom - the correlations as I understand it are based on the accuracy of the responses, in particular the FM responses. Lewandowsky based the FM section and questions directly on Heath et al. 2006 and noted the parallel with the findings from their previous work.

    Without knowing the method, and therefore the accuracy, of that previous work they cannot make any such claim. There may be other potential issues involved.
    It must be understood, however, that they sample, and are only intended to sample, a specific population, ie, people who are active in internet discussion on climate change.
    I agree. They set out to sample "blog denizens" to determine correlations between skeptic beliefs and "motivated rejection of climate science."

    There is little relevance/importance to any other group. That said, as the purpose was to examine the links between skepticism and rejection of climate science, it is imperative to collect data from those with legitimately skeptical leanings. We know this was not done. While there may well be some true skeptics at the strongly "anti-skeptic" blogs, using the "opposition" to try and collect data is highly questionable - at best.
    As such, they show that the split on economic theory between acceptors and rejectors of climate science is very marked in that group. More so than in the general population; but how much more so is open to question based on the comparison to Heath and Gifford based on other confounding factors.
    Again - the analysis is only as good as the integrity and quality of the data. Both the data collected and the data and findings from the underlying work they base theirs on. There are "confounding" factors with that underlying work, and with the Lewandowsky authors' use of it, perhaps more than you've already id'ed.
    As to Steve McIntyre, while some of his posts have been interesting, he currently appears to be on a campaign of ad hoc pruning of the data set so that no "skeptic" response that does not fit his preferred image of what AGW skeptics should look like survives. Another name for that is doctoring the data.
    Data integrity and quality control are far different from "ad hoc pruning." You do yourself a disservice by this position. We both know that is not the intent. Just as we both know suspect data is pretty easily identified.

    With no statistical analysis, and as a complete layman without prior knowledge, I went thru a large block of responses and noted ones I thought looked suspect. My selections largely matched the list of several others who did much more robust analysis - such as you've done.

    You seem to consider it picking at nits - why bother, when it can be shown it doesn't significantly affect the final result? But you would be equally quick to criticize - as has been done to me here with a denigrating "eyecrometer" comment - if I said use my guesses because they largely matched the more analyzed results.

    For some it's not picking nits. It's an attention and commitment to doing all the steps fully, even though there may be no need to do so.

    And to be very clear that is simply my personal observation.
  5. HAS @451, it is of the nature of surveys that you can only generalize from those who respond to them. This does not make them useless. So long as the respondents are reasonably representative of a given demographic you can generalize to that demographic. So far there is no firm evidence that the respondents were not representative. In fact, in my experience the "skeptical" respondents to the survey were far more representative of "skeptics" in general than are McIntyre's "lukewarmers".

    What is more, it should not be overlooked that the survey was posted on a "skeptical" site, although not one that was approached by Lewandowsky (according to Lewandowsky). Specifically it was posted on Junk Science on Sept 24th. Although that was after Lewandowsky's comments at Monash University, between those comments and the close of the survey, the number of respondents increased from 1100 to 1377 respondents. Comments on the survey died off quickly at pro-science blogs, suggesting that most responses were received within a few days of posting. That strongly suggests the leap in numbers after Sept 23rd came from an initial surge of respondents following the Junk Science posting. Given the lack of evidence for more than 2 scammed responses in the final collection of data, it appears likely that a significant number of the "skeptical" responses came from Junk Science, with further "skeptical" responses coming from "skeptics" who frequent pro-science blogs.
  6. A Scott @454, the items above the Perception of Causes items had different available responses and are listed in detail. That the following items are not listed in detail strongly suggests that the response format under Perception of Causes applies not only to that section, but to all following items. Further, the description in the main body of the paper applies to the first three items, showing a five point scale was used for Perception of Consequences. That fact suggests the response format was not restricted in scope as you suggest.

    I had noted these points as relevant factors before I made my "leap of faith" and thought you would also. I should also note that only if my "leap of faith" is incorrect have Heath and Gifford been "sloppy". Perhaps rather than making unfounded criticisms of scientists, you should accept the evidence before you. If you are unwilling to do that, perhaps you should email Heath for clarification.
  7. They set out to sample "blog denizens" to determine correlations between skeptic beliefs and "motivated rejection of climate science." ... as the purpose was to examine the links between skepticism and rejection of climate science, it is imperative to collect data from those with legitimately skeptical leanings. We know this was not done. While there may well be some true skeptics at the strongly "anti-skeptic" blogs

    You keep using the unquoted term "skeptic" in a way that a) flies in the face of the facts ... those facts being that non-accepters of climate science are generally not skeptics, as they readily accept numerous incorrect, even absurd, claims, many of which are contradictory, as long as the claims are counter to the scientifically supported claim that "there's a big climate problem"; some of them spend huge amounts of time here and elsewhere expounding on supposed flaws in this study but have never put in any effort to find flaws in the numerous erroneous studies posted at places like WUWT that supposedly support non-acceptance -- that's not skepticism, it's motivated belief, and b) that seriously misrepresents the intent of the authors of the study since, even if (a) were not true, what they mean by "skeptic" (in scare quotes) is not actual skepticism. You would gain a considerable amount of credibility (or at least you would avoid a very definite loss of credibility) if you would correctly put the term "skeptic" in scare quotes when it is used to refer to non-accepters of the view that "there's a big climate problem".
  8. A Scott @456, in social sciences (as also in physical sciences) the easiest person to fool is yourself. To avoid this possibility, there is a very strong convention in social sciences that you do not exclude data as "suspect" unless you have very clear grounds to do so. "Clear grounds" does not consist of just one line of evidence. With regard to the two most suspect responses, there are three lines of evidence suggesting that they are scammed responses. Had this not been the case, I would not think they should be excluded from the survey. Even as it is, I think results both including and excluding them should be given in the supplementary material at least. That is because they may just possibly be genuine, and the decision as to whether to consider them genuine or not should be left to the reader if possible. Space constraints may prevent that, in which case the reasons for considering them suspect should be stated, and (if not included in the analysis) the data for those two respondents should be included with the data for the paper when provided to anyone that requests it.

    In contrast to this conservative approach, McIntyre wishes to screen out anything for which he finds a single reason to consider it suspect. The problem is that using that approach, the explanation for the "odd" feature of the data that raises suspicion is by its nature ad hoc. If you think data is suspect for some reason, you must apply an independent test to the data to confirm or falsify that supposition. McIntyre does not, or at least, and at most, his independent test amounts to a claim of how he thinks "skeptics" would really respond. Given that McIntyre is motivated to find problems with the data, the correct conclusion is that he is simply fooling himself.

    The same is also true about Shollenberger's claims to have detected scammed responses based on supposed inconsistencies in responses.

    However, there is no need to worry. Both McIntyre and Shollenberger will discover the error of their analysis as soon as they discover the actual effect of deleting their supposed scammed responses on the critical CC/CY correlation. At least, they will if they ever actually tell us what that effect is.
  9. I should note that A. Scott's misuse of the term "skeptic" is generally accompanied by radically mistaken claims about what can be inferred from what. Even if we were to put the proper scare quotes around the word in his "This is a study whose stated purpose is to define what skeptics think regarding climate science...", it is not at all "exactly accurate" ... the stated purpose is nothing of the sort. In support of his claim, A. Scott quoted a statement that says something very very different ... a statement that the study was designed to investigate, not "define", motivation (not beliefs) of rejection of science (not climate science), and it was not a statement about either skeptics or "skeptics", but rather about those individuals who both reject science and choose to get involved in the debate about climate science. Whether such people are skeptics or "skeptics" is not presumed.

    If one is going to credibly claim that their view is not only accurate but "exactly" accurate, one must at least show a bit of allegiance to those words, and one must possess the skills needed to determine what exactly and accurately follows from a statement.
  10. "Another name for that is doctoring the data."

    Data integrity and quality control are far different from "ad hoc pruning."


    Indeed they are, but what McIntyre does is ad hoc data pruning.

    You do yourself a disservice by this position.

    I don't think Tom does himself a disservice by telling the truth.

    We both know that is not the intent.

    a) None of us can know that. b) Tom said nothing about intent ... remember that the whole subject here is motivated belief, and that can lead people to do all sorts of bad things without explicitly intending to do anything bad.
  11. While there may well be some true skeptics at the strongly "anti-skeptic" blogs, using the "opposition" to try and collect data is highly questionable - at best.

    It is this assertion that isn't just questionable, but completely wrongheaded. As you quoted them, their intent was to investigate the motivation of those who reject science and enter into the debate about climate science. To investigate that but a priori leave out any sites where the debate takes place where people who actually know something about climate science hang out would be absurd.
  12. Tom Curtis at 12:19 PM on 29 September, 2012

    Tom

    There is zero definitive - nor even remotely definitive - proof as to the scale used for the FM questions. Even if we assume (which I agree is highly likely) that the note in Appendix A was intended to apply to the remaining questions in the Appendix, it does not address the scale of the responses to those questions in any manner whatsoever.

    And as you note, the earlier reference in the paper to a 5 point scale solely references the "Beliefs about Climate Change" questions:
    Beliefs about global climate change. These were measured in terms of three conceptually different beliefs: (a) the belief that global climate change is occurring, (b) the beliefs about its possible causes, and (c) the beliefs of its possible consequences. The response format ranged from 1 (strongly disagree or very unlikely, depending on the wording of the question) to 5 (strongly agree or very likely) for all questions
    Further down the page under "Other Variables" it noted:
    Other variables. Perceived knowledge about the causes of global climate change was queried by asking the following question: “I would say my technical knowledge about global climate change is” minimal, limited, moderate, extensive, and professional, coded from 1 to 5.

    Support for free-market ideology1 was measured with six items, such as “Free and unregulated markets pose important threats to sustainable development” (reversed item).

    Thompson and Barton’s (1994) scales were used in their original form to measure environmental attitudes
    And Appendix A had the previously discussed comment:
    Perception of causes:
    (The response format for all of the items below is the same: strongly agree to strongly disagree.)
    1. Global warming is mainly due to natural causes, not human activity.
    2. The main causes of global warming are human activities.
    3. Global warming is merely a natural fluctuation, not caused by human activity.
    4. I am quite sure that human activities are to be blamed for global warming.

    Perception of consequences:
    1. Unlike what most s .......
    As you can see - that statement is posted within a subgroup. We cannot know as written and published if they meant the scale applied to the rest of that sub-group, the rest of that group (Perceptions, Self Efficacy, Intention), or if it applied to all the rest of the questions in Appendix A.

    There is no identification of the scale used for the Free Market questions. Even in the "Other Variables" each of the other variables is listed along with a note about the scale - with the exception of the Free Market variable.

    If the only way you can make a case is by using "suggests", that conclusion is not supported by the facts.

    As to contacting the author ... first, this is - like Lewandowsky - an allegedly peer reviewed paper. Yet one cannot draw any accurate conclusion - as you note - without knowing the scale.

    One should not have to contact the authors, many years after peer review and publication for so central a question.

    Unless you are using that paper as the basis for your own work. At that point I firmly agree with you - any author using this paper for their own work must find the answer to that question.

    You agree one must know whether the data was based on a 4 or 5 point scale to make any accurate or meaningful analysis or conclusion.

    Saying the paper "suggested" it used a 5 point scale should not be sufficient for true scientific work.
  13. mk at 12:52 PM on 29 September, 2012

    So they thought they might find that some who are not skeptics might be motivated to reject climate science as well? Do I have your response correct?
  14. No, A. Scott, my response is what I gave, with all its complexity, not your ever mistaken interpretations of the things you read.
  15. P.S.

    So they thought they might find that some who are not skeptics might be motivated to reject climate science as well?

    They did surely expect to find that since, as I just noted, in their eyes at least the non-accepters of climate science aren't skeptics.
  16. To clarify my 464, one of those complexities is that the set of people whose motivations they were investigating were those who both reject science and choose to get involved in the debate about climate science. If that's what they are investigating, then it would be rather bad science to presume the other characteristics of such people; what they "thought they might find" isn't relevant. And indeed there are non-skeptical accepters ... for instance, I know a number of people who think that global warming is a serious threat but know nothing about climate science and also think that 9/11 was a government plot and gullibly buy into a number of other conspiracies. Whether they involve themselves in climate science debates I don't know. But if investigating the motivations of the subject set one presumed who comprises the subject set, that would be bad science. You seem to assume that the whole point of the study was to score points for one side of a climate science debate, rather than start with the assumption of good faith on the part of the investigators.
  17. A Scott @462, I have never been impressed by Pyrrhic Skepticism. I find it less impressive when disguised as mere pedantry, and less impressive still when inconsistently applied.
  18. And it really should be mentioned how ironic and backwards the complaint about including 'strongly "anti-skeptic" blogs' is (as if the point of blogs that are run by people who actually know something about climate science is to be "anti-skeptic"), as it paints the denizens of such blogs as not rejecting science. Not only does AS imply that the authors didn't expect to find "pro" people (what AS insists on calling "non-skeptics") at those sites who reject science, but AS implies that they were right to lack such an expectation. Otherwise, one would expect those in the "skeptic" camp to be eager to see a study of people who reject science that included numerous members of the "opposition", rather than saying that the "opposition" should have been excluded from such a study, leaving only the (oh so) numerous members of the "skeptic" camp who reject science among the subject set.
  19. Tom - a slightly more specific response ...

    Lewandowsky based the free market questions on Heath 2006.
    The free-market items were taken from Heath and Gifford (2006).
    They repeatedly tied their work and results to Heath 2006 in the paper, and supported their conclusions based on it:
    Rejection of climate science was strongly associated with endorsement of a laissez-faire view of unregulated free markets. This replicates previous work (e.g., Heath & Gifford, 2006)
    You seem to indicate the issue I raise is immaterial, unimportant ... not worth concern. Yet Lewandowsky's work has little value, and unknown accuracy, without knowing the answer.

    They made Heath & Gifford 2006 an integral part of their paper and its conclusions. And like many other issues raised, we cannot tell the accuracy of that work in this instance either.

    I think you agreed that without identifying/knowing the scale used there is little value in the results.
  20. mk at 13:37 PM on 29 September, 2012
    P.S. So they thought they might find that some who are not skeptics might be motivated to reject climate science as well? They did surely expect to find that since, as I just noted, in their eyes at least the non-accepters of climate science aren't skeptics.
    You are quite confused I believe.

    Throughout their paper they talk about "skeptics". The term is used along with "deniers/denial" - those who deny anthropogenic climate change.
    climate "skeptic" blogs have become a major staging post for denial,

    One 'skeptic" blogger (Steven McIntyre of 'Climateaudit") has testified ...

    Links were posted on 8 blogs (with a pro-science stance but with a diverse audience); a further 5 "skeptic" (or "skeptic"-leaning) blogs were approached
    Where your confusion comes in is that they do make a point to differentiate those skeptics - the deniers of climate change - from skepticism, which they believe is well and good.

    They did not talk about "skepticism" leaning sites for example ... they talked about skeptic leaning sites. Sites where rejection of science was the primary slant.
  21. AS: I think Tom's #467 was in regard to your "We cannot know". As he notes, this is an "unimpressive" complaint, being a philosophic dead-end, pedantic, and inconsistently applied. The rational epistemological standard for empirical questions is not knowledge, but rather rational expectation based on a Bayesian evaluation of all the evidence. There is no plausible, well-grounded basis for your objection.
  22. I had noted these points as relevant factors before I made my "leap of faith" and thought you would also. I should also note that only if my "leap of faith" is incorrect have Heath and Gifford been "sloppy"


    Tom -

    You basically say errors and omissions are fine as long as your guess as to what they meant turns out correct.

    I don't buy that and cannot believe you do either. These are published, peer reviewed papers. Expecting them to meet a minimal standard - that they are accurate and correct, and report at least the important parts correctly and in at least the minimum detail necessary - does not seem to be that high a bar.

    These are supposed to be professional papers by highly experienced practitioners in the field.
  23. You are quite confused I believe.

    I believe otherwise and you have failed to identify any point on which I am confused. (You have also failed to address any of my points.)

    Throughout their paper they talk about "skeptics"

    All your quoted examples have scare quotes, which you then immediately drop:

    they do make a point to differentiate those skeptics

    Again, you would be (somewhat) more credible if you could bring yourself, in your own statements that are not quotations, to put that word in scare-quotes when referring to the non-accepters.

    They did not talk about "skepticism" leaning sites

    A very odd strawman ... no one said they did.

    they talked about skeptic leaning sites

    No, they talked about "skeptic" leaning sites, as per your own quotes -- sites that are populated by people who are not skeptics, but only claim they are.

    Sites where rejection of science was the primary slant.

    Well, yes, that they are, as a matter of fact.
  24. mk at 12:52 PM on 29 September, 2012
    Even if we were to put the proper scare quotes around the word in his "This is a study whose stated purpose is to define what skeptics think regarding climate science...", it is not at all "exactly accurate" ... the stated purpose is nothing of the sort. In support of his claim


    mk ... I primarily put "quotes" around things that are actually quotes. And sometimes - like everyone - I paraphrase what something says. Then I do not use "quotes".

    Once in a while I probably do one or the other wrongly.

    (-snip-).
    Moderator Response: Inflammatory snipped.
  25. I don't buy that and cannot believe you do either.

    So you not only put words in Tom's mouth but imply that he's lying with them.
  26. If you have to resort to a nonsensical punctuation battle as your primary complaint, I think it's pointless to discuss further.

    The difference between calling someone a skeptic and calling them a "skeptic" is not at all nonsensical; educated writers understand the concept and meaning of scare quotes, and you are an educated writer. Again, your refusal to put scare quotes around "skeptic" when referring to non-accepters robs you of credibility ... these people are not skeptics, but you claim they are each and every time you omit the quotes.

    And by saying that I "have to resort" to something, you are making an accusation of bad faith where there clearly is none. I have made my point clear and you have simply refused to address or acknowledge it.
  27. mk - no one - except perhaps you - has any question what is meant by skeptic ... with or without "scare" quotes (see - there I was quoting you).

    Casual contemporaneous writing on a blog comment is not a place where every utterance must meet a perfect grammatical standard.

    In the context of climate change discussion the word skeptic - with or without quotes - is rarely confused as to its meaning.
  28. P.S.

    I primarily put "quotes" around things that are actually quotes

    When referring to non-accepters, the word "skeptic" is actually a quote, because it is a self-description, not an accurate claim. That is why intellectually honest people assiduously put the word in quotes and never omit them. But you have, as I have noted, done the opposite.
  29. A Scott @472, no. I was saying that my inferences were reasonable and that the fact that they were inductive inferences based on available evidence rather than deductive inference did not make them less so. They certainly did not warrant your calling them "leaps of faith", your term, which I used ironically.
  30. mk at 14:48 PM on 29 September, 2012
    "I don't buy that and cannot believe you do either."

    So you not only put words in Tom's mouth but imply that he's lying with them.


    Words in Tom's mouth?

    What part of "I ... cannot believe" leads you to claim stating my belief is putting words in Tom's mouth? Or calling him anything? It was a simple statement of opinion. My opinion.

    And yes, I am going to keep right on writing, as does all of the real world, skeptic - without the quotes - instead of your apparently preferred 'anthropogenic climate change non-acceptor.'

    I think everyone will continue to understand what I mean.
  31. In addition, no paper, no matter how professional, will be free of some element of wording that can be quibbled at by those determined to find fault. That is irrelevant to the quality of the paper, which is judged by whether it will mislead informed people acting in good faith. Such people will have little doubt that Heath and Gifford used a five point scale. Even you do not doubt it. Therefore there is no problem with what Heath and Gifford wrote.

    Of course, if informed people acting in good faith have a genuine doubt about the issue, and it is a matter of concern, they can simply contact the authors.
  32. mk - no one - except perhaps you - has any question what is meant by skeptic

    Strawman. Again, the issue is not what that word means, but the difference between calling someone a skeptic and calling them a "skeptic" ... those mean very different things.

    Casual contemporaneous writing on a blog comment is not a place where every utterance must meet a perfect grammatical standard.

    Strawman. I have said nothing about perfect grammatical standards, I am talking about radical differences of meaning. Where I write a term that means "climate-science-rejecter who pretends to be a skeptic", you write a term that means "a person who maintains a doubting attitude".

    In the context of climate change discussion the word skeptic - with or without quotes - is rarely confused as to its meaning.

    And yet here you are, confusing these very different terms.
  33. What part of "I ... cannot believe" leads you to claim stating my belief is putting words in Tom's mouth?

    The words you put in his mouth ("you basically say") were

    errors and omissions are fine as long as your guess as to what they meant turns out correct.

    And then you say you cannot believe he means what he said, which is a direct accusation of lying.
  34. Tom Curtis (some way back), I didn't say of surveys you can't "generalize from those who respond to them". As I'm sure you are aware it is exactly the purpose of careful design of one's sample frame and sampling methods to allow (as best possible) inferences from the sample to the population.

    What I said was L. et al. failed to do this, and therefore one can not draw any conclusions from their study beyond the fact that the respondents answered thus.

    In fact there is nothing in L. et al. that would give any confidence that the sample frame or sampling methods were considered a priori. Nor is there any systematic attempt to show how the sample lines up against other studies of these attributes to compensate.

    There is one attempt in the paper to slide past the weaknesses and make a claim that they "designed the study to investigate what motivates the rejection of science in individuals who choose to get involved in the ongoing debate about one scientific topic, climate change."

    The sample simply wasn't selected with that purpose in mind - if they had wanted to make a serious attempt at that, posting an online questionnaire in a rather haphazard way isn't it.

    I am sure you understand that.

    But perhaps you don't? If not, the basic process, if you want to do a survey reflective of a population like this, is: first you define it. Then you work out how you are going to get to talk to them. Then you work out how you are going to select the sample.

    The paper offers only a post facto definition of the population (quote above), and tells us respondents were self-selected by following a link at some blogs (and we don't know how the blogs were selected, but it seems there might have been an attempt to stratify the sample using the undefined criteria "pro-science" and "'skeptical'", but that the authors failed in their attempt to achieve this).

    So as I said - only tells us about the people that completed the survey.
  35. That is irrelevant to the quality of the paper, which is judged by whether it will mislead informed people acting in good faith. Such people will have little doubt that Heath and Gifford used a five point scale. Even you do not doubt it.

    Indeed, it's not only people acting in good faith who don't doubt it.
  36. I was saying that my inferences were reasonable and that the fact that they were inductive inferences based on available evidence rather than deductive inference did not make them less so.

    I wonder if AS can believe you mean that, rather than the words he put in your mouth that you never remotely said or meant.
  37. Fair enough Tom ... I accept the words could be misconstrued as negative ... they were not intended as derogatory - just to convey that it was a decision made without definitive support.

    'Made a conclusion without having all the facts to support it' just sounded worse to me.

    And I wasn't "putting words in your mouth" either ;-)
    ... but rather trying to succinctly convey what I thought your meaning was.

    I think professional standards are important when talking about peer reviewed, published professional papers.

    And I see the failure to adhere to a high standard - to ensure at least the minimum information necessary is provided, and that if other information is a basis for your conclusions, you make sure you verify and understand exactly what you are using - as important issues.

    Here we can guess - likely with some accuracy - what the missing information is. But we should not have to guess.

    As to Lewandowsky we cannot even guess. We have no clue - he did not include the important information in methods as to how he corrected the scales between the 5 point and 4 point data. But then we apparently don't have enough info on core methods to be able to review his work either, so I guess this 4 vs 5 point issue is small by comparison.
  38. A Scott @469 said, "You seem to indicate the issue I raise is immaterial, unimportant ... not worth concern. Yet Lewandowsky's work has little value, and unknown accuracy, without knowing the answer."

    That rather depends on the issue you are talking about. If it is the precise location of the indication that all following items use a five point scale? Well, yes, that is immaterial and unimportant, as it raises no genuine question about that fact. If it is the difference between using a five point scale as per Heath and Gifford, or a 4 point scale as in Lewandowsky? No, that is not immaterial. It is probably part of the explanation of the difference in the correlations between CC and FM items obtained between the papers, but not the major difference.

    It is, however, not a sufficient factor to eliminate that correlation. The key point here is that Lewandowsky et al. consider their paper to confirm Heath and Gifford's on this point. That means that at the level of precision of their hypothesis there is no substantive distinction between a correlation of -0.8 and a correlation of -0.4. That is, they are unwilling to draw any conclusion whose truth depends on the difference between those two correlations.

    I am sure that part of Lewandowsky's resistance to the idea that the two almost certainly scammed responses are relevant is because excluding them will not even halve the strength of the correlation they found - which at the level of precision in their claims means it makes no difference at all. If that is the case, they are correct in that judgement, or at least would be if there were no other issues with the paper. My concern is that the reduction in correlation strength means that other concerns, in addition to the effect of the two scammers, completely erode the result.
  39. mk at 15:07 PM on 29 September, 2012

    Ok - enough.

    (-snip-).

    We're done here.
    Moderator Response: Inflammatory snipped.
  40. Of course, if informed people acting in good faith have a genuine doubt about the issue, and it is a matter of concern, they can simply contact the authors.

    I daresay that informed people acting in good faith who have a genuine doubt about the issue and for whom it is a matter of concern would have contacted the authors rather than asserting that a scientific effort "has little value" and falsely accusing people of leaps of faith.
  41. It is impossible to respond intelligently when you quote one comment then claim you were talking about another comment altogether.

    That is an utterly false charge. I will grant that it is also not intelligent.
  42. A Scott @487, first, be assured I would let you know if I thought you were putting words into my mouth. As it is, you have not (mk, please note).

    Second, I am sure Lewandowsky is well aware of the impact of using a 4 or 5 point scale. I believe, however, that at the level of precision of his conclusions, the effect is negligible (see my preceding comment), at least so far as comparison of the FM items is concerned.
  43. I want to reiterate that

    "You basically say errors and omissions are fine as long as your guess as to what they meant turns out correct"

    was A.Scott putting words in Tom Curtis's mouth that make Tom look like a grossly intellectually dishonest person when Tom said nothing of the sort. It was a vile act.
  44. "mk, please note"

    I directly quoted A.Scott claiming that "you basically say" something. Did you in fact say that "errors and omissions are fine as long as your guess as to what they meant turns out correct"? If not, then it logically and objectively follows that he put words in your mouth.
  45. mk @495, A Scott merely summarized his understanding of what I said. It is clear that that was his intent. I corrected his misunderstanding @479, and he accepted the correction.

    FYI, when somebody writes that "you xxx say" where xxx is some qualifier other than "truthfully" or "exactly" or "falsely", it is an indication that they are summarizing your opinion as they understand it. The time to get upset about it is only after you correct them and they insist that their understanding of what you meant is better than yours (which I have seen happen). That is probably not so much the time to get upset, as the time to remember there is no point conversing with the pointless.
  46. "merely summarized his understanding of what I said"

    That is not mutually exclusive with putting words in someone's mouth and is in fact usually the case when someone does.

    As for the rest, I won't tell you when to get upset ... please don't tell me when I should.
  47. thanks Tom ...

    I probably won't agree with you a lot, will call you out if I disagree, will push, and strongly advocate for my beliefs, and might be too critical, less than civil or harsh at times. That is the nature of online discussion and debate.

    Understand however, that thru all that you have my respect. Few have the professionalism and courage to say what they think is right, and correct, even when understanding the fallout.
  48. > Few have the professionalism and courage to say what they think is right, and correct, even when understanding the fallout.

    Tom Curtis might very well be the only one in this set, as far as this sorry story is concerned.

    I never understood how auditors could argue, with a straight face, against data mining while mining data themselves.
  49. Speaking as a professional and a person, I resent that remark!

    (I suspect that what AScott means is that he agrees with some of what Tom Curtis has written.)
  50. For all those here so strongly defending this work - contemporaneous comments about the survey, from one of the "pro-science" blogs that released it:

    http://scienceblogs.com/deltoid/2010/08/29/survey-on-attitudes-towards-cl/

    I'd be particularly interested in Bernard J.'s comments, as he has been one of the fiercest supporters.

