With ‘fake news’ fast becoming a global issue, the ability to effectively correct inaccurate information has never been more pertinent. Unfortunately, the task of correcting misinformation is far from trivial. In many instances, corrections are only partially effective and people often continue to rely on outdated information. This is known as the continued-influence effect, and it has been of great interest to cognitive psychology, political science, and communication studies.
One recommendation that has emerged from the literature on optimal misinformation correction is that it is best to avoid repeating the initial misconception, because repeating the misconception risks increasing its familiarity. For example, truthfully stating that “playing Mozart to an infant will not boost its IQ” mentions both “Mozart” and “boosting IQ”. This makes the link between the two concepts more familiar, even though the statement aims to correct the myth. The problem with increased familiarity is that people are more likely to think that familiar information is true.
The Familiarity Backfire Effect
Some reports even suggest that the increased familiarity associated with a correction can be so detrimental that it causes a familiarity backfire effect. A backfire effect occurs when a correction ironically increases an individual’s belief in the original misconception. For example, if people were more likely to think that listening to Mozart can boost a child’s IQ after being told that this is false (compared to their belief levels prior to the correction), this would be considered a backfire effect.
However, scientific evidence for this phenomenon has been fairly thin on the ground. In fact, the most cited example of the familiarity backfire effect comes from an unpublished manuscript, which reports an experiment correcting misinformation about vaccines. We know that corrective attempts for misinformation about contentious subjects are likely to backfire because people’s worldviews are being challenged. It is therefore unclear whether the backfire effect described in the manuscript arose solely from familiarity.
Ullrich Ecker, Stephan Lewandowsky, and I designed a set of experiments to see how detrimental familiarity really is to the updating of beliefs. We focused on many different topics to avoid confounding worldview backfire effects and familiarity backfire effects. The experiments were reported in an article that recently appeared in the Journal of Experimental Psychology: Learning, Memory, and Cognition.
The experiments were based upon what we know about how memory works. It is commonly assumed that there are two types of memory retrieval: strategic and automatic. Strategic retrieval allows you to remember details such as where or when you learnt the information, and whether the information is true or false. However, strategic retrieval takes effort and may therefore fail when you are distracted or otherwise unable to expend that effort.
Automatic memory retrieval, by contrast, does not take effort and is based largely on a perception of familiarity. For example, you will have little difficulty recognizing that the word “Mozart” occurred in this post based on its sense of familiarity alone, whereas you might have to engage in more strategic retrieval to recall in what context it appeared.
You are less likely to be able to use strategic memory retrieval (and are more likely to rely on automatic memory retrieval) if (1) you are an older adult, (2) you are not provided with enough detail about why the information is incorrect, or (3) there is a long delay between when the correction is presented and when you are asked to remember it.
Given that we know under what circumstances people are more likely to use automatic processes, we can manipulate the extent to which people rely on familiarity when they are updating their beliefs.
Experiment 1: Young Adults
We presented 100 undergraduate students with 20 familiar myths (for example, “Ostriches hide their head in the sand when frightened”) and 20 facts (for example, “Dogs shouldn’t eat chocolate”). We asked participants to rate how much they believed each item on a scale from 0 to 10. We then informed them what was true and what was false, either by briefly stating “this is true/false” or by giving participants a short evidence-based blurb explaining why the information was true or false. The participants then re-rated their belief either immediately, after half an hour, or after one week.
If a familiarity backfire effect were to be elicited in an undergraduate population, we would expect it to occur when corrections were brief and/or followed by a long delay. The figure below shows what a backfire effect could hypothetically look like: belief in the myth after the correction would be greater than people’s belief prior to the correction:
We did not find this to be the case—the figure below shows the pattern of results that we actually found. Note that the dotted horizontal line refers to the belief prior to the correction. Any bar that falls below that line therefore represents successful memory updating. As belief levels after the correction consistently remained below belief prior to the correction, participants updated their beliefs in the right direction.
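To make the distinction concrete, here is a minimal sketch (illustrative only, not the study’s actual analysis code) of how a single item’s post-correction rating can be classified relative to its pre-correction baseline on the 0–10 belief scale used in the experiments:

```python
def classify_update(pre_belief: float, post_belief: float) -> str:
    """Compare belief ratings (0-10 scale) before and after a correction.

    A 'backfire' is a post-correction rating ABOVE the pre-correction
    baseline; any drop below the baseline counts as successful updating.
    """
    if post_belief > pre_belief:
        return "backfire"   # belief in the myth increased after the correction
    if post_belief < pre_belief:
        return "updated"    # belief moved in the corrected direction
    return "no change"

# Hypothetical example: a myth rated 8 before the correction and 3 afterwards
print(classify_update(8, 3))  # -> updated
```

In the figures above, this is simply the comparison between each bar and the dotted baseline: bars below the line correspond to "updated", and a hypothetical bar above it would be "backfire".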
Familiarity did play a role in that both brief corrections and a long delay led to corrections being less effective, but there was no evidence for a familiarity-based backfire effect.
As previously noted, age is also known to be a factor in whether people can use strategic memory processes. It is therefore possible that the familiarity backfire effect only exists in older adult populations. This idea was tested in our second experiment.
Experiment 2: Older adults
We asked 124 older adults over the age of 50 to participate in a study very similar to Experiment 1. The only change we made to the Experiment 1 design was adding a three-week delay condition, to maximize the chance of eliciting a familiarity backfire effect. The results of this study can be seen below:
Belief after the correction remained well below belief levels prior to the correction. Even under circumstances most conducive to a familiarity backfire effect (i.e. after a long period of time, with a correction that provided little detail, in older adults), we failed to find it.
However, we again found that familiarity impacted the efficacy of the correction: middle-aged participants under the age of 65 were better at sustaining their corrected beliefs over time than those over the age of 65.
It might be a relief for fact-checkers and news accuracy experts that repeating the misinformation when correcting it will most likely not worsen people’s beliefs. This is also good news because correcting information without mentioning the original misconception can be extremely challenging. In fact, there is evidence to suggest that repeating the misconception prior to the correction can be beneficial to belief updating, as we showed in a previous post.
One concern that stems from our data is that people over the age of 65 may be worse than middle-aged and younger adults at sustaining their post-correction belief that myths are inaccurate. It is therefore all the more important that we not only understand the underlying mechanisms of why people continue to believe inaccurate information, but also develop techniques to facilitate evidence-based updating, so that all sectors of the population can be on the same page as to what is fact and what is fiction.
So where do our results leave the familiarity backfire effect? Should we avoid repeating the myth when correcting it? Can we confidently use the phrase “playing Mozart does not boost an infant’s IQ” without ramifications? Those issues will be taken up in the next post.