Familiarity-based processing in the continued influence of misinformation

For more than 50 years, advertisements in the U.S. for Listerine mouthwash falsely claimed that the product helped prevent colds and sore throats or reduce their severity. After a prolonged legal battle, the Federal Trade Commission directed the makers of Listerine to mount an advertising campaign to correct these deceptive claims. For 16 months, the company ran a $10,000,000 ad campaign in which the cold-related claims about Listerine were retracted. The campaign was only partially successful: more than half of consumers continued to report that the product’s presumed medicinal effects were a key factor in their purchasing decision. This real-life example from the 1970s underscores the difficulty of debunking misinformation. Even a large budget, prime-time TV exposure, and numerous repetitions may be insufficient to alter the public’s belief in false information.

But this does not mean that debunking misinformation is necessarily impossible. On the contrary, there are known techniques that can help make a correction more effective, which John Cook and I summarized in the Debunking Handbook some time ago.

One of the recommendations in the Debunking Handbook was to avoid the so-called Familiarity Backfire Effect. As we noted in the Handbook:

To debunk a myth, you often have to mention it – otherwise, how will people know what you’re talking about? However, this makes people more familiar with the myth and hence more likely to accept it as true. Does this mean debunking a myth might actually reinforce it in people’s minds?

To test for this backfire effect, people were shown a flyer that debunked common myths about flu vaccines. Afterwards, they were asked to separate the myths from the facts. When asked immediately after reading the flyer, people successfully identified the myths. However, when queried 30 minutes later, some people actually scored worse than they had before reading the flyer. The debunking had reinforced the myths.

We therefore went on to recommend that communicators should

avoid mentioning the myth altogether while correcting it. When seeking to counter misinformation, the best approach is to focus on the facts you wish to communicate.

Instead of saying “it’s a myth that vaccines cause autism,” it is better to accurately state that “vaccinations save many lives.”

In two recent studies, Ullrich Ecker, several colleagues, and I took another empirical look at this familiarity backfire effect. In a nutshell, we did not find the effect in our experiments, which instead showed that repeating a myth while correcting it need not be harmful.

The two studies have been discussed in previous posts by Tania Lombrozo and by Briony Swire.

These results raise two related questions: First, is the cautionary discussion of the familiarity backfire effect in the Debunking Handbook wrong? Second, is the recommendation not to repeat a myth during debunking wrong?

It would be premature to answer those questions in the affirmative.

To understand why, we need to consider the theory that underlies the familiarity backfire effect. We discussed the effect in the Debunking Handbook not just because it had been shown to exist, but also because its existence is compatible with much of what we know about how memory works. A common assumption in memory research is that there are two separate classes of memory retrieval processes, known as strategic and automatic, respectively. Strategic memory processes allow for the controlled recollection of an item’s contextual details. Like the metadata of a computer file, contextual details are information about the information itself, such as its spatiotemporal context of encoding, its source, and its veracity. In contrast, automatic processes provide a fast indication of the memory strength or familiarity of the information, but little else.

Automatic retrieval processes can therefore contribute to the continued influence of misinformation in two related ways. First, people’s truth judgments about a statement are known to be influenced by its familiarity. It follows that false information might be accepted as true just because it seems familiar. Second, when corrected misinformation is automatically retrieved from memory without any accompanying contextual details, it might mistakenly be considered true. To illustrate, many researchers have suggested that when information in memory is retracted, a “negation tag” is linked to the original memory representation (e.g., “Listerine alleviates cold symptoms–NOT TRUE”). If retrieval relies only on automatic processes, those processes might retrieve the claim without also retrieving the attached negation tag. If strategic memory processes are not engaged, familiar claims may be mistakenly judged to be true even after they have been corrected (and the person has diligently updated their memory).
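To make this dual-route idea concrete, here is a minimal toy sketch in Python. It is purely illustrative: the record structure, the familiarity value, and the judge_truth function are my assumptions for exposition, not anything taken from the papers discussed here.

```python
import random

# Toy memory record: a claim plus the "meta-data" that strategic
# retrieval can recover (encoding context, source, veracity tag).
# All names and numbers are illustrative assumptions.
claim = {
    "content": "Listerine alleviates cold symptoms",
    "familiarity": 0.9,    # boosted by every repetition, including the correction
    "negation_tag": True,  # attached when the claim was retracted
}

def judge_truth(record, strategic_ok):
    """Judge a claim as true (True) or false (False).

    strategic_ok: whether controlled recollection succeeds and
    retrieves the contextual details, including the negation tag.
    If it fails, the judgment falls back on familiarity alone.
    """
    if strategic_ok and record["negation_tag"]:
        # Recollection retrieves the tag: the claim is correctly rejected.
        return False
    # Automatic route: familiar claims tend to be accepted as true.
    return random.random() < record["familiarity"]

print(judge_truth(claim, strategic_ok=True))   # always False
print(judge_truth(claim, strategic_ok=False))  # True about 90% of the time
```

The point of the sketch is simply that the negation tag lives among the contextual details: a retrieval route that bypasses those details can only fall back on familiarity, which the correction itself has boosted.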

In support of this idea, misinformation effects have been found to be reduced under conditions that encourage reliance on strategic processing. In an earlier study, Ullrich Ecker, colleagues, and I found that presenting participants with a pre-exposure warning detailing the continued-influence effect greatly reduced reliance on misinformation, and was as effective as providing a factual alternative. We suggested that those warnings not only allowed individuals to tag misinformation as false more effectively at encoding, but also boosted later recall of the “negation tag” because people were more likely to engage strategic processes.

It follows that the recent studies showing the absence of a familiarity backfire effect should be examined more closely to see if the data contain a signature of the presumed familiarity process. If that signature could be detected, then the studies might best be interpreted as showing that the familiarity backfire effect is not always observed but that there are theoretical reasons to expect it to occur in some circumstances. Note that this is quite different from showing that the familiarity backfire effect does not exist.

It turns out that the study by Swire, Ecker, and Lewandowsky contains fairly strong evidence for the presence of the presumed familiarity process. The figure below shows one aspect of this evidence (it replicates across measures and experiments):

The bars on the left show belief ratings for facts after they were affirmed across three retention intervals. It can be seen that affirmation raises belief above the pre-intervention baseline (the dotted line), and that the increase in belief ratings is remarkably stable over time (we can ignore the ‘brief’ vs. ‘detailed’ manipulation as it clearly had little effect).  

The bars on the right show the same measure for myths after they were rebutted. The rebuttal, too, was clearly effective: it lowered belief far below the pre-intervention baseline. However, unlike the effect on facts, the effect of the correction wore off over time: after a week, people’s belief in the myths had increased considerably from the low level observed immediately after the correction was received.

In other words, misinformation began to be “re-believed” after a week, whereas newly acquired belief in facts remained stable during that time.

This asymmetrical forgetting of the intervention is arguably a signature of the familiarity process. As we note in the paper:

“In the case of an affirmed fact, it does not matter if an individual relies on the recollection of the affirmation or on the boosted familiarity of the factual statement—familiarity and recollection operate in unison and lead to the individual assuming the item to be true. However, in the case of a retracted myth, recollection of the retraction will support the statement’s correct rejection, whereas the myth’s boosted familiarity will foster its false acceptance as true, as familiarity and recollection stand in opposition.”

The asymmetry of forgetting therefore suggests that the familiarity of the material persists whereas the strategic component of memory retrieval becomes less effective over time: The strategic component is required to retrieve the negation tag and “unbelieve” a myth, whereas familiarity is sufficient to believe affirmed facts even when strategic processes have become less effective.
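One way to see why this asymmetry follows from the dual-process account is a small simulation. This is again only a toy sketch, assuming an exponentially decaying recollection component and a simple combination rule; the decay rate, the familiarity level, and the belief function are illustrative assumptions, not parameters estimated from our data.

```python
import math

# Toy dual-process belief model. Familiarity persists across the
# retention interval; recollection of the intervention decays.
# The decay rate and combination rules are illustrative assumptions.
def belief(is_fact, days, familiarity=0.8, decay=0.35):
    recollection = math.exp(-decay * days)  # strategic component fades
    if is_fact:
        # Affirmed fact: familiarity and recollection operate in unison,
        # so belief stays high even as recollection fades.
        return min(1.0, familiarity + (1 - familiarity) * recollection)
    # Retracted myth: recollection of the negation opposes familiarity,
    # so belief creeps back up as the negation tag becomes harder to retrieve.
    return familiarity * (1 - recollection)

for days in (0, 1, 7):
    print(f"day {days}: fact belief = {belief(True, days):.2f}, "
          f"myth belief = {belief(False, days):.2f}")
```

With these arbitrary parameters, belief in an affirmed fact never falls below its familiarity level, whereas belief in a retracted myth climbs back toward that level as recollection of the retraction fades, which is exactly the asymmetric pattern described above.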

A clear implication of this interpretation is that under conditions in which strategic processes are even less effective, a familiarity-based backfire effect may well emerge. In a nutshell, our recent studies show that familiarity-based responding is important but that it can be overridden by a correction when strategic processing remains intact.

What our studies do not show is that the familiarity backfire effect will never be observed.

Although our experiments showed that familiarity does not lead to a backfire effect even in some cases where strategic processes are compromised (i.e., in older adults; when little detail is provided about why the misconception is incorrect; or when a long period of time elapses between encoding the retraction and recalling it), our studies do not address whether familiarity backfire effects occur in other circumstances. For example, strategic memory processes are also compromised when people pay little attention while encoding a correction because they are otherwise occupied (e.g., driving a car while listening to a corrective message on the radio). Those expectations remain to be examined by experimentation.

A recent, as-yet unpublished study by Gordon Pennycook and colleagues reports results that point in the direction of a familiarity backfire effect. In their study, participants were shown “fake news” headlines that were sometimes accompanied by warnings that they were false (“disputed by independent fact checkers”). At a later stage of the study, participants rated fake news headlines that they had seen before, including ones that had been flagged as false, as more accurate than novel fake news headlines. In other words, the familiarity afforded by repetition outweighed the effect of the warnings about the veracity of the content.

In light of these latest results, which point to a fluid balance between familiarity-based processing and strategic processing, what should our recommendations be to communicators who are tasked with debunking misinformation?

The picture below shows the recommendations from the Debunking Handbook, annotated in light of these recent results.
