Last Saturday, a powerful earthquake struck the Philippines.
It was first reported as having a magnitude of 7.2; this was later corrected to 6.8.
Last Friday, a wharf collapsed in Gloucester Harbor in Massachusetts. It was first reported as a wharf belonging to Cape Ann Ice, but later identified as a wharf used by Channel Fish.
Last Thursday, President Trump announced plans regarding NAFTA. He originally claimed that he would withdraw from the agreement entirely, but later indicated plans to renegotiate.
Corrections and retractions are common — not only in the news, but also in science and in everyday life. Sometimes a correction is as simple as fixing a careless mistake; in other cases, new information leads to a reinterpretation of the evidence and the rejection of some prior assumption. We discover that the complaint wasn’t made by our neighbor after all, or that the purported link between vaccines and autism was based on deliberate fraud.
The trouble is that initial beliefs are sometimes hard to dislodge. Dozens of studies in experimental psychology have identified a phenomenon known as the continued influence effect: Even after misinformation is retracted, many people continue to treat it as true. In other words, it has a continued influence on their thinking.
When misinformation concerns something like the safety of vaccines or the perpetrators behind some atrocity, getting it wrong can be personally and societally consequential. That’s one reason why psychologists have been eager to understand precisely what drives the continued influence effect, and what kinds of corrections are most likely to be effective.
A new paper by Ullrich Ecker, Joshua Hogan and Stephan Lewandowsky, forthcoming in the Journal of Applied Research in Memory and Cognition, takes up one important question regarding the correction of misinformation: Is it better to explicitly state and retract the false claim, or is it better to avoid repeating something false, and instead simply state what’s now believed to be true?
Both possibilities are suggested by prior research. On the one hand, repeating a false claim could make it more familiar, and familiarity, in turn, can be mistaken for truth, or at least breed the suspicion that there’s something to the (false) claim. Weeks after reading a brochure about vaccine safety, for example, there might be something familiar about the idea that vaccines are associated with autism, but you might not remember precisely what was claimed, and in particular that the association was refuted.
On the other hand, there’s evidence that explicitly articulating a misconception can facilitate the process of updating one’s beliefs. For instance, some approaches to student learning emphasize the value of engaging with students’ initial (mistaken) beliefs as a precursor to conceptual change. Perhaps drawing attention to a false belief helps people assimilate the new information in a way that replaces, rather than merely coexists with, the initial misinformation.
Given these competing possibilities, Ecker and his colleagues designed an experiment in which 60 university undergraduates read a series of scenarios presented as pairs of news stories. In half of the scenarios, the second story retracted some piece of misinformation stated in the first. The crucial variation was in how the retraction occurred: by merely stating the new claim; by implying that the new claim revised a prior claim (without stating what the prior claim was); or by including both the initial claim and the new claim that superseded it.
To measure the “continued influence” of the initial misinformation, participants were asked a series of questions relevant to that aspect of the news story. The researchers found that people’s reasoning often showed an influence of the initial, retracted claim, thus replicating prior work. However, they also found that this influence was most pronounced when the new claim was simply stated, and least pronounced when the retraction included both the initial claim and the new claim that superseded it. At least for these scenarios, the most effective retractions were those that repeated the initial misinformation.
The study’s authors are cautious about making strong recommendations on the basis of this single result. For instance, they still suggest that unnecessary repetitions of misinformation should be avoided: if someone doesn’t already believe the misinformation, repeating it could do more harm than good.
It’s also important to know how robust these findings are to different kinds of (mis)information and different ways in which it is presented. One important factor could be time. Does it matter if the retraction follows the initial information almost immediately, versus after a long delay? Moreover, it could be that the retraction that’s most effective for the few minutes after it’s been read doesn’t have the most staying power as weeks and months go by.
These caveats aside, the new results offer an important qualification to prior recommendations concerning misinformation and its correction, some of which encouraged educators and communicators to avoid repeating false claims. At least sometimes, there may be value in repeating misinformation alongside the alternative we now consider to be true.
This post was originally published at NPR’s 13.7: Cosmos & Culture page. It is reposted here as the first in a series of three posts on recent research on misinformation by Ulli Ecker, Stephan Lewandowsky, and colleagues.
The next post reports a recent study, with Briony Swire as first author, that takes a further look at whether repeating a myth during its debunking is always harmful.