Misinformation and How to Correct It

JOHN COOK, ULLRICH ECKER, and STEPHAN LEWANDOWSKY

Abstract
The increasing prevalence of misinformation in society may adversely affect
democratic decision making, which depends on a well-informed public. False information can originate from a number of sources including rumors, literary fiction,
mainstream media, corporate-vested interests, governments, and nongovernmental
organizations. The rise of the Internet and user-driven content has provided a venue
for quick and broad dissemination of information, not all of which is accurate. Consequently, a large body of research spanning a number of disciplines has sought to
understand misinformation and determine which interventions are most effective in
reducing its influence. This essay summarizes research into misinformation, bringing
together studies from psychology, political science, education, and computer science.
Cognitive psychology investigates why individuals struggle with correcting
misinformation and inaccurate beliefs, and why myths are so difficult to dislodge.
Two important findings involve (i) various “backfire effects,” which arise when
refutations ironically reinforce misconceptions, and (ii) the role of worldviews
in accentuating the persistence of misinformation. Computer scientists simulate
the spread of misinformation through social networks and develop algorithms
to automatically detect or neutralize myths. We draw together various research
threads to provide guidelines on how to effectively refute misconceptions without
risking backfire effects.

INTRODUCTION
Misinformation by definition does not accurately reflect the true state of
the world. In the present context, we apply the term misinformation to
information that is initially presented as true but later found to be false
(cf. Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012). For example,
one might initially believe a news report that a causal link has been found
between use of deodorants and breast cancer but find out later that this is
(most likely) just a myth.

Emerging Trends in the Social and Behavioral Sciences. Edited by Robert Scott and Stephen Kosslyn.
© 2015 John Wiley & Sons, Inc. ISBN 978-1-118-90077-2.


There are several reasons why misinformation is potentially more damaging than ignorance, that is, the absence of knowledge. (i)
Misinformation can be actively disseminated with an intent to deceive (it
is then sometimes referred to as disinformation). For example, antiscience
campaigns misinform the public on issues that have achieved consensus
among the scientific community, such as biological evolution, and the human
influence on climate change. However, an intention to deceive need not
always be present—for example, news coverage of unfolding events by its
very nature requires regular updating and correcting of earlier information
(e.g., the death toll after a natural disaster). (ii) False beliefs based on misinformation are often held with strong conviction, which is rarely the case with
ignorance. For example, people who reject climate science also believe they
are the best informed about the subject. (iii) Misinformation is often immune
to correction. Despite clear retractions, misinformation and associated false
beliefs may continue to influence people’s reasoning and judgments. This
continued influence can be observed even when people explicitly remember
and believe the retractions. Misinformation may thus adversely affect
decision making in democratic societies that depend on a well-informed
public.
The psychological and social implications of misinformation have been
under investigation for decades, although interest has intensified in recent
years, arguably because misinformation has an increasing presence in
society and its adverse consequences can no longer be overlooked. The
meteoric rise of social media, the acceleration of news cycles, and the
fragmentation of the media landscape have facilitated the dissemination of
misinformation.
Accordingly, much research has explored how misinformation originates
and propagates through society, and what its effects are at a societal level.
We focus on how misinformation exerts its effects at the level of the individual. This requires research into the psychology of how a person accesses information and updates memories and beliefs, and how this is affected by cultural factors and worldviews. Applied research has examined various intervention techniques to determine which are most effective in reducing the influence of misinformation and how technology can help achieve this.
Understanding misinformation is a multidisciplinary topic, where cultural
values, individual cognition, societal developments, developing technology,
and evolving media all come into play. Therefore, reducing the influence of
misinformation requires a multidisciplinary response, synthesizing the findings of social and political science, information and computer science, and
psychology.


FOUNDATIONAL RESEARCH
SOURCES OF MISINFORMATION
False information can derive from a number of sources, and the analysis
of the origin and dissemination of misinformation has yielded a new field
known as “agnotology”: the study of culturally produced ignorance and
misinformation-driven manufactured doubt (Proctor, 2008).
Misinformation can be disseminated even by seemingly counterintuitive
sources. For example, straightforward fiction is effective at implanting misinformation, even when readers are warned beforehand that the content is
nonfactual. This is especially concerning when a writer pretends to base fictional work on a scientific basis, thereby misrepresenting the science (e.g.,
Michael Crichton’s novel State of Fear, which grossly distorts climate science).
Rumors and urban myths are further significant sources of misinformation that tend to produce “sticky” memes that resist subsequent correction.
Social media websites and blogs, which allow the bypassing of traditional
gatekeepers such as professional editors or peer reviewers, have contributed
to the increased dissemination of such misinformation.
Moreover, Internet content is fast becoming a replacement for expert
advice, with a majority of Americans looking online for health information.
However, numerous analyses of online content have found that a significant
proportion of websites provide inaccurate medical information. Likewise,
the quality of information from mainstream media (e.g., newspapers, TV), and thus the standard of consumers’ knowledge, depends strongly on the news outlet.
Another potential source of bias, ironically, is the media’s tendency to
present balanced coverage by giving equal weight to both sides of a story.
This can result in “balance as bias,” when domain experts are given equal
voice with nonexperts.
While misinformation can originate inadvertently from all of these channels, the same channels can also be used to plant and disseminate misinformation in a targeted
manner. For example, to promote their case for the invasion of Iraq in 2003,
the Bush administration announced that there was no doubt that Saddam
Hussein had weapons of mass destruction (WMDs) and linked Iraq with the
9/11 terrorist attacks. Even though both assertions are now known to have
been false, a significant percentage of Americans continued to believe that
WMDs had been found in Iraq even after the post-invasion search failed to
turn up any WMD, and around half of Americans endorsed (nonexistent)
links between Iraq and al-Qaida.
Finally, there is evidence that corporate-vested interests have engaged
in deliberate campaigns to disseminate misinformation. The fossil-fuel
industry, for example, has demonstrably campaigned to sow confusion about the impact of fossil fuels on the environment, and tobacco manufacturers have promoted misinformation about the public health impacts of
smoking.
IDENTIFYING MYTHS AND MISCONCEPTIONS
Identifying and analyzing the content and rhetorical arguments of misinformation is a necessary step toward understanding misconceptions and
developing appropriate interventions. Taxonomically organizing the misinformation landscape allows deeper exploration of root causes, provides
insights into the psychology of misconceptions, and can assist in identifying
potential policy implications of inaccurate information. Most important, it
provides a framework for developing effective refutation strategies.
Foundational work on taxonomies dates back to Aristotle, who defined
the first taxonomy of logical fallacies by dividing them into those that
are dependent on language (e.g., ambiguity: using a word or phrase that
can have more than one meaning) and those that are not (e.g., sweeping
generalization). Gilovich (1991) sorted reasoning flaws into two main
categories—cognitive (resulting from the tendency to find order in random
data) and motivational/social (wishful thinking or self-serving distortions of
reality). This taxonomy has been applied, for example, to the most common
antivaccine myths (Jacobson, Targonski, & Poland, 2007). In another domain,
Rahmstorf (2004) categorized climate skepticism into three types: trend (climate change is not happening), attribution (climate change is not caused by
humans), and impact (impacts from climate change are inconsequential).
The benefits of the taxonomical approach can be illustrated with an analysis of myths associated with the burning of charcoal in sub-Saharan Africa
(Mwampamba, Ghilardi, Sander, & Chaix, 2013). By taxonomically organizing a diverse set of myths, the authors identified the root problem (conflation
of charcoal with wood-based fuels), provided policy consequences of each
myth, and recommended responses. For example, the myth that “charcoal is
used only by the poor” had resulted in interventions that targeted the wrong
user groups. By dispelling this misconception, communicators were able to
target interventions more appropriately.
Despite the diversity of taxonomies, arguably one of the more useful and
applicable taxonomies is a general approach applied to a number of domains.
A broader synthesis has identified five common characteristics across a number of movements that deny a well-supported scientific fact: fake experts,
cherry picking, unrealistic expectations, logical fallacies, and conspiracy theories (Diethelm & McKee, 2009). There is a deeper psychological reason why
this is a potentially effective approach: providing an alternative explanation
for how misinformation originates is an important element of refutation, as explored in subsequent sections on retraction techniques. To understand why
this is important, we need to examine the psychological challenges in reducing the influence of misinformation.
CHALLENGES IN RETRACTING MISINFORMATION
Misinformation is surprisingly resilient to correction or retraction. In some
cases, refutations have actually reinforced misconceptions. Such ironic reinforcements of false information are known as “backfire” or “boomerang”
effects. Even when corrections do not backfire, people often cling to misinformation in the face of a retraction, a phenomenon known as the Continued
Influence Effect.
In a commonly used experimental design, participants are presented with
a news report that describes an unfolding event, such as a fire or a robbery.
A critical piece of information (e.g., the cause of the fire) is provided but
later retracted (i.e., the earlier information is identified as being incorrect).
People’s reliance on the retracted information is then measured with inference questions (e.g., “why was there so much smoke?”). Studies using this
paradigm show that retractions rarely have the intended effect of eliminating
reliance on misinformation, even when participants remember the retraction.
People draw inferences from the same discredited information whose correction they explicitly acknowledge.
One explanation of the lingering effects of misinformation invokes the
notion that people build mental models of unfolding events. If a central piece of the model is invalidated, people are left with a gap in their
model, while the invalidated piece of information remains accessible in
memory. When questioned about the event, people often use the still
readily available misinformation rather than acknowledge the gap in their
understanding.
There are several cases in which attempts to correct misinformation have been shown to actually reinforce it. For example, in an experiment
where people were exposed to health claims that were either labeled valid
or invalid, after a delay of 3 days, older people classified 40% of repeatedly
encountered invalid claims as valid. This represents one instance of the
“familiarity backfire effect,” when refutations make a myth more familiar.
There is also suggestive evidence that refutations may backfire when they
become too complex, an effect described as an “overkill backfire effect.”
For example, researchers have found that asking people to generate a few
arguments for why their belief may be wrong was successful in changing
a belief, whereas generating many counterarguments reinforced the belief.
People generally prefer simple explanations over complicated ones, and
hence when it comes to refutations, less might sometimes be more.


SUCCESSFUL RETRACTION TECHNIQUES
Three techniques have been identified to date that can make retractions of misinformation more effective. First, reliance on misinformation can be reduced if people are explicitly warned at the outset that they may be misinformed. Advance warnings put people cognitively on guard, so they are less likely to be influenced by the misinformation.
Second, retractions are more effective if they are repeated or strengthened.
Especially if misinformation is encoded strongly, repeating the retraction
helps reduce the misinformation effect although it does not necessarily eliminate it. However, strengthening of the initial misinformation seems to have
a stronger negative effect than strengthening of the retraction has a positive
effect. This unfortunate asymmetry results in an unlevel playing field, with
a seemingly natural advantage ceded to initially encoded misinformation.
Third, corrections should provide an alternative explanation that fills the
gap created by the retraction. An effective alternative explanation is plausible, it explains the causal chains in the initial report, it explains why the
misinformation was initially thought to be correct, and it explains the motivation behind the misinformation. An effective alternative explanation is also
simpler (or at least not more complicated) than the misinformation.
ADDRESSING MISCONCEPTIONS IN EDUCATION
A key element of education is conceptual change, a large part of which
involves the correction of misconceptions. This is all the more important as
misconceptions can interfere with new learning. For these reasons, educators
seek to address misconceptions despite the inherent risks associated with
ineffective or backfiring retractions.
Fortunately, there is a growing literature on the explicit refutation of misinformation as an educational tool. A number of studies have explored the
effectiveness of different classroom interventions designed to reduce misconceptions. Thorough evidence-based refutations were found to be significantly more effective than nonrefutational lessons (Guzzetti, Snyder, Glass,
& Gamas, 1993). That is, in refutation-style lectures, misconceptions were
first activated and then immediately countered with accurate information.
Nonrefutational lectures, by contrast, would teach the accurate information
without any reference to the misconceptions. The former was found to be far
more effective.
Refutation in the classroom can be an opportunity to foster critical thinking, encouraging students to skeptically assess empirical evidence and draw
valid conclusions from the evidence. The use of multimedia in combination with refutational formats has been shown to be more effective than standard lecture formats in reducing physics misconceptions (see Ecker, Swire, & Lewandowsky,
2014, for a review).
Thus, while familiarizing students with misconceptions carries a risk of a familiarity backfire effect, this research demonstrates that activating
myths followed by immediate refutations—combining a retraction with a
detailed explanation—can be an effective way to induce conceptual change.
CUTTING-EDGE RESEARCH
Research into misinformation has recently extended into other disciplines.
Computer scientists have developed models to simulate the spread of misinformation and detect disinformation in real time. Cognitive scientists are
investigating the role of attitudes and worldviews in accentuating the persistence of misinformation.
COMPUTER SCIENCE AND MISINFORMATION
When Charles Spurgeon quipped in 1859 that “a lie will go round the world
while truth is pulling its boots on,” he could scarcely have imagined the
speed with which information is exchanged in the Twitter age. Spam is one
form of misinformation and is often posted on social media sites such as Twitter. While moderators seek to remove spam URLs quickly, tweets spread with such speed that over 90% of visitors will have viewed a spam tweet before the link can be removed.
Computer science provides tools that can illuminate the nature and reach
of misinformation. For example, a content analysis of 1000 Twitter status
updates matching terms such as “cold + antibiotics” was used to explore misconceptions related to antibiotics. Tweets demonstrating misunderstanding
or misuse of antibiotics were found to reach 172,571 followers. Conversely,
health providers are being encouraged to use social networks to communicate with patients and people seeking health information.
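The content analysis described above can be sketched as a simple keyword filter over tweet records. This is only an illustration: the sample data, the field names (`text`, `followers`), and the function are invented, and a raw term match catches corrective tweets as well as misuse, so the real analysis presumably required human coding of each match.

```python
def misuse_reach(tweets, terms):
    """Count tweets whose text contains every search term, and sum their
    follower counts as a crude proxy for the reach of a misconception."""
    hits = [t for t in tweets if all(term in t["text"].lower() for term in terms)]
    return len(hits), sum(t["followers"] for t in hits)

# Invented sample data mimicking the "cold + antibiotics" query.
sample = [
    {"text": "Taking antibiotics for my cold, should kick in soon", "followers": 1200},
    {"text": "Reminder: antibiotics do nothing against cold viruses", "followers": 5400},
    {"text": "Flu season again", "followers": 300},
]
n, reach = misuse_reach(sample, ["cold", "antibiotics"])  # n = 2, reach = 6600
```

Note that the second matching tweet is a correction, not a misconception — separating the two is exactly the step that keyword matching alone cannot do.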
Computer scientists are developing algorithms that can identify intentionally disseminated misinformation in real time. A range of cognitive, psychological, and emotional cues associated with deceptive intent makes it possible to automatically detect misinformation without relying on domain knowledge. Software such as a Linguistic Pattern Analyzer can be
programmed to scan linguistic patterns to detect disinformation and locate
the sources (Mack, Eick, & Clark, 2007).
For example, one form of misinformation gaining prominence in recent
years is deceptive opinion spam, such as fictitious consumer reviews written to appear authentic. Deceptive text can be automatically detected using a combination of text-categorization classifiers and psycholinguistic deception detection, an approach that has been found to accurately identify nearly 90% of deceptive opinion spam (Ott, Choi, Cardie, & Hancock, 2011), outperforming most human judges.
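To illustrate the text-categorization component, here is a bare-bones multinomial naive Bayes classifier in pure Python. This is not the Ott et al. model, which used machine-learned classifiers over n-gram and psycholinguistic features on real hotel reviews; the toy "reviews" below are invented, encoding the intuition that fake reviews lean on generic superlatives.

```python
import math
from collections import Counter

def train(docs):
    """Multinomial naive Bayes with add-one smoothing.
    docs: list of (label, list-of-words) pairs."""
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of training docs
    for label, words in docs:
        counts.setdefault(label, Counter()).update(words)
        totals[label] += 1
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, vocab

def classify(model, words):
    """Return the label with the highest (log) posterior probability."""
    counts, totals, vocab = model
    n_docs = sum(totals.values())
    best = None
    for label, wc in counts.items():
        denom = sum(wc.values()) + len(vocab)  # smoothing denominator
        score = math.log(totals[label] / n_docs)
        for w in words:
            score += math.log((wc[w] + 1) / denom)
        best = max(best, (score, label)) if best else (score, label)
    return best[1]

# Invented toy corpus: deceptive reviews overuse generic praise.
training = [
    ("fake", "amazing perfect best ever amazing".split()),
    ("fake", "best perfect wonderful amazing".split()),
    ("real", "room was clean but street noise at night".split()),
    ("real", "check in slow room clean decent value".split()),
]
model = train(training)
verdict = classify(model, "perfect amazing best".split())  # -> "fake"
```

Real systems substitute n-gram features and far larger labeled corpora, but the probabilistic scoring step works the same way.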
Social network analysis allows researchers to simulate the spread of misinformation through a network, using models akin to those that describe the spread of a disease through a population. This approach also allows
researchers to model ways to limit the spread of misinformation. For
example, researchers can simulate how one might select a small number of
“early adopters” in a network in order to trigger the spread of positive information, minimizing the number of people who adopt negative information.
Social network algorithms can compute which nodes in a network are most
effective in blocking negative influences (Nguyen et al., 2012).
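The disease-style diffusion idea can be sketched with a minimal cascade model. This is not the Nguyen et al. algorithm, and the toy graph is invented; transmission is treated as certain so the outcome is deterministic, which makes the effect of "inoculating" a bridge node easy to see.

```python
from collections import deque

def cascade(graph, seeds, blocked=frozenset()):
    """Deterministic cascade: every informed node passes the rumor to all
    of its neighbors, except nodes that have been inoculated (blocked)."""
    informed = set(seeds) - set(blocked)
    frontier = deque(informed)
    while frontier:
        node = frontier.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in informed and neighbor not in blocked:
                informed.add(neighbor)
                frontier.append(neighbor)
    return informed

# Toy network: node "c" is the sole bridge between two communities.
toy = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["c", "e", "f"], "e": ["d", "f"], "f": ["d", "e"],
}

unchecked = cascade(toy, seeds=["a"])                 # rumor reaches all 6 nodes
contained = cascade(toy, seeds=["a"], blocked={"c"})  # only {"a", "b"} informed
```

Blocking the single bridge node confines the rumor to its home community, which is the intuition behind algorithms that select influence-blocking nodes.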
An exciting new area of research is the incorporation of other disciplines
into computer science. Social network analysis typically considers who is
connected to whom to determine how information diffuses through a network. However, one must also consider the cultural values of the people in
the network and the relevance of the misinformation to their values. This
is particularly important when culturally relevant information disseminates
through a network. It turns out that research into the role of cultural values
and worldview has taken center stage in advancing our understanding of
how people process misinformation and react to retractions.
THE ROLE OF CULTURAL VALUES AND WORLDVIEW
Worldviews and ideology have been shown to influence basic cognitive processes and shape attitude formation. For example, conservatives pay more
attention to negative information (e.g., threatening or antisocial behavior)
compared to liberals. This causes conservatives to place more weight on negative behavior of numerically smaller groups, which may explain why conservatives are more likely to form negative attitudes toward social minorities.
Research is also revealing a strong role of worldview in how people process
and retain misinformation. For example, Democrats are more likely to believe
statements underplaying the risks of higher oil prices, whereas Republicans
are more likely to believe myths concerning President Obama’s birthplace.
Similarly, retractions of politically relevant misperceptions were found
effective only if the retraction supported the person’s political orientation.
However, when the retraction conflicted with a person’s ideology, a “worldview backfire effect” was sometimes observed where the retraction caused
stronger adherence to the misinformation. For example, correcting the
misconception that President G. W. Bush’s tax cuts in the 2000s increased
government revenue led to a backfire effect among Republican participants.


When confronted with information compellingly debunking a preexisting
belief, only a minute proportion of people—2% of participants in one
study—explicitly acknowledged their beliefs were mistaken. The majority of people, however, displayed some form of motivated reasoning by
counterarguing against the refutation. This is consistent with other research
into “motivated skepticism,” which shows participants expressing active
skepticism to worldview-incongruent information. The most intransigent
people engage in a strategy termed “disputing rationality”: insisting on
one’s right to an opinion without it being supported by factual reasoning.
Associated with the worldview backfire effect is a phenomenon known as
belief polarization. This occurs when the same information results in people
with contrasting prior beliefs to update their beliefs in opposite directions.
For example, when presented with supporting and opposing information
about the death penalty, participants rated arguments that confirmed their
own beliefs to be more convincing and consequently strengthened prior
beliefs. Polarization is also observed across education levels concerning
views on climate change or beliefs that President Obama is a Muslim.
This summary of worldview effects demonstrates how preexisting attitudes
and beliefs can affect the processing of misinformation and its retraction. In
our view, it is the motivated reasoning fueled by worldviews that presents
the main obstacle to efficient debiasing, and hence the greatest challenge for
future research into misinformation.
KEY ISSUES FOR FUTURE RESEARCH
WORLDVIEW
There is a need for further research into interventions that reduce the biasing
influence of worldview. Ecker, Lewandowsky, Fenton, and Martin (2014)
argued that worldview will have a strong influence on the acceptance of
counterattitudinal retractions only if accepting the retraction requires a
change in attitudes. In other words, the worldview backfire effect may
not be ubiquitous, and counterattitudinal retractions will be (relatively)
effective as long as a person can accommodate the retraction within their
more general belief framework. For example, an ethnically prejudiced
person could readily accept that a particular crime was not committed by an
immigrant but still believe that most immigrants are criminals. In contrast,
for a Republican it would actually require some shift in attitude toward
President Bush to acknowledge that his tax cuts were ineffective and his
claims to the contrary were incorrect.
Furthermore, Ecker et al. (2014) proposed that part of the empirical discrepancy regarding worldview effects may lie in the difficulty of measuring beliefs. That is, under some circumstances people may change their underlying attitudes but not acknowledge that change in order to “save face.”
Worldview backfire effects could then occur when people overcompensate, that is, explicitly state that their belief has grown stronger when (or precisely because) it has in fact weakened.
Some preliminary research indicates that the source of the retraction is
important; for example, corrections of the death-panel myth were effective among Republicans primarily when communicated by a Republican
politician. “Cultural cognition” theory shows that framing information in
worldview-consonant terms can effect positive belief change. For example,
opponents of climate science respond more positively if climate action is
presented as a business opportunity for the nuclear industry rather than a
regulatory burden involving emission cuts. Even simple wording changes, such as “carbon offset” instead of “carbon tax,” have a positive effect among Republicans whose values are challenged by the word “tax.”
One of the underlying cognitive processes that distinguish conservatives
from liberals is an emphasis on different moral principles, with liberals placing more value on harm prevention and equality. Thus, liberals view the
environment in moral terms, whereas conservatives do not. Research has
shown that the effect of ideology on environmental views can be neutralized
by reframing pro-environmental rhetoric in terms of purity, a moral value
highly emphasized by conservatives (Feinberg & Willer, 2013). Exploring the
role of moral intuitions in framing politically charged issues is an area of
future research.
An alternative approach to this kind of “worldview-affirmation” is
self-affirmation. In one study, participants were asked to write about a
time they felt good about themselves because they acted on an important
personal value. Self-affirmed people were more receptive to messages that
threatened their worldviews. Likewise, reminding people of the diversity of
attitudes in their frame of reference can make them more open to consider
counterattitudinal information (Levitan & Visser, 2008).
While these avenues to reduce worldview-associated biases in information processing are worth pursuing, some researchers have also argued that
the effects of worldview are so difficult to overcome that approaches to target behavior-change directly, bypassing attitude and belief change, are more
promising. These approaches include the creation of choice architectures,
such as “opt-out” rather than “opt-in” organ donation schemes, and the use
of government-controlled taxation or financial incentives. For example, using
taxes to raise the price of alcohol has been shown to be an effective means of
reducing drinking (Wagenaar, Salois, & Komro, 2009).
More research is required on experimentally testing different refutation
structures, and more work is needed to create a solid empirical database on which to base recommendations. For example, evidence for the familiarity
backfire effect in young adults is somewhat mixed, so further research could
clarify its boundary conditions. Existing studies finding an overkill backfire
effect were based on asking participants to generate a small or large number
of counterarguments, but an examination more applicable to real-world
situations would involve presenting prewritten counterarguments to
experimentally measure the relative impact of different refutation formats.
Future research should explore under what conditions the overkill backfire
effect and familiarity backfire effects arise, and it should clarify the role of
expertise and trustworthiness of the source of the refutation.
There is much potential in the interdisciplinary approach of integrating
psychological research with other disciplines. Experimental clarification is
needed concerning the conditions under which the refutation of misconceptions can be expected to be beneficial for educational purposes, as reviewed
earlier, and when refutations run the risk of producing a familiarity backfire effect. Similarly, integrating psychology with computer science presents
exciting opportunities to respond to misinformation in innovative new ways.
FUTURE TRENDS IN COMPUTER SCIENCE AND MISINFORMATION
Social network analysis offers the opportunity to investigate how misinformation propagates through a network and offers methods to reduce the
spread of misinformation across a network. This research can lead to the
development of tools that permit investigation into how misinformation
propagates and persists through social networks. Potentially, this may lead
to practical applications that facilitate the neutralization of or “inoculation”
against misinformation by identifying influential members of a network to
efficiently disseminate accurate information. This approach is of particular
interest, given that it has been shown that the effectiveness of misinformation campaigns can be reduced through preemptive inoculation (Pfau,
Haigh, Sims, & Wigley, 2007).
As seen in the previous section, cultural values and worldview play a
significant role in how people retain misinformation. A further area of
future research is the incorporation of other disciplines such as psychology
into social network analysis. One approach takes into account the impact of
cultural values, as culturally relevant information disseminates through a
network (Yeaman, Schick, & Lehmann, 2012). Another interesting method
is the combination of social network analysis with social and psychological
characteristics of people. An example is the combination of an agent-based
model employing an iterative learning process (where people repeatedly
receive information and gradually update their beliefs) with social network
analysis to determine how nodes (e.g., people) in a social network would be influenced by the spread of misinformation through the network (Monakhov
et al., 2012).
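A minimal sketch of the iterative-learning idea, not the Monakhov et al. model: each agent repeatedly shifts its belief toward the average belief of its network neighbors (a DeGroot-style update). The network, starting beliefs, and update weight below are all invented for illustration.

```python
def update_beliefs(graph, beliefs, rounds=10, weight=0.5):
    """Each round, every node moves its belief (0 = rejects the rumor,
    1 = fully believes it) toward the mean belief of its neighbors."""
    b = dict(beliefs)
    for _ in range(rounds):
        nxt = {}
        for node, nbrs in graph.items():
            mean = sum(b[n] for n in nbrs) / len(nbrs)
            nxt[node] = (1 - weight) * b[node] + weight * mean
        b = nxt
    return b

# Toy network: one "misinformer" (belief 1.0) among two skeptics (0.0).
net = {"m": ["x", "y"], "x": ["m", "y"], "y": ["m", "x"]}
start = {"m": 1.0, "x": 0.0, "y": 0.0}
final = update_beliefs(net, start)  # beliefs converge toward a common value
```

Even this toy dynamic shows the point made in the text: where a rumor ends up depends not only on who spreads it, but on who is connected to whom and how strongly each agent weighs its neighbors.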
An area of future research is the development of more sophisticated and
accurate tools that can detect and respond to online misinformation. An
example of such a tool is Truthy, a system originally designed to detect
orchestrated misinformation campaigns on Twitter. Similarly, the browser extension Dispute Finder examines the text on a webpage and, drawing on a database of known disputed claims, highlights contested information. The
advantage of this approach is that tagging misinformation as false at the time of initial encoding reduces the likelihood that it will persist in memory. Research should also measure the effectiveness of these tools,
particularly across different demographics, to determine how the effectiveness of such interventions may vary for people of different worldview or
background.
The practice of automatically detecting and responding to misinformation
does come with risks. One experiment that issued real-time corrections of
political misinformation found that the corrections had a positive effect for
people whose attitudes were predisposed against the misinformation. However, the real-time correction was less effective than a delayed correction
among those whose political beliefs were threatened by the correction (Garrett & Weeks, 2013). One approach to mitigate this risk would be to couch
corrections in positive terms.
UNDERSTANDING AND FORMALIZING MISPERCEPTIONS
To design appropriate intervention strategies, researchers need to identify
which misconceptions are most prevalent. A survey of climate views adopting Rahmstorf’s (2004) “trend/attribution/impact” taxonomy found that
different types of skepticism are strongly interrelated (Poortinga, Spence,
Whitmarsh, Capstick, & Pidgeon, 2011): those who were skeptical about
one aspect of climate change (e.g., attribution skepticism, i.e., skepticism
that humans are causing climate change) were more likely to be skeptical
about other aspects of climate change (e.g., trend skepticism, or skepticism
that climate change is occurring). Understanding that it is a minority of
people who hold many interrelated misconceptions (rather than many people
each holding a different, singular misconception) is clearly informative for both
intervention strategies and policy implementation.
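At bottom, the interrelation of skepticism types is a correlation finding. A minimal sketch of such an analysis follows, using a plain Pearson correlation; the agreement scores are fabricated for illustration and are much smaller than a real survey sample.

```python
# Correlate agreement scores on two skepticism items (fabricated data).

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 1-5 agreement with "climate change is not occurring" (trend skepticism)
# and "humans are not causing it" (attribution skepticism) for six
# invented respondents.
trend = [1, 2, 1, 4, 5, 4]
attribution = [1, 1, 2, 5, 4, 5]
r = pearson(trend, attribution)
```

A strongly positive `r` across such item pairs is what indicates that the different types of skepticism cluster in the same respondents.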
While taxonomies classify misperceptions into hierarchical categories,
another method of formalizing misinformation is the development of
ontologies. These involve defining a set of properties for specific myths or
misperceptions (e.g., motivation, type, channel, profile of misinformer). The
Web Ontology Language is a standard for defining ontologies and has been


used to develop a digital misinformation library (Zhou & Zhang, 2007).
Such a library can be used to increase public awareness of misinformation
and be imported into algorithms that automatically detect patterns of
misinformation.
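The property set described above can be sketched as a record type. A real misinformation library would express these properties in OWL; the dataclass below merely illustrates the kind of structured entry involved, and all field values are hypothetical.

```python
from dataclasses import dataclass, field

# Sketch of one misinformation-library entry with the property set named
# in the text (motivation, type, channel, misinformer profile).

@dataclass
class MythRecord:
    claim: str
    motivation: str            # e.g. "vested interest", "honest error"
    myth_type: str             # e.g. "cherry picking", "fake expert"
    channel: str               # e.g. "blog", "mainstream media"
    misinformer_profile: str
    refutations: list = field(default_factory=list)

library = [
    MythRecord(
        claim="Climate change stopped in 1998",
        motivation="vested interest",
        myth_type="cherry picking",
        channel="blog",
        misinformer_profile="think tank",
        refutations=["Short windows of noisy data do not reverse the long-term trend."],
    ),
]

def find_by_type(records, myth_type):
    """The kind of query an automatic pattern detector might run."""
    return [r for r in records if r.myth_type == myth_type]
```

Structured entries like this are what make the library machine-queryable, so detection algorithms can match new content against known patterns rather than exact wordings.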
In conclusion, the combined contribution of information and computer science to misinformation research is a clear demonstration of the importance
of a multidisciplinary approach to understanding and refuting misinformation. More broadly, the integration of psychology, political science, and computer
science offers the potential to implement the insights of cognitive science
in practical, real-world applications.
REFERENCES
Diethelm, P., & McKee, M. (2009). Denialism: What is it and how should scientists
respond? European Journal of Public Health, 19, 2–4.
Ecker, U. K. H., Lewandowsky, S., Fenton, O., & Martin, K. (2014). Do people keep
believing because they want to? Pre-existing attitudes and the continued influence
of misinformation. Memory & Cognition, 42, 292–304.
Ecker, U. K. H., Swire, B., & Lewandowsky, S. (2014). Correcting misinformation—A
challenge for education and cognitive science. In D. N. Rapp & J. L. G. Braasch
(Eds.), Processing inaccurate information: Theoretical and applied perspectives from cognitive science and the educational sciences (pp. 13–38). Cambridge, MA: MIT Press.
Feinberg, M., & Willer, R. (2013). The moral roots of environmental attitudes. Psychological Science, 24(1), 56–62.
Garrett, R. K., & Weeks, B. E. (2013, February). The promise and peril of real-time corrections to political misperceptions. In Proceedings of the 2013 conference on Computer
supported cooperative work (pp. 1047–1058). San Antonio, Texas: ACM.
Gilovich, T. D. (1991). How we know what isn't so: The fallibility of human reason in
everyday life. New York, NY: The Free Press.
Guzzetti, B. J., Snyder, T. E., Glass, G. V., & Gamas, W. S. (1993). Promoting conceptual change in science: A comparative meta-analysis of instructional interventions from reading education and science education. Reading Research Quarterly,
117–159.
Jacobson, R. M., Targonski, P. V., & Poland, G. A. (2007). A taxonomy of reasoning
flaws in the anti-vaccine movement. Vaccine, 25(16), 3146–3152.
Levitan, L. C., & Visser, P. S. (2008). The impact of the social context on resistance
to persuasion: Effortful versus effortless responses to counter-attitudinal information. Journal of Experimental Social Psychology, 44, 640–649.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012).
Misinformation and its correction: Continued influence and successful debiasing.
Psychological Science in the Public Interest, 13, 106–131.
Mack, G. A., Eick, S. G., & Clark, M. A. (2007, March). Models of trust and disinformation in the open press from model-driven linguistic pattern analysis. In Aerospace
Conference, IEEE, 1–12.


Monakhov, Y., Medvednikova, M., Abramov, K., Kostina, N., Malyshev, R., Oleg,
M., & Semenova, I. (2012). Analytical model of misinformation of a social network
node. arXiv preprint arXiv:1212.0336.
Mwampamba, T. H., Ghilardi, A., Sander, K., & Chaix, K. J. (2013). Dispelling common misconceptions to improve attitudes and policy outlook on charcoal in developing countries. Energy for Sustainable Development, 17, 75–85.
Nguyen, T.H., Tsai, J., Jiang, A., Bowring, E., Maheswaran, R., & Tambe, M. (2012).
Security games on social networks. In 2012 AAAI Fall Symposium Series.
Ott, M., Choi, Y., Cardie, C., & Hancock, J. T. (2011). Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual
Meeting of the Association for Computational Linguistics: Human Language
Technologies (Vol. 1, pp. 309–319). Association for Computational Linguistics.
Pfau, M., Haigh, M. M., Sims, J., & Wigley, S. (2007). The influence of corporate
front-group stealth campaigns. Communication Research, 34, 73–99.
Poortinga, W., Spence, A., Whitmarsh, L., Capstick, S., & Pidgeon, N. F. (2011).
Uncertain climate: An investigation into public scepticism about anthropogenic
climate change. Global Environmental Change, 21(3), 1015–1024.
Proctor, R. N. (2008). Agnotology: A missing term to describe the cultural production
of ignorance (and its study). In R. N. Proctor & L. Schiebinger (Eds.), Agnotology:
The making and unmaking of ignorance (pp. 1–33). Stanford, CA: Stanford University
Press.
Rahmstorf, S. (2004). The climate sceptics. Potsdam Institute for Climate Impact
Research, Potsdam. Retrieved from http://www.pik-potsdam.de/news/publicevents/archiv/alter-net/former-ss/2006/programme/28-08.2006/rahmstorf/
literature/rahmstorf_climate_sceptics_2004.pdf (accessed 19.03.13).
Wagenaar, A. C., Salois, M. J., & Komro, K. A. (2009). Effects of beverage alcohol price
and tax levels on drinking: A meta-analysis of 1003 estimates from 112 studies.
Addiction, 104, 179–190.
Yeaman, S., Schick, A., & Lehmann, L. (2012). Social network architecture and the
maintenance of deleterious cultural traits. Journal of the Royal Society Interface, 9(70),
848–858.
Zhou, L., & Zhang, D. (2007). An ontology-supported misinformation model:
Toward a digital misinformation library. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 37(5), 804–813.

FURTHER READING
Branch, G., Scott, E. C., & Rosenau, J. (2010). Dispatches from the evolution wars:
shifting tactics and expanding battlefields. Annual Review of Genomics and Human
Genetics, 11, 317–338.

JOHN COOK SHORT BIOGRAPHY
John Cook is the Research Fellow in Climate Change Communication at the
Global Change Institute, University of Queensland. His research interests


include the biasing influence of worldview on how people process scientific
information, the effectiveness of refutations in correcting misinformation,
and the role of social media in public education. He coauthored the 2011 book
Climate Change Denial with environmental scientist Haydn Washington and
maintains the website Skeptical Science (www.skepticalscience.com), which
won the 2011 Australian Museum Eureka Prize for Advancement of Climate
Change Knowledge.
Personal webpage: http://www.skepticalscience.com/
Curriculum vitae: http://www.skepticalscience.com/cv.php?u=1
Center for Advanced Study in the Behavioral Sciences:
http://www.casbs.org/
ULLRICH ECKER SHORT BIOGRAPHY
Ullrich Ecker is an Australian Postdoctoral Fellow of the Australian Research
Council and a Research Associate Professor at the University of Western Australia’s School of Psychology. His research examines memory integration and
memory-updating processes, and he has recently focused on the question of
how and why people are continuously affected by information they know to
be incorrect. He was recently awarded a research grant from the Australian
Research Council to work on computational models of memory. Ecker
received the University of Western Australia’s Outstanding Young Investigator Award in 2011 and the Vice Chancellor’s Mid-Career Research Award
in 2014, as well as an award for Excellence in Coursework Teaching in 2012.
STEPHAN LEWANDOWSKY SHORT BIOGRAPHY
Professor Stephan Lewandowsky is a cognitive scientist at the University
of Bristol. He was an Australian Professorial Fellow from 2007 to 2012, and
he received a Discovery Outstanding Researcher Award from the Australian
Research Council in 2011. He held a Revesz Visiting Professorship at the
University of Amsterdam in 2012. He received a Wolfson Research Merit
Award from the Royal Society in 2013 upon moving to the UK. His research
examines memory, decision making, and knowledge structures, with a
particular emphasis on how people update information in memory. He
has published over 140 scholarly articles, chapters, and books, including
numerous papers on how people respond to corrections of misinformation
(see www.cogsciwa.com for a complete list of scientific publications). He
has also contributed numerous opinion pieces to global media outlets on
issues related to climate-change skepticism and the coverage of science in
the media. A complete list of his public essays can be found at
http://www.shapingtomorrowsworld.org/inthemedia.htm.


RELATED ESSAYS
To Flop Is Human: Inventing Better Scientific Approaches to Anticipating
Failure (Methods), Robert Boruch and Alan Ruby
Emerging Trends: Asset Pricing (Economics), John Y. Campbell
Heuristic Decision Making (Political Science), Edward G. Carmines and
Nicholas J. D’Amico
Political Ideologies (Political Science), Edward G. Carmines and Nicholas J.
D’Amico
Culture and Cognition (Sociology), Karen A. Cerulo
The Inherence Heuristic: Generating Everyday Explanations (Psychology),
Andrei Cimpian
Delusions (Psychology), Max Coltheart
Applications of Selective Exposure and Attention to Information for Understanding Health and Health Disparities (Psychology), Allison Earl and
Christina Nisson
Insight (Psychology), Brian Erickson and John Kounios
Cognitive Processes Involved in Stereotyping (Psychology), Susan T. Fiske
and Cydney H. Dupree
Controlling the Influence of Stereotypes on One’s Thoughts (Psychology),
Patrick S. Forscher and Patricia G. Devine
Political Advertising (Political Science), Erika Franklin Fowler
Multitasking (Communications & Media), Matthew Irwin and Zheng Wang
The Development of Social Trust (Psychology), Vikram K. Jaswal and Marissa
B. Drell
How Brief Social-Psychological Interventions Can Cause Enduring Effects
(Methods), Dushiyanthini (Toni) Kenthirarajah and Gregory M. Walton
Search and Learning in Markets (Economics), Philipp Kircher
Concepts and Semantic Memory (Psychology), Barbara C. Malt
Implicit Attitude Measures (Psychology), Gregory Mitchell and Philip E.
Tetlock
Data Mining (Methods), Gregg R. Murray and Anthony Scime
Heuristics: Tools for an Uncertain World (Psychology), Hansjörg Neth and
Gerd Gigerenzer
Retrieval-Based Learning: Research at the Interface between Cognitive Science and Education (Psychology), Ludmila D. Nunes and Jeffrey D. Karpicke
Emerging Trends in Culture and Concepts (Psychology), Bethany Ojalehto
and Douglas Medin
Cognitive Remediation in Schizophrenia (Psychology), Clare Reeder and Til
Wykes


Cognitive Bias Modification in Mental (Psychology), Meg M. Reuland
et al.
Education in an Open Informational World (Education), Marlene Scardamalia
and Carl Bereiter
Stereotype Threat (Psychology), Toni Schmader and William M. Hall
Models of Duality (Psychology), Anand Krishna et al.
Information Politics in Dictatorships (Political Science), Jeremy L. Wallace

Misinformation and How to
Correct It
JOHN COOK, ULLRICH ECKER, and STEPHAN LEWANDOWSKY

Abstract
The increasing prevalence of misinformation in society may adversely affect
democratic decision making, which depends on a well-informed public. False information can originate from a number of sources including rumors, literary fiction,
mainstream media, corporate-vested interests, governments, and nongovernmental
organizations. The rise of the Internet and user-driven content has provided a venue
for quick and broad dissemination of information, not all of which is accurate. Consequently, a large body of research spanning a number of disciplines has sought to
understand misinformation and determine which interventions are most effective in
reducing its influence. This essay summarizes research into misinformation, bringing
together studies from psychology, political science, education, and computer science.
Cognitive psychology investigates why individuals struggle with correcting
misinformation and inaccurate beliefs, and why myths are so difficult to dislodge.
Two important findings involve (i) various “backfire effects,” which arise when
refutations ironically reinforce misconceptions, and (ii) the role of worldviews
in accentuating the persistence of misinformation. Computer scientists simulate
the spread of misinformation through social networks and develop algorithms
to automatically detect or neutralize myths. We draw together various research
threads to provide guidelines on how to effectively refute misconceptions without
risking backfire effects.

INTRODUCTION
Misinformation by definition does not accurately reflect the true state of
the world. In the present context, we apply the term misinformation to
information that is initially presented as true but later found to be false
(cf. Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012). For example,
one might initially believe a news report that a causal link has been found
between use of deodorants and breast cancer but find out later that this is
(most likely) just a myth.

Emerging Trends in the Social and Behavioral Sciences. Edited by Robert Scott and Stephen Kosslyn.
© 2015 John Wiley & Sons, Inc. ISBN 978-1-118-90077-2.


There are several reasons why misinformation has a more potentially
damaging effect than ignorance, that is, the absence of knowledge. (i)
Misinformation can be actively disseminated with an intent to deceive (it
is then sometimes referred to as disinformation). For example, antiscience
campaigns misinform the public on issues that have achieved consensus
among the scientific community, such as biological evolution and the human
influence on climate change. However, an intention to deceive need not
always be present—for example, news coverage of unfolding events by its
very nature requires regular updating and correcting of earlier information
(e.g., the death toll after a natural disaster). (ii) False beliefs based on misinformation are often held with strong conviction, which is rarely the case with
ignorance. For example, people who reject climate science also believe they
are the best informed about the subject. (iii) Misinformation is often immune
to correction. Despite clear retractions, misinformation and associated false
beliefs may continue to influence people’s reasoning and judgments. This
continued influence can be observed even when people explicitly remember
and believe the retractions. Misinformation may thus adversely affect
decision making in democratic societies that depend on a well-informed
public.
The psychological and social implications of misinformation have been
under investigation for decades, although interest has intensified in recent
years, arguably because misinformation has an increasing presence in
society and its adverse consequences can no longer be overlooked. The
meteoric rise of social media, the acceleration of news cycles, and the
fragmentation of the media landscape have facilitated the dissemination of
misinformation.
Accordingly, much research has explored how misinformation originates
and propagates through society, and what its effects are at a societal level.
We focus on how misinformation exerts its effects at the level of the individual. This requires research into the psychology of how a person accesses
information and updates memories and beliefs, and how this is affected by
cultural factors and worldviews. Applied research has been looking into the
effectiveness of various intervention techniques to determine which methods are most effective in reducing the influence of misinformation and how
technology can help achieve this.
Understanding misinformation is a multidisciplinary topic, where cultural
values, individual cognition, societal developments, developing technology,
and evolving media all come into play. Therefore, reducing the influence of
misinformation requires a multidisciplinary response, synthesizing the findings of social and political science, information and computer science, and
psychology.


FOUNDATIONAL RESEARCH
SOURCES OF MISINFORMATION
False information can derive from a number of sources, and the analysis
of the origin and dissemination of misinformation has yielded a new field
known as “agnotology”: the study of culturally produced ignorance and
misinformation-driven manufactured doubt (Proctor, 2008).
Misinformation can be disseminated even by seemingly counterintuitive
sources. For example, straightforward fiction is effective at implanting misinformation, even when readers are warned beforehand that the content is
nonfactual. This is especially concerning when a writer falsely claims a scientific basis for fictional work, thereby misrepresenting the science (e.g.,
Michael Crichton’s novel State of Fear, which grossly distorts climate science).
Rumors and urban myths are further significant sources of misinformation that tend to produce “sticky” memes that resist subsequent correction.
Social media websites and blogs, which allow the bypassing of traditional
gatekeepers such as professional editors or peer reviewers, have contributed
to the increased dissemination of such misinformation.
Moreover, Internet content is fast becoming a replacement for expert
advice, with a majority of Americans looking online for health information.
However, numerous analyses of online content have found that a significant
proportion of websites provide inaccurate medical information. Likewise,
the quality of information from mainstream media (e.g., newspapers, TV),
and thus the standard of consumers’ knowledge, depends strongly on the
news outlet.
Another potential source of bias, ironically, is the media’s tendency to
present balanced coverage by giving equal weight to both sides of a story.
This can result in “balance as bias,” when domain experts are given equal
voice with nonexperts.
While misinformation can originate inadvertently from all those channels,
they can also be used to plant and disseminate misinformation in a targeted
manner. For example, to promote their case for the invasion of Iraq in 2003,
the Bush administration announced that there was no doubt that Saddam
Hussein had weapons of mass destruction (WMDs) and linked Iraq with the
9/11 terrorist attacks. Even though both assertions are now known to have
been false, a significant percentage of Americans continued to believe that
WMDs had been found in Iraq even after the post-invasion search failed to
turn up any WMD, and around half of Americans endorsed (nonexistent)
links between Iraq and al-Qaida.
Finally, there is evidence that corporate-vested interests have engaged
in deliberate campaigns to disseminate misinformation. The fossil-fuel
industry, for example, has demonstrably campaigned to sow confusion


about the impact of fossil fuels on the environment, and tobacco manufacturers have promoted misinformation about the public health impacts of
smoking.
IDENTIFYING MYTHS AND MISCONCEPTIONS
Identifying and analyzing the content and rhetorical arguments of misinformation is a necessary step toward understanding misconceptions and
developing appropriate interventions. Taxonomically organizing the misinformation landscape allows deeper exploration of root causes, provides
insights into the psychology of misconceptions, and can assist in identifying
potential policy implications of inaccurate information. Most important, it
provides a framework for developing effective refutation strategies.
Foundational work on taxonomies dates back to Aristotle, who defined
the first taxonomy of logical fallacies by dividing them into those that
are dependent on language (e.g., ambiguity: using a word or phrase that
can have more than one meaning) and those that are not (e.g., sweeping
generalization). Gilovich (1991) sorted reasoning flaws into two main
categories—cognitive (resulting from the tendency to find order in random
data) and motivational/social (wishful thinking or self-serving distortions of
reality). This taxonomy has been applied, for example, to the most common
antivaccine myths (Jacobson, Targonski, & Poland, 2007). In another domain,
Rahmstorf (2004) categorized climate skepticism into three types: trend (climate change is not happening), attribution (climate change is not caused by
humans), and impact (impacts from climate change are inconsequential).
The benefits of the taxonomical approach can be illustrated with an analysis of myths associated with the burning of charcoal in sub-Saharan Africa
(Mwampamba, Ghilardi, Sander, & Chaix, 2013). By taxonomically organizing a diverse set of myths, the authors identified the root problem (conflation
of charcoal with wood-based fuels), provided policy consequences of each
myth, and recommended responses. For example, the myth that “charcoal is
used only by the poor” had resulted in interventions that targeted the wrong
user groups. By dispelling this misconception, communicators were able to
target interventions more appropriately.
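A taxonomy of this kind is naturally represented as a mapping from each myth to its root cause, policy consequence, and recommended response. The entry below paraphrases the charcoal example for illustration; it is not the authors' full table.

```python
# Sketch of a taxonomy entry linking a myth to its consequence and a
# recommended response (paraphrased from the charcoal example, not the
# full table in Mwampamba et al., 2013).

CHARCOAL_MYTHS = {
    "charcoal is used only by the poor": {
        "root_cause": "conflation of charcoal with wood-based fuels",
        "policy_consequence": "interventions target the wrong user groups",
        "response": "document charcoal use across income levels",
    },
}

def recommended_response(myth):
    """Look up the recommended communicator response for a given myth."""
    entry = CHARCOAL_MYTHS.get(myth.lower())
    return entry["response"] if entry else None
```

Encoding myths this way forces each one to be paired with an actionable response, which is precisely what made the taxonomical analysis useful for policy.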
Despite the diversity of taxonomies, arguably the most useful are general
taxonomies that apply across a number of domains.
A broader synthesis has identified five common characteristics across a number of movements that deny a well-supported scientific fact: fake experts,
cherry picking, unrealistic expectations, logical fallacies, and conspiracy theories (Diethelm & McKee, 2009). There is a deeper psychological reason why
this is a potentially effective approach: providing an alternative explanation
for how misinformation originates is an important element of refutation, as


explored in subsequent sections on retraction techniques. To understand why
this is important, we need to examine the psychological challenges in reducing the influence of misinformation.
CHALLENGES IN RETRACTING MISINFORMATION
Misinformation is surprisingly resilient to correction or retraction. In some
cases, refutations have actually reinforced misconceptions. Such ironic reinforcements of false information are known as “backfire” or “boomerang”
effects. Even when corrections do not backfire, people often cling to misinformation in the face of a retraction, a phenomenon known as the continued
influence effect.
In a commonly used experimental design, participants are presented with
a news report that describes an unfolding event, such as a fire or a robbery.
A critical piece of information (e.g., the cause of the fire) is provided but
later retracted (i.e., the earlier information is identified as being incorrect).
People’s reliance on the retracted information is then measured with inference questions (e.g., “why was there so much smoke?”). Studies using this
paradigm show that retractions rarely have the intended effect of eliminating
reliance on misinformation, even when participants remember the retraction.
People draw inferences from the same discredited information whose correction they explicitly acknowledge.
One explanation of the lingering effects of misinformation invokes the
notion that people build mental models of unfolding events. If a central piece of the model is invalidated, people are left with a gap in their
model, while the invalidated piece of information remains accessible in
memory. When questioned about the event, people often use the still
readily available misinformation rather than acknowledge the gap in their
understanding.
There are several cases in which attempts to correct misconceptions have
been shown to actually reinforce them. For example, in an experiment
in which people were exposed to health claims labeled either valid
or invalid, older adults classified 40% of repeatedly
encountered invalid claims as valid after a delay of 3 days. This represents one instance of the
“familiarity backfire effect,” when refutations make a myth more familiar.
There is also suggestive evidence that refutations may backfire when they
become too complex, an effect described as an “overkill backfire effect.”
For example, researchers have found that asking people to generate a few
arguments for why their belief may be wrong was successful in changing
a belief, whereas generating many counterarguments reinforced the belief.
People generally prefer simple explanations over complicated ones, and
hence when it comes to refutations, less might sometimes be more.


SUCCESSFUL RETRACTION TECHNIQUES
Three techniques have been identified to date that can make retractions
of misinformation more effective. First, reliance on misinformation can be
reduced if people are explicitly warned at the outset that they may be
misinformed. Advance warnings put the person cognitively on guard, so
they are less likely to be influenced by the misinformation.
Second, retractions are more effective if they are repeated or strengthened.
Especially if misinformation is encoded strongly, repeating the retraction
helps reduce the misinformation effect, although it does not necessarily eliminate it. However, strengthening of the initial misinformation seems to have
a stronger negative effect than strengthening of the retraction has a positive
effect. This unfortunate asymmetry results in an unlevel playing field, with
a seemingly natural advantage ceded to initially encoded misinformation.
Third, corrections should provide an alternative explanation that fills the
gap created by the retraction. An effective alternative explanation is plausible; it explains the causal chain in the initial report, why the
misinformation was initially thought to be correct, and the motivation behind the misinformation. An effective alternative explanation is also
simpler (or at least not more complicated) than the misinformation.
ADDRESSING MISCONCEPTIONS IN EDUCATION
A key element of education is conceptual change, a large part of which
involves the correction of misconceptions. This is all the more important as
misconceptions can interfere with new learning. For these reasons, educators
seek to address misconceptions despite the inherent risks associated with
ineffective or backfiring retractions.
Fortunately, there is a growing literature on the explicit refutation of misinformation as an educational tool. A number of studies have explored the
effectiveness of different classroom interventions designed to reduce misconceptions. Thorough evidence-based refutations were found to be significantly more effective than nonrefutational lessons (Guzzetti, Snyder, Glass,
& Gamas, 1993). That is, in refutation-style lectures, misconceptions were
first activated and then immediately countered with accurate information.
Nonrefutational lectures, by contrast, would teach the accurate information
without any reference to the misconceptions. The former was found to be far
more effective.
Refutation in the classroom can be an opportunity to foster critical thinking, encouraging students to skeptically assess empirical evidence and draw
valid conclusions from the evidence. Use of multimedia in combination with


refutational formats has been shown to be more effective than standard lecture formats in reducing physics misconceptions (see Ecker, Swire, & Lewandowsky,
2014, for a review).
Thus, while familiarizing students with misconceptions carries the risk of a familiarity backfire effect, this research demonstrates that activating
myths and then immediately refuting them—combining a retraction with a
detailed explanation—can be an effective way to induce conceptual change.
CUTTING-EDGE RESEARCH
Research into misinformation has recently extended into other disciplines.
Computer scientists have developed models to simulate the spread of misinformation and detect disinformation in real time. Cognitive scientists are
investigating the role of attitudes and worldviews in accentuating the persistence of misinformation.
COMPUTER SCIENCE AND MISINFORMATION
When Charles Spurgeon quipped in 1859 that “a lie will go round the world
while truth is pulling its boots on,” he could scarcely have imagined the
speed with which information is exchanged in the Twitter age. Spam is one
form of misinformation and is often posted on social media sites such as Twitter. While moderators seek to quickly remove spam URLs, tweets are viewed
with such speed that over 90% of visitors will have seen a spam tweet
before the link can be removed.
Computer science provides tools that can illuminate the nature and reach
of misinformation. For example, a content analysis of 1000 Twitter status
updates matching terms such as “cold + antibiotics” was used to explore misconceptions related to antibiotics. Tweets demonstrating misunderstanding
or misuse of antibiotics were found to reach 172,571 followers. Conversely,
health providers are being encouraged to use social networks to communicate with patients and people seeking health information.
Computer scientists are developing algorithms that can identify intentionally disseminated misinformation in real time. A range of cognitive,
psychological, and emotional cues associated with false intent makes it
possible to automatically detect misinformation without having to rely on
domain knowledge. Software such as a Linguistic Pattern Analyzer can be
programmed to scan linguistic patterns to detect disinformation and locate
the sources (Mack, Eick, & Clark, 2007).
For example, one form of misinformation gaining prominence in recent
years is deceptive opinion spam, such as fictitious consumer reviews written to appear authentic. Deceptive text can be automatically detected using


a combination of text categorization and psycholinguistic deception detection; such classifiers have been found to accurately detect nearly 90% of deceptive opinion
spam (Ott, Choi, Cardie, & Hancock, 2011), outperforming most human
judges.
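The text-categorization component can be illustrated with a toy bag-of-words naive Bayes classifier. The actual Ott et al. system used richer n-gram and psycholinguistic features with a support vector machine; the tiny training set below is invented purely for illustration.

```python
import math
from collections import Counter

# Toy bag-of-words naive Bayes classifier for deceptive opinion spam
# (a sketch of the text-categorization idea, with invented data).

TRAIN = [
    ("truthful", "room was clean but the elevator was slow"),
    ("truthful", "breakfast ended early and parking cost extra"),
    ("deceptive", "absolutely amazing wonderful perfect stay ever"),
    ("deceptive", "best hotel ever amazing staff perfect luxury"),
]

def train(examples):
    """Count word occurrences per class."""
    counts = {"truthful": Counter(), "deceptive": Counter()}
    for label, text in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick the class with the higher log-likelihood (uniform prior)."""
    vocab = set(counts["truthful"]) | set(counts["deceptive"])
    scores = {}
    for label, words in counts.items():
        total = sum(words.values())
        # add-one smoothing so unseen words do not zero out a class
        scores[label] = sum(
            math.log((words[w] + 1) / (total + len(vocab)))
            for w in text.split()
        )
    return max(scores, key=scores.get)

model = train(TRAIN)
label = classify(model, "amazing perfect stay wonderful hotel")
```

The reported near-90% accuracy comes from far larger labeled corpora, but the underlying mechanism is the same: deceptive reviews over-use certain superlative, low-specificity language that a classifier can learn.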
Social network analysis allows researchers to simulate the spread of
misinformation through a network, using models akin to those describing
the spread of a disease through a population. This approach also allows
researchers to model ways to limit the spread of misinformation. For
example, researchers can simulate how one might select a small number of
“early adopters” in a network in order to trigger the spread of positive information, minimizing the number of people who adopt negative information.
Social network algorithms can compute which nodes in a network are most
effective in blocking negative influences (Nguyen et al., 2012).
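The disease-style spread model can be sketched as an independent cascade: each newly informed node gets one chance to pass the rumor to each neighbor with probability `p`. This is a generic illustration of the modeling approach, not any specific published algorithm; with `p` below 1 the outcome is stochastic.

```python
import random

# Independent-cascade sketch of misinformation spreading through a
# network. Each newly "infected" node gets one chance to pass the
# rumour to each neighbour with probability p.

def simulate_spread(edges, seeds, p=0.5, rng=None):
    """Return the set of nodes that end up adopting the rumour."""
    rng = rng or random.Random(0)
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    adopted = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for m in neighbours.get(node, ()):
                if m not in adopted and rng.random() < p:
                    adopted.add(m)
                    nxt.append(m)
        frontier = nxt
    return adopted

# Blocking a node (e.g. a moderating gatekeeper) can be modelled by
# removing its edges before simulating. p=1.0 makes the run deterministic.
edges = [(0, 1), (1, 2), (2, 3), (1, 4)]
reach = simulate_spread(edges, seeds={0}, p=1.0)
blocked = simulate_spread([e for e in edges if 1 not in e], seeds={0}, p=1.0)
```

Comparing `reach` with `blocked` across candidate nodes is the essence of computing which nodes are most effective at blocking negative influence.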
An exciting new area of research is the incorporation of other disciplines
into computer science. Social network analysis typically considers who is
connected to whom to determine how information diffuses through a network. However, one must also consider the cultural values of the people in
the network and the relevance of the misinformation to their values. This
is particularly important when culturally relevant information disseminates
through a network. It turns out that research into the role of cultural values
and worldview has taken center stage in advancing our understanding of
how people process misinformation and react to retractions.
THE ROLE OF CULTURAL VALUES AND WORLDVIEW
Worldviews and ideology have been shown to influence basic cognitive processes and shape attitude formation. For example, conservatives pay more
attention to negative information (e.g., threatening or antisocial behavior)
compared to liberals. This causes conservatives to place more weight on negative behavior of numerically smaller groups, which may explain why conservatives are more likely to form negative attitudes toward social minorities.
Research is also revealing a strong role of worldview in how people process
and retain misinformation. For example, Democrats are more likely to believe
statements underplaying the risks of higher oil prices, whereas Republicans
are more likely to believe myths concerning President Obama’s birthplace.
Similarly, retractions of politically relevant misperceptions were found
effective only if the retraction supported the person’s political orientation.
However, when the retraction conflicted with a person’s ideology, a “worldview backfire effect” was sometimes observed where the retraction caused
stronger adherence to the misinformation. For example, correcting the
misconception that President G. W. Bush’s tax cuts in the 2000s increased
government revenue led to a backfire effect among Republican participants.
When confronted with information compellingly debunking a preexisting
belief, only a minute proportion of people—2% of participants in one
study—explicitly acknowledged their beliefs were mistaken. The majority of people, however, displayed some form of motivated reasoning by
counterarguing against the refutation. This is consistent with other research
into “motivated skepticism,” which shows participants expressing active
skepticism to worldview-incongruent information. The most intransigent
people engage in a strategy termed “disputing rationality”: insisting on
one’s right to an opinion without it being supported by factual reasoning.
Associated with the worldview backfire effect is a phenomenon known as
belief polarization. This occurs when the same information leads people
with contrasting prior beliefs to update their beliefs in opposite directions.
For example, when presented with supporting and opposing information
about the death penalty, participants rated arguments that confirmed their
own beliefs to be more convincing and consequently strengthened prior
beliefs. Polarization is also observed across education levels concerning
views on climate change or beliefs that President Obama is a Muslim.
This summary of worldview effects demonstrates how preexisting attitudes
and beliefs can affect the processing of misinformation and its retraction. In
our view, it is the motivated reasoning fueled by worldviews that presents
the main obstacle to efficient debiasing, and hence the greatest challenge for
future research into misinformation.
KEY ISSUES FOR FUTURE RESEARCH
WORLDVIEW
There is a need for further research into interventions that reduce the biasing
influence of worldview. Ecker, Lewandowsky, Fenton, and Martin (2014)
argued that worldview will have a strong influence on the acceptance of
counterattitudinal retractions only if accepting the retraction requires a
change in attitudes. In other words, the worldview backfire effect may
not be ubiquitous, and counterattitudinal retractions will be (relatively)
effective as long as a person can accommodate the retraction within their
more general belief framework. For example, an ethnically prejudiced
person could readily accept that a particular crime was not committed by an
immigrant but still believe that most immigrants are criminals. In contrast,
for a Republican it would actually require some shift in attitude toward
President Bush to acknowledge that his tax cuts were ineffective and his
claims to the contrary were incorrect.
Furthermore, Ecker et al. (2014) proposed that part of the empirical discrepancy regarding worldview effects may lie in the difficulty of measuring
beliefs. That is, under some circumstances people may change their underlying attitudes but not acknowledge that change in order to “save face.”
Worldview backfire effects could then occur when people overcompensate,
that is, explicitly state that their belief has grown stronger when (or because)
in fact it has decreased.
Some preliminary research indicates that the source of the retraction is
important; for example, corrections of the death-panel myth were effective among Republicans primarily when communicated by a Republican
politician. “Cultural cognition” theory shows that framing information in
worldview-consonant terms can effect positive belief change. For example,
opponents of climate science respond more positively if climate action is
presented as a business opportunity for the nuclear industry rather than a
regulatory burden involving emission cuts. Even simple wording changes
such as “carbon offset” instead of “carbon tax” have a positive effect among
Republicans whose values are challenged by the word “tax.”
One of the underlying cognitive processes that distinguish conservatives
from liberals is an emphasis on different moral principles, with liberals placing more value on harm prevention and equality. Thus, liberals view the
environment in moral terms, whereas conservatives do not. Research has
shown that the effect of ideology on environmental views can be neutralized
by reframing pro-environmental rhetoric in terms of purity, a moral value
highly emphasized by conservatives (Feinberg & Willer, 2013). Exploring the
role of moral intuitions in framing politically charged issues is an area of
future research.
An alternative approach to this kind of “worldview-affirmation” is
self-affirmation. In one study, participants were asked to write about a
time they felt good about themselves because they acted on an important
personal value. Self-affirmed people were more receptive to messages that
threatened their worldviews. Likewise, reminding people of the diversity of
attitudes in their frame of reference can make them more open to consider
counterattitudinal information (Levitan & Visser, 2008).
While these avenues to reduce worldview-associated biases in information processing are worth pursuing, some researchers have also argued that
the effects of worldview are so difficult to overcome that approaches that target behavior change directly, bypassing attitude and belief change, are more
promising. These approaches include the creation of choice architectures,
such as “opt-out” rather than “opt-in” organ donation schemes, and the use
of government-controlled taxation or financial incentives. For example, using
taxes to raise the price of alcohol has been shown to be an effective means of
reducing drinking (Wagenaar, Salois, & Komro, 2009).
More research is required on experimentally testing different refutation
structures, and more work is needed to create a solid empirical database on
which to base recommendations. For example, evidence for the familiarity
backfire effect in young adults is somewhat mixed, so further research could
clarify its boundary conditions. Existing studies finding an overkill backfire
effect were based on asking participants to generate a small or large number
of counterarguments, but an examination more applicable to real-world
situations would involve presenting prewritten counterarguments to
experimentally measure the relative impact of different refutation formats.
Future research should explore under what conditions the overkill backfire
effect and familiarity backfire effects arise, and it should clarify the role of
expertise and trustworthiness of the source of the refutation.
There is much potential in integrating psychological research with other disciplines. Experimental clarification is
needed concerning the conditions under which the refutation of misconceptions can be expected to be beneficial for educational purposes, as reviewed
earlier, and when refutations run the risk of producing a familiarity backfire effect. Similarly, integrating psychology with computer science presents
exciting opportunities to respond to misinformation in innovative new ways.
FUTURE TRENDS IN COMPUTER SCIENCE AND MISINFORMATION
Social network analysis offers the opportunity to investigate how misinformation propagates through a network and provides methods to reduce its
spread. This research can lead to the
development of tools that permit investigation into how misinformation
propagates and persists through social networks. Potentially, this may lead
to practical applications that facilitate the neutralization of or “inoculation”
against misinformation by identifying influential members of a network to
efficiently disseminate accurate information. This approach is of particular
interest, given that it has been shown that the effectiveness of misinformation campaigns can be reduced through preemptive inoculation (Pfau,
Haigh, Sims, & Wigley, 2007).
As seen in the previous section, cultural values and worldview play a
significant role in how people retain misinformation. A further area of
future research is the incorporation of other disciplines such as psychology
into social network analysis. One approach takes into account the impact of
cultural values, as culturally relevant information disseminates through a
network (Yeaman, Schick, & Lehmann, 2012). Another interesting method
is the combination of social network analysis with social and psychological
characteristics of people. An example is the combination of an agent-based
model employing an iterative learning process (where people repeatedly
receive information and gradually update their beliefs) with social network
analysis to determine how nodes (e.g., people) in a social network would be
influenced by the spread of misinformation through the network (Monakhov
et al., 2012).
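A minimal sketch of such an agent-based model is DeGroot-style iterative averaging on a network. The network, weights, and update rule below are illustrative assumptions rather than the Monakhov et al. (2012) model; beliefs range over [0, 1], where 1 means full acceptance of a piece of misinformation.

```python
# A four-node chain: A - B - C - D, given as an adjacency list.
NETWORK = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

def update(graph, beliefs, weight=0.5):
    """One learning step: each agent moves partway toward the
    mean belief of its neighbors."""
    new = {}
    for node, nbrs in graph.items():
        neighbor_mean = sum(beliefs[n] for n in nbrs) / len(nbrs)
        new[node] = (1 - weight) * beliefs[node] + weight * neighbor_mean
    return new

# Node A starts out fully misinformed; the others start out skeptical.
beliefs = {"A": 1.0, "B": 0.0, "C": 0.0, "D": 0.0}
for _ in range(20):
    beliefs = update(NETWORK, beliefs)
# Repeated updates pull the network toward a shared intermediate belief.
print({k: round(v, 2) for k, v in beliefs.items()})
```

Running the loop shows how a single misinformed node gradually contaminates even distant parts of the network, and how network position determines how much influence that node exerts.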
An area of future research is the development of more sophisticated and
accurate tools that can detect and respond to online misinformation. An
example of such a tool is Truthy, a system originally designed to detect
orchestrated misinformation campaigns on Twitter. Similarly, the browser
extension Dispute Finder examines the text on a webpage and, drawing on a
database of known disputed claims, highlights the disputed information. The
advantage of this approach is that tagging misinformation as false at the time
of initial encoding reduces the likelihood that the misinformation will persist
in memory. Research should also measure the effectiveness of these tools,
particularly across different demographics, to determine how the effectiveness of such interventions may vary for people of different worldview or
background.
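The Dispute Finder approach can be sketched as matching page text against a database of known disputed claims. The claim database and the naive case-insensitive substring match below are illustrative; the actual extension drew on a curated claim database with more sophisticated paraphrase matching.

```python
# Illustrative database of claims known to be disputed.
DISPUTED_CLAIMS = [
    "vaccines cause autism",
    "tax cuts increased government revenue",
]

def flag_disputed(page_text, claims=DISPUTED_CLAIMS):
    """Return the disputed claims that appear verbatim in the page text."""
    text = page_text.lower()
    return [claim for claim in claims if claim in text]

page = "Some commentators still insist that vaccines cause autism."
print(flag_disputed(page))  # prints: ['vaccines cause autism']
```

A browser extension would run a check like this over the rendered page and visually highlight each flagged span, so the reader encounters the warning at the moment of initial encoding.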
The practice of automatically detecting and responding to misinformation
does come with risks. One experiment that issued real-time corrections of
political misinformation found that the corrections had a positive effect for
people whose attitudes were predisposed against the misinformation. However, the real-time correction was less effective than a delayed correction
among those whose political beliefs were threatened by the correction (Garrett & Weeks, 2013). One approach to mitigate this risk would be to couch
corrections in positive terms.
UNDERSTANDING AND FORMALIZING MISPERCEPTIONS
To design appropriate intervention strategies, researchers need to identify
which misconceptions are most prevalent. A survey of climate views adopting Rahmstorf’s (2004) “trend/attribution/impact” taxonomy found that
different types of skepticism are strongly interrelated (Poortinga, Spence,
Whitmarsh, Capstick, & Pidgeon, 2011): those who were skeptical about
one aspect of climate change (e.g., attribution skepticism, i.e., skepticism
that humans are causing climate change) were more likely to be skeptical
about other aspects of climate change (e.g., trend skepticism, or skepticism
that climate change is occurring). The finding that it is a minority of
people who hold many misconceptions at once (rather than many people
each holding a single, different misconception) is clearly informative for both
intervention strategies and policy implementation.
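The interrelation among skepticism types can be illustrated by correlating survey scores. The scores below are invented for the example, not the Poortinga et al. (2011) responses; Pearson's r is computed from scratch.

```python
import math

# Hypothetical 7-point skepticism scores for eight respondents.
trend_skepticism       = [1, 2, 2, 3, 4, 5, 5, 6]  # "is it happening?"
attribution_skepticism = [1, 1, 3, 3, 4, 4, 6, 6]  # "are humans causing it?"

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r(trend_skepticism, attribution_skepticism), 2))  # -> 0.92
```

A strong positive correlation like this is what "different types of skepticism are strongly interrelated" means operationally: respondents high on one skepticism dimension tend to be high on the others.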
While taxonomies classify misperceptions into hierarchical categories,
another method of formalizing misinformation is the development of
ontologies. These involve defining a set of properties for specific myths or
misperceptions (e.g., motivation, type, channel, profile of misinformer). The
Web Ontology Language is a standard for defining ontologies and has been
used to develop a digital misinformation library (Zhou & Zhang, 2007).
Such a library can be used to increase public awareness of misinformation
and be imported into algorithms that automatically detect patterns of
misinformation.
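An ontology-style record with the properties listed above can be sketched as a simple structured type. The field names and entries are illustrative, not the Zhou and Zhang (2007) schema; a production library would express the ontology in the Web Ontology Language rather than in application code.

```python
from dataclasses import dataclass, field

@dataclass
class Misperception:
    """One entry in a digital misinformation library."""
    claim: str
    motivation: str          # e.g., commercial, ideological, prank
    misinfo_type: str        # e.g., rumor, hoax, vested-interest campaign
    channels: list = field(default_factory=list)
    misinformer_profile: str = "unknown"

library = [
    Misperception(
        claim="The MMR vaccine causes autism",
        motivation="ideological",
        misinfo_type="debunked research claim",
        channels=["social media", "talk shows"],
    ),
]

# Structured records support simple queries, e.g., by dissemination channel.
print([m.claim for m in library if "social media" in m.channels])
```

Structuring myths this way is what lets detection algorithms consume the library programmatically, rather than treating each misperception as an unstructured blob of text.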
In conclusion, the combined contribution of information and computer science to misinformation research is a clear demonstration of the importance
of a multidisciplinary approach to understanding and refuting misinformation. More broadly, the integration of psychological, political, and computer
science offers the potential of implementing the insights of cognitive science
in practical, real-world applications.
REFERENCES
Diethelm, P., & McKee, M. (2009). Denialism: What is it and how should scientists
respond? European Journal of Public Health, 19, 2–4.
Ecker, U. K. H., Lewandowsky, S., Fenton, O., & Martin, K. (2014). Do people keep
believing because they want to? Pre-existing attitudes and the continued influence
of misinformation. Memory & Cognition, 42, 292–304.
Ecker, U. K. H., Swire, B., & Lewandowsky, S. (2014). Correcting misinformation—A
challenge for education and cognitive science. In D. N. Rapp & J. L. G. Braasch
(Eds.), Processing inaccurate information: Theoretical and applied perspectives from cognitive science and the educational sciences (pp. 13–38). Cambridge, MA: MIT Press.
Feinberg, M., & Willer, R. (2013). The moral roots of environmental attitudes. Psychological Science, 24(1), 56–62.
Garrett, R. K., & Weeks, B. E. (2013, February). The promise and peril of real-time corrections to political misperceptions. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (pp. 1047–1058). San Antonio, TX: ACM.
Gilovich, T. D. (1991). How we know what isn't so: The fallibility of human reason in everyday life. New York, NY: The Free Press.
Guzzetti, B. J., Snyder, T. E., Glass, G. V., & Gamas, W. S. (1993). Promoting conceptual change in science: A comparative meta-analysis of instructional interventions from reading education and science education. Reading Research Quarterly,
117–159.
Jacobson, R. M., Targonski, P. V., & Poland, G. A. (2007). A taxonomy of reasoning
flaws in the anti-vaccine movement. Vaccine, 25(16), 3146–3152.
Levitan, L. C., & Visser, P. S. (2008). The impact of the social context on resistance
to persuasion: Effortful versus effortless responses to counter-attitudinal information. Journal of Experimental Social Psychology, 44, 640–649.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012).
Misinformation and its correction: Continued influence and successful debiasing.
Psychological Science in the Public Interest, 13, 106–131.
Mack, G. A., Eick, S. G., & Clark, M. A. (2007, March). Models of trust and disinformation in the open press from model-driven linguistic pattern analysis. In 2007 IEEE Aerospace Conference (pp. 1–12). IEEE.
Monakhov, Y., Medvednikova, M., Abramov, K., Kostina, N., Malyshev, R., Oleg,
M., & Semenova, I. (2012). Analytical model of misinformation of a social network
node. arXiv preprint arXiv:1212.0336.
Mwampamba, T. H., Ghilardi, A., Sander, K., & Chaix, K. J. (2013). Dispelling common misconceptions to improve attitudes and policy outlook on charcoal in developing countries. Energy for Sustainable Development, 17, 75–85.
Nguyen, T. H., Tsai, J., Jiang, A., Bowring, E., Maheswaran, R., & Tambe, M. (2012). Security games on social networks. In 2012 AAAI Fall Symposium Series.
Ott, M., Choi, Y., Cardie, C., & Hancock, J. T. (2011). Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (Vol. 1, pp. 309–319). Association for Computational Linguistics.
Pfau, M., Haigh, M. M., Sims, J., & Wigley, S. (2007). The influence of corporate
front-group stealth campaigns. Communication Research, 34, 73–99.
Poortinga, W., Spence, A., Whitmarsh, L., Capstick, S., & Pidgeon, N. F. (2011).
Uncertain climate: An investigation into public scepticism about anthropogenic
climate change. Global Environmental Change, 21(3), 1015–1024.
Proctor, R. N. (2008). Agnotology: A missing term to describe the cultural production
of ignorance (and its study). In R. N. Proctor & L. Schiebinger (Eds.), Agnotology:
The making and unmaking of ignorance (pp. 1–33). Stanford, CA: Stanford University
Press.
Rahmstorf, S. (2004). The climate sceptics. Potsdam Institute for Climate Impact
Research, Potsdam. Retrieved from http://www.pik-potsdam.de/news/publicevents/archiv/alter-net/former-ss/2006/programme/28-08.2006/rahmstorf/
literature/rahmstorf_climate_sceptics_2004.pdf (accessed 19.03.13).
Wagenaar, A. C., Salois, M. J., & Komro, K. A. (2009). Effects of beverage alcohol price
and tax levels on drinking: A meta-analysis of 1003 estimates from 112 studies.
Addiction, 104, 179–190.
Yeaman, S., Schick, A., & Lehmann, L. (2012). Social network architecture and the
maintenance of deleterious cultural traits. Journal of the Royal Society Interface, 9(70),
848–858.
Zhou, L., & Zhang, D. (2007). An ontology-supported misinformation model: Toward a digital misinformation library. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 37(5), 804–813.

FURTHER READING
Branch, G., Scott, E. C., & Rosenau, J. (2010). Dispatches from the evolution wars:
shifting tactics and expanding battlefields. Annual Review of Genomics and Human
Genetics, 11, 317–338.

JOHN COOK SHORT BIOGRAPHY
John Cook is the Research Fellow in Climate Change Communication at the
Global Change Institute, University of Queensland. His research interests
include the biasing influence of worldview on how people process scientific
information, the effectiveness of refutations in correcting misinformation,
and the role of social media in public education. He coauthored the 2011 book
Climate Change Denial with environmental scientist Haydn Washington and
maintains the website Skeptical Science (www.skepticalscience.com), which
won the 2011 Australian Museum Eureka Prize for Advancement of Climate
Change Knowledge.
Personal webpage: http://www.skepticalscience.com/
Curriculum vitae: http://www.skepticalscience.com/cv.php?u=1
Center for Advanced Study in the Behavioral Sciences:
http://www.casbs.org/
ULLRICH ECKER SHORT BIOGRAPHY
Ullrich Ecker is an Australian Postdoctoral Fellow of the Australian Research
Council and a Research Associate Professor at the University of Western Australia’s School of Psychology. His research examines memory integration and
memory-updating processes, and he has recently focused on the question of
how and why people are continuously affected by information they know to
be incorrect. He was recently awarded a research grant from the Australian
Research Council to work on computational models of memory. Ecker
received the University of Western Australia’s Outstanding Young Investigator Award in 2011 and the Vice Chancellor’s Mid-Career Research Award
in 2014, as well as an award for Excellence in Coursework Teaching in 2012.
STEPHAN LEWANDOWSKY SHORT BIOGRAPHY
Professor Stephan Lewandowsky is a cognitive scientist at the University
of Bristol. He was an Australian Professorial Fellow from 2007 to 2012, and
he received a Discovery Outstanding Researcher Award from the Australian
Research Council in 2011. He held a Revesz Visiting Professorship at the
University of Amsterdam in 2012. He received a Wolfson Research Merit
Award from the Royal Society in 2013 upon moving to the UK. His research
examines memory, decision making, and knowledge structures, with a
particular emphasis on how people update information in memory. He
has published over 140 scholarly articles, chapters, and books, including
numerous papers on how people respond to corrections of misinformation
(see www.cogsciwa.com for a complete list of scientific publications). He
has also contributed numerous opinion pieces to global media outlets on
issues related to climate-change skepticism and the coverage of science in
the media. A complete list of his public essays can be found at
http://www.shapingtomorrowsworld.org/inthemedia.htm.
RELATED ESSAYS
To Flop Is Human: Inventing Better Scientific Approaches to Anticipating
Failure (Methods), Robert Boruch and Alan Ruby
Emerging Trends: Asset Pricing (Economics), John Y. Campbell
Heuristic Decision Making (Political Science), Edward G. Carmines and
Nicholas J. D’Amico
Political Ideologies (Political Science), Edward G. Carmines and Nicholas J.
D’Amico
Culture and Cognition (Sociology), Karen A. Cerulo
The Inherence Heuristic: Generating Everyday Explanations (Psychology),
Andrei Cimpian
Delusions (Psychology), Max Coltheart
Applications of Selective Exposure and Attention to Information for Understanding Health and Health Disparities (Psychology), Allison Earl and
Christina Nisson
Insight (Psychology), Brian Erickson and John Kounios
Cognitive Processes Involved in Stereotyping (Psychology), Susan T. Fiske
and Cydney H. Dupree
Controlling the Influence of Stereotypes on One’s Thoughts (Psychology),
Patrick S. Forscher and Patricia G. Devine
Political Advertising (Political Science), Erika Franklin Fowler
Multitasking (Communications & Media), Matthew Irwin and Zheng Wang
The Development of Social Trust (Psychology), Vikram K. Jaswal and Marissa
B. Drell
How Brief Social-Psychological Interventions Can Cause Enduring Effects
(Methods), Dushiyanthini (Toni) Kenthirarajah and Gregory M. Walton
Search and Learning in Markets (Economics), Philipp Kircher
Concepts and Semantic Memory (Psychology), Barbara C. Malt
Implicit Attitude Measures (Psychology), Gregory Mitchell and Philip E.
Tetlock
Data Mining (Methods), Gregg R. Murray and Anthony Scime
Heuristics: Tools for an Uncertain World (Psychology), Hansjörg Neth and
Gerd Gigerenzer
Retrieval-Based Learning: Research at the Interface between Cognitive Science and Education (Psychology), Ludmila D. Nunes and Jeffrey D. Karpicke
Emerging Trends in Culture and Concepts (Psychology), Bethany Ojalehto
and Douglas Medin
Cognitive Remediation in Schizophrenia (Psychology), Clare Reeder and Til
Wykes
Cognitive Bias Modification in Mental (Psychology), Meg M. Reuland
et al.
Education in an Open Informational World (Education), Marlene Scardamalia
and Carl Bereiter
Stereotype Threat (Psychology), Toni Schmader and William M. Hall
Models of Duality (Psychology), Anand Krishna et al.
Information Politics in Dictatorships (Political Science), Jeremy L. Wallace