Post-truth, anti-truth, and can’t-handle-the-truth: how responses to science are shaped by concerns about its impact

Abstract

Science is valued for its basic and applied functions: producing knowledge and contributing to the common good. Much of the time, these are perceived to work in harmony. However, science is sometimes seen as capable of subverting the common good, by facilitating dangerous technologies (e.g., weapons of mass destruction) or exerting a malign influence on public policy, opinion, and behaviour. Employing the social functionalist framework of Tetlock (2002), we propose that efforts to censor and suppress scientific findings are motivated by concerns about their societal impact. We begin by covering recent political shifts towards more censorial and punitive responses to scientific research in the U.S. and elsewhere. We go on to propose that beyond traditional explanations of how people evaluate scientific findings, such as cognitive consistency motivations, people’s evaluations are shaped by perceptions of the potential societal impact of scientific data. Our recent studies have shown that independently of the extent to which scientific findings contradict people’s beliefs (as in the confirmation bias), people reject and oppose the publication, application, and funding of research to the extent that they judge its findings as threatening to the public interest. In the final part of the chapter, we outline avenues for further theory and research. Here, we particularly emphasize the importance of establishing whether concerns about impact are themselves rationalizations of opposition to research that are motivated by other moral concerns, such as perceived purity violations (Graham et al., 2009). Finally, we underline that it is crucial that further research examines perceptions of science as a dangerous force that must be neutralized, and the potential of these perceptions to obstruct not only the public understanding of scientific research, but ultimately, the research itself.
Post-truth, anti-truth, and can't-handle-the-truth: how responses to
science are shaped by concerns about its impact
Robbie M. Sutton, Aino Petterson, and Bastiaan T. Rutjens
Introduction
Science is the great antidote to the poison of enthusiasm and superstition.
Adam Smith, The Wealth of Nations, 1776
Science is always wrong. It never solves a problem without creating ten more.
George Bernard Shaw
People generally report positive attitudes to science and scientists (Gauchat, 2012). It is valued
for the contribution that it makes to social, cultural, and economic progress. For many people,
indeed, faith in science is akin to religious faith and may serve some of the same psychological
functions (Rutjens, van Harreveld, & van der Pligt, 2013). Science is supported by investments
of large sums of money; according to World Bank statistics, fully 2% of global gross domestic
product (GDP) is spent on research and development, and the richer the country, the higher this
proportion grows (The World Bank, 2018). But, paradoxically, science is also frequently
opposed: scientific findings and conclusions are censored and suppressed, and scientists are silenced, harassed, surveilled, sanctioned, and even persecuted.
Examples abound. Columbia University now hosts a website documenting instances (since November 2016) in which authorities have censored, obstructed, or misrepresented scientific research, and in which scientists have censored their own work or that of their colleagues (Columbia Law School, 2018). Just as happened after the election of Stephen Harper in Canada (The Professional Institute of the Public Service of Canada, 2013), the censorial and obstructive policy position toward climate science in the United States seems to have stemmed from the election of Donald J. Trump and his appointment of Scott Pruitt, a vocal critic of climate science and frequent litigator against the Environmental Protection Agency (EPA), as the head of that same agency (Chiacu & Volcovici, 2017; McKie, 2017; Nuccitelli, 2017). Also in 2017, the
Turkish government completely removed evolution from the curriculum of 9th graders, with the
explicit aim of introducing a more value-based curriculum (Frayer & Saracoglu, 2017). In
recent years, prominent scientists have complained about being targeted by online abuse, legal
complaints, vexatious and repeated freedom of information requests, and dubious re-analyses of
data designed to delay, censor, and alter the interpretation of published findings. These findings
span not only climate change but also other controversial topics such as false memory and child
abuse (Lewandowsky, Mann, Bauld, Hastings, & Loftus, 2013; Lewandowsky, Mann, Brown, &
Friedman, 2016). Indeed, opposition to science is not the preserve of right-wing and religious
groups. Whatever the scientific merits of Herrnstein and Murray's research on racial differences in IQ, it has been described in published academic papers as "crude and dangerous" (Gillborn, 2016, p. 365) and has been silenced by no-platform and other aggressive tactics on university campuses (Beinart, 2017), echoing the campus attacks on Edward O. Wilson in the 1970s (Wilson, 1995). Of course, the perception that scientific research can be dangerous and needs to be silenced and shut down is not new; it dates back (at least) to the persecution of Galileo Galilei, whose frank observations of planetary movements threatened the view that the earth is the center of the cosmos and, by implication, an entire edifice of theology and power (Dreger, 2015).
What explains this perennial opposition to science? There is surprisingly little research on this
question, despite a long and strong tradition of research into motivated skepticism about
scientific findings (for reviews, see Hornsey & Fielding, 2017; Rutjens, Heine, Sutton, & van
Harreveld, 2018). There is an urgent need for such research because opposition to science
threatens scientific, and therefore social and economic, progress, and appears to be gathering pace
in an era of declining support for democratic and enlightenment values. To be sure, motivated
skepticism about science is an important phenomenon: for example, it causes people to leave
themselves and their children unprotected from preventable diseases and encourages them to
make personal and political choices that degrade the environment (Rutjens et al., 2018). But as
much as motivated skepticism matters, it has no chance to operate when scientific advances are
censored or prevented from happening in the first place. Nor, in this case, does anyone have the
opportunity to make choices informed by their own reading of the evidence. Thus, people's
preferences for policies that support versus oppose science may be at least as important as their
attitudes toward science itself.
In this chapter, we outline some preliminary theoretical and empirical groundwork for the
systematic study of opposition to science. Our core proposal is that people not only doubt the
facts produced by some scientific investigations but that they also perceive them as a threat to
collective interests. In turn, this perception motivates cognitive and behavioral responses that
serve to neutralize the threat. Such responses include motivated skepticism, since findings are less likely to have impact if they are not believed. They also include motivated opposition to science, since findings are less likely to have impact if they remain obscure, are prevented from informing policy, or are prevented from happening at all.
Why science seems dangerous
We suggest that science seems dangerous because it is designed to escape the constraints that govern other methods of establishing and sharing knowledge. Communication is normally governed by
conventions designed to preserve social relationships, including harmony and hierarchy.
Although politeness takes different forms in different cultures, politeness itself is pancultural,
and in every culture, it mandates that one should be more formal and less frank with strangers
and social superiors (Brown & Gilman, 1960). One of the main aims of normal communication
is to establish a common ground of understanding between communicators (Clark, 1992), and
ultimately, a shared cultural reality within a cultural ingroup (Echterhoff, Higgins, & Levine,
2009). Thus, people tend to inhibit the expression of ideas that deviate from normative
understandings of reality (e.g. Kashima, 2000a, 2000b; Toma & Butera, 2009) and react
negatively when these ideas are shared too openly. For instance, Klar and Bilewicz (2017; see
also Bilewicz, 2016) found that group members' belief in the accuracy of their ingroup's historical narrative motivates individuals to act as "lay censors" of historical accounts that run counter to this official account. People also inhibit other ideas out of paternalistic concern for
the harm they may do to their audience. Thus, we are normally expected to refrain from telling
people exactly what it is about their intellect, appearance, or character that we find unattractive.
Hate speech is explicitly banned in many countries and frowned upon in most others. Research
on the third-person effect (Davison, 1983) shows that in the domain of mass communication,
people perceive that advertising, pornography, and propaganda may exert an undesirable
influence on others, if not themselves. The more they perceive it to harm others, the more they
support censorship of this material (Chung & Moon, 2016; Davison, 1983; Douglas & Sutton,
2004, 2008).
If normal human communication is polite and strategically economical with the truth, science in
its ideal form is supposed to be impersonal and mercilessly frank. Put differently, perceptions of
reality should be dictated by science in its ideal form, rather than perceptions of reality shaping
which science to accept and which to reject. Results should be reported regardless of what people
generally believe or prefer to believe, and no matter what their implications for social harmony
and hierarchy. Instead of carefully editing their message to suit their own or others' interests, researchers hand over control of their message to the vicissitudes of their data. The studies they conduct are rolls of the dice, and like oracles or soothsayers (a Middle English term, first recorded in Kent, meaning "one who speaks the truth"), they are formally obliged to convey
the results.
Freeing science from the conventions of ordinary communication has been crucial to its success
in freeing our understanding of the world from the shackles of prejudice and superstition. But
science is not completely free, and its freedom is viewed with suspicion and resentment. Indeed,
popular representations of science often cast it as dangerous, immoral, or pernicious (see Rutjens
& Heine, 2016). Haynes (2003) examined cultural representations of scientists in Western
literature and film, and identified several pernicious stereotypes. Frankenstein is an example of
an inhuman researcher who puts aside normal human emotions such as empathy in the single-
minded pursuit of knowledge and mastery of nature. In other works, scientists are represented as
foolish and helpless: unduly ready to make far-reaching decisions on the basis of a few scientific observations, and unable to predict or control their inventions. In Jurassic Park,
genetically engineered dinosaurs run amok, much to the surprise and chagrin of the scientists
who created them; in Terminator, the same is true of an artificially intelligent defense system
that becomes sentient. Scientists are also sometimes represented straightforwardly as mad, bad,
and dangerous ("dangerous" in particular still resonates with public opinion; Rutjens & Heine, 2016), like the nuclear scientist Dr Strangelove in Stanley Kubrick's film.
We suggest that motivated doubt and opposition to science are best understood within a social
functionalist perspective on motivated cognition (Tetlock, 2002). This theoretical perspective,
like other accounts of motivated cognition, assumes that when people think, feel, and act, they
are pursuing goals; in other words, human psychology should be understood in functionalist terms. However, other accounts of motivated cognition are concerned with
essentially intrapsychic functions: people's thoughts, feelings, and actions are designed to make
them feel better about themselves, that they are in control of the world, or that they have a stable
working understanding of reality (Kruglanski, 1990; Kunda, 1990; Landau, Kay, & Whitson,
2015). Research on attitudes to science has, thus far, concerned itself largely with intrapsychic
motives; for example, how people are skeptical of scientific research when it contradicts their
beliefs about a topic (Lord, Ross, & Lepper, 1979) or threatens their self-image (Bastardi,
Uhlmann, & Ross, 2011), their sense of personal optimism (Ditto & Lopez, 1992), or their moral
(Colombo, Bucher, & Inbar, 2016) and ideological (Washburn & Skitka, 2017) convictions. In
contrast, Tetlock's (2002) social functionalist account is concerned with the social functions of thought, and posits that motivated cognition can be understood only in terms of "the embeddedness of human beings in relations with other people, institutions, and the broader political and cultural environment" (p. 452). This perspective assumes that the pursuit of
collective goals, including social order, requires people to think, feel, and act in certain ways: ways that enable them to cope effectively with the demands of living in complex, interdependent
collectives. These demands include the ability to hold others accountable for actions that may
threaten collective interests, and to cope with being held accountable by others. Note that the
distinction made between intrapsychic motives and Tetlock's social functionalist account is not clear-cut, since any ideological or morality-based motivation likely incorporates both (e.g.
Washburn & Skitka, 2017), and these are often difficult to tease apart (Tetlock & Manstead,
1985). Nonetheless, most work on attitudes to science, and especially the classic work, was
informed by cognitive consistency accounts of confirmation bias (e.g. Lord et al., 1979), with
limited attention devoted to the wider social functions of this bias.
Within this overarching perspective, Tetlock (2002) proposed three theoretical models detailing
ways in which social functionalism plays out. First, people function as intuitive theologians, defending
sacred values such as shared moral foundations, ideological assumptions, and binding myths
from ideas and evidence that contradict them. Data and ideas that contradict these sacred values,
which might include egalitarian ideals about racial equality or fundamentalist beliefs about the
incontrovertible truth of the Bible, are rejected. When peoples concerns about the potential
impact of research lead them to cast doubts on its veracity and to support censorship, they are
acting as intuitive theologians. Second, people function as intuitive prosecutors, defending rules
and regimes that they perceive as legitimate. This includes finding blame and supporting efforts
to punish those who pose a threat to these regimes. When people oppose research by favoring
censorship, defunding, and sanctions, they are acting as intuitive prosecutors. Third, when their
own actions may be under the spotlight, they function as intuitive politicians, and think, feel, and
act in ways that protect and enhance desired impressions of themselves. People may do this by
appealing to cherry-picked scientific findings that support their chosen attitude or policy position
while casting doubt on other findings.
Note that scientists and their work are not passive in these processes. Social functionalism is a
ubiquitous feature of social cognition and motivation and is also displayed by scientists
themselves. Researchers function as intuitive politicians when they selectively pursue research
questions, choose methods, and report results to avoid controversy or accrue available rewards
(Ioannidis, 2012; but see also Nosek et al., 2015). They act as intuitive prosecutors when they
call out fellow researchers who produce work that they perceive as potentially harmful (e.g.
Dominus, 2017). In such cases, the concern is generally not paternalistic concern for impacts on
the public, or concern about dangerous technologies, but concern about harms to the integrity of the scientific
community and its members (e.g. the misdirection of theory and effort by inauthentic findings).
They act as intuitive theologians when their moral and political preferences affect their selection
of research questions, methods, analyses, and interpretations (Duarte et al., 2015; Jussim,
Crawford, Anglin, Stevens, & Duarte, 2016). Indeed, Jussim, Stevens, and Honeycutt (2018; see
also Stevens, Jussim, Anglin, & Honeycutt, 2018) argued that many questions concerning the
accuracy of stereotypes remain unasked in part because researchers fear the negative impact that
certain findings could have on stigmatized groups.
Impact, science skepticism, and censorial responses to science
Viewed from a social functionalist perspective, skepticism and opposition to research are
motivated by concerns about the potential impact of scientific findings on collective interests.
Studies should show, therefore, that this concern affects responses to scientific research over and
above the effect of intrapsychic motivations such as the confirmation bias. We (Sutton, Lee, & Hartley, 2018) put this hypothesis to the test in the context of women's alcohol consumption during pregnancy. Although there is some evidence that high levels of prenatal
alcohol exposure are associated with risks to children's cognitive development (Flak et al., 2014;
but see also Henderson, Kesmodel, & Gray, 2007), meta-analytic studies have found no harmful
effects of low or moderate levels (Flak et al., 2014; Henderson, Gray, & Brocklehurst, 2007).
Some studies even show the opposite trend: children who have been exposed to low or moderate
levels of alcohol during pregnancy demonstrate higher intelligence later compared to those who
had no prenatal alcohol exposure (Humphriss, Hall, May, Zuccolo, & Macleod, 2013; Lewis et
al., 2012). Nonetheless, public opinion flies in the face of this evidence: there appears to be a
consensus that exposure to even small amounts of alcohol during pregnancy poses a risk to a
child's cognitive development (Murphy, Sutton, Douglas, & McClellan, 2011).
As we shall see below, this might be understood partly as a result of biased and censorious
coverage of relevant science in the media (Lowe, Lee, & Yardley, 2010), and of advice and communiqués issued by official agencies that are explicitly concerned that women do not become confused about how much might be safe to consume (Gavaghan, 2009). Thus, the
departure of public opinion from the evidence may not reflect the operation of psychological
mechanisms. Sutton, Lee, and Hartley (2018), however, also examined whether impact bias might
motivate skepticism even when people are exposed to accurate coverage of scientific findings.
We presented experimental groups of participants with the results of a (real) cohort study (Lewis
et al., 2012) that found 8-year-old children had significantly higher IQs if their mothers had
consumed low-to-moderate amounts of alcohol during pregnancy (vs. if they had abstained
completely from alcohol). Control groups were presented with a fictional variant of the study in
which milk, rather than alcohol, was the substance that mothers had consumed (Study 1), or in
which the actual results of the study were reversed, indicating that children had lower IQs if their
mothers drank moderately (Study 2).
Sutton, Lee, and Hartley (2018) found evidence of impact bias: as predicted, participants in the
experimental groups systematically and consistently devalued the research. They perceived its
methods and its results to be less reliable and convincing than did control participants. Crucially,
participants also indicated that they thought the findings of the actual research (i.e. children
whose mothers drank alcohol were more intelligent) would be bad for mothers, children, and
society, whereas the fictional findings (drinking milk led to higher IQs or drinking alcohol led to lower IQs) would be good for them (responses were significantly different from the mid-point in
contrasting directions). Results indicated that these perceptions of impact mediated the effect of
the putative study results: participants saw the actual findings as more dangerous than the
fictional findings, and subsequently were more skeptical of them. Perceptions of impact also
appeared to mediate other interesting responses to the alcohol-during-pregnancy studies: people
were less likely to interpret the actual effect in causal terms and were more likely to ascribe it to
some confound (e.g. mothers who drank more were higher in socio-economic status, a finding actually observed in the original study by Lewis et al., 2012). These findings also held when
prior beliefs about the effects of prenatal exposure to alcohol (or milk) on child IQ were adjusted
for. These prior beliefs had a large effect consistent with the confirmation bias, but over and
above this effect, perceptions of impact accounted for differential reactions to the research.
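To make the mediation logic concrete, the sketch below simulates data of this general shape and bootstraps the indirect effect of study condition on skepticism via perceived impact. It is a minimal illustration with hypothetical variable names and simulated effect sizes, not the authors' data or analysis code; the real analyses also adjusted for prior beliefs, as noted above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400

# Hypothetical data: condition = 1 for the actual (threatening) findings,
# 0 for the fictional variant; impact = perceived harm to society;
# skepticism = doubt about the study's methods and results.
condition = rng.integers(0, 2, n)
impact = 0.8 * condition + rng.normal(size=n)   # simulated a-path
skepticism = 0.6 * impact + rng.normal(size=n)  # simulated b-path

def indirect_effect(x, m, y):
    """Product-of-paths estimate: (x -> m) times (m -> y, controlling for x)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

# Percentile bootstrap: resample participants, re-estimate the indirect effect.
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(condition[idx], impact[idx], skepticism[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(condition, impact, skepticism):.2f}, "
      f"95% CI [{lo:.2f}, {hi:.2f}]")
```

A bootstrap confidence interval for the indirect effect that excludes zero is the conventional criterion for mediation of this kind.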
Sutton et al.'s (2018) findings also indicate that, as we have proposed, people are motivated
to adopt obstructive, censorial, and even punitive responses to science that they perceive as
dangerous. In these studies, participants opposed the funding, dissemination, and application of
studies showing that alcohol may be associated with higher child IQ. They also tended to show
some desire to see the scientists responsible for the research disciplined. In contrast, on the
same measures, they supported the fictional studies in which drinking milk led to higher child IQ
or drinking alcohol led to lower child IQ. Once more, these effects were mediated by the
perceived impact of the research. Participants seemed motivated to protect society from
dangerous scientific results by not only casting doubt on these results but also supporting
measures to prevent similar results from seeing the light of day, including censorship and
punishment of researchers.
Scientists are not merely censored by authorities but also censor their own work. This is
especially apparent in studies that touch upon controversial topics such as climate change, where
researchers are careful about how they manage their terminology and draw causal conclusions from their data, in order to protect their funding (Hersher, 2017). Scientists report that fear of negative reactions, both from the public and from fellow researchers, influences what they study and how they report findings
(Kempner, Perlis, & Merz, 2005). Seen through Tetlock's (2002) social functionalist perspective,
scientists therefore act as intuitive politicians, managing accountability demands by strategically
presenting their work to the world.
Alcohol consumption during pregnancy is a controversial topic surrounded, as we have seen, by
concerns about the impact of the research. Lee, Sutton, and Hartley (2016) analyzed media
coverage of Lewis et al.'s (2012) study into child intelligence and maternal drinking during
pregnancy. Lee et al. (2016) found that the researchers played an important role in media
misrepresentations of their work. One of its key and most incendiary findings, that mothers who drank some (vs. no) alcohol had more intelligent children, was reported in the article. However,
Lewis et al. (2012) attributed this result to a socio-demographic confound (expectant mothers
were less likely to abstain from alcohol if they were older, more educated, or had higher incomes),
despite running no analysis adjusting for this confound. More strikingly, the press release issued
by the researchers institution made no mention of this result (University of Bristol, 2012).
Instead, it contained a quote from the senior researcher to the effect that the study's results gave
grounds for women not to drink during pregnancy. Only a third of the subsequent media stories
mentioned the empirical relationship between maternal drinking and child intelligence, and of
those, two-thirds reversed the direction of the effect, stating that mothers who drank had less intelligent children. A near-universal theme in the coverage was that women should abstain from
alcohol. These misrepresentations were not entirely media inventions but could be traced back to
the scientific paper and especially the press release. Scientists commonly complain that their
work is misrepresented because of the sensationalism, political agenda, and scientific illiteracy of
media outlets. The analysis by Lee et al. (2016) illustrates that scientists may also be involved in
misrepresenting their work.
Participants' responses to the target studies presented by Sutton, Lee, and Hartley (2018) reflect a consensus that if these studies were to lead pregnant women to drink alcohol, this would be a bad
outcome. In contrast, the value attached to other impacts of research may differ markedly across
participants, which is in line with more general notions derived from work on the ideological-
conflict hypothesis (Brandt et al., 2014). Liberals, for example, are likely to loathe the idea that a
scientific finding could lend support to the death penalty, or weaken the case for permissive immigration
policies by indicating that immigration undermines neighborhood cohesion. Conservatives, in
contrast, are likely to view both of these outcomes rather favorably.
McConnell and Sutton (2018) tested this possibility and examined whether these politically
loaded perceptions of impact also produce impact bias effects. Similar to Washburn and Skitka
(2017; see also Kahan, 2013; Skitka & Washburn, 2016), they showed that participants on both
sides of the left-right political spectrum were skeptical of research that contradicted their views.
In line with Sutton, Lee, and Hartley (2018), they showed that this effect was mediated by the
perception that politically uncongenial findings could be harmful to society. Indeed, McConnell
and Sutton (2018) observed the third-person effect in relation to politically uncongenial findings:
liberals thought that conservative-friendly policies would have larger effects on others than
themselves, and perceptions of impact on others, rather than the self, were related to skepticism.
Furthermore, as observed by Sutton et al. (2018), McConnell and Sutton found that perceptions
of harmful impacts also mediated between the political congeniality of research results and
censorious and punitive responses to the research.
One limitation of these studies is that they use correlational methods to isolate the effects of
perceived harmful consequences of research (impact bias) from effects of contradictions of prior
beliefs (confirmation bias). It is possible, in principle, to manipulate perceived impact
orthogonally to prior beliefs about a research topic. Campbell and Kay (2014) took such an
approach in their study of politically motivated skepticism about climate science. It is well
documented that conservatives tend to be more skeptical of climate science than liberals. This
has been explained in terms of various motivations such as higher national- rather than global-
level identification (Devine-Wright, Price, & Leviston, 2015), system justification (Hennes,
Hampton, Ozgumus, & Hamori, 2018; see also Feygina, Jost, & Goldsmith, 2010), dominance
motives (Jylhä, Cantal, Akrami, & Milfont, 2016), and endorsement of free-market ideology
(Lewandowsky, Gignac, & Oberauer, 2013). In line with the latter finding, Campbell and Kay (2014) suggested that it might be motivated by solution aversion: typically, measures
proposed to mitigate climate change involve government intervention in the form of taxes and
regulation. When Campbell and Kay (2014) presented free-market solutions to participants, in
the form of private-sector innovations in energy technology, conservatives indicated no more
skepticism about climate change than liberals. This finding indicates that concern about the
policy impact of climate science motivates climate skepticism: people doubt climate science if it
looks like it will lead to unwanted policy outcomes.
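As a concrete illustration of this design logic, the sketch below simulates the crossover pattern implied by solution aversion and tests the ideology-by-frame interaction in a linear model. Variable names and effect sizes are hypothetical, and this is not Campbell and Kay's (2014) analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "conservatism": rng.normal(size=n),          # standardized ideology score
    "free_market_frame": rng.integers(0, 2, n),  # 0 = regulation, 1 = free-market solution
})
# Simulate the solution-aversion pattern: conservatism predicts climate
# skepticism only when the proposed solution involves regulation.
df["skepticism"] = (
    0.5 * df["conservatism"] * (1 - df["free_market_frame"]) + rng.normal(size=n)
)

# A negative conservatism x frame interaction indicates that the
# ideology-skepticism link weakens under the free-market frame.
model = smf.ols("skepticism ~ conservatism * free_market_frame", data=df).fit()
print(model.summary().tables[1])
```

Here, manipulating the framing of the solution varies the perceived policy impact of accepting the science while leaving the scientific claim itself constant.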
Scientific malpractice and conspiracy
In their social functionalist role as intuitive prosecutors, people are more punitive toward
harmdoers whose actions are intentional. Indeed, people prefer to perceive harmdoing as
intentional insofar as it enables collectives to exert control over negative outcomes by blaming,
punishing, and incapacitating wrongdoers (McClure, Hilton, & Sutton, 2007). This suggests that
findings that are seen as dangerous are more likely to be seen as the product of intentional
wrongdoing, rather than of an innocent mistake or incompetence. It also suggests that once represented as the product of intentional wrongdoing, science is much more likely to be opposed.
We have obtained preliminary evidence for both of these suggestions. McConnell and Sutton
(2018) found that people on both the left and right sides of the political spectrum tended to
perceive ideologically uncongenial results as the product of a conspiracy by researchers. In
another line of work, we (Sutton, Douglas, & Petterson, 2018) found that after adjusting for
skepticism about climate change, belief in conspiracy theories about climate science (e.g. that scientists exaggerate the danger of climate change to secure funding) predicted support for the
censorship, surveillance, and punishment of climate scientists. In a subsequent experiment, we
found that experimentally exposing participants to these conspiracy theories increased their
opposition to climate science on the same measures.
Conspiracy theories explain socially significant phenomena as the outcome of covert plots,
generally orchestrated by powerful elites to serve their own interests (Douglas, Sutton, &
Cichocka, 2017). Conspiracy theories surround several topics of scientific inquiry, most
famously vaccination and climate change. Conspiracy theories are widespread in the general
population, with over a third of Americans in a recent survey agreeing that global warming is a hoax (Jensen, 2013). Conspiracy theories about science are not a peculiarly American or
conservative problem: Bessi et al. (2015) found that conspiracy content with anti-science
messages was shared among Italian Facebook users about three times as often as scientific
content. Their relation to skepticism about scientific research is well established and is likely
bidirectional: implausible findings fuel conspiracy beliefs, and conspiracy beliefs fuel skepticism
(Lewandowsky, Oberauer, & Gignac, 2013; see also Lewandowsky, Gignac, & Oberauer, 2013), while exposure to conspiracy theories has been found to reduce people's inclination to vaccinate their children and to mitigate climate change (Jolley & Douglas, 2014; van der Linden, 2015). However,
conspiracy theories may also provide a powerful impetus to anti-science politics beyond the
tendency for political leaders and spokespeople to merely cast doubt on or ignore scientific
findings.
Conclusion
In this chapter, we have reviewed anecdotal and empirical evidence that skepticism and efforts to
suppress scientific findings are motivated by concerns about their societal impact. We have
attempted to lay the groundwork for further theory and research by suggesting that these
phenomena are best understood from a social functionalist perspective. In this perspective,
people act, think, and feel not only to satisfy internal motivations such as cognitive consistency
but also to achieve social objectives. Our recent work illustrates that these phenomena can be
approached with established methods for studying support for censorship and motivated
skepticism of science. More theoretical work is needed to uncover the specific
mechanisms that lead scientific findings to be perceived as harmful. This work might draw on
advances in moral reasoning and the perception of harm (Graham, Haidt, & Nosek, 2009; Gray,
Schein, & Ward, 2014). A critical question is whether judgments of harmfulness may themselves
be rationalizations of opposition to research that are motivated by other moral concerns, such as
perceived purity violations (cf. Graham et al., 2009), or more parochial concerns such as the
perceived interests of the self or a relevant ingroup. Another critical question is what (exactly) different types of scientific research are perceived to harm: an abstract conception such as "society," or specific constituencies within society, and whether this affects the degree and form of opposition to science. Further theoretical work is also required to understand boundary conditions: notably, when people perceive scientific findings to have potentially dangerous impacts but nonetheless do not support efforts to suppress them or to punish researchers.
Science is routinely and quite appropriately judged according to the good that it can do us
(Massey & Barreras, 2013). Funders consider not only the scientific but also the social and
economic value of research. Ethics panels, before they approve research, weigh its scientific
benefits against the potential harms to its participants. The phenomena we have examined in this
chapter, however, are different. They pose a threat to the integrity of science and, ironically, to its contribution to society. There is an urgent need for research to examine the apparently all-
too-common perception that science is a danger that must be counteracted. As long as science is
perceived as a danger, we are prone to letting belief systems and ideologies dictate how science
is judged rather than letting science shape how we should perceive reality.
References
Bastardi, A., Uhlmann, E. L., & Ross, L. (2011). Wishful thinking: Belief, desire, and the motivated evaluation of scientific evidence. Psychological Science, 22(6), 731–732. doi:10.1177/0956797611406447.
Beinart, P. (2017, March 6). A violent attack on free speech at Middlebury. The Atlantic.
Retrieved from www.theatlantic.com/politics/archive/2017/03/middlebury-free-speech-
violence/518667/.
Bessi, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2015).
Science vs conspiracy: Collective narratives in the age of misinformation. PloS One, 10(2),
e0118093. doi:10.1371/journal.pone.0118093.
Bilewicz, M. (2016). The dark side of emotion regulation: Historical defensiveness as an obstacle in reconciliation. Psychological Inquiry, 27(2), 89–95. doi:10.1080/1047840x.2016.1162130.
Brandt, M. J., Reyna, C., Chambers, J., Crawford, J., & Wetherell, G. (2014). The ideological-conflict hypothesis: Intolerance among both liberals and conservatives. Current Directions in Psychological Science, 23, 27–34.
Brown, R., & Gilman, A. (1960). The pronouns of power and solidarity. In T. A. Sebeok (Ed.), Style in language (pp. 253–276). Cambridge: MIT Press.
Campbell, T. H., & Kay, A. C. (2014). Solution aversion: On the relation between ideology and motivated disbelief. Journal of Personality and Social Psychology, 107(5), 809–824. doi:10.1037/a0037963.
Chiacu, D., & Volcovici, V. (2017, March 19). EPA chief unconvinced on CO2 link to global
warming. Reuters. Retrieved from www.reuters.com/article/us-usa-epa-pruitt/epa-chief-
unconvinced-on-co2-link-to-global-warming-idUSKBN16G1XX.
Chung, S., & Moon, S. I. (2016). Is the third-person effect real? A critical examination of rationales, testing methods, and previous findings of the third-person effect on censorship attitudes. Human Communication Research, 42(2), 312–337. doi:10.1111/hcre.12078.
Clark, H. H. (1992). Arenas of language use. Stanford, CA; Chicago, IL: Center for the Study of
Language & Information.
Colombo, M., Bucher, L., & Inbar, Y. (2016). Explanatory judgment, moral offense and value-free science. Review of Philosophy and Psychology, 7(4), 743–763. doi:10.1007/s13164-015-0282-z.
Columbia Law School. Sabin Center for Climate Change Law. (2018). Silencing science tracker.
Retrieved from http://columbiaclimatelaw.com/resources/silencing-science-tracker/about/.
Davison, W. P. (1983). The third-person effect in communication. Public Opinion Quarterly, 47(1), 1–15. doi:10.1086/268763.
Devine-Wright, P., Price, J., & Leviston, Z. (2015). My country or my planet? Exploring the influence of multiple place attachments and ideological beliefs upon climate change attitudes and opinions. Global Environmental Change, 30, 68–79. doi:10.1016/j.gloenvcha.2014.10.012.
Ditto, P. H., & Lopez, D. F. (1992). Motivated skepticism: Use of differential decision criteria
for preferred and nonpreferred conclusions. Journal of Personality and Social Psychology,
63(4), 568. doi:10.1037/0022-3514.63.4.568.
Dominus, S. (2017, October 18). When the revolution came for Amy Cuddy. The New York
Times Magazine. Retrieved from www.nytimes.com/2017/10/18/magazine/when-the-
revolution-came-for-amy-cuddy.html.
Douglas, K. M., & Sutton, R. M. (2004). Right about others, wrong about ourselves? Actual and perceived self–other differences in resistance to persuasion. British Journal of Social Psychology, 43(4), 585–603. doi:10.1348/0144666042565416.
Douglas, K. M., & Sutton, R. M. (2008). The hidden impact of conspiracy theories: Perceived and actual influence of theories surrounding the death of Princess Diana. The Journal of Social Psychology, 148(2), 210–222. doi:10.3200/socp.148.2.210-222.
Douglas, K. M., Sutton, R. M., & Cichocka, A. (2017). The psychology of conspiracy theories. Current Directions in Psychological Science, 26(6), 538–542. doi:10.1177/0963721417718261.
Dreger, A. (2015). Galileo’s middle finger: Heretics, activists and the search for justice in
science. New York, NY: Penguin Group.
Duarte, J. L., Crawford, J. T., Stern, C., Haidt, J., Jussim, L., & Tetlock, P. E. (2015). Political diversity will improve social psychological science. Behavioral and Brain Sciences, 38, 1–58. doi:10.1017/s0140525x14000430.
Echterhoff, G., Higgins, E. T., & Levine, J. M. (2009). Shared reality: Experiencing commonality with others' inner states about the world. Perspectives on Psychological Science, 4(5), 496–521. doi:10.1111/j.1745-6924.2009.01161.x.
Feygina, I., Jost, J. T., & Goldsmith, R. E. (2010). System justification, the denial of global warming, and the possibility of system-sanctioned change. Personality and Social Psychology Bulletin, 36(3), 326–338. doi:10.1177/0146167209351435.
Flak, A. L., Su, S., Bertrand, J., Denny, C. H., Kesmodel, U. S., & Cogswell, M. E. (2014). The association of mild, moderate, and binge prenatal alcohol exposure and child neuropsychological outcomes: A meta-analysis. Alcoholism: Clinical and Experimental Research, 38(1), 214–226. doi:10.1111/acer.12214.
Frayer, L., & Saracoglu, G. (2017, August 20). In Turkey, schools will stop teaching evolution
this fall. National Public Radio. Retrieved from
www.npr.org/sections/parallels/2017/08/20/540965889/in-turkey-schools-will-stop-teaching-
evolution-this-fall.
Gauchat, G. (2012). Politicization of science in the public sphere: A study of public trust in the United States, 1974 to 2010. American Sociological Review, 77(2), 167–187. doi:10.1177/0003122412438225.
Gavaghan, C. (2009). "You can't handle the truth": Medical paternalism and prenatal alcohol use. Journal of Medical Ethics, 35(5), 300–303. doi:10.1136/jme.2008.028662.
Gillborn, D. (2016). Softly, softly: Genetics, intelligence and the hidden racism of the new geneism. Journal of Education Policy, 31(4), 365–388. doi:10.1080/02680939.2016.1139189.
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of
moral foundations. Journal of Personality and Social Psychology, 96(5), 1029.
doi:10.1037/a0015141.
Gray, K., Schein, C., & Ward, A. F. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General, 143(4), 1600–1615.
Haynes, R. (2003). From alchemy to artificial intelligence: Stereotypes of the scientist in Western literature. Public Understanding of Science, 12(3), 243–253. doi:10.1177/0963662503123003.
Henderson, J., Gray, R., & Brocklehurst, P. (2007). Systematic review of effects of low–moderate prenatal alcohol exposure on pregnancy outcome. BJOG: An International Journal of Obstetrics & Gynaecology, 114(3), 243–252. doi:10.1111/j.1471-0528.2006.01163.x.
Henderson, J., Kesmodel, U., & Gray, R. (2007). Systematic review of the fetal effects of prenatal binge-drinking. Journal of Epidemiology and Community Health, 61(12), 1069–1073. doi:10.1136/jech.2006.054213.
Hennes, E. J., Hampton, A. J., Ozgumus, E., & Hamori, T. J. (2018). System-level biases in the
production and consumption of information: Implications for system resilience and radical
change. In B. T. Rutjens, & M. J. Brandt (Eds.), Belief systems and the perception of reality.
Oxon: Routledge.
Hersher, R. (2017, November 29). Climate scientists watch their words, hoping to stave off
funding cuts. National Public Radio. Retrieved from www.npr.org/sections/thetwo-
way/2017/11/29/564043596/climate-scientists-watch-their-words-hoping-to-stave-off-
funding-cuts.
Hornsey, M. J., & Fielding, K. S. (2017). Attitude roots and Jiu Jitsu persuasion: Understanding
and overcoming the motivated rejection of science. American Psychologist, 72(5), 459.
doi:10.1037/a0040437.
Humphriss, R., Hall, A., May, M., Zuccolo, L., & Macleod, J. (2013). Prenatal alcohol exposure
and childhood balance ability: Findings from a UK birth cohort study. BMJ Open, 3(6),
e002718. doi:10.1136/bmjopen-2013-002718.
Ioannidis, J. P. A. (2012). Why science is not necessarily self-correcting. Perspectives on Psychological Science, 7(6), 645–654. doi:10.1177/1745691612464056.
Jensen, T. (2013, April 2). Democrats and Republicans differ on conspiracy theory beliefs.
Retrieved from www.publicpolicypolling.com/polls/democrats-and-republicans-differ-on-
conspiracy-theory-beliefs/.
Jolley, D., & Douglas, K. M. (2014). The social consequences of conspiracism: Exposure to conspiracy theories decreases intentions to engage in politics and to reduce one's carbon footprint. British Journal of Psychology, 105(1), 35–56. doi:10.1111/bjop.12018.
Jussim, L., Crawford, J. T., Anglin, S. M., Stevens, S. T., & Duarte, J. L. (2016). Interpretations and methods: Towards a more effectively self-correcting social psychology. Journal of Experimental Social Psychology, 66, 116–133. doi:10.1016/j.jesp.2015.10.003.
Jussim, L., Stevens, S. T., & Honeycutt, N. (2018). Forbidden and unasked questions about
stereotype accuracy. Manuscript submitted for publication.
Jylhä, K. M., Cantal, C., Akrami, N., & Milfont, T. L. (2016). Denial of anthropogenic climate change: Social dominance orientation helps explain the conservative male effect in Brazil and Sweden. Personality and Individual Differences, 98, 184–187. doi:10.1016/j.paid.2016.04.020.
Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection: An experimental study. Judgment and Decision Making, 8(4), 407–424. doi:10.2139/ssrn.2182588.
Kashima, Y. (2000a). Recovering Bartlett's social psychology of cultural dynamics. European Journal of Social Psychology, 30(3), 383–403. doi:10.1002/(SICI)1099-0992(200005/06)30:3<383::AID-EJSP996>3.0.CO;2-C.
Kashima, Y. (2000b). Maintaining cultural stereotypes in the serial reproduction of narratives. Personality and Social Psychology Bulletin, 26(5), 594–604. doi:10.1177/0146167200267007.
Kempner, J., Perlis, C. S., & Merz, J. F. (2005). Forbidden knowledge. Science, 307(5711), 854.
doi:10.1126/science.1107576.
Klar, Y., & Bilewicz, M. (2017). From socially motivated lay historians to lay censors: Epistemic conformity and defensive group identification. Memory Studies, 10(3), 334–346. doi:10.1177/1750698017701616.
Kruglanski, A. W. (1990). Lay epistemic theory in social-cognitive psychology. Psychological Inquiry, 1(3), 181–197. doi:10.1207/s15327965pli0103_1.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. doi:10.1037/0033-2909.108.3.480.
Landau, M. J., Kay, A. C., & Whitson, J. A. (2015). Compensatory control and the appeal of a
structured world. Psychological Bulletin, 141(3), 694.
Lee, E., Sutton, R. M., & Hartley, B. L. (2016). From scientific article to press release to media coverage: Advocating alcohol abstinence and democratising risk in a story about alcohol and pregnancy. Health, Risk & Society, 18(5–6), 247–269. doi:10.1080/13698575.2016.1229758.
Lewandowsky, S., Gignac, G. E., & Oberauer, K. (2013). The role of conspiracist ideation and
worldviews in predicting rejection of science. PLoS One, 8(10), e75637.
doi:10.1371/journal.pone.0075637.
Lewandowsky, S., Mann, M. E., Bauld, L., Hastings, G., & Loftus, E. F. (2013). The
subterranean war on science. APS Observer, 26(9). Retrieved from
www.psychologicalscience.org/observer/the-subterranean-war-on-science/comment-page-1.
Lewandowsky, S., Mann, M. E., Brown, N. J., & Friedman, H. (2016). Science and the public: Debate, denial, and skepticism. Journal of Social and Political Psychology, 4(2), 537–553. doi:10.5964/jspp.v4i2.604.
Lewandowsky, S., Oberauer, K., & Gignac, G. E. (2013). NASA faked the moon landing – therefore, (climate) science is a hoax: An anatomy of the motivated rejection of science. Psychological Science, 24(5), 622–633. doi:10.1177/0956797612457686.
Lewis, S. J., Zuccolo, L., Davey Smith, G., Macleod, J., Rodriquez, S., Draper, E. S., ... Gray, R. (2012). Fetal alcohol exposure and IQ at age 8: Evidence from a population-based birth-cohort study. PLoS One, 7(11), e49407. doi:10.1371/journal.pone.0049407.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. doi:10.1037/0022-3514.37.11.2098.
Lowe, P., Lee, E., & Yardley, L. (2010). Under the influence? The construction of foetal alcohol
syndrome in UK newspapers. Sociological Research Online, 15(4), 2. doi:10.5153/sro.2225.
Massey, S. G., & Barreras, R. E. (2013). Introducing impact validity. Journal of Social Issues, 69(4), 615–632. doi:10.1111/josi.12032.
McClure, J., Hilton, D. J., & Sutton, R. M. (2007). Judgments of voluntary and physical causes in causal chains: Probabilistic and social functionalist criteria for attributions. European Journal of Social Psychology, 37(5), 879–901. doi:10.1002/ejsp.394.
McConnell, P. & Sutton, R. M. (2018). The intolerable truth: Perceptions of malfeasance,
harmful impact, and the desire to censor ideologically dissonant research findings.
Manuscript in preparation.
McKie, R. (2017, February 20). Scientists attack their muzzling by government. The Observer.
Retrieved from www.theguardian.com/science/2016/feb/20/scientists-attack-muzzling-
government-state-funded-cabinet-office.
Murphy, A. O., Sutton, R. M., Douglas, K. M., & McClellan, L. M. (2011). Ambivalent sexism and the dos and don'ts of pregnancy: Examining attitudes toward proscriptions and the women who flout them. Personality and Individual Differences, 51(7), 812–816. doi:10.1016/j.paid.2011.06.031.
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., & Contestabile, M. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. doi:10.1126/science.aab2374.
Nuccitelli, D. (2017, January 31). Trump is copying the Bush censorship playbook. Scientists aren't standing for it. The Guardian. Retrieved from www.theguardian.com/environment/climate-consensus-97-per-cent/2017/jan/31/trumps-copying-the-bush-censorship-playbook-scientists-arent-standing-for-it.
The Professional Institute of the Public Service of Canada. (2013). Most federal scientists feel
they can’t speak out, even if public health and safety at risk, says new survey. Retrieved from
www.pipsc.ca/portal/page/portal/website/issues/science/bigchill.
Rutjens, B. T., & Heine, S. J. (2016). The immoral landscape? Scientists are associated with
violations of morality. PLoS One, 11(4), e0152798. doi:10.1371/journal.pone.0152798.
Rutjens, B. T., Heine, S. J., Sutton, R., & van Harreveld, F. (2018). Attitudes towards science.
Advances in Experimental Social Psychology, 57. doi:10.1016/bs.aesp.2017.08.001.
Rutjens, B. T., van Harreveld, F., & van der Pligt, J. (2013). Step by step: Finding compensatory order in science. Current Directions in Psychological Science, 22(3), 250–255. doi:10.1177/0963721412469810.
Skitka, L. J., & Washburn, A. (2016). Are conservatives from Mars and liberals from Venus? Maybe not so much. In P. Valdesolo, & J. Graham (Eds.), Social psychology of political polarization (pp. 78–101). New York, NY: Routledge.
Stevens, S. T., Jussim, L., Anglin, S. M., & Honeycutt, N. (2018) Direct and indirect influences
of political ideology on perceptions of scientific findings. In B. T. Rutjens, & M. J. Brandt
(Eds.), Belief systems and the perception of reality. Oxon: Routledge.
Sutton, R. M., & Douglas, K. M. (2014). Examining the monological nature of conspiracy theories. In J. W. van Prooijen, & P. A. M. van Lange (Eds.), Power, politics, and paranoia: Why people are suspicious of their leaders (pp. 254–272). Cambridge: Cambridge University Press.
Sutton, R. M., Douglas, K. M., & Petterson, A. (2018). A tale of two conspiracies: Similarities
and differences between conspiracy theories on either side of the climate debate. Manuscript
in preparation.
Sutton, R. M., Lee, E., & Hartley, B. L. (2018). Could studies of drinking during pregnancy
encourage drinking during pregnancy? Reactions to scientific research are shaped by concerns
about its impact. Manuscript in preparation.
Tetlock, P. E. (2002). Social functionalist frameworks for judgment and choice: Intuitive politicians, theologians, and prosecutors. Psychological Review, 109(3), 451–471. doi:10.1037/0033-295x.109.3.451.
Tetlock, P. E., & Manstead, A. S. (1985). Impression management versus intrapsychic explanations in social psychology: A useful dichotomy? Psychological Review, 92(1), 59–77. doi:10.1037/0033-295X.92.1.59.
Toma, C., & Butera, F. (2009). Hidden profiles and concealed information: Strategic information sharing and use in group decision making. Personality and Social Psychology Bulletin, 35(6), 793–806. doi:10.1177/0146167209333176.
University of Bristol (2012). Even moderate levels of drinking in pregnancy can affect a child’s
IQ [Press release]. Retrieved from www.bristol.ac.uk/news/2012/8936.html.
van der Linden, S. (2015). The conspiracy-effect: Exposure to conspiracy theories (about global warming) decreases pro-social behavior and science acceptance. Personality and Individual Differences, 87, 171–173. doi:10.1016/j.paid.2015.07.045.
Washburn, A. N., & Skitka, L. J. (2017). Science denial across the political divide: Liberals and
conservatives are similarly motivated to deny attitude-inconsistent science. Social
Psychological and Personality Science. Advance online publication.
doi:10.1177/1948550617731500.
Wilson, E. O. (1995). Science and ideology. Academic Questions, 8(3), 73–81. doi:10.1007/bf02683222.
The World Bank. (2018). Graph illustration of research and development expenditure from 1996–2015. Retrieved from https://data.worldbank.org/indicator/GB.XPD.RSDV.GD.ZS.