What is the “science of science communication”?
Dan M. Kahan
Abstract

This essay seeks to explain what the “science of science communication” is
by doing it. Surveying studies of cultural cognition and related dynamics, it
demonstrates how the form of disciplined observation, measurement, and
inference distinctive of scientific inquiry can be used to test rival hypotheses
on the nature of persistent public conflict over societal risks; indeed, it
argues that satisfactory insight into this phenomenon can be achieved only
by these means, as opposed to the ad hoc story-telling dominant in popular
and even some forms of scholarly discourse. Synthesizing the evidence,
the essay proposes that conflict over what is known by science arises from
the very conditions of individual freedom and cultural pluralism that make
liberal democratic societies distinctively congenial to science. This tension,
however, is not an “inherent contradiction”; it is a problem to be solved —
by the science of science communication understood as a “new political
science” for perfecting enlightened self-government.
Keywords

Risk communication
Introduction

Public opinion on societal risks presents a disorienting spectacle. Is the earth
warming up as a result of human activity? Can nuclear wastes be safely stored in
deep underground rock formations? Can natural gas be safely extracted by
hydraulic fracturing of bedrock? Will inoculating adolescent girls against the
human papilloma virus — an extremely common sexually transmitted disease
responsible for cervical cancer — lull them into engaging in unprotected sex,
thereby increasing their risk of pregnancy or of other STDs? Does allowing citizens
to carry concealed handguns in public increase crime — or reduce it by deterring
violent predation?
Never have human societies known so much about mitigating the dangers they face
but agreed so little about what they collectively know. Because this disjunction
features the persistence of divisive conflict in the face of compelling scientific
evidence, we can refer to it as the “science communication paradox” (Figure 1).
Resolving this paradox is the central aim of a new science of science communication.
Its central findings suggest that intensifying popular conflict over collective
knowledge is in fact a predictable byproduct of the very conditions that make free,
democratic societies so hospitable to the advancement of science. But just as science
has equipped society to repel myriad other threats, so the science of science
communication can be used to fashion tools specifically suited to dispelling the
science communication paradox.
Figure 1. Polarization over risk. Scatterplots relate risk perceptions to political outlooks for
members of nationally representative sample (N = 1800), April–May 2014 [Kahan, 2015].
The “public irrationality thesis”
What is the “science of science communication”? One could easily define it with
reference to some set of signature methods and aims [Fischhoff and Scheufele,
2013]. But more compelling is simply to do the science of science communication —
to show what it means to approach the science communication paradox scientifically.
The most popular explanation for the science communication paradox can be called
the “public irrationality thesis” or “PIT.” Members of the public, PIT stresses, are
not very science literate. In addition, they do not think like scientists. Scientists
assess risk in a conscious, deliberate fashion, employing the analytical reasoning
necessary to make sense of empirical evidence. Members of the public, in contrast,
appraise hazards intuitively, on the basis of fast-acting unconscious emotions. As a
result, members of the public overestimate dramatic or sensational risks like
terrorism and discount more remote but more consequential ones — like climate
change [Weber, 2006; Marx et al., 2007; Sunstein, 2007; Sunstein, 2005].
PIT features genuine cognitive mechanisms known to be important in various
settings [Kahneman, 2003; Frederick, 2005]. It therefore supplies a very plausible
explanation of the science communication paradox.
But there will inevitably be a greater number of plausible accounts of any complex
social phenomenon than can actually be true [Watts, 2011]. Cognitive psychology
supplies a rich inventory of dynamics — “dissonance avoidance”, “availability
cascades”, “tipping points”, “emotional numbing”, “fast vs. slow cognition”, and
the like. Treating these as a grab bag of argument templates, any imaginative op-ed
writer can construct a seemingly “scientific” account of public conflict over risk.
Conjectures of this sort are not a bad thing. But those who offer them should
acknowledge that they are only hypotheses, in need of empirical testing, and not
hold them forth as genuine empirical “explanations.” Otherwise, our understanding
of the science communication paradox will drown in a sea of just-so stories.
So does PIT withstand empirical testing? If the reason members of the public fail to
take climate change as seriously as scientists think they should is that the public
lacks the knowledge and capacity necessary to understand empirical information,
then we would expect the gap between public and expert perceptions to narrow as
members of the public become more science literate and more proficient in critical
reasoning.
But that does not happen (Figure 2). Members of the public who score highest in
one or another measure of science comprehension, studies show, are no more
concerned about global warming than those who score the lowest [Kahan, 2015;
Kahan et al., 2012]. The same pattern, moreover, characterizes multiple other
contested risks, such as the ones posed by nuclear power, fracking, and private
possession of firearms [Kahan, 2015].
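At bottom, this is an interaction test: PIT predicts that the effect of political outlook on risk perception shrinks as science comprehension rises, whereas the reported data show it growing. Below is a minimal sketch of such a test in Python with simulated data; the variable names, scales, and effect sizes are illustrative assumptions, not the study’s actual dataset or analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1800

outlook = rng.normal(size=n)          # political outlook, left (-) to right (+)
sci = rng.uniform(0.0, 1.0, size=n)   # science-comprehension score, 0 to 1

# PIT predicts the outlook gap should *shrink* as sci rises (interaction near
# zero or gap-closing); the observed pattern is a gap that *widens*, so the
# simulated data build in a gap-widening outlook-by-comprehension interaction.
concern = 0.2 - 0.3 * outlook - 0.6 * outlook * sci + rng.normal(scale=0.8, size=n)

df = pd.DataFrame({"concern": concern, "outlook": outlook, "sci": sci})
model = smf.ols("concern ~ outlook * sci", data=df).fit()
print(model.params)  # a sizable outlook:sci term signals widening polarization
```

If PIT were right, the `outlook:sci` coefficient would be near zero or opposite in sign; the pattern in Figure 2 corresponds to a coefficient that amplifies the outlook gap among the most science-comprehending respondents.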
Figure 2. Impact of science comprehension on climate change polarization. Error bars are
0.95 confidence intervals (N = 1540) [Kahan et al., 2012].
The “cultural cognition thesis”
Another plausible conjecture — another hypothesis about the science
communication paradox — is the “cultural cognition thesis” (CCT). CCT posits that
certain types of group affinities are integral to the mental processes ordinary
members of the public use to assess risk [Kahan et al., 2010].
“Motivated reasoning” refers to the tendency of people to conform their
assessments of all sorts of evidence to some goal unrelated to accuracy [Sood, 2013;
Kunda, 1990]. Students from rival colleges, for example, can be expected to form
opposing perceptions when viewing a film of a disputed officiating call in a football
game between their schools, consistent with their stake in experiencing emotional
solidarity with their peers [Hastorf and Cantril, 1954].
CCT says this same thing occurs when members of the public access information
about contested societal risks. When positions on facts become associated with
opposing social groups — not universities but rather everyday networks of people
linked by common moral values, political outlooks, and social norms —
individuals selectively assess evidence in patterns that reflect their group
identities [Kahan, 2011].
Numerous studies support CCT. In one, my colleagues and I examined the impact
of cultural cognition on perceptions of scientific consensus [Kahan, Jenkins-Smith
and Braman, 2013]. We asked our subjects — a large, nationally representative
sample of U.S. adults — to indicate whether they regarded particular scientists as
“experts” whose views an ordinary citizen ought to take into account on climate
change, nuclear waste disposal, and gun control. We picked these issues precisely
because they feature disputes over empirical, factual issues among opposing
cultural groups.
The scientists were depicted as possessing eminent qualifications, including
degrees from, and faculty appointments at, prestigious universities. However, half
the study subjects saw a book excerpt in which the featured scientist took the “high
risk” position (global warming is occurring; underground disposal of nuclear waste
is unsafe; permitting carrying of concealed handguns increases crime) and half an
excerpt in which the same scientist took the “low risk” position (there’s no clear
evidence of human-caused global warming; underground disposal of nuclear wastes
is safe; permitting concealed carry reduces crime).
The subjects’ assessments of the scientists’ expertise, we found, depended on the fit
between the position attributed to the expert and the position held by most of the
subjects’ cultural peers. If the featured scientist was depicted as endorsing the
dominant position in a subject’s cultural group, the subject was highly likely to
classify that scientist as an “expert” on that issue; if not, then not (Figure 3). Like
sports fans motivated to see the officiating replay as supporting their team, the
subjects selectively credited or discredited the evidence we showed them — the
position of a highly qualified scientist — in a manner supportive of their group’s
position.
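One hedged way to picture this kind of analysis is a logistic regression of the binary “expert” judgment on whether the scientist’s attributed position fits the position dominant in the subject’s cultural group. The sketch below uses simulated data; the variable names and the size of the “fit” effect are assumptions for illustration, not figures from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1336

# 1 = the depicted scientist endorses the position dominant in the subject's
# cultural group; 0 = he takes the opposing position.
fit_with_group = rng.integers(0, 2, size=n)

# Simulate a strong "identity fit" effect on expert recognition.
log_odds = -0.5 + 2.0 * fit_with_group
rated_expert = (rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))).astype(int)

df = pd.DataFrame({"rated_expert": rated_expert, "fit": fit_with_group})
model = smf.logit("rated_expert ~ fit", data=df).fit(disp=0)
print(np.exp(model.params["fit"]))  # odds ratio for crediting a congenial expert
```

An odds ratio well above 1 on `fit` is the statistical signature of the pattern described above: the same credentials count as “expertise” only when the scientist’s conclusion is culturally congenial.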
Figure 3. Biased perceptions of scientific expertise. Colored bars reflect 0.95 confidence
intervals (N = 1336) [Kahan, Jenkins-Smith and Braman, 2013].
If this is how members of the public assess evidence of “expert consensus” outside
the lab, we should expect members of diverse cultural groups to be polarized not
just on particular risks but also on the weight of scientific opinion on those risks. In
a survey component of the study, we found exactly that: subjects of diverse
affiliations all strongly believed that the position that predominated in their group
was consistent with “scientific consensus.” Judged against National Academy of
Sciences “expert consensus reports”, all the groups were as likely to be right as
wrong across the run of issues.
Science comprehension and polarization
PIT and CCT have also squared off face-to-face. Under PIT, one should expect
individuals who are high in science comprehension to use their knowledge and
reasoning proficiency to form risk perceptions supported by the best available
scientific evidence. Individuals who lack such knowledge and reasoning
proficiencies must “go with their gut”, relying on intuitive heuristics like “what do
people like me believe?” [Weber and Stern, 2011; Sunstein, 2006]. Accordingly,
under PIT one would predict that as members of opposing cultural groups become
more science literate and more adept at analytical reasoning — and thus less
dependent on heuristic substitutes for science comprehension — they should
converge in beliefs on climate change.
But the evidence refutes this prediction. In fact, the most science-comprehending
members of opposing cultural groups, my colleagues and other researchers [Kahan
et al., 2012; Hamilton, Cutler and Schaefer, 2012] have found, are the most polarized
(Figure 4).
This is the outcome CCT predicts. If people can be expected to fit their assessments
of evidence to the dominant position within their cultural groups, then those
individuals most adept in reasoning about scientific data should be even “better” at
forming culturally congenial beliefs than their less adept peers. This hypothesis is
borne out by experiments showing that individuals who score highest on tests of
one or another reasoning disposition opportunistically use that disposition to
search out evidence supportive of their cultural predispositions and explain away
the rest.
Figure 4. Polarizing impact of science comprehension on climate-change risk perceptions.
Nationally representative sample (N = 1540). Shaded areas represent 0.95 confidence
intervals [Kahan et al., 2012].
Pathological vs. normal cases
Scientific investigation of the science communication paradox, then, suggests that
CCT furnishes a more satisfactory explanation than PIT. But it also reveals
something else: such conflict — including the magnification of it by science
comprehension — is not the norm. From the dangers of consuming artificially
sweetened beverages to the safety of medical x-rays to the carcinogenic effect of
exposure to power-line magnetic fields, the number of issues that do not culturally
polarize the public is orders of magnitude larger than the number that do (Figure 5
and Figure 6).
Figure 5. “Polarized” vs. “unpolarized” risk perceptions. Scatterplots relate risk perceptions
to political outlooks for members of nationally representative sample (N = 1800)
[Kahan, 2015].
Members of the public definitely do not have a better grasp of the science on the
myriad issues that don’t polarize them than they have of the few that do. In order
simply to live — much less live well — individuals need to accept as known by
science much more than they could comprehend or verify on their own. They do
this by becoming experts at figuring out who knows what about what. It does not
matter, for example, that half the U.S. population (science literacy tests show)
believes that “antibiotics kill viruses as well as bacteria” [National Science Foundation,
2014]: they know they should go to the doctor and take the medicine she prescribes
when they are sick.
The place in which people are best at exercising this knowledge-recognition skill,
moreover, is inside of identity-defining affinity groups. Individuals spend most of
their time with people who share their basic outlooks, and thus get most of their
information from them. They can also read people “like them” better — figuring
out who genuinely knows what’s known by science and who is merely pretending
to [Watson, Kumar and Michaelsen, 1993].
This strategy is admittedly insular. But that is not usually a problem either: all the
major cultural groups with which people identify are amply stocked with highly
science-comprehending members and all enjoy operational mechanisms for
transmitting scientific knowledge to their members. Any group that consistently
misled its members on matters known to science and of consequence to their
well-being would soon die out. Thus, ordinary members of diverse groups
ordinarily converge on what is known by science.

Figure 6. Science comprehension and polarization. Nationally representative sample
(N = 1800), April–May 2014. Shaded areas represent 0.95 confidence intervals [Kahan, 2015].
Persistent nonconvergence — polarization — is in fact pathological. It occurs when
factual issues become entangled in antagonistic cultural meanings that transform
positions on them into badges of loyalty to opposing groups. In that circumstance,
the same process that usually guides ordinary members of the public to what’s
known by science will systematically deceive them.
Popper’s revenge...
It’s no accident that the best philosophical exposition of science’s distinctive way of
knowing — The Logic of Scientific Discovery [Popper, 1959] — and one of the best, if
not the best, philosophical expositions of liberal democracy — The Open Society and its
Enemies [Popper, 1966] — were both written by Karl Popper. Only in a society that
denies any institution the authority to stipulate what must be accepted as true,
Popper recognized, can individuals be expected to develop the inquisitive and
disputatious habits of mind that fuel the scientific engine of conjecture and
refutation.
But as Popper understood, removing this barrier to knowledge does not dispense
with the need for reliable mechanisms for certifying what science knows. What’s
distinctive of the Popperian “liberal republic of science” is not the absence of a
social process for certifying valid knowledge but the multiplication of potential
certifiers in the form of the pluralistic communities entered into by freely reasoning
citizens.
Again, these communities typically will converge on what’s known to science. But
as the volume of knowledge and number of cultural certifiers both continue to
grow, the occasions for disagreement among cultural groups necessarily increase.
An expanding number of conflicts is thus guaranteed by sheer fortuity alone,
although such conflicts can no doubt also be instigated for strategic gain.
Thus, the science communication paradox — the simultaneous increase in
knowledge and conflict over what’s known — is built into the constitution of the
liberal republic of science. The science communication paradox is Popper’s
revenge.
The disentanglement principle
But as Popper also taught, there are no immutable forces at work in human history.
The same tools used to fashion a scientific account of the source of the science
communication paradox can be used to dispel it. The fundamental source of the
paradox, empirical study suggests, is the entanglement of opposing factual beliefs
with people’s identities as members of one or another cultural group. It’s logical to
surmise, then, that the solution is to disentangle knowledge and identity when
communicating scientific information [Kahan, 2015].
Lab experiments have been used to model this dynamic. In one, my research group
tested U.S. and U.K. subjects’ assessments of valid evidence on global
warming [Kahan et al., 2015]. As expected, those we had first exposed to
information on carbon-emission reductions were even more polarized on the
validity of the global-warming evidence than were members of a control group.
The images and language used to advocate carbon-emission limits triggered
cultural cognition by accentuating the symbolic association between belief in
climate change and conflict between groups defined by their opposing moral
attitudes toward commerce, industry, and free markets.
Polarization dissipated, however, among subjects who had first been exposed to
information on plans to study geoengineering. This technology resonates with the
values of cultural groups whose members prize the use of human ingenuity to
overcome environmental limits. By affirming rather than denigrating their cultural
identities, the information on geoengineering dissolved the conflict those
individuals experienced between crediting human-caused global warming and
forming stances that express their defining commitments.
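In design terms this is a between-subjects experiment in which “polarization” is the effect of political outlook on perceived evidence validity, estimated within each condition. A hedged sketch with simulated data follows; the condition labels and effect sizes are assumptions chosen to mimic the reported qualitative pattern, not the study’s estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000

cond = rng.choice(["control", "emissions", "geoengineering"], size=n)
outlook = rng.normal(size=n)

# Assumed outlook gaps per condition: larger than control after the
# emissions prime, smaller after the geoengineering prime.
gap = np.select([cond == "emissions", cond == "geoengineering"],
                [0.8, 0.2], default=0.5)
validity = -gap * outlook + rng.normal(size=n)

df = pd.DataFrame({"validity": validity, "outlook": outlook, "cond": cond})
model = smf.ols("validity ~ outlook * C(cond, Treatment('control'))", data=df).fit()
# The outlook-by-condition interaction terms estimate how much each prime
# amplifies or dampens polarization relative to the control group.
print(model.params.filter(like="outlook:"))
```

A negative interaction for the geoengineering condition, relative to control, corresponds to the dissipated polarization described above; a positive one for the emissions condition corresponds to the accentuated polarization.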
This lab-study insight comports with studies of “disentanglement” strategies in
real-world settings. For example, research shows that standardized test questions
that assess “belief” in evolution don’t genuinely measure knowledge of either
evolutionary science or science generally. Instead, they measure commitment to a
form of cultural identity that features religiosity (Figure 7) [Kahan, 2015; Roos,
2012; Bishop and Anderson, 1990].
Figure 7. Disentangling identity from knowledge. Colored bars reflect 0.95 confidence intervals.
Standardized test items on evolution generate biased results when administered to highly
religious persons, but the effect can be erased by “disentangling” identity and knowledge in
the item wording [Kahan, 2015].
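A rough way to see what “measuring identity rather than knowledge” means is to compare per-group response rates on the two item wordings. The numbers below are assumptions chosen to mimic the qualitative pattern in Figure 7, not the study’s estimates, and the item labels are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 2000
religious = rng.integers(0, 2, size=n)  # 1 = high religiosity

# Assumed response rates, for illustration only: the standard item splits
# respondents on identity; the reworded ("disentangled") item does not.
p_standard = np.where(religious == 1, 0.35, 0.80)
p_disentangled = np.where(religious == 1, 0.75, 0.82)

df = pd.DataFrame({
    "religious": religious,
    "standard_item": rng.random(n) < p_standard,
    "disentangled_item": rng.random(n) < p_disentangled,
})
# Per-group "correct" rates: a large religiosity gap on the standard item,
# a small one on the disentangled item, mirroring the Figure 7 pattern.
print(df.groupby("religious")[["standard_item", "disentangled_item"]].mean())
```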
Consistent with this finding, education researchers have devised instructional
protocols that avoid conflating students’ knowledge of evolutionary science with
their professions of “belief in” it. By disentangling acquisition of knowledge from
the obligation to make an affirmation that denigrates religious students’ identities,
these instructional methods enable students who say they “don’t disbelieve in”
evolution to learn the elements of the modern synthesis — natural selection,
random mutation, and genetic variance — just as readily as nonreligious students
who say they “do believe in” it [Lawson and Worsnop, 1992; Lawson, 1999].
Real-world communicators have also successfully used disentanglement to
promote public engagement with climate science. Members of the Southeast
Florida Regional Climate Compact — a coalition of local governments in Broward,
Miami-Dade, Monroe, and Palm Beach Counties — have adopted a “Regional
Climate Action Plan” containing over 100 distinct mitigation and adaptation
measures.
As it happens, the residents of Southeast Florida are as polarized on whether
human activity is causing global warming as are those in the rest of the U.S. But the
deliberative process that generated the Regional Climate Action Plan didn’t put that
question; instead, officials, guided by evidence-based methods, focused relentlessly
on how communities could use scientific knowledge to address the
region’s practical, everyday needs.
The highly participatory process that led to adoption of the Regional Climate
Action Plan enveloped residents with vivid, genuine examples of diverse local
stakeholders — including businesses and local homeowner associations — evincing
confidence in climate science through their words and actions. That process
disentangled “what should we do with what we know”, a question that unifies
Southeast Floridians, from “whose side are you on”, the divisive question that
shapes the national climate science debate [Kahan, 2015].
These examples teach a common lesson — the science communication
disentanglement principle. To negotiate the dynamics that form Popper’s Revenge,
science communication professionals must protect citizens from having to choose
between knowing what’s known by science and being who they are as members of
diverse cultural communities.
A “new political science...”
But like other forms of scientific insight geared to protecting human societies from
danger, the disentanglement principle cannot be expected to implement itself.
Government regulatory procedures will need to be revised, programs of education
reorganized, and professional norms updated to refine and exploit the knowledge
generated by the science of science communication.
Identifying the precise nature of these reforms and the means for implementing
them, moreover, will likewise require empirical study and not mere imaginative
story-telling. These were the central themes of a pair of historic colloquia on the
science of science communication sponsored by the National Academy of
Sciences in 2012 and 2013.
As aristocratic forms of government yielded to modern democratic ones in the
early 19th century, Tocqueville famously called for a “new political science for a
world itself quite new” [Tocqueville, Reeve and Spencer, 1838]. Today, mature
liberal democracies require a “new political science”, too, one suited to the
distinctive challenge of enabling citizens to reliably recognize the enormous stock
of knowledge that their freedom and diversity make possible.
The science of science communication is that new political science.
References

Bishop, B. A. and Anderson, C. W. (1990). ‘Student conceptions of natural selection
and its role in evolution’. Journal of Research in Science Teaching 27 (5),
pp. 415–427.
Fischhoff, B. and Scheufele, D. A. (2013). ‘The science of science communication’.
Proceedings of the National Academy of Sciences 110 (Supplement 3),
pp. 14031–14032.
Frederick, S. (2005). ‘Cognitive Reflection and Decision Making’. Journal of Economic
Perspectives 19 (4), pp. 25–42.
Hamilton, L. C., Cutler, M. J. and Schaefer, A. (2012). ‘Public knowledge and
concern about polar-region warming’. Polar Geography 35 (2), pp. 155–168.
Hastorf, A. H. and Cantril, H. (1954). ‘They saw a game: A case study’. The Journal
of Abnormal and Social Psychology 49 (1), pp. 129–134.
Kahan, D. M., Jenkins-Smith, H. and Braman, D. (2013). ‘Cultural Cognition of
Scientific Consensus’. J. Risk. Res. 14, pp. 147–174.
Kahan, D. M. (2011). ‘Fixing the Communications Failure’. Nature 463, pp. 296–297.
Kahan, D. M. (2015). ‘Climate-Science Communication and the Measurement Problem’.
Advances in Political Psychology 36, pp. 1–43.
Kahan, D. M., Braman, D., Cohen, G. L., Slovic, P. and Gastil, J. (2010). ‘Who Fears
the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the
Mechanisms of Cultural Cognition’. Law Human Behav. 34 (6), pp. 501–516. DOI:
10.1007/s10979-009-9201-0.
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D. and
Mandel, G. (2012). ‘The polarizing impact of science literacy and numeracy on
perceived climate change risks’. Nature Climate Change 2, pp. 732–735. DOI:
10.1038/nclimate1547.
Kahan, D. M., Jenkins-Smith, H., Tarantola, T., Silva, C. L. and Braman, D. (2015).
‘Geoengineering and Climate Change Polarization: Testing a Two-Channel
Model of Science Communication’. Annals of the American Academy of Political
and Social Science 658, pp. 192–222. DOI:10.1177/0002716214559002.
Kahneman, D. (2003). ‘Maps of Bounded Rationality: Psychology for Behavioral
Economics’. Am. Econ. Rev. 93 (5), pp. 1449–1475.
Kunda, Z. (1990). ‘The Case for Motivated Reasoning’. Psychological Bulletin 108,
pp. 480–498.
Lawson, A. E. (1999). ‘A scientific approach to teaching about evolution & special
creation’. The American Biology Teacher 61 (4), pp. 266–274. DOI:
10.2307/4450669.
Lawson, A. E. and Worsnop, W. A. (1992). ‘Learning about evolution and rejecting a
belief in special creation: Effects of reflective reasoning skill, prior knowledge,
prior belief and religious commitment’. Journal of Research in Science Teaching 29
(2), pp. 143–166.
Marx, S. M., Weber, E. U., Orlove, B. S., Leiserowitz, A., Krantz, D. H., Roncoli, C.
and Phillips, J. (2007). ‘Communication and mental processes: Experiential and
analytic processing of uncertain climate information’. Global Environ. Chang. 17
(1), pp. 47–58. DOI:10.1016/j.gloenvcha.2006.10.004.
National Science Foundation (2014). Science and Engineering Indicators. Arlington,
VA, U.S.A.: National Science Foundation.
Popper, K. R. (1959). The logic of scientific discovery. New York, U.S.A.: Basic
Books.
Popper, K. R. (1966). The open society and its enemies. 5th ed. London, U.K.: Routledge
and K. Paul.
Roos, J. M. (2012). ‘Measuring science or religion? A measurement analysis of the
National Science Foundation sponsored science literacy scale 2006–2010’. Public
Understanding of Science 23 (7), pp. 797–813. DOI:10.1177/0963662512464318.
Sood, A. M. (2013). ‘Motivated Cognition in Legal Judgments-An Analytic Review’.
Annual Review of Law and Social Science 9, pp. 307–325.
Sunstein, C. R. (2005). Laws of Fear: Beyond the Precautionary Principle.
Cambridge, U.K.; New York, U.S.A.: Cambridge University Press.
Sunstein, C. R. (2006). ‘Misfearing: A reply’. Harvard Law Review 119 (4), pp. 1110–1125.
Sunstein, C. R. (2007). ‘On the Divergent American Reactions to Terrorism and Climate
Change’. Columbia Law Rev. 107 (2), pp. 503–557.
Tocqueville, A. de, Reeve, H. and Spencer, J. C. (1838). Democracy in America. New
York, U.S.A.: G. Dearborn & Co.
Watson, W. E., Kumar, K. and Michaelsen, L. K. (1993). ‘Cultural Diversity’s Impact
on Interaction Process and Performance: Comparing Homogeneous and
Diverse Task Groups’. The Academy of Management Journal 36 (3), pp. 590–602.
Watts, D. J. (2011). Everything Is Obvious: *Once You Know the Answer. How
Common Sense Fails. London, U.K.: Atlantic Books.
Weber, E. (2006). ‘Experience-Based and Description-Based Perceptions of
Long-Term Risk: Why Global Warming does not Scare us (Yet)’. Climatic Change
77 (1), pp. 103–120.
Weber, E. U. and Stern, P. C. (2011). ‘Public Understanding of Climate Change in the
United States’. Am. Psychologist 66, pp. 315–328.
Author

Dan Kahan is the Elizabeth K. Dollard Professor of Law and Professor of
Psychology at Yale Law School. He is a member of the Cultural Cognition Project
(www.culturalcognition.net), an interdisciplinary team of scholars who use
empirical methods to examine the impact of group values on perceptions of risk
and science communication. E-mail: dan.kahan@yale.edu.
How to cite

Dan M. Kahan (2015). ‘What is the “science of science communication”?’. JCOM 14 (03), Y04.
This article is licensed under the terms of the Creative Commons Attribution - NonCommercial -
NoDerivativeWorks 4.0 License.
ISSN 1824 – 2049. Published by SISSA Medialab. http://jcom.sissa.it/.