American Behavioral Scientist
1–16
© 2020 SAGE Publications
DOI: 10.1177/0002764220950606
Pseudo-Information, Media,
Publics, and the Failing
Marketplace of Ideas: Theory
Jeong-Nam Kim1,2 and Homero Gil de Zúñiga3,4,5

Abstract
The explosive usage in recent years of the terms “fake news” and “posttruth”
reflects worldwide frustration and concern about rampant social problems created
by pseudo-information. Our digital networked society and newly emerging media
platforms foster public misunderstanding of social affairs, which affects almost all
aspects of individual life. The cost of lay citizens’ misunderstandings or crippled
lay informatics can be high. Pseudo-information is responsible for deficient social
systems and institutional malfunction. We thus ask questions and collect knowledge
about the life of pseudo-information and the cognitive and communicative modus
operandi of lay publics, as well as how to solve the problem of pseudo-information
through understanding the changing media environment in this “truth-be-damned”
era of information crisis.
Keywords
disinformation, fake news, information crisis, lay informatics, misinformation, pseudo-information, publics, situational theory of problem solving, social media
In the story of the Tower of Babel, all people spoke the same language. Their civi-
lization became capable of building a tower to reach heaven. Seeing the tower but
also the human arrogance that built it, God cursed the people by splitting their
1The University of Oklahoma, Norman, OK, USA
2Debiasing and Lay Informatics (DaLI) Lab, Norman, OK, USA
3University of Salamanca, Salamanca, Spain
4The Pennsylvania State University, State College, PA, USA
5Diego Portales University, Santiago, Chile
Corresponding Author:
Jeong-Nam Kim, Gaylord College of Journalism and Mass Communication, University of Oklahoma,
Norman, OK 73019, USA.
common language into many. He confounded communication, and we were no
longer able to understand each other. We fell into incommunicado and were scat-
tered physically as well.
In our digital networked society, information and communications technologies
(ICTs) and network technologies connect almost all the dots for communicators in
much the same way as the Tower of Babel. This seemingly equalizes all people and
levels the playing field for lay publics, as opposed to experts or elites. Like the Tower,
these new communication and network technologies elevate our capacity to control
and exchange knowledge. But then we came upon a curse again: another sort of incommunicado, an indiscernibility of fact and fiction in the sea of information. Information
is often obscured and leads us to erroneous conceptions of the world and events
(Bimber & Gil de Zúñiga, 2020; Kim, 2018). We are now cognitively lost in an infor-
mational paradise and left to virtually scatter.
This special issue is about problems of pseudo-information, media, and publics in a
digitalized network society.1 Oxford Dictionaries selected “posttruth” as its word of the
year in 2016 (Flood, 2016), and since then, the explosive usage of the terms “fake
news” and “posttruth” has reflected worldwide frustration and concerns about social
problems caused by misinformation and disinformation. In today’s digital networked
society and media environment, it is all too easy to confuse opinion, fiction, and
incomplete or inaccurate ideas with established historical or scientific canonical facts.
The costs of pseudo-information and public confusion are high. Pseudo-information in
particular is responsible for deficient social systems and institutional malfunction due
to distrust. We thus aim to ask questions and collect knowledge about pseudo-informa-
tion and the crippled lay informatics of its users in order to understand the information
crisis that we live in.
Information and Pseudo-Information
We refer to information as bits of evaluated knowledge and data available and shared
among communicative actors in a problematic situation (Kim & Grunig, 2011).
“Misinformation” (2020) is defined by Merriam-Webster as “incorrect or misleading
information,” whereas “disinformation” (2020) is defined as “false information delib-
erately and often covertly spread (as by the planting of rumors) in order to influence
public opinion or obscure the truth.” The term misinformation has been used in a more
generic and inclusive sense to describe all false and inaccurate information at its ori-
gin. However, in the recent information crisis, the usage of misinformation has become
taxonomically narrower, referring specifically to incorrect information that is accidentally or unintentionally false.
Misinformation and disinformation are similar in that both refer to information
lacking in veracity. But they are different in terms of the purposefulness of their
development and sharing. Misinformation refers to an accidental lack of veracity;
disinformation is deliberately false or inaccurate to serve its creator’s interest. The
evolving usage of the term misinformation reflects the changes to communicative
environments and the frequent problems created by false information over emerging
media and communication networks. In this special issue, we suggest an umbrella
term, pseudo-information, to include all types of false or inaccurate information, as well
as the trafficking of it in various ways and fields. We acknowledge that pseudo-infor-
mation is still a kind of “information,” despite the likely harm or problematic conse-
quences to its bearers (Figure 1).
Pseudo-information is not a counterconcept to information. Rather, it is still under the umbrella of “information,” but it designates information that causes harmful consequences or social externalities for information subscribers (Table 1). Information in
itself can vary from accurate to inaccurate, beneficial to depriving, factual to fictional,
and realistic to illusory. The interaction between information and its subscribers gener-
ates unique interpretations and utility in a given individual’s life situation. Thus, infor-
mation and its related phenomena should be understood in terms of the individuality
and subjectivity of its creators, traffickers, and users—the information actors—and
their contextual environments and life situations. Therefore, the meaning, value, util-
ity, and benefit or harm of given information are subject to intersubjectivity and con-
structed contextually with that information’s presence in information actors’ personal
and social time-space. In fact, information is the joint product of interactions among
people, environments, and situations.
This notion is revealed in many examples. For instance, a lie intended to comfort
an anxious person is false. As it is delivered to the person, however, the bits of evalu-
ated knowledge or data could generate a utility, such as calming down. For an ill
patient and their family, wishful conversation about the outcome of treatment is illu-
sory, but could generate a substantial coping effect for the patient. Wartime disinfor-
mation tactics by Russia generated gains for its crafters and losses among Nazi
commanders. More dangerously, rumormongering of conspiracy theories related to
the coronavirus or linking 5G network towers with the spread of COVID-19 sparked arson attacks (Cerulus, 2020) and bolstered stigma and violence against Asian neighbors
(Do, 2020). Such troubling states result from the interactions of information actors
(creator, trafficker, or subscriber) and their situated conditions.
Figure 1. Information classification.
A Failing Digital Marketplace of Ideas and Lay Informatics
In organizing our collection, we refer to the information environment using the ana-
logic term “marketplace.” Our conception inherits the analogy of the “marketplace of
ideas” in trading information. Originally rooted in John Stuart Mill’s 1859 book On
Liberty, this famous analogy to the economic marketplace has provided the foundation
of legal philosophy regarding freedom of speech. It is Mill’s analogy around which
First Amendment jurisprudence revolves (van Mill, 2018). It and its underlying assump-
tions have helped society secure and advocate for the human right of uncensored and
almost unlimited freedom of expression for individuals.
In a digital society, our conception of the marketplace of ideas has evolved and
expanded into virtual space. Accordingly, the social nature of expressing and trading
ideas has quantitatively and qualitatively changed (Gil de Zúñiga, 2015; Grunig,
2009). There are too many traders of information, and they are hyper-lively when it
comes to social problems and world affairs. The explosive rise and fall of local mar-
ketplaces have increased participants’ choices and control on market behaviors, and
the participating traders crisscross many marketplaces concurrently using multiple
identities. With anonymity and the burying effect of identity due to innumerable par-
ticipants, people are less afraid to express their ideas.
This so-called fourth industrial revolution fused the physical, biological, and
digital worlds. All things are connected, and those connections enable participants to
increase their communicative behaviors in digital information markets (Kim et al.,
2010; Kim et al., 2018). Much of this revolution is indebted to the explosion of the digital marketplace of ideas, which enabled the era of big data. Lay people and the contents created and exchanged by their communicative actions generated an independent media economy and the blooming byproduct industries of data analytics (Gagliardone, 2018).

Table 1. Pseudo-Information: Types and Consequences for Information Subscribers.

The rows distinguish the social consequence (“externalities”) of the information, for example, imminent risk to the safety, well-being, or interest of information subscribers (high vs. low); the columns distinguish whether the information serves its crafter’s interest.

High consequence, not serving crafter’s interest: harmful without serving the crafter’s interest; could create emotional states such as joy, terror, anger, rage, and loathing for information subscribers.

High consequence, serving crafter’s interest: harmful and serving the crafter’s interest; could create emotional states such as joy, terror, anger, rage, and loathing for information subscribers.

Low consequence, not serving crafter’s interest: unharmful without serving the crafter’s interest; could create emotional states such as pleasure, discomfort, fear, disgust, annoyance, surprise, and contempt for information subscribers.

Low consequence, serving crafter’s interest: unharmful but serving the crafter’s interest; could create emotional states such as pleasure, discomfort, fear, disgust, annoyance, surprise, and contempt for information subscribers.
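The fourfold typology in Table 1 can be read as a simple two-way classification. As a minimal sketch (our own illustration, not code from the article; the labels paraphrase the table’s cells), pseudo-information can be sorted by whether its social consequence is high and whether it serves its crafter’s interest:

```python
# Illustrative sketch of Table 1's 2x2 typology (labels paraphrase the table;
# this encoding is ours, not the authors').

def classify_pseudo_information(high_social_consequence: bool,
                                serves_crafter_interest: bool) -> str:
    """Map the two Table 1 axes to the corresponding cell label."""
    if high_social_consequence:
        if serves_crafter_interest:
            return "harmful and serving crafter's interest"
        return "harmful without serving crafter's interest"
    if serves_crafter_interest:
        return "unharmful but serving crafter's interest"
    return "unharmful without serving crafter's interest"

# A deliberate COVID-19/5G conspiracy campaign: high externalities, self-serving.
print(classify_pseudo_information(True, True))
# A comforting white lie: low externalities, not self-serving.
print(classify_pseudo_information(False, False))
```

In this reading, deliberate disinformation occupies the “serving crafter’s interest” column, while accidental misinformation falls in the other.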
In the predigital marketplace of ideas, the traditional traders of information, such as
social institutions (e.g., politicians or mass media) and intellectual authorities (e.g.,
scientists), commanded louder voices and exercised greater influence on the general
populace (Kim, 2014). Hence, there was a need for protective intervention for lay
individuals to preserve their freedom to speak out. The marketplace of ideas evolved
on the rationale of harnessing control from the elite and eliminating pressure on lay
citizens and publics, such as censorship from formal and powerful social institutions.
Yet in the digital society, the sheer number of expressive lay individuals, as
opposed to social elites or experts, and the amount of their ideas has created an
overtrading problem of pseudo-information. Frequently, Mill’s assumption of the
economic market—that superior products will survive over inferior ones by compe-
tition—fails (Stanley, 2018). Truth prevails less often than expected. Various alter-
native ideas challenge and compete with the traditional authority of institutional
participants such as governments, universities, or scientists. A significant number
of lay participants in the digital information markets adhere to troubling claims and
fragmented facts. In some extreme cases, traditional actors with endowed source
credibility are ousted or overwhelmed.
The failure of the digital marketplace of ideas could be defined as the ability of
invalid or uncertain ideas—pseudo-information—to spread. Still, the failure is partial
and local. In marketplaces sustained with scientific epistemology, such as research
fora or governmental institutions, the exchange of information is relatively secure and
advanced by the evaluative process of competing merits. There, open debates take
place and scientific epistemology guides knowledge creation through systematic pro-
cedures for scrutinizing evidence and claims for collective interpretation. What really
distinguishes scientific epistemics from lay epistemics or lay informatics are metacogni-
tion and metacommunication (Kim & Grunig, 2019).
Scientists, and those delegated to make policy and decisions, are in the habit of thinking
about their thinking and communicating their methods of information use. In contrast,
the greater number of lay information traders rely on lay informatics—subjective or
arbitrary processes of ideation and evaluation (Kim, 2014; Kim & Krishna, 2014)—
and enjoy self-exemption from metacognition and metacommunication in the use of
information regarding their claims (Kim & Grunig, 2019). Thus, pseudo-information
prevails, and incomplete or inadequate interpretations become predominant.
First and Secondary Information Markets in Digital
Society and the Problems
Given the evolution of media and ICT and the devolving nature of lay publics’ use of
information, we distinguish two layers in the digital marketplace of ideas: the first and
secondary information markets. The first information market refers to the physical and virtual time-spaces wherein traders exchange ideas crafted by formal social institutions such as governments, media, and science organizations, or by organized social interests (e.g., corporations or interest groups). The market selection of ideas in the first infor-
mation market is guided by scientific epistemics, despite occasional imperfections,
and subject to the systematic procedures of counterevidence-based, refutation-focused,
entirety-focused, and collective interpretation of merits, as well as tentativeness of
superiority. It is imperfect, yet vying for Mill’s ideal state of market competition.
In contrast, the secondary information market refers to those physical and virtual
places wherein individuals come together to trade ideas (contents) crafted by nonex-
pert lay citizens, either individually or collectively. Market selection of ideas in the
secondary information market is frequently informal: relatively evidence-based rather than counterevidence-based, confirmatory-focused, individualistic or factional in its interpretation of merits, and conclusive about superiority. Here, in the secondary marketplace of ideas, the pro-
cess of competition is vibrant, with a lack of censorship, but blurs the merits and flaws
of each claimed idea. This different interpretation of the merits of claims and the sub-
sequent selection of ideas could be multifarious through lay individuality, and often
reflects fractions of the general populace and their subjective preferences.
The failure of the first market is in the rapid takeover of traditional information
institutions by ICT and networking (Figure 2), and in the market’s subsumption (cf.
capitalist subsumption of labor and economic process; Marx, 1867) into the
emerging secondary marketplace. There, individual citizens can participate in trades
across networked digital marketplaces. Traditional information producers and trad-
ers (e.g., legacy media) are drowned out by the billions of information traffickers
and millions of local clusters crowded by nonexpert lay people. Social systems of
delegation to experts are sometimes dethroned, and the source credibility of tradi-
tional information traders can be overthrown by the networked myriad of individual
information traders.
In contrast, the secondary information market fails due to crippled lay informatics.
Newly endowed with the power of communication and connection, people could not keep pace cognitively or communicatively. The amount of available knowledge or data
only becomes information through evaluative tasks by the communicator (Kim &
Krishna, 2014). The quality and quantity of evaluative products—that is, information—
is largely defined by one’s cognitive and communicative actions (e.g., justificatory
information forefending; Kim et al., 2018). The modus operandi of the lay individual
is obfuscated easily, unless extraordinary care is taken to conduct thoughtful judging
(cognitive retrogression in problem solving; Kim & Grunig, 2019).
Defining Questions for the Information Crisis
Two Types of Knowledge on Pseudo-Information, Publics, and Media
To understand and work for an eventual resolution of those market failures, we aim
to begin scholarly discussions and assemble theoretical bases to describe the state of
and sources of the problems surrounding this information crisis. In particular, in this
introductory theoretical piece of the special issue, we seek to elaborate on two types
of knowledge: “descriptive knowledge,” to acquire understanding of the “process” of
the phenomenon, and “prescriptive knowledge,” to acquire understanding of and
guidance for what “procedures” we would take for problem resolution (Carter, 1972;
Grunig, 2003). Furthermore, we examine the two responsible actors—both media and
publics who traffic and enact pseudo-information in their daily life and social actions
(Stearns, 2016).
Specifically, we seek systematic inquiries and answers for the descriptive (“what
is”) questions, to capture and diagnose the troubling phenomena of pseudo-informa-
tion. In doing so, we pursue a better understanding of causes, processes, and conse-
quences of misinformation and disinformation, as well as the roles, mechanisms, and
motives of agents and actors who craft and traffic it. Following this rationale, we also
seek prescriptive knowledge (“what should be done”) relating to solutions to improve
the social problems arising from pseudo-information. Better procedures can only be
constructed when we first know “what is” as well as how and why it happens. Laying
the ground for a better understanding of these two sets of knowledge is one of the aims
sought from researchers in this special issue.
This special issue will not provide answers to all lingering theoretical issues and questions with regard to pseudo-information. Yet it will prove useful in identifying key questions to open fora for theoretical and practical solutions. To that
end, here we propose some central research questions:
Descriptive Knowledge of Pseudo-Information, Media, and Publics
What are the nature and impacts of pseudo-information in the digital networked
social environment?
What actors and factors contribute to the troubling consequences of pseudo-information?

Figure 2. Marketplace of ideas and changes: Market subsumption to market failure. (Panel flow: Information Market → Networked Digitalization → Market Failure.)
What are the underlying processes or mechanisms associated with the accepting
and sharing of misinformation and disinformation with others? How and why
do lay publics become receptive to pseudo-information, both in terms of sub-
scribing to it and spreading it among their social networks?
What are the functions or values of pseudo-information for its crafters, traf-
fickers, and adopters? What are the incentives for carrying and spreading
pseudo-information for different actors (e.g., campaigners, rivals, opinion
leaders, neighbors)?
What public and media factors increase and decrease gullibility toward pseudo-
information among lay citizens?
What hardware aspects (e.g., ICT systems, network policy, mass media, online platforms) and software aspects, such as social conditions (e.g., network traits, sociodemographic attributes), political conditions (e.g., political climates, ideological dominance, trust), and personal psychological conditions (e.g., lay epistemics, sociocognitive traits), are conducive to the crafting, trafficking, and subscribing to pseudo-information?
Prescriptive Knowledge of Pseudo-Information, Media, and Publics
What are the preventive and promotive factors of improved use of information
(e.g., news literacy, digital information literacy)?
How can we equip individual citizens to depreciate pseudo-information such as
lies and fake news? What can we do to increase immunity to pseudo-informa-
tion among lay audiences (e.g., inoculation strategy)? What countermeasures
can we employ to enhance information literacy among individual opinion lead-
ers and influential social media users (e.g., power bloggers)?
What are the roles and boundaries of government intervention to resolve the
problems of information crises?
What will be the role of marketers of information such as Google, Facebook,
Twitter, or traditional media, and how should “information patrolling” be
approached in the marketplace of ideas?
What are the ethical considerations and limits of “information policing” in the
marketplace of ideas?
What strategies should traditional and social media use to restore credibility in
the fight against pseudo-information?
Theoretical Essays in the Current Volume. In the pursuit of cutting-edge theoretical
accounts that would move pseudo-information research forward, we purposively
edited a handful of novel, thought-provoking theoretical pieces. These essays encapsulate many of the pressing issues revolving around misinformation and disinformation today, with a cross-disciplinary angle.
Molina et al. (2019) kick off the special issue, underscoring how difficult it is for
computer-aided and automated machine learning mechanisms to detect pseudo-infor-
mation online. Drawing on a concept explication paradigm (Chaffee, 1991), their goal
is to provide scholars and policy makers alike with a theoretical foundation of what
may be considered fake news. In doing so, the article introduces an eight-dimension typology to single out fake news content (e.g., false news, polarized content, satire) by contrasting it with factual, legitimate news at four levels: message, source, structure, and network.
For instance, for real news to be detected at the message and linguistic level, readers should pay attention to proofed, fact-checked, and impartial reporting that attributes sources by name. The content may also include evidence-based or scientifically grounded material, reports of research-based information, and statistical data. Source and message intention also matter. Are the sources of information verified? Do they include heterogeneous, balanced, and fair information sources and quotations? Factual professional news should also carry a certain degree of “source pedigree” from the organization behind the information, and transparency about whom, specifically, the reporting or “news” comes from may also be helpful.
Likewise, the article argues that structural and network characteristics could potentially signal what the authors refer to as features of real news. Cues such as whether the information derives from a reputable URL, includes metadata with authenticity indicators, has clear contact or “about” sections, and shares active, credible e-mail addresses where content creators may be reached are all valuable and practical means of more accurately spotting factual news. All in all, the essay provides a very useful road map for academics, policy makers, and citizens to better detect misinformation in today’s digital world.
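The four-level contrast (message, source, structure, network) lends itself to a checklist reading. A minimal sketch of that idea follows (our own illustration, not the authors’ detection system; every feature name below is an invented stand-in for the cues the essay describes):

```python
# Hypothetical checklist scorer inspired by the four levels discussed above.
# Feature names are invented stand-ins for the essay's cues, not a real API.

REAL_NEWS_CUES = {
    "message":   ["fact_checked", "impartial_tone", "cites_statistics"],
    "source":    ["sources_named", "sources_verified", "balanced_quotations",
                  "source_pedigree"],
    "structure": ["reputable_url", "authentic_metadata", "contact_section"],
    "network":   ["credible_contact_email"],
}

def real_news_score(item: dict) -> float:
    """Return the fraction of real-news cues present in `item` (0.0 to 1.0)."""
    cues = [cue for level in REAL_NEWS_CUES.values() for cue in level]
    return sum(bool(item.get(cue)) for cue in cues) / len(cues)
```

Real detection systems are, of course, far more involved; that gap is precisely the difficulty with automated detection that the essay opens with.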
Kim and Grunig’s (2019) essay takes a much broader scope when dealing with
information behavior and human problem solving. Based on the situational theory of
problem solving (Kim & Grunig, 2011; Kim & Krishna, 2014), the authors explicate how information is distinct from data and knowledge, and how cognitive problem solving occurs in problematic life situations. Kim and Grunig conceptualize “cognitive retrogression in problem solving” to account for lay individuals’ crippled informatics—the less-than-ideal communicative and cognitive modus operandi that lay individuals use in everyday life.
Initially, the essay clarifies why and how information is indeed beneficial to society
at large, particularly when accurate and factual information serves as a sine qua non
condition for citizens and lay publics to understand complex issues and solve their
daily problems. Under this condition, the more information, the better. Paradoxically,
in today’s world where ordinary citizens encounter practically unlimited amounts of
information at their disposal, the axiom that more information is better may be at odds
with the most efficient way to interpret complex issues and solve problems. To accom-
modate and examine this disparity, the authors introduce two cognitive processes that
shed light on individuals’ means of dealing with vast amounts of information to
unravel complex issues and solve daily informational quandaries. On the one hand,
cognitive retrogression takes place when individuals attempt to solve problems swiftly,
reach a rapid conclusion, and engage after the fact in a cognitive effort only once a
conclusion is already grasped. On the other hand, cognitive progression represents a
reasoning strategy that encompasses greater mental involvement and processing
before a conclusion is reached. The former strategy helps individuals expedite problem-solving situations, reaching a decision on limited or merely sufficient evidence. Conversely, the mental processes of cognitive progression contribute to solving daily problems or quandaries in a more optimal informational context.
Kim and Grunig then explain cognitive arrest in problem solving: an epistemic endeavor made retrogressively by a problem solver, which only heaps up self-warranting evidence and leads to growing epistemic inertia. This theorization accounts for how and why lay publics are entrapped and paralyzed by justificatory information and become susceptible to pseudo-information. Finally, they apply this theoretical account to conspiratorial thinking as an exemplary case of public close-mindedness.
Further building on the characteristics of misinformation and fake news, Chiu and
Oh (2020) propose a creative theoretical distinction between what may constitute fake
news and what could be identified as a personal lie. This distinction is not trivial. As
the authors argue, social media moguls can easily spot fake news based on this char-
acteristic distinction, and thus seamlessly eliminate fake information from their plat-
forms. Although both personal lies and fake news in social media contexts seek to
deceive the audience or the public, they do so in different ways. The key element lies
in the ultimate goal or purpose. Fake news craves attention and seeks overwhelmingly to capture the audience and spark action. On the contrary, personal lies pursue unobtrusive means of misinformation dissemination to maintain social ties and primarily elicit audience inaction.
More specifically, drawing on many distinct theories, the article reveals six dimen-
sions on which fake news and personal lies differ: speaker–audience relationship,
goal, emotion, information, number of participants, and citation of sources. Usually,
when it comes to personal lies, the personal relationship between the speaker and the
audience is a close one, whereas in the case of fake news, this personal connection is
more distant. The main purpose of a personal lie is a defensive one, to achieve audi-
ence inaction. Yet when fake news is generated, the main goal is the opposite. The
more impact and greater audience engagement with the content, the more successful
the fake news will be. Based on appraisal theory, the level of emotional involvement in the context of personal lies is low, as the speaker seeks, to some extent, to divert the audience’s emotions. For fake news, emotions play a much more relevant role, and the information usually includes massive doses of emotional manipulation and arousal.
In terms of the information dimension, the authors explain that personal lies tend to
be quite mundane and include vague or little information. Fake news, on the other
hand, will contain vivid and detailed information to persuade the audience. Finally,
personal lies will typically make use of fewer sources and a lower number of partici-
pants, while fake news will attempt to capture the largest number of participants pos-
sible and will embrace the use of more sources.
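The six dimensions above can be summarized side by side. The sketch below is our own encoding of that contrast; the cell values paraphrase the essay and are not verbatim categories from the article:

```python
# Our own encoding of the six-dimension contrast between personal lies and
# fake news; values paraphrase the essay, not verbatim categories.

DIMENSIONS = ("speaker_audience_relationship", "goal", "emotion",
              "information", "participants", "citation_of_sources")

PERSONAL_LIE = {
    "speaker_audience_relationship": "close",
    "goal": "defensive; elicit audience inaction",
    "emotion": "low involvement",
    "information": "mundane, vague, sparse",
    "participants": "few",
    "citation_of_sources": "few sources",
}

FAKE_NEWS = {
    "speaker_audience_relationship": "distant",
    "goal": "capture attention; spark audience action",
    "emotion": "high manipulation and arousal",
    "information": "vivid and detailed",
    "participants": "as many as possible",
    "citation_of_sources": "more sources",
}

def contrast(dimension: str) -> tuple:
    """Return the (personal lie, fake news) values for one dimension."""
    return PERSONAL_LIE[dimension], FAKE_NEWS[dimension]
```

Laid out this way, the two profiles are mirror images along every dimension, which is what makes the distinction potentially usable for platform moderation.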
Lee and Shin (2019) delve into some of the most germane factors to understand
why misinformation is disseminated online. Borrowing from strands of political per-
suasion literature, as well as from works on the credibility of online information and
digital deception, the article focuses on pragmatic factors that cause ordinary citizens
to take the information they are exposed to through social and digital media as truthful.
They highlight traits and attributes of the sources of information, the message, the channel, and the receiver. For instance, the number of sources used, the perceived expertise of the sources, and how similar the source appears to the information receiver are all important persuasive instruments for fake
news to thrive. Similarly, when a message is congruent with the receiver’s views, is
presented in a repetitive and easy-to-understand manner, and is based on information
that may be perceived as factual, the audience will be more likely to take the message
as believable. Finally, some features about the channel through which the misinforma-
tion is spread are also observed.
Building on prior theoretical accounts included in this volume, Weeks and Gil de
Zúñiga (2019) offer six major areas for scholars to hone their efforts when dealing
with pseudo-information. According to the authors, this list is not exhaustive,
and many other observations may be equally worth pursuing. However, in order to
foster and encourage an interdisciplinary literature on pseudo-information, academ-
ics should also focus on discovering how misleading pseudo-information permeates
through society, whether it matters for society, and what its level of influence is over
regular citizens. In other words, where and how do people get exposed to pseudo-
information, and when people are exposed to misleading or bad information, what
kind of effects does it have? For instance, it stands to reason that today, a large portion
of this information, and the extent to which individuals believe it, greatly depend on
individuals’ social media networks. Having a deeper understanding of the composi-
tion of these networks and how they work would be helpful to better understand this
phenomenon. Another area of improvement revolves around pseudo-information dis-
tribution. Politicians, political elites, opinion leaders, and government officials are all
part of an information conglomerate prone to creating or disseminating falsehoods.
Studies seldom pay attention to these influential figures and their relationships with
misinformation. The essay concludes with suggestions geared toward improving
today’s informational ecosystem and toward instilling healthier democratic habits in
citizens. For one, researchers should analyze people’s emotions and how they
influence the acceptance of false information. Furthermore, studies need to identify
more efficient tools and mechanisms for confronting false information, because merely
providing corrective messages that highlight factual or contextual information may
not be enough to facilitate accurate beliefs.
Ha et al. (2019) seek to integrate and describe the growing importance of an academic
body of research on fake news and misinformation. Relying on articles indexed
in Google Scholar over the past 10 years under the keywords “fake news” and
“misinformation,” the authors construct a pragmatic explorative coding scheme to
showcase the current research state across disciplines. Most of the studies come
from Communication or Psychology journals, with Journal of Communication and
Memory & Cognition leading the trend. Furthermore, the studies published tend to
be either quantitative or conceptual, with a marginal proportion of studies (a little
over 20%) using either qualitative or mixed-method approaches. The article also
provides many other benchmarks to better understand how research around these
issues continues to evolve.
With a similar goal in mind, Krishna and Thompson (2019) attempt to capture how
salient misinformation research has become with regard to health-related topics.
Within this framework, the authors specifically review studies dealing with this sub-
ject in Health Communication and the Journal of Health Communication. From this
work, we learn that one of the earliest strands of the literature dealt with misinformation
and medication issues, particularly the disconnect between what some studies
find and how the media report or frame the main results (e.g., the misuse of aspirin
among heart disease patients). Other topics of interest underscored in the study include
food and nutrition as a battleground where misleading information thrived; misinfor-
mation about an array of cancer types, treatments, and pseudo-cures; dissemination of
information on wrongfully attributed or exaggerated epidemics such as Ebola; and
false information surrounding vaccines and autism, and vaping e-cigarettes. All in all, the
authors depict a realistic if somewhat gloomy vision of the pervasive pseudo-informa-
tion epidemic in which health science is currently embroiled.
Drawing on an innovative strategic amplification paradigm, Donovan and Boyd
(2019) establish a suggestive guideline to better address what, in their eyes, is an
endemic journalistic problem: strategic silence. According to the authors, the specific
gatekeeping rules and reporting strategies enacted by media corporations and
journalists when reporting certain issues may encapsulate an evolution in disinfor-
mation and misinformation. Specifically, the authors visibly expand this notion with
regards to the ways in which news on violence and suicide have historically been
reported in the United States. The essay outlines strategies for professionals and the
media in general to abide by greater levels of deontological responsibility, which in
turn may facilitate and cultivate a stronger, healthier, and more egalitarian informed
public opinion.
How does health misinformation affect the politics of a general election campaign?
How are the media dealing with these politically divisive and polarizing topics? Lovari
et al. (2020) center their article on the vaccination debate in online media and politics
in Italy. Relying on a mixed-method approach that combines social media trace data
with in-depth interviews of community experts, the authors shed light on the influence
of the “vaccine information crisis” on the political agenda during the 2018 Italian
general elections. Overall, the study represents a persuasive account of the opportunities
and interventions to be followed by policy makers to establish better information dis-
semination patterns among citizens. Their study highlights potential solutions to health
communication crises, which ultimately may also contribute to diminishing misinfor-
mation waves regarding broader topics of public opinion and interest.
In the pursuit of efficient tools to challenge and contest misinformation in digital
and social media, Jang et al. (2019) present a study examining how citizens can become
better positioned to identify and confront false information. In their study, different
types of media literacy interventions are tested to observe whether participants’ self-reported
literacy scores help them recognize fake news online. Among all the
possible competences that most citizens can develop, information literacy seems to be
the one that most efficaciously prevents the assimilation of pseudo-information among
individuals. Unlike media literacy, news readership, or digital skills, information
literacy helps individuals understand complex informational dilemmas, evaluate
information, and then search for and find useful factual evidence, which in turn
also helps them make more sophisticated and savvier use of the information gathered.
All in all, helping citizens cultivate their
information literacy abilities may become a powerful tool in inoculating public opin-
ion against fake news and falsehoods.
The volume wraps up with a piece by Oh and Park (2019), who develop a machine
learning algorithm-based technique to detect misinformation embedded in deceptive
comments. The authors identify a pervasive problem across society: people are
simply not very good at detecting deception. This problem is particularly salient in the
distinction between fact-checked information and users’ opinions. Accordingly, the
authors develop an algorithm to make the distinction between truthful and deceptive
news comments. They do so for the Korean language, which remains largely understudied.
Furthermore, their proposed machine learning technique achieves a promising
accuracy rate in predicting and classifying untruthful opinions (fake comments) about
different social issues.
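As an illustration of this kind of approach (not the authors’ actual model, whose features, training data, and language are specific to their study), a supervised text classifier can be trained on comments labeled as truthful or deceptive and then used to flag new comments. The sketch below uses a simple Naive Bayes classifier with Laplace smoothing over an invented toy English corpus; the example comments, labels, and whitespace tokenizer are all illustrative assumptions.

```python
# Minimal sketch of supervised deception detection in comments:
# train a Naive Bayes classifier on labeled examples, then classify new text.
import math
from collections import Counter

def tokenize(text):
    # Illustrative tokenizer; a real system would use language-specific tools.
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = {}          # label -> Counter of word frequencies
        self.label_counts = Counter()  # label -> number of training samples
        self.vocab = set()

    def train(self, samples):
        for text, label in samples:
            self.label_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for w in tokenize(text):
                counts[w] += 1
                self.vocab.add(w)

    def predict(self, text):
        total = sum(self.label_counts.values())
        best, best_score = None, float("-inf")
        for label, n in self.label_counts.items():
            # log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(n / total)
            counts = self.word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((counts[w] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

# Toy labeled comments (invented for illustration only).
train_data = [
    ("verified report with official source", "truthful"),
    ("fact checked statement citing data", "truthful"),
    ("shocking secret they hide from you", "deceptive"),
    ("unbelievable hidden truth exposed now", "deceptive"),
]
clf = NaiveBayes()
clf.train(train_data)
print(clf.predict("hidden secret exposed"))  # prints: deceptive
```

A production classifier like the one the authors describe would rely on far richer features and much larger labeled corpora, but the underlying logic — estimating how likely a comment’s wording is under each class — is the same.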
This special issue is about the information crisis that we live in today. We have assem-
bled 11 essays as the first volume to develop theoretical ground for building possible
solutions. We first look for theory-based propositions, supported by empirical tests
when possible, that go beyond anecdotal or episodic snapshots. In doing so, we focus on two
main actors, media and publics, and their interplay as it relates to pseudo-information.
Media, both traditional mass media and emerging social media platforms, shape infor-
mation environments for publics and are shaped by publics’ communicative actions.
An understanding of these key actors will be necessary to combat the obstinacy of
pseudo-information.
In addition, this special issue looks toward a common body of knowledge about the
nature of pseudo-information (its birth, growth, and death) in our digital networked
society. Descriptive theorizing on the phenomena of misinformation and disinforma-
tion and their agents and actors helps devise prescriptive procedures to reduce their
prevalence among lay citizens, to make lay informatics more resistant to pseudo-infor-
mation, and to develop and test policies and institutional countermeasures against
social problems that arise from pseudo-information.
Still, we need more and better-defined theoretical and empirical work on
pseudo-information, publics, and media from thought leaders in communication,
public opinion, journalism, and other media fields, work that addresses the
conceptual challenges and unnoticed factors surrounding these phenomena (see also
Bimber & Gil de Zúñiga, 2020; Kim, 2018).
All in all, in this special issue, we present theoretical bases about the life of pseudo-
information—its birth, growth, and death in the marketplace of ideas. We hope this
issue sparks deeper thinking among readers and all stakeholders, such as journalists,
researchers, and policy makers. Likewise, we hope this collection encourages these
agents to engage in further and more meaningful communication about pseudo-
information and a failing digital marketplace of ideas. Finally, the articles in this vol-
ume can shed light on the intertwined roles of publics and media, both of which are
responsible for the crippling of lay informatics in our digital networked society.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship,
and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of
this article.
Notes
1. We refer to information as bits of evaluated knowledge and data available and shared
among communicative actors in a problematic situation (Kim & Grunig, 2011; Kim &
Krishna, 2014). Publics as a plural form refers to some problem- or issue-specific situ-
ational entities (vs. general population) who are epistemically motivated, communicatively
active, and create social dynamics (Grunig & Kim, 2017). Media refers to both traditional
and digital mass media and new social, people’s network-based media.
References
Bimber, B., & Gil de Zúñiga, H. (2020). The unedited public sphere. New Media & Society,
22(4), 700-715.
Carter, R. F. (1972, September). A general system characteristic of systems in general [Paper pre-
sentation]. Far West Region Meeting, Society for General Systems Research, Portland, OR.
Chaffee, S. (1991). Explication. Sage.
Cerulus, L. (2020, April 29). How anti-5G anger sparked a wave of arson attacks. Politico.
Chiu, M. M., & Oh, Y. W. (2020). How fake news differ from personal lies. American Behavioral
Scientist. Advance online publication.
disinformation. (2020). In
Do, A. (2020, July 5). “You started the corona!” As anti-Asian hate incidents explode, climbing
past 800, activists push for aid. Los Angeles Times.
Donovan, J., & Boyd, D. (2019). Stop the presses? Moving from strategic silence to strategic
amplification in a networked media ecosystem. American Behavioral Scientist. Advance
online publication.
Flood, A. (2016, November 15). “Post-truth” named word of the year by Oxford Dictionaries.
The Guardian.
Gagliardone, I. (2018). World trends in freedom of expression and media development
2018: Global Report 2017/2018.
Gil de Zúñiga, H. (2015). Toward a European public sphere? The promise and perils of modern
democracy in the age of digital and social media. International Journal of Communication,
9, 3152-3160.
Grunig, J. E. (2003). Constructing public relations. In B. Dervin, S. H. Chaffee, & L. Foreman-
Wernet (Eds.), Communication, a different kind of horse race: Essays honoring Richard F.
Carter (pp. 85-115). Hampton Press.
Grunig, J. E. (2009). Paradigms of global public relations in an age of digitalisation. Prism, 6(2).
Grunig, J. E., & Kim, J-N. (2017). Publics approaches to health and risk message design and
processing. Oxford encyclopedia of health and risk message design and processing. https://
Ha, L. H., Perez, L. A., & Ray, R. (2019). Mapping recent development in scholarship on
fake news and misinformation 2008-2017: Disciplinary contribution, topics and impact.
American Behavioral Scientist. Advance online publication.
Jang, S. M., Mortensen, T., & Liu, J. (2019). Does media literacy help identification of fake
news? Information literacy helps, but other literacies don’t. American Behavioral Scientist.
Advance online publication.
Kim, J.-N. (2014). Lay consumer informatics and fast-choicism: Instant trust and prompt loy-
alty in digitalized, networked marketplace. Communication Insight, 3, 10-31. [Written in Korean]
Kim, J.-N. (2018). Digital networked information society and public health: Problems and
promises of networked health communication of lay publics. Health Communication,
33(1), 1-4.
Kim, J.-N., Grunig, J. E., & Ni, L. (2010). Reconceptualizing the communicative action of
publics: Acquisition, selection, and transmission of information in problematic situations.
International Journal of Strategic Communication, 4(2), 126-154.
Kim, J.-N., & Grunig, J. E. (2011). Problem solving and communicative action: A situa-
tional theory of problem solving. Journal of Communication, 61(1), 120-149. https://doi.
Kim, J.-N., & Grunig, J. E. (2019). Lost in informational paradise: Cognitive arrest to epistemic
inertia in problem solving. American Behavioral Scientist. Advance online publication.
Kim, J.-N., & Krishna, A. (2014). Publics and lay informatics: A review of the situational theory
of problem solving. Annals of the International Communication Association, 38(1), 71-105.
Kim, J.-N., Oh, Y. W., & Krishna, A. (2018). Justificatory information forefending in digital
age: Self-sealing informational conviction of risky health behavior, Health Communication,
33(1), 85-93.
Krishna, A., & Thompson, T. (2019). Misinformation about health: A review of health commu-
nication and misinformation scholarship. American Behavioral Scientist. Advance online publication.
Lee, E. J., & Shin, S. Y. (2019). Mediated misinformation: Questions answered, more ques-
tions to ask. American Behavioral Scientist. Advance online publication. https://doi.
Lovari, A., Martino, V., & Righetti, N. (2020). Blurred shots: Investigating information crisis
around vaccination in Italy. American Behavioral Scientist. Advance online publication.
Marx, K. (1867). Das Kapital: Kritik der politischen Oekonomie: Der Produktionsprozess des
Kapitals (Vol. 1, 1st ed.) [Capital: A Critique of Political Economy. Volume I: The Process
of Capitalist Production]. Verlag von Otto Meissner.
Mill, J. S. (1869). On liberty. Longman, Roberts & Green.
misinformation. (2020). In
Molina, M., Sundar, S., Le, T., & Lee, S. (2019). “Fake news” is not simply false information:
A concept explication and taxonomy of online content. American Behavioral Scientist.
Advance online publication.
Oh, Y. W., & Park, C. H. (2019). Machine-cleaning of online opinion spam: Developing a
machine-learning algorithm for detecting deceptive comments. American Behavioral
Scientist. Advance online publication.
Stanley, J. (2018, September 4). What John Stuart Mill got wrong about freedom of speech.
Boston Review.
Stearns, J. (2016, September 30). Why do people share rumours and misinformation in breaking
news? First Draft.
van Mill, D. (2018). Freedom of speech. In E. N. Zalta (Ed.), The Stanford encyclopedia of phi-
losophy (Summer 2018 ed.).
Weeks, B., & Gil de Zúñiga, H. (2019). What’s next? Six observations for the future of politi-
cal misinformation research. American Behavioral Scientist. Advance online publication.
Author Biographies
Jeong-Nam Kim is the Gaylord Family Endowed Chair in Strategic Communications and the
founder of the Debiasing and Lay Informatics (DaLI) lab. Kim has studied public relations
and communication focusing on “public behavior”. Kim constructed the situational theory of
problem solving (STOPS) with James E. Grunig, and the situational theory has been applied to
multiple fields and disciplines across the globe. His work identifies both opportunities and chal-
lenges generated by lay publics and how their communicative actions either contribute to or
detract from a civil society.
Homero Gil de Zúñiga holds a Ph.D. in Politics from Universidad Europea de Madrid and a Ph.D. in Mass
Communication from the University of Wisconsin–Madison. He serves as Distinguished Research Professor
at University of Salamanca where he directs the Democracy Research Unit (DRU), as Professor at
Pennsylvania State University, and as Senior Research Fellow at Universidad Diego Portales,
Chile. His research addresses the influence of new technologies and digital media over people’s
daily lives, as well as the effect of such use on the overall democratic process. He has published
nearly a dozen books/volumes and over 100 JCR peer-reviewed journal articles (e.g., Journal
of Communication, Journal of Computer-Mediated Communication, Political Communication,
Human Communication Research, New Media & Society, Communication Research).
... Even, the capitalization on user-generated contents is reorganizing marketing and advertising priorities in the secondary marketplaces (e.g., Google-YouTube pays individual YouTubers by popularity). The secondary information markets have dominated the first information market, giving rise to the problem of market subsumption (Kim & Gil de Zúñiga, 2021). And in the debilitated marketplace of ideas, the primary pathogen is pseudo-information. ...
... This broad amalgam of overlapping and inconsistencies could result in a problem for the field if it fails to focus on the problem and diverts on terminological debates (Weeks & Gil de Zúñiga, 2021). We proposed pseudo-information (Kim & Gil de Zúñiga, 2021) as an umbrella term to encompass all kinds of incorrect information, regardless of its ultimate intent to harm. This term helps acknowledge the differences between information, which can range from accurate to inaccurate, and the remaining bits of evaluated knowledge and data (Kim & Grunig, 2011) that do cause harmful consequences in spite of their original fact-based intent. ...
Today’s public sphere is largely shaped by a dynamic digital public space where lay people conform a commodified marketplace of ideas. Individuals trade, create, and generate information, as well as consume others’ content, whereby information as public space commodity splits between this type of content and that provided by the media, and governmental institutions. This paper first explains how and why our current digital media context opens the door to pseudo-information (i.e., misinformation, disinformation, etc.). Furthermore, the paper introduces several concrete empirical efforts in the literature within a unique volume that attempt to provide specific and pragmatic steps to tackle pseudo-information, reducing the potential harm for established democracies that today’s digital environment may elicit by fueling an ill-informed society.
... The problems of propaganda, misinformation, media manipulations and fake news are widely analyzed in scientific research (Aguaded, Romero-Rodriguez, 2015;Azzimonti, Fernandes, 2021;Balmas, 2012;Bean, 2017;Berghel, 2017;Bertin et al, 2018;Bharali, Goswami, 2018;Bradshaw, Howard, 2018;Bradshow et al., 2021;Carson, 2021;Colomina et al., 2021;Conroy et al., 2015;Dentith, 2017;Derakhshan Wardle, 2017;Farkas, Schou, 2018;Figueira, Oliveira, 2017;Goering, Thomas, 2018;Hofstein Grady et al., 2021;Howard et al., 2021;Janze, Risius, 2017;Kim, de Zúñiga, 2020;Marwick, 2018;Mihailidis, Viotty, 2017;Quandt et al, 2019;Ruchansky et al., 2017;van der Linden et al., 2021;Vamany, 2019;Vargo et al., 2018 and others). ...
... The cost of lay citizens' misunderstandings or crippled lay informatics can be high. Pseudo-information is responsible for deficient social systems and institutional malfunction" (Kim, de Zúñiga, 2020). ...
Full-text available
This monograph analyzes numerous types of media manipulations, the criteria and methods of evaluating the effectiveness of the activities developed by the authors that contribute to the development of students’ media competence in the analysis of media manipulative influences; on the basis of synthesis and analysis the theoretical model of the development of media competence of students of universities and faculties of education in the analysis of media manipulative influences (including the definition of essential signs, qualities and properties, differentiation of media and manipulative influences) is presented. The monograph is intended for teachers of higher education, students, graduate students, researchers, school teachers, journalists, as well as for the circle of readers who are interested in the problems of media education and media manipulative influences.
... Considering the ambiguity of the term "alternative media" to identify the far-right media sphere that rely on disinformation-given its original link with left-wing activism since the 1970s (Haller and Holt 2019)-, the pseudo-media concept better captures the fraudulent logic of sites that mimic the appearance of legacy media to present unconventional or unorthodox coverage of the social reality (Rathnayake 2018). However, far beyond being a counterbalance (Kim and Gil de Zúñiga 2021), criticism towards mainstream media is not based on a rational, democratic dialogue, but on "an emotional judgement that seeks to create mistrust" (Figenschou andIhlebaek 2019, 1224). The notion allows a more comprehensive understanding of the multi-sided nature of disinformation and the blended character of such texts that combine sensationalism, disinformation, and partisanship to provide anti-establishment narratives (Mourão and Robertson 2019). ...
Full-text available
Information disorder involves wide-ranging content that challenges democratic rules and social harmony. Pseudo-media that relies on conspiracy theories and misleading versions of the social reality contribute to feeding the disinformation ecosystem by reinforcing biased messages with expressive patterns and polarising practices. This article focuses on the content published (N = 1,396) by seven far-right wing Spanish pseudo-media. Based on qualitative and quantitative methods, it analyses headlines, types of text and sources, as well as the distortion strategies of journalistic conventions. Results show that the emotional component is expressed by means of polarised headlines that rely on clickbait to gain attention and build a particular jargon, exacerbated by disinformation and populist practices. The absolute dependence of conspicuous headlines is evidenced by the limited resources of pseudo-media, whose production lies in a mix of opinion text and the processing of online content. Plagiarism from mainstream—mainly conservative—media, social networks and website siblings fuel these outlets that play the role of the ambiguity, mimicking journalistic conventions and mocking them by means of disinformation practices, with a particular focus on reframing social issues, progressive policies and measures to manage the pandemic.
... STOPS could also be used to study their communicative behaviors as a looping causal chain that gives rise to misinformation, fake news, pseudo-information, and close-mindedness (J.-N. Kim & Gil de Zuniga, 2021;J.-N. Kim & J. Grunig, 2021). ...
Full-text available
Publics can be understood as a subset of stakeholders. Unlike stakeholders, which refers to any individuals or entities that affect or are affected by organizations, publics is used in place of stakeholders in public relations as subsets of stakeholders who are segmented based on common characteristics (Rawlins, 2006). Segmentation is necessary because organizations need to prioritize their resources to build and cultivate relationships with certain publics, not all publics (J.-N. Kim et. al., 2008). The need to understand publics, how their opinions are formed, and the role they play in civil society was first developed in the 1920s as a result of the prevalence of the mass media in the formation of public opinion and individuals’ participation in civil society (J. Grunig & Kim, 2017). Some 80 years later, the situational theory of publics (STP) (J. Grunig, 2003), and the situational theory of problem solving (STOPS) (J.-N. Kim_& J. Grunig, 2011) were developed. Both theories were proposed to guide public relations practice in identifying and segmenting publics by explaining the situational factors which motivate individuals to act for or against organizations. While there are other public relations frameworks that guide segmentation in other ways, the situational factors proposed in STP and STOPS theorize why people become active in certain situations and the effects of their communicative behaviors on their cognitions, attitudes, and behaviors (Rawlins, 2006). By segmenting individuals into subsets of publics based on their motivations and communicative behaviors, organizations can better understand how to coadjust their communication and behaviors for collaborative problem-solving and mutually beneficial organization-public relationships.
... 17 In the era of digital networks, this inevitably funnels people towards social media platforms, increasing exposure to misinformation and polarising opinions. 18 We note calls for social media companies to regulate information deemed inaccurate, false or malicious. 19 However, that is unlikely to satisfy information needs as people will most likely default to word of mouth. ...
Full-text available
Objectives To understand how essential workers with confirmed infections responded to information on COVID-19. Design Qualitative analysis of semistructured interviews conducted in collaboration with the national contact tracing management programme in Ireland. Setting Semistructured interviews conducted via telephone and Zoom Meetings. Participants 18 people in Ireland with laboratory confirmed SARS-CoV-2 infections using real-time PCR testing of oropharyngeal and nasopharyngeal swabs. All individuals were identified as part of workplace outbreaks defined as ≥2 individuals with epidemiologically linked infections. Results A total of four high-order themes were identified: (1) accessing essential information early, (2) responses to emerging ‘infodemic’, (3) barriers to ongoing engagement and (4) communication strategies. Thirteen lower order or subthemes were identified and agreed on by the researchers. Conclusions Our findings provide insights into how people infected with COVID-19 sought and processed related health information throughout the pandemic. We describe strategies used to navigate excessive and incomplete information and how perceptions of information providers evolve overtime. These results can inform future communication strategies on COVID-19.
This article aims to study the modern system of archives development in China and to determine a promising model of its adaptation to global challenges in the post‐truth era. The research methodology is based on a mixed qualitative and quantitative analysis of statistical data that show the development dynamics of archives system in China concerning public access and the formation of modern views on the problem of fact‐checking, as well as their reliability in the new historical post‐truth era. For effective verification of archival facts, researchers developed an adaptation model of China's archives to global challenges of information reliability in the post‐truth era. It is based on the idea of using artificial intelligence and the expert opinion of archivists. The practical use of the proposed model will contribute to improving the efficiency of archives, as well as the development of digital technologies in this area.
Misinformation, misunderstanding, and rumors are not foreign to organizations. The cost of pseudo-information can be critical for the organization in terms of profit, stakeholder relationships, and reputation. For those reasons, organizations should make efforts to detect and prevent the spread of pseudo-information. This piece of research proposes and finds support in a model to gatekeep pseudo-information in the workplace, in which two-way symmetrical communication is an essential element for the model, predicting employees’ gatekeeping behaviors, and mediating the relationship between quality of the employee–organization relationship and gatekeeping behaviors. Then, the cultivation of relationships with the employees and the adherence to two-way symmetrical communication are cost-effective methods for the organization. Loyal and satisfied employees voluntarily debunk and combat pseudo-information.
This study expands on existing research about correcting misinformation on social media. Using an experimental design, we explore the effects of three truth signals related to stories shared on social media: whether the person posting the story says it is true, whether the replies to the story say it is true, or whether the story itself is actually true. Our results suggest that individuals should not share misinformation in order to debunk it, as audiences assume sharing is an endorsement. Additionally, while two responses debunking the post do reduce belief in the post’s veracity and argument, this process occurs equally when the story is false (thereby reducing misperceptions) as when it is true (thus reinforcing misperceptions). Our results have implications for individuals interested in correcting health misinformation on social media and for the organizations that support their efforts.
As misinformation is common in the digital media environment, it has become more important to understand risk communication in the context of communicative behaviors of publics that affect public opinion and policymaking. Focusing on food safety issues such as genetically modified food and food additives in China, this study aims to understand the communicative action of publics and the role of organizational trust in the conspiratorial thinking of publics and their perceptions of food safety issues. Using a national sample of 1,089 citizens living in China, this study examines situational theory of problem solving (STOPS) to understand when and how publics become active in communicative actions to take, select, and transmit information regarding food safety issues. In addition, this study tests the role of organizational trust in the food industry between conspiratorial thinking of publics and their situational perceptions, which are antecedent variables to increase communicative action of publics in problem solving. The results demonstrate that STOPS can be applied to the food safety issue to predict communicative actions of publics, and organizational trust plays a vital role in reducing individuals’ concerns about the food safety issue.
We use a unique, nationally representative, survey of UK social media users ( n = 2,005) to identify the main factors associated with a specific and particularly troubling form of sharing behavior: the amplification of exaggerated and false news. Our conceptual framework and research design advance research in two ways. First, we pinpoint and measure behavior that is intended to spread, rather than correct or merely draw attention to, misleading information. Second, we test this behavior’s links to a wider array of explanatory factors than previously considered in research on mis-/disinformation. Our main findings are that a substantial minority—a tenth—of UK social media users regularly engages in the amplification of exaggerated or false news on UK social media. This behavior is associated with four distinctive, individual-level factors: (1) increased use of Instagram, but not other public social media platforms, for political news; (2) what we term identity-performative sharing motivations; (3) negative affective orientation toward social media as a space for political news; and (4) right-wing ideology. We discuss the implications of these findings and the need for further research on how platform affordances and norms, emotions, and ideology matter for the diffusion of dis-/misinformation.
Full-text available
The health of democratic public spheres is challenged by the circulation of falsehoods. These epistemic problems are connected to social media and they raise a classic problem of how to understand the role of technology in political developments. We discuss three sets of technological affordances of social media that facilitate the spread of false beliefs: obscuring the provenance of information, facilitating deception about authorship, and providing for manipulation of social signals. We argue that these do not make social media a “cause” of problems with falsehoods, but explanations of epistemic problems should account for social media to understand the timing and widespread occurrence of epistemic problems. We argue that “the marketplace of ideas” cannot be adequate as a remedy for these problems, which require epistemic editing by the press.
Personal lies (a girl on a date lying to her dad) and fake news ("Obama Bans Pledge of Allegiance") both deceive, but in different ways, so they require different detection methods. People in long-term relationships try to tell undetectable lies, often to encourage audience inaction. In contrast, fake news, unattached to any personal relationship, welcomes attention and tries to ignite audience action. Thus, the two differ in six ways: (a) speaker–audience relationship, (b) goal, (c) emotion, (d) information, (e) number of participants, and (f) citation of sources. To detect personal lies, a person can use the intimate relationship to heighten emotions, raise the stakes, and ask for more information, participants, or sources. In contrast, a person evaluates the legitimacy of potential fake news by examining the websites of its author, the people in the news article, and/or reputable media sources. Large social media companies have suitable expertise, data, and resources to reduce fake news. Search tools, rival news media links to one another's articles, encrypted signature links, and improved school curricula might also help users detect fake news.
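The six differentiating features named in this abstract lend themselves to a simple lookup table. A minimal Python sketch follows; the dimension names follow the abstract, but the per-cell characterizations marked as assumptions are our illustrative paraphrases, not the authors' wording:

```python
# Illustrative comparison table for the six dimensions distinguishing
# personal lies from fake news. Cells marked "assumption" are our own
# paraphrases for illustration.
DIMENSIONS = {
    "speaker_audience_relationship": {
        "personal_lie": "long-term, intimate",
        "fake_news": "unattached, anonymous",
    },
    "goal": {
        "personal_lie": "encourage audience inaction",
        "fake_news": "ignite audience action",
    },
    "emotion": {
        "personal_lie": "kept low to avoid detection",   # assumption
        "fake_news": "heightened to attract attention",  # assumption
    },
    "information": {
        "personal_lie": "minimal, withheld on request",  # assumption
        "fake_news": "abundant but unverifiable",        # assumption
    },
    "number_of_participants": {
        "personal_lie": "few (a dyad)",                  # assumption
        "fake_news": "many (a mass audience)",           # assumption
    },
    "citation_of_sources": {
        "personal_lie": "rarely offered",                # assumption
        "fake_news": "fabricated or missing",            # assumption
    },
}

def contrast(dimension: str) -> str:
    """Return a one-line comparison for one of the six dimensions."""
    row = DIMENSIONS[dimension]
    return (f"{dimension}: personal lie = {row['personal_lie']} | "
            f"fake news = {row['fake_news']}")
```

A table like this makes the abstract's point concrete: because the two deception types diverge on every dimension, a detector tuned to one (say, relational probing for personal lies) has little purchase on the other.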
Research on political misinformation is booming. The field is continually gaining key insights about this important and complex social problem. Academic work on misinformation has consistently been a multidisciplinary effort, but political communication researchers are perhaps particularly well situated to be the leading voices on the public's understanding of misinformation, and many are heeding the call. With that responsibility in mind, in this brief article we offer six observations for the future of political misinformation research that we believe can help focus this line of inquiry and better ensure that we address some of the most pressing problems. Our list is not exhaustive, nor do we suggest that areas we do not cover are unimportant. Rather, we make these observations with the goal of spurring a conversation about the future of political misinformation research.
This article explores a case of information crisis in Italy through the lens of vaccination-related topics. This controversial issue, which divides public opinion and political agendas, has received varying information coverage and public policy treatment over time in the Italian context, a situation that appears quite unique compared with other countries because of strong media spectacularization and politicization of the topic. In particular, approval of the "Lorenzin Decree," which increased the number of mandatory vaccinations from 4 to 10, generated a nationwide debate that divided public opinion and political parties, triggering a complex information crisis and fostering the perception of a social emergency on social media. This placed negative stress on lay publics and on the public health system. The study adopted an interdisciplinary framework, drawing on political science, public relations, and health communication studies, as well as a mixed-method approach, combining data mining techniques applied to news media coverage and social media engagement with in-depth interviews with key experts selected from among researchers, journalists, and communication managers. The article investigates the reasons for the information crisis and identifies possible solutions and interventions to improve the effectiveness of public health communication and mitigate the social consequences of misinformation around vaccination.
As the scourge of “fake news” continues to plague our information environment, attention has turned toward devising automated solutions for detecting problematic online content. But, in order to build reliable algorithms for flagging “fake news,” we will need to go beyond broad definitions of the concept and identify distinguishing features that are specific enough for machine learning. With this objective in mind, we conducted an explication of “fake news” that, as a concept, has ballooned to include more than simply false information, with partisans weaponizing it to cast aspersions on the veracity of claims made by those who are politically opposed to them. We identify seven different types of online content under the label of “fake news” (false news, polarized content, satire, misreporting, commentary, persuasive information, and citizen journalism) and contrast them with “real news” by introducing a taxonomy of operational indicators in four domains—message, source, structure, and network—that together can help disambiguate the nature of online news content.
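The taxonomy this abstract proposes (seven content types, four indicator domains) maps naturally onto a feature-extraction scaffold for machine learning. A minimal sketch, assuming hypothetical indicator names — the seven type labels and four domain names come from the abstract, but the specific indicators and the extraction logic are placeholders, not the authors' operational measures:

```python
# Seven content types grouped under the "fake news" label (from the abstract).
CONTENT_TYPES = [
    "false news", "polarized content", "satire", "misreporting",
    "commentary", "persuasive information", "citizen journalism",
]

# Four domains of operational indicators (domain names from the abstract;
# the indicators inside each domain are hypothetical examples).
INDICATOR_DOMAINS = {
    "message":   ["emotional tone", "factual density"],
    "source":    ["outlet reputation", "author verifiability"],
    "structure": ["headline-body consistency", "url pattern"],
    "network":   ["share velocity", "cluster homogeneity"],
}

def feature_vector(article_scores: dict) -> dict:
    """Flatten per-domain indicator scores into one feature dict that a
    classifier could consume. Missing indicators default to 0.0; this is
    placeholder plumbing, not a real extraction pipeline."""
    return {
        f"{domain}:{indicator}": article_scores.get(domain, {}).get(indicator, 0.0)
        for domain, indicators in INDICATOR_DOMAINS.items()
        for indicator in indicators
    }
```

Keeping the four domains as explicit namespaces makes it easy to ablate one domain at a time, which is exactly the kind of disambiguation-by-indicator the abstract argues machine learning requires.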
We conceptualize two cognitive modi operandi by which lay individuals (cf. experts) solve everyday life problems: cognitive retrogression and cognitive progression. The key demarcation between these two strategies is when a conclusion is finalized and how one’s cognitive and communicative efforts are expended in a problematic situation. Using these two concepts of cognitive strategies in problem solving, we explicate the emerging processes of cognitive arrest and epistemic inertia in the digital age and changing information environment. We apply the cognitive and communicative account to an exemplary case of cognitive arrest among lay publics: that of conspiracism and close-mindedness.
Humans are not very good at detecting deception. The problem is that there is currently no way to distinguish fake opinions in a comment section other than resorting to unreliable human judgment. For years, most scholarly and industrial efforts have been directed at detecting fake consumer reviews of products or services; techniques for identifying deceptive opinions on social issues remain largely underexplored and undeveloped. Motivated by the need for a reliable deceptive-comment detection method, this study aims to develop an automated machine-learning technique capable of determining opinion trustworthiness in a comment section. In the process, we have created the first large-scale ground-truth dataset, consisting of 866 truthful and 869 deceptive comments on social issues. This is also one of the first attempts to detect comment deception in an Asian language (specifically, Korean). The proposed machine-learning technique achieves nearly 81% accuracy in detecting untruthful opinions about social issues. This performance is quite consistent across issues and well beyond that of human judges.
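The general approach described here — supervised text classification of labeled truthful versus deceptive comments — can be sketched in a few lines. This is not the authors' technique; it is a toy bag-of-words naive Bayes stand-in, and the tiny inline "dataset" is invented for illustration (the paper's real corpus has 866 truthful and 869 deceptive Korean comments):

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label). Returns a naive Bayes model."""
    priors = Counter(label for _, label in docs)
    words = {label: Counter() for label in priors}
    for text, label in docs:
        words[label].update(text.lower().split())
    vocab = {w for counter in words.values() for w in counter}
    return priors, words, vocab

def predict(model, text):
    """Pick the label with the highest log-posterior (Laplace smoothing)."""
    priors, words, vocab = model
    total = sum(priors.values())
    best_label, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / total)
        denom = sum(words[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((words[label][w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Invented toy examples, for illustration only.
model = train([
    ("i verified this with official records", "truthful"),
    ("sources confirm the policy details", "truthful"),
    ("everyone knows this is a huge conspiracy", "deceptive"),
    ("they are hiding the truth wake up", "deceptive"),
])
```

Usage: `predict(model, "official records confirm this")` scores the comment against each class's word distribution. A production system would swap in richer features (n-grams, stylometry, metadata) and a held-out evaluation, which is where a figure like the paper's 81% accuracy would come from.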
In a media ecosystem besieged with misinformation and polarizing rhetoric, what the news media choose not to cover can be as significant as what they do cover. In this article, we examine the historical production of silence in journalism to better understand the role amplification plays in the editorial and content moderation practices of current news media and social media platforms. Through the lens of strategic silence (i.e., the use of editorial discretion for the public good), we examine two U.S.-based case studies where media coverage produces public harms if not handled strategically: White violence and suicide. We analyze the history of journalistic choices to illustrate how professional and ethical codes for best practices played a key role in producing a more responsible field of journalism. As news media turned to online distribution, much has changed for better and worse. Platform companies now curate news media alongside user-generated content; these corporations are largely responsible for content moderation on an enormous scale. The transformation of gatekeepers has led to an evolution in disinformation and misinformation, where the creation and distribution of false and hateful content, as well as the mistrust of social institutions, have become significant public issues. Yet it is not just the lack of editorial standards and ethical codes within and across platforms that poses a challenge for stabilizing media ecosystems; the manipulation of search engines and recommendation algorithms also compromises the ability of lay publics to ascertain the veracity of claims to truth. Drawing on the history of strategic silence, we argue for a new editorial approach—"strategic amplification"—which requires both news media organizations and platform companies to develop and employ best practices for ensuring responsibility and accountability when producing news content and the algorithmic systems that help spread it.
As more people choose to get health information online, health-related topics continue to be the target of misinformation. From targeted misinformation campaigns about the safety of tobacco, the mainstreaming and subsequent adoption of scientifically flawed research about vaccines, to the misinformation-driven stigmatization of HIV, health communication as an academic discipline has been faced with the challenge of stemming the flow of misinformation and correcting individuals’ misinformed beliefs. To that end, scholars have devoted much time and effort to understand the antecedents and consequences of health-related misinformation, as well as strategies to correct misinformation and inoculate others from misinformation. In this essay, we review research on health-related misinformation, with a special emphasis on two major journals in the field, that is, Health Communication and the Journal of Health Communication, and interrogate the nature of health-related misinformation. We close this essay with a conceptualization of misinformed yet vocal health communicators, whom we term health misliterates.
This review article examines 142 journal articles on fake news and misinformation published between 2008 and 2017 and the knowledge generated on the topic. Although communication scholars and psychologists contributed almost half of all the articles on fake news and misinformation in the past 10 years, the wide variety of journals from various disciplines publishing on the topic shows that it has captured interest from the scholarly community in general. Male scholars outnumbered female scholars in both productivity and citations on the topic, but there are variations by field. Very few scholars have yet produced a large body of work on the topic. The effects of fake news/misinformation are the most common topic found in journal articles. A research agenda organized around the different roles in producing, spreading, and using fake news/misinformation is suggested.