Journal of Language and Social Psychology
2018, Vol. 37(4) 407-430
© The Author(s) 2017
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0261927X17744004
journals.sagepub.com/home/jlsp
Article
Truth Bias and Partisan Bias in Political Deception Detection
David E. Clementson1
Abstract
This study tests the effects of political partisanship on voters’ perception and detection
of deception. Based on social identity theory, in-group members should consider their
politician’s message truthful while the opposing out-group would consider the message
deceptive. Truth-default theory predicts that a salient in-group would be susceptible to
deception from their in-group politician. In an experiment, partisan voters in the United
States (N = 618) watched a news interview in which a politician was labeled Democratic
or Republican. The politician either answered all the questions or deceptively evaded a
question. Results indicated that the truth bias largely prevailed. Voters were more likely
to be accurate in their detection when the politician answered and did not dodge. Truth-
default theory appears robust in a political setting, as truth bias holds (as opposed to
deception bias). Accuracy in detection also depends on group affiliation. In-groups are
accurate when their politician answers, and inaccurate when he dodges. Out-groups
are more accurate than in-groups when a politician dodges, but still exhibit truth bias.
Keywords
deception detection, truth-default theory, social identity theory, political news interview
Political scientists, psychologists, sociologists, and communication researchers have
long wondered about the biased processing of political messages by partisan voters.
One effect on democracy is the presumption that one’s in-group politician is believ-
able, while the out-group is deceptive. The present study is inspired by two theories
relevant to partisan processing: truth-default theory (TDT; Levine, 2014) and social
1California State University, Sacramento, CA, USA
Corresponding Author:
David E. Clementson, Department of Communication Studies, California State University, 5024
Mendocino Hall, 6000 J Street, Sacramento, CA 95819-6016, USA.
Email: davidclementson@gmail.com
identity theory (SIT; Tajfel & Turner, 1979). TDT emphasizes the cognitive default of
believing other people’s messages, which results in truth bias toward processing mes-
sages as honest versus deceptive. SIT emphasizes a psychological attachment to
believing in-group members and disbelieving out-groups. TDT further holds that
salient in-groups are susceptible to inaccurate perceptions of deception from their own
members. Political interactions present a suspicious context in which the truth bias
could falter and give way to partisan bias in people’s judgments of political messages
(Harwood, 2014; Verschuere & Shalvi, 2014). After all, partisan hostility in the United States appears to be at historic levels, with partisans considering their opposing party members immoral and untrustworthy (Pew Research Center, 2016).
This article tests whether partisan voters indeed manifest the predicted effects of
TDT and SIT in their processing of a politician’s message. SIT holds that group mem-
bers should consider a politician of their own in-group to be more trustworthy than a
politician from their out-group. TDT holds that an in-group’s presumed trust of their
own should extend to the realm of deception detection and affect message recipients’
susceptibility to inaccurate appraisals of their in-group’s veracity. In a political realm,
SIT and TDT are linked together as SIT emphasizes partisan bias but TDT emphasizes
truth bias. That is, SIT’s intergroup dynamics should translate to partisans considering
the out-group deceptive and the in-group honest—regardless of a politician’s actual
message content (Dunbar, 2017), while TDT’s focus on the truth bias similarly sug-
gests a presumption of honesty from partisan group members. However, TDT’s exten-
sion of in-group truth bias leading to inaccurate appraisals of one’s own politician, and
SIT’s emphasis on downplaying indiscretions from one’s own in-group while exag-
gerating malfeasance from the out-group (Dunbar et al., 2016) may also manifest as
voters inaccurately detecting deception from their out-group politician. TDT posits
that people will believe their in-group, and thus in the present political experiment
partisan voters may be susceptible to deceit from their in-group politician. TDT also
implies—in line with SIT—that people believe their in-group more than they believe
their out-group. From the logic of TDT, and emboldened by recent work regarding
intergroup deception (Dunbar, 2017; Dunbar et al., 2016), we would expect people to
exhibit disbelief toward their out-group. TDT does not predict deception bias or lie bias; it emphasizes the power of truth bias. The present study will examine whether partisan voters remain in a state of truth bias even when their in-group politician deceives, and whether partisan voters manifest deception bias even when their opposing politician deserves credulity.
We also test the robustness of TDT by examining whether the truth bias holds when
people process a politician’s message. After all, the public seems to overwhelmingly
express the view that politicians “always” dodge and “never” answer questions (Bull,
2008; Harris, 1991). This article explores contrasting perspectives regarding truth bias
versus deception bias in a political setting. Popular media depictions of politicians, as
well as scholarly commentary, indicate that politicians deceive at extraordinary rates
and would lead us to think that audiences expect deception from politicians practically
whenever politicians’ lips are moving (Braun, Van Swol, & Vang, 2015; Romaniuk,
2013). Meanwhile, TDT emphasizes truth bias and the veracity effect, so our
experiment explores whether people exhibit deception bias—as pervasive (yet naïve)
characterizations of political communication may have us believe—contrary to the
truth-bias tenets of TDT.
A Brief Note on Key Terms
Before reviewing the literature, we first offer a brief note on terminology related to
deception and evasion. (Definitions of deception may be found in Masip, Garrido, &
Herrero, 2004; Levine, 2014; and Buller & Burgoon, 1996.) Whether a message is a
lie (of commission or omission), equivocation, or evasion, deceptive messages all
have the same intended result of misleading the recipient into a false belief (Buller,
Burgoon, Buslig, & Roiger, 1994). For example, a falsehood may be outright dissem-
bling, whereas an evasion may provide seemingly truthful information, but an evasion
diverges from the relevant information solicited by the message decoder, thus creating
a false belief akin to lying. (More on deceptive tactics can be found in Buller, Burgoon,
White, & Ebesu, 1994, and Buller & Burgoon, 1994. More on deceiving through
covertly violating maxims of conversational cooperativeness can be found in
McCornack, 1992.) According to Bradac, Friedman, and Giles (1986), an evasion may
be generally defined as an uttered response to a question that is irrelevant to the topic
of the question. Unlike lies or telling the truth, which are intended as relevant to the
question asked, evasions are formulated by the speaker as intentionally irrelevant to
the topical query (Bradac et al., 1986).
The type of deception in this study’s political news interview is the act of dodging
a question. A dodge is also known as an evasion (Bull & Mayer, 1993). The term eva-
sion “connotes moral impropriety” (Clayman & Heritage, 2002, p. 242). Conversely,
an “answer” generally addresses the topic of the question to a reasonable degree as
asked by the interviewer (Clayman & Heritage, 2002). A dodge (or evasion) avoids the
topic of the question. Some forms of evasion are not necessarily deceptive, such as an
announced refusal to answer (Ekström, 2009) or overt topic avoidance (Afifi, Afifi,
Morse, & Hamrick, 2008). The form of evasion in this study is deceptive. It is an unan-
nounced off-topic response. In the parlance of Grice’s (1989) theory of conversational
implicature, surreptitiously shifting the agenda is an exploitative violation of the rel-
evance maxim. Its covert employment intends to mislead and deceive (McCornack,
1992; McCornack, Morrison, Paik, Wisner, & Zhu, 2014). To covertly dodge a ques-
tion is a subversive maneuver to evade in hopes that the interviewer and audience do
not notice (Rogers & Norton, 2011). For example, a person may dodge a question by
providing a response irrelevant to the topical query, yet escape detection because the
answer otherwise supplies a proper amount of information, and seems clear and not a
“bald-faced lie” (McCornack, 1992; McCornack et al., 2014).
In-group Versus Out-group Competition
Groups, such as political parties, compete against other groups for survival. In her
theory of the evolution of social groups, Brewer (1999) suggests groups survive
through cooperation and trust. According to Brewer (1999), a group’s “cooperative
system requires that trust dominate over distrust” (p. 433). This leads to the notion of
an in-group. An in-group relies on mutual trust of its members. In an in-group, the
members expect cooperation from each other.
At a fundamental level, an in-group is characterized by its members being able to
trust each other (Brewer, 1999). According to optimal distinctiveness theory (Brewer,
1991), the benefits of group membership are best achieved through strong attachment—
a salient in-group where the members cooperate and trust each other. The in-group
discriminates against its out-group. The in-group and out-group compete for resources.
SIT (Tajfel, 1981; Tajfel & Turner, 1986) takes group survival to a further psycho-
logical level. SIT is interested in how competition manifests in group members’ minds.
Individuals are psychologically motivated to retain identification with fellow mem-
bers and differentiate themselves from the out-group.
SIT offers an explanatory framework for people’s group affiliations becoming
salient. A salient in-group arises as the members strongly perceive favoritism toward
their own members and derogation of an out-group. The attachment toward the
in-group is a cognitive process of accentuated belonging relative to exaggerated
detachment from the out-group (Leonardelli, Pickett, & Brewer, 2010). Meta-analysis
has affirmed a basic tenet of SIT that the more salient a group is, the more bias people
will develop in favor of their in-group (Mullen, Brown, & Smith, 1992).
Political Partisan Bias
Social identity and group membership help people deal with politics (Brewer, 2001).
Politics can present people with much anxiety and uncertainty (Lau & Redlawsk,
2001). According to the American Psychological Association (2016), the 2016 U.S.
presidential election may have been the most stressful in recent history. People can
assuage tension and expediently make decisions about politics by using political party
cues as informational shortcuts (Groenendyk & Banks, 2014).
People tend to make the most immediate and impactful assumptions in their politi-
cal decision making based on a politician’s party label. Seminal voting studies (e.g.,
Berelson, Lazarsfeld, & McPhee, 1954; Campbell, Converse, Miller, & Stokes, 1960;
Lazarsfeld, Berelson, & Gaudet, 1944) harp on the influence of partisanship in peo-
ple’s assumptions about politicians. Party identification is one of the most stable iden-
tifications over time (Sears & Funk, 1999). Currently in the United States, the two
major rival parties are the Democrats and Republicans. While there have been shifts in
U.S. Americans calling themselves Independents rather than Democrats or Republicans,
in longitudinal studies people tend to report the same party identification with more
stability than most other social category labels (Huddy, 2001).
People assume a politician who shares their party affiliation is more similar to
themselves than a politician of the opposing party affiliation (Pew Research
Center, 2016; Rahn, 1993). People rate others from their own political party as
more honest and ethical than they rate a politician of the opposing party (Ehrlich
& Gramzow, 2015).
Carlin and Love (2013) ran an experiment with college students who identified as
either Democrats or Republicans. Participants played a trust game over the Internet.
The only experimental manipulation by the researchers was telling participants that
the other player was a Democrat or a Republican. The partisan participants gave more
lottery tickets to an in-group member than to an out-group member. Trust between the undergraduates was based solely on party identification. Munro, Lasane, and Leary (2010)
also gave college students a task irrelevant to politics and found that partisan bias
drove the participants' judgments. Participants played the role of college admissions officials, and they tended to reject an applicant whose party affiliation opposed their own.
An in-group’s trust among themselves and distrust for the out-group may be most
distinct when the groups are political (Brewer, 1999). In-group loyalties are tied to
out-group opponents being distrusted. The partisanship of American voters has been
discussed as synonymous with in-group/out-group social identity (Green, Palmquist,
& Schickler, 2002). A burgeoning corpus of studies has empirically examined voters'
partisan behavior under a framework of SIT (Gerber, Huber, & Washington, 2010;
Greene, 2004). For example, political psychologists have applied SIT’s in-group/out-
group differentiation to studies of Australian college students (Duck, Hogg, & Terry,
1995), British teenagers (D. Abrams & Emler, 1992), students and artists in London
(Kelly, 1988), and U.S. students (Greene, 1999). This brings us to our first proposition
regarding in-group/out-group perceptions of a politician.
Hypothesis 1: Whether a politician evades or answers questions, people who share
the politician’s party affiliation will perceive the politician as significantly more
trustworthy than those of the opposing party affiliation.
The next hypothesis builds slightly from the first. People should perceive more
deception from an out-group politician than from their in-group politician. Politicians
are generally considered dishonest and deceptive (Gallup, 2016; Serota, Levine, &
Boster, 2010). The pervasive perception is that politicians “never give a straight
answer to a straight question” (Bull, 2008, p. 337). Politicians practically “come out of
the womb equivocating” (Bavelas, Black, Chovil, & Mullett, 1990, p. 235). In essence,
people express an expectation that politicians will dodge questions. Yet SIT’s tenets of
party favoritism and opposing party derogation should transfer to perceptions of
deception. Accordingly, we posit the following in the context of a political news
interview.
Hypothesis 2: People who are exposed to a politician from their partisan in-group
will be less likely to report that the politician dodged a question than people who
are exposed to a politician from their partisan out-group.
The first two hypotheses concerned partisan bias affecting people’s perceptions of
a politician being trustworthy or deceptive. We now build from SIT to other lines of
research which take theoretical positions regarding people’s innate judgments of
veracity versus deception and salient in-group members’ observations. The next pre-
diction will concern whether people have a deception bias toward a politician’s mes-
sage. The following few sections will then bridge theorizing concerning partisan
in-group/out-group bias and deception detection by salient in-groups.
Truth Bias
Despite the popularity and prevalence of deception detection in various discussions of
conversational and institutional discourse, people typically receive messages as being
true (Gilbert, Tafarodi, & Malone, 1993). As its name implies, TDT (Levine, 2014)
emphasizes that human interactants exhibit truth bias. Truth bias may be defined as “the
tendency to actively believe or passively presume that another person’s communication
is honest independent of actual honesty” (Levine, 2014, p. 380). Barring particularly
suspicious contexts or a speaker having an obvious motive to lie, people’s default men-
tal setting is a presumption of veracity. People expect honesty from each other. TDT’s
mechanism whereby a deceptive message would slip by undetected is based on
Spinoza’s (1677/1982) belief theory. To “unbelieve” something—thus shifting from a
predisposed, automatic truth bias—requires conscious effort (Gilbert, 1993).
Formative research from Zuckerman, DePaulo, and Rosenthal (1981) reports that peo-
ple are “more likely to call messages truthful than deceptive” (p. 24). TDT holds that the
truth bias can facilitate accuracy in deception detection. TDT also points out that people’s
presumption of truth is not necessarily a bad belief state, because most people are truthful
most of the time. As content analyses in the domain of politics have revealed, politicians
nearly always give on-topic responses to questions (Clementson & Eveland, 2016).
Ordinary detection tends to result in more accurate truth detection (relative to lie
detection) because of the “Spinoza effect” (Gilbert, 1993; Levine, 2014) leading to the
veracity effect (Levine, Kim, & Blair, 2010; Levine, Park, & McCornack, 1999).
Spinoza (1677/1982) philosophized that the human mind initially receives information
as being truthful because understanding requires acceptance in order to process. Only
after automatic credulity, according to Spinoza, can the veracity of information then be
appraised (Bennett, 1984). The mind can disbelieve false information (i.e., detect
deception) but not without first representing it as true (Gilbert, Krull, & Malone,
1990). The Spinoza “effect” occurs when people continue to believe (false) informa-
tion without activating the appraisal stage, because the information retains its inertia
from initial acceptance. To “unbelieve” something requires cognitive effort, so errors
are made in allowing deception to go undetected (Gilbert et al., 1993). The Spinoza
effect helps explain truth bias in deception detection research (Levine, 2014).
The veracity effect refers to people's propensity to be more accurate at judging truths
than lies (Levine et al., 1999). People expect honest communication, so in study settings
prompting people to discern between truths and lies, the more truths there are to detect,
the more accurate the participants will appear (Levine, Kim, Park, & Hughes, 2006).
When accuracy rates are scored separately for truths and lies, the veracity of the messages
that are judged can predict resultant accuracy. Consistent with the truth bias and veracity
effect, there tends to be a positive, linear correlation between the amount of truths in a
given deception detection study and observed accuracy (Park & Levine, 2001).
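To make this linear relationship concrete, here is a minimal simulation sketch in Python. It assumes purely truth-biased judges whose judgments ignore actual veracity; the bias value of .62 is hypothetical, not an estimate from any study cited here.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUTH_BIAS = 0.62  # hypothetical probability that a judge calls any message truthful

def simulated_accuracy(truth_base_rate, n_messages=100_000):
    """Accuracy of purely truth-biased judges at a given share of honest messages."""
    is_truth = rng.random(n_messages) < truth_base_rate
    judged_truth = rng.random(n_messages) < TRUTH_BIAS  # judgments ignore veracity
    return np.mean(judged_truth == is_truth)

# Accuracy rises linearly with the proportion of truths, illustrating the veracity effect.
for base_rate in (0.25, 0.50, 0.75):
    print(f"truth base rate {base_rate:.2f} -> accuracy {simulated_accuracy(base_rate):.3f}")
```

Under these assumptions, expected accuracy is .38 + .24 × (truth base rate), a positive linear function of the share of truthful messages, mirroring the correlation Park and Levine (2001) report.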
Politics Triggering Deception Perception and Detection
Contrary to the truth bias, in political contexts, a deception bias might surface. Instead
of expecting truth, people may be more likely to expect deception from a politician.
Unlike the truth bias whereby people presume honesty from each other—and thus the
veracity effect in which increasing truthful message exposure increases the appear-
ance of accuracy in detection—a deception bias may arise in processing a politician’s
message. Thus, the more a politician deceives instead of speaking truths, the more
message decoders may appear to correctly detect deception rather than detect
veracity.
Public perceptions give the impression that politicians deceive at extraordinary
rates. There is the old joke: “How can you tell when a politician is lying? His lips are
moving” (Braun et al., 2015). An article in The New York Times by the editor of
PolitiFact had the headline “All politicians lie” (Holan, 2015). Ekman (2009) dis-
cusses politicians as exemplifying deception. According to Braun et al. (2015),
deception is ubiquitous in politics. Kahneman (2011) speculates that people (himself
included) think politicians are the most deceptive people because, unlike other professions, politicians' verbal indiscretions are covered prominently in the media.
According to Romaniuk (2013), “There is a widely held belief . . . that politicians
often produce evasive responses under questioning from members of the news media”
(p. 145). People believe “politicians are notorious for not answering questions.” For
example, a U.S. presidential debate opened with a questioner challenging the politi-
cians to “do something revolutionary and . . . actually answer the questions”
(Romaniuk, 2013, p. 145).
People’s truth bias causes their judgments to appear more accurate in detecting
truths in standard deception detection experiments. However, based on mass-mediated
depictions and expressions of the presumed pervasiveness of deception in politics, we
will offer a prediction counter to TDT. TDT emphasizes the truth bias, but we posit
that people are so inclined to expect deception from politicians that instead of the
veracity effect manifesting, when people observe a political question–answer setting a
deception bias may manifest. Given the popular media and scholarly assertions about
rampant deception from politicians, people might expect a politician to dodge ques-
tions. After all, presumably the public thinks that politicians “always” dodge and
“never” answer questions (Bull, 2008; Harris, 1991). Audience members may presume
deception and appear to be more accurate in their detection when a politician dodges
than when a politician does not dodge. If people think politicians deceive at extraordi-
nary rates, then a reversal of the veracity effect may arise, whereby an increase in
deception by a politician would translate to increased accuracy in people’s detection of
said deception. Thus—counter to TDT and truth bias and the veracity effect, and
instead in line with assertions from mass media and academic literature suggesting a
deception bias—we propose:
Hypothesis 3: People who are exposed to a politician dodging will be more accu-
rate in their dodge detection than those who are exposed to a politician not
dodging.
The Interaction of Group Dynamics and Deception on Accuracy
Moving from people’s bias toward truth or deception in processing political messages,
we next turn to accurate detection being influenced by partisan bias. Our first two
predictions concerned the basic tenets of SIT’s presumptions of trustworthiness from
an in-group and deception from an out-group. Our third prediction concerned TDT’s
assertions about the truth bias and whether politics might present an exception with a
deception bias affecting accurate perceptions. The next and final prediction will bring
together the theorizing to explore an interaction between in-group/out-group dynamics
and deception on accuracy. We may thereby better understand the phenomenon of deception detection in politics and predict the causal effects of a politician dodging or not dodging on accuracy, as moderated by whether the politician represents people's in-group or out-group.
In its elaboration of the truth bias affecting the perception and detection of decep-
tion, TDT asserts that people’s processing of messages as being honest or deceptive
may be influenced by group dynamics. TDT suggests that—in presumably rare
instances of an in-group member deceiving a fellow member—salient in-group mem-
bers would be susceptible to deception from their own members. As discussed earlier,
in-group members presume honesty. They have a truth-default—perhaps to a fault.
Their group’s existence and survival requires implicitly trusting each other. In the
occurrence of a member potentially deceiving another member, the deception would
likely escape detection.
Just as groups tend to exhibit an inflated truth bias among themselves, they might
err on the side of too much suspension of the truth-default toward out-groups.
Extending TDT with SIT, this positive bias toward one’s in-group and negative bias
toward the out-group should translate to people’s observations of deception. People
judge deception from their out-group harsher than deception from members of their
own in-group (Dunbar et al., 2016). Deception among an in-group can be excused as
ethical and altruistic, benefitting other members (e.g., sparing them from brutal hon-
esty). However, deceiving an out-group has negative connotations. For instance, the
same piece of misleading information that would be considered benign teasing or
exaggeration among in-group members could be considered ill-intentioned lying to an
out-group (Dunbar et al., 2016). If people enhance their self-worth through favoritism
of their in-group and derogation of their out-group à la SIT, their distinctiveness could
motivate them to treat deception from their in-group members positively (i.e., ignoring
the implications of deceit) compared with noticing out-group deception’s averseness.
Research in political and nonpolitical contexts indicates that people consider members
of their own group trustworthy and honest, and consider the out-group untrustworthy
and dishonest, but no study has tested whether such an effect occurs when a group
member is deceptive and thus makes fellow in-group members susceptible to decep-
tion (Dunbar, 2017). Members of different political groups could be exposed to the
same messaging and yet draw different perceptions of deceptiveness based on whether
the speaker shares their party affiliation (Dunbar, 2017).
Although studies from Rogers and Norton (2011) and Clementson (2018) measured
effects of politicians dodging questions, they did not explicitly account for partisan
group affiliation. And party identification is probably the biggest influence on people’s
perceptions of a politician (Rahn, 1993). Clementson (2018) exposed participants to a
news interview in which the politician had no party label. Yet there was an effect in
which participants of stronger partisanship spotted less dodging across conditions.
Clementson speculated that partisans might be more susceptible to political deception. He wondered if TDT's prediction about salient in-groups' propensity toward the
truth bias was surfacing. But the study did not ask participants point-blank if they
observed any dodging, nor did the study tap in-group/out-group dynamics.
TDT builds from SIT when in-group tensions arise in deception detection. Salient
in-groups presume honesty from their members—and by extension should presume
dishonesty from the out-group. A group’s competition for resources forces a salient
in-group to presume truth of their own members and distrust oppositional groups (J. R.
Abrams, Eveland, & Giles, 2003). Because salient in-groups would presumably expect
dishonesty from an out-group, when a politician of the opposing party does not dodge
a question it is likely that observers will have inaccurately presumed deception. Also,
as mentioned previously, TDT notes that in trigger events—of which politics is an
exemplar (Harwood, 2014; Verschuere & Shalvi, 2014)—people are more suspicious
and the truth bias falters. Combining the detection effects of the truth bias encounter-
ing salient in-group partisanship à la TDT and in-group trust versus out-group distrust
à la SIT, we expect group members to more correctly observe their in-group politician
not dodging and the opposing politician dodging. Accordingly we propose:
Hypothesis 4: The relationship between whether a politician dodges or does not
dodge and perceptual accuracy depends on whether the politician represents a per-
son’s in-group or out-group. When a politician does not dodge, in-group voters will
be more accurate in their detection than out-group voters. When a politician dodges,
out-group voters will be more accurate than in-group voters.
Method
Participants
Participants (N = 618) were registered voters in the state where this study ran.1 They
were recruited for a Qualtrics Panel. They were 48.4% male and 51.6% female. Age
ranged from 18 to 90 years (M = 53.85, SD = 27.84). Participants reported their race
as 86.6% White, 8.3% Black or African American, 1.5% Asian, 1.5% Hispanic or
Latino, 0.6% American Indian or Alaska Native, and 1.1% Other.
Participants were recruited for a nondescript study; their political partisan identifi-
cation and ideology were not primed other than as an opening demographic item.
Using the standard wording of the American National Election Studies, at the begin-
ning of the study (after obtaining informed consent) respondents were asked,
“Generally speaking, do you think of yourself as a Republican, a Democrat, an
independent, or something else?” Respondents who selected Democrat or Republican
(i.e., “pure” partisans) were retained. They were 50.2% Democratic and 49.8%
Republican. Those who selected Independent or "something else" were filtered out by Qualtrics and excluded from data analysis. Nonpartisans were excluded because this study compares partisan groups operationalized in U.S. politics as Democrats and Republicans.2
Experimental Design
Participants watched a news interview embedded in an online survey. In the 4-minute
clip, a journalist interviews a congressional candidate from the U.S. state of this study
and asks four questions about national and state issues. The stimulus was constructed
to be as realistic and relevant for participants as possible. We strove for ecological
validity and subject salience. The politician’s answers were also scripted to include
bipartisan/nonpartisan rhetoric, so the manipulation was believable for the politician
to be either a Democrat or Republican. The party identification of the politician was manipulated: an on-screen label identified the politician as either a Democrat or a Republican.
Participants were randomly assigned to be exposed to one of four video clips. The
between-subjects design had 2 (dodge or no-dodge) × 2 (Democratic or Republican
politician) experimental conditions. In the no-dodge version, the politician answers all
the questions on-topic. In the dodge version, the politician gives an off-topic answer to
one question. It is the second question in the interview. The journalist asks the politi-
cian about his plan for the economy and jobs, and he responds with his plan for peace
in the Middle East, a manipulation similar to the off-topic dodge condition in Rogers
and Norton’s (2011) political debate experiment.
The interview was filmed at a real TV studio. The interviewer was the real senior
political reporter for the capital city newspaper where the study ran. The journalist
played himself. The politician was not a real politician and had never appeared on the
news before. (The actor playing the politician was a real professional political consul-
tant.) The script appears in the appendix.
Variables testing Hypotheses 2 to 4 were coded such that odds ratios could be obtained. Exposure to a treatment in which the politician dodged was coded 1 (for "success" in the parlance of odds ratios) and 0 for the no-dodge condition.
Measures
In-group/Out-group. After data collection, participants were categorized as in-group or
out-group. These variables were based on two indicators from the survey: (a) a partici-
pant’s self-identified party affiliation (Democratic or Republican) and (b) exposure to
a stimulus in which the politician was a Democrat or Republican. Participants were
then classified as being either of the same party (in-group) or the other opposing party
(out-group) per their exposure.
Qualtrics randomization resulted in 327 in-group participants (52.9% of the sam-
ple) and 291 out-group (47.1% of the sample). The two groups were not significantly
different from 50%, based on a one-sample t test with the test value of .5 as the groups
were coded 0 and 1, t(617) = 1.449, p = .148.
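As a concrete sketch of this classification and balance check, the following Python fragment uses hypothetical records and illustrative column names; the actual study performed this step on Qualtrics survey data, not with this code.

```python
import pandas as pd
from scipy import stats

# Hypothetical participant records; column names are illustrative.
df = pd.DataFrame({
    "own_party":      ["Democrat", "Republican", "Democrat", "Republican"],
    "stimulus_party": ["Democrat", "Democrat", "Republican", "Republican"],
})

# In-group = the participant's party matches the politician's label (coded 1/0).
df["in_group"] = (df["own_party"] == df["stimulus_party"]).astype(int)

# One-sample t test against .5: does the in-group share depart from a 50/50 split?
t_stat, p_value = stats.ttest_1samp(df["in_group"], popmean=0.5)
print(df["in_group"].mean(), t_stat, p_value)
```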
Trustworthiness. McCroskey and Teven’s (1999) six-item scale taps perceptions of a
speaker’s trustworthiness. Placed on 7-point continua, the semantic differential items
are as follows: honest/dishonest, untrustworthy/trustworthy, honorable/dishonorable,
moral/immoral, unethical/ethical, and phoney/genuine. Half of the items require
reverse-coding. Higher scores indicate the politician was perceived as more trustwor-
thy (α = .94, M = 4.53, SD = 1.33).
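A minimal scoring sketch, with hypothetical ratings rather than the study's data (the paper does not publish its scoring code), shows the reverse-coding and reliability computation:

```python
import numpy as np

def reverse_code(item, scale_max=7):
    """Flip a 7-point item so that higher scores always mean more trustworthy."""
    return (scale_max + 1) - item

def cronbach_alpha(items):
    """items: participants x items matrix, already reverse-coded where needed."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings from four participants on three of the six items.
honest = np.array([6, 5, 7, 4])
trustworthy = np.array([6, 4, 7, 5])
dishonorable = reverse_code(np.array([2, 3, 1, 4]))  # reverse-keyed item
scale = np.column_stack([honest, trustworthy, dishonorable])
print(cronbach_alpha(scale), scale.mean(axis=1))  # alpha and per-person scores
```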
Observation of Dodging. After exposure to the stimulus and a manipulation check, par-
ticipants were asked “Did he dodge any of the questions?” There were two response
options randomly presented: Yes or No. The majority of participants (62.3%) said
“No” and 37.7% said “Yes.” A one-sample t test with the test value of .5 indicated the
differences significantly varied from a 50/50 split, t(617) = 57.56, p < .001. Although
about half were exposed to a dodge, only about a third of the participants reported that
the politician dodged a question.
Both Republicans and Democrats were more likely to say that the politician did not
dodge any questions than that the politician did dodge questions. The difference
between the parties was not significant, χ2(1) = 2.29, p = .130. See Figure 1.
Accuracy. After data collection, a variable was created for whether participants were
accurate or inaccurate in their judgment. The dichotomous variable was coded 1 for
accurate and 0 for inaccurate. This variable was based on (a) whether a participant
selected “Yes” or “No” in response to the question asking if the politician dodged any
questions and (b) whether the participant was in a dodge or no-dodge condition. Overall, the majority (59.5%) were accurate and 40.5% were inaccurate. A one-sample t test with the test value of .5 confirmed that the participants had significantly greater accuracy than chance, t(617) = 4.83, p < .001.

Figure 1. Percent of each party who perceived dodging.
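The accuracy variable amounts to agreement between a participant's judgment and their assigned condition. A minimal sketch with hypothetical vectors (not the study's data):

```python
import numpy as np

# Hypothetical indicators: condition (1 = dodge shown) and judgment (1 = reported a dodge).
in_dodge_condition = np.array([1, 1, 0, 0, 1, 0])
reported_dodge     = np.array([0, 1, 0, 0, 0, 1])

# Accurate (1) when the judgment matches the condition; inaccurate (0) otherwise.
accurate = (reported_dodge == in_dodge_condition).astype(int)
print(accurate.mean())  # proportion accurate, which the paper tests against the .5 chance level
```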
Manipulation Checks
The Qualtrics survey forbade respondents from returning to a prior screen. There was
a manipulation check immediately after exposure to the stimulus in which participants
were asked to recall the politician’s party identification. Options were Democrat or
Republican (randomly presented), “the video clip didn’t say,” or “I don’t remember.”
Participants who failed the check for their particular condition, or who selected "the video clip didn't say" or "I don't remember," were filtered out.3
Before being debriefed, participants were asked how much prior media exposure
they had to the politician in the video clip. On a scale of 0 (none) to 10 (an extreme
amount), responses ranged from 0 to 10 (M = 1.61, SD = 2.35, Mdn = 0, Mode = 0).
Most (64.9%) indicated that they had zero prior exposure, and the median and mode were zero. The mean falling between 1 and 2, however, suggests that the stimulus held enough ecological validity that some participants believed they had previously seen the politician.
Randomization and Validity Checks
For random assignment to conditions of participants’ own party affiliation and whether
the politician dodged, the breakdown was as follows. Democratic participants exposed
to No-Dodge, n = 157 (25.4%); Democrats exposed to Dodge, n = 153 (24.8%);
Republican participants exposed to No-Dodge, n = 162 (26.2%); and Republicans
exposed to Dodge, n = 146 (23.6%). Those were not significantly different, χ2(1) =
0.236, p = .627.
Slightly more participants were randomly assigned to a No-Dodge condition
(51.6%) than a Dodge condition (48.4%). There was not a significant difference
between the conditions, t(617) = −0.80, p = .422.
For random assignment to conditions of whether participants were exposed to their
in-group or out-group politician and whether the politician dodged, the breakdown
was as follows. In-group No-Dodge, n = 165 (26.7%); In-group Dodge, n = 162
(26.2%); Out-group No-Dodge, n = 154 (24.9%); and Out-group Dodge, n = 137
(22.2%). Those were not significantly different, χ2(1) = 0.374, p = .541.
The politician’s trustworthiness was not significantly different, on average, whether
participants were Democratic (M = 4.54, SD = 1.30) or Republican (M = 4.52, SD =
1.37), t(616) = 0.149, p = .858.
Results
The first hypothesis predicted that, regardless of whether a politician evades a ques-
tion, people who share the politician’s party affiliation will perceive him as being
significantly more trustworthy than people of the opposing party will perceive him. In
an independent samples t test, the politician’s trustworthiness was significantly higher,
on average, when he was in-group (M = 4.94, SD = 1.23) than out-group (M = 4.08, SD
= 1.31), t(616) = 8.416, p < .001, Cohen’s d = .677. Hypothesis 1 received support.
Hypothesis 2 predicted that people who were exposed to a politician from their
partisan in-group would be less likely to report that the politician dodged a question
than people exposed to a politician from their out-group. Put another way, people who
were exposed to a politician from their out-group were predicted to be more likely to
report that the politician dodged a question than people exposed to a politician from
their in-group. There was a significant association, Pearson χ2(1) = 16.309, p < .001;
G2(1) = 16.348, p < .001. Indeed, 30% perceived a dodge in the in-group condition and
46% perceived a dodge in the out-group condition. Meanwhile, of those who reported
that the politician did not dodge any questions, more of them were in an in-group
exposure condition than out-group. Figure 2 presents the results. The odds of a person perceiving dodging were about two times greater when the politician was from people's out-group than when the politician was from people's in-group, odds ratio: 1.966, 95% CI [1.413, 2.734]. Hypothesis 2 received support. People exposed to a politician from their out-group were significantly more likely to report that the politician dodged a question than people exposed to a politician from their in-group.

Figure 2. Percentages who perceived dodging in in-group and out-group conditions.
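To show the arithmetic behind such a test, the sketch below computes a Pearson chi-square and a Wald 95% confidence interval for an odds ratio from a 2 × 2 table. The cell counts are rough reconstructions from the reported percentages (46% of 291 out-group and 30% of 327 in-group participants perceiving a dodge), so they are illustrative, not the study's exact cells.

```python
import numpy as np
from scipy import stats

# Rows: out-group, in-group exposure. Columns: perceived a dodge (yes, no).
table = np.array([[134, 157],   # approx. 46% of 291 out-group participants said yes
                  [ 98, 229]])  # approx. 30% of 327 in-group participants said yes

chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)

# Odds of perceiving a dodge, out-group relative to in-group, with a Wald 95% CI.
a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}, OR = {odds_ratio:.3f} [{ci_low:.3f}, {ci_high:.3f}]")
```

With these approximate counts the odds ratio lands near the reported 1.966, which is reassuring but not a substitute for the original cells.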
Hypothesis 3 predicted that people exposed to a politician dodging would be more accurate in their detection than those exposed to a politician not dodging. There was a significant association between the variables—Pearson χ2(1) = 36.913, p < .001; G2(1) = 37.270, p < .001—but in the direction opposite to that predicted. Contrary to the prediction, among those in
the dodge condition accuracy was 47%, compared with those in the no-dodge condi-
tion where accuracy was 71%. Meanwhile, of those who were inaccurate, more were
exposed to dodging than no-dodging. Figure 3 presents the results.
The odds of a person being accurate in their dodge detection when exposed to a dodge were a little over a third of the odds of someone not exposed to a dodge being accurate, odds ratio: 0.362, 95% CI [0.259, 0.504]. The odds of someone not exposed to dodging being accurate in their dodge detection were 2.76 times the odds of someone exposed to dodging being accurate. Hypothesis 3 was rejected. People exposed to a politician dodging appear more likely to be inaccurate in their detection relative to those exposed to a politician not dodging.

Figure 3. Percent accurate in dodge detection for dodge versus no-dodge conditions.
Hypothesis 4 predicted that the relationship between dodge/no-dodge exposure and
accuracy would depend on whether the politician represents a person’s in-group or
out-group. We specifically proposed that voters would be more accurate when their
in-group politician does not dodge than when their in-group politician dodges, and
proposed that when a politician dodges then the out-group voters would be more accu-
rate than in-group voters.
This hypothesis was tested with binary logistic regression. A two-predictor logistic
model with its interaction term was fitted to the data. The result showed:
Predicted logit of (Accuracy) = 0.190 + (−0.565)(In-Group) + (0.286)(No-Dodge) + (1.475)(In-Group × No-Dodge)
After affirming overall model fit, GM(3) = 56.25, p < .001, sequential analysis was run
to inspect the unique contribution of the interaction. The two independent variables
were entered first. Then, the interaction term was entered as a second block. When the
interaction variable was included in the model—Wald χ2(1) = 18.07, p < .001—there was a statistically significant improvement in the proportional reduction of error (log-likelihood), χ2(1) = 18.48, p < .001. The statistical significance affirmed that including the
interaction term in the model decreased error. From the first block to the second block,
Cox and Snell R2 improved by .028, from .059 to .087. Nagelkerke R2 improved by
.037, from .080 to .117. About 12% of the null deviance was accounted for by the set
of predictors. About 4% of the null deviance was accounted for by the interaction term.
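To illustrate the sequential modeling steps, here is a sketch using statsmodels on data simulated from the reported coefficients. This is an assumed workflow for exposition, not the author's analysis code, and the simulated data are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Simulate 0/1 predictors and outcomes from the reported equation (illustrative only).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "in_group": rng.integers(0, 2, 618),
    "no_dodge": rng.integers(0, 2, 618),
})
true_logit = (0.190 - 0.565 * df["in_group"] + 0.286 * df["no_dodge"]
              + 1.475 * df["in_group"] * df["no_dodge"])
df["accurate"] = (rng.random(618) < 1 / (1 + np.exp(-true_logit))).astype(int)

# Block 1: main effects only. Block 2: '*' adds the interaction term.
m1 = smf.logit("accurate ~ in_group + no_dodge", data=df).fit(disp=False)
m2 = smf.logit("accurate ~ in_group * no_dodge", data=df).fit(disp=False)

# Likelihood-ratio test for the interaction's unique contribution (1 df).
lr = 2 * (m2.llf - m1.llf)
print("LR chi2(1) =", round(lr, 2), ", p =", stats.chi2.sf(lr, df=1))

# Interaction odds ratio, analogous to the reported Exp(beta) = 4.37.
print("interaction OR =", np.exp(m2.params["in_group:no_dodge"]))

# Accuracy implied by the reported equation for the in-group/no-dodge cell (~.80).
print(1 / (1 + np.exp(-(0.190 - 0.565 + 0.286 + 1.475))))
```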
The prediction of Hypothesis 4 that group membership would moderate the effect of
dodge exposure on detection accuracy was affirmed. In-group voters were more likely to
be accurate in detection when their politician did not dodge than when their politician
dodged, Exp(β) = 4.37, p < .001, 95% CI [2.21, 8.63]. Out-group voters were more accu-
rate than the in-group in the dodge condition. Figure 4 illustrates the moderation effect.
The interaction term’s odds ratio indicated that the odds of being accurate are 4.37
times greater when an in-group member is exposed to no-dodging than when an in-
group member is exposed to dodging. Being in the in-group and being exposed to
no-dodging increases the odds of accuracy by 337.1% compared with an in-group
member being exposed to dodging. Hypothesis 4 received support.
Discussion
The first hypothesis tested the tenets of SIT in a political news interview setting.
Regardless of whether the politician answered all the questions or dodged one, viewers
who shared party affiliation with the politician considered him significantly more
trustworthy than viewers from the politician’s opposing party.
The second hypothesis concerned in-group/out-group dynamics and perceptual
deception. When people were exposed to a politician sharing their party identification
then 30% perceived dodging, but when they were exposed to a politician of their
opposing party, then nearly half (46%) perceived dodging—a significant difference.
The third hypothesis concerned people’s accuracy in appraising political deception.
We expected people to be more accurate when dodging was present than when it was
absent. Alas, our prediction was rejected. In this test of people's accuracy in dodge
detection, people were significantly more accurate in the absence (71%) than in the
presence (47%) of a dodge.
The fourth hypothesis explored people’s accuracy in appraising evasion as depend-
ing on whether people are exposed to a politician from their in-group or out-group.
In-group voters were more accurate when their politician did not dodge than when he dodged. When a politician dodged, though, out-group voters were more accurate. When the politician dodged, a majority (55%) of observers were accurate in their detection when the politician represented their out-group, but less than half (41%) were accurate if the politician represented their in-group. When people were not exposed to dodging, they were especially accurate when the politician represented the in-group (80%).

Figure 4. Accuracy (from 0 = inaccurate to 1 = accurate) based on dodge exposure moderated by group affiliation.
Theoretical Implications
This article extends two theories relevant to politics—SIT and TDT—and explores the
processing of partisan voters when exposed to deception. Most evident with biased
perceptions was the interaction of in-group/out-group and dodge/no-dodge manifest-
ing SIT. Group members processed their own politician as being favorably positive.
Aligning with TDT, salient in-groups presume honesty from their fellow members and
presume dishonesty from group members competing for resources (Brewer, 1999;
Tajfel & Turner, 1979). Their positive group identity displayed accentuated truth bias.
To the best of our knowledge, this article provides the first experiment testing
TDT’s assertion that the truth bias would affect the perception and detection of decep-
tion by salient in-groups. In support of TDT, salient in-group members indeed presume
honesty of each other and presume deception from their out-group. This study’s find-
ings support TDT’s emphasis on people having as their default mental setting a pre-
sumption of truth. Our stimulus concerned a political interview with a real journalist
questioning a politician for whom half the participants held the opposing party identi-
fication. This study used a suspicion-invoking trigger event which had never been
tested in such a way before. And the truth bias still prevailed. The resilience of the
truth bias appears remarkable.
The prediction that people would be more accurate in their deception detection
when the politician dodged, than when dodging was absent, was inspired by the folk
idea that people distrust politicians to the point of having a deception bias instead of
truth bias. Politics exemplifies verbal deception (McCornack et al., 2014), and the
media tend to focus on politicians misbehaving (Serota et al., 2010). However, the
truth bias remained largely intact, even toward politicians. Furthermore, our study
permitted partisan bias to manifest, with voters exposed to a deceptive politician of
their opposing party. Yet the results indicated that people’s ratings of the politician’s
trustworthiness hovered around 5 for their own party and 4 for the other side—on a
7-point scale, with 4 as the midpoint. So, people did not find the politician especially
dishonest or untrustworthy. Despite the differences, the ratings tended to be in the
middle. People were still truth-biased overall, with .623 judging the politician as not dodging. Even in the out-group, people were still truth-biased (.70 in-group and .54 out-group reported no dodge). The truth bias is so robust, reliable, and
“powerful” in human interactions that Levine “has never observed a lie bias in any of
his data,” including studies where he prompted participants to suspect deception
(Burgoon & Levine, 2010, p. 210; cf. Levine, Serota, & Shulman, 2010). The decep-
tion literature can now add partisan political interviews as a presumably deceptive
terrain in which the truth bias retains its robustness. In this first test of TDT in a
political context pitting truth bias versus deception bias, TDT appeared robust. We
mistakenly expected people to presume that the politician would dodge and thus
expected people to more accurately detect dodging than no-dodging. Yet the veracity
effect remained strong. Our prediction (in Hypothesis 3) was based on the naïve perception pervading political discourse in popular media and academic literature that people would presume the politician to be dodging. Yet truth
bias retained its power and the veracity effect arose—even with out-group
politicians.
Other work on intergroup deception would suggest that people consider lies told to
them by out-group members to be less acceptable than they consider lies told to them
by their in-group members (Dunbar, 2017; Dunbar et al., 2016). The present study
measured detection, not perceptions of acceptability, as we tested whether there were
differences in in-group versus out-group partisans spotting a dodge. However, future
research could examine whether political affiliation affects the acceptability of lying,
which could directly extend the work of Dunbar et al.
Differences in perceptions of deception by the opposing groups were not as stagger-
ing as we might have expected, considering the rampant partisan bickering that pervades
mass-mediated depictions of politics. While the predictions were affirmed as people’s
perceptions conveyed in-group trust and out-group distrust, the truth bias still surfaced.
Even in the out-group exposure condition, a majority (54%) reported that the politician
did not dodge. With in-group exposure, however, the truth bias appeared more perva-
sive—as predicted by TDT concerning salient in-groups. Seventy percent did not think
their in-group politician dodged. Although in-group members were far more likely to
say that their politician did not dodge questions, a majority of out-group observers also
did not perceive dodging—even though in the dodge condition the journalist asked the
politician for his plan on jobs and the economy and the politician answered by talking
about peace in the Middle East. Most (71%) of the people in the no-dodge condition
reported not seeing a dodge. This result is consistent with the truth bias and the veracity
effect extended to political deception. Based on the veracity effect, accuracy in decep-
tion detection experiments is largely a function of the induction’s base-rate of message
veracity (Levine et al., 2006). The veracity effect is inspired by two key theoretical
assertions of cognitive processing. First, people generally have a truth bias, because our
psychological default is a presumption of honesty. Therefore, second, the more truths
an observer is presented with, the more the observer will appear to spot.
This study suggested support for the “Spinoza effect” (Gilbert, 1993; Levine, 2014)
leading to the veracity effect (Levine et al., 1999) in politics. Just as decades of decep-
tion detection studies with dozens of experiments have revealed people are more likely
to be accurate in truth–lie stimuli when the speaker tells the truth (Park & Levine,
2001), this first deception detection experiment to test accuracy in a political context
also found support for the influence of the truth bias. The odds of being accurate in
dodge detection were 2.76 times greater when the politician did not dodge than when
he dodged. Despite popularized depictions to the contrary, when people are exposed to
a politician dodging, it seems unlikely that they will accurately detect it. Conversely,
people are likely to be inaccurate in their judgment when the politician dodges. At
least 70% of participants in the no-dodge condition and 70% of participants in the in-
group condition reported observing no-dodge. The vast majority of participants in the
no-dodge condition were accurate in their observations.
Whether or not the politician shared their party identification, the majority in a given condition reported no-dodge. Nonetheless, we can glean that partisan perceptions mani-
fested as cooperation and trust for the in-group, based on Brewer’s (1999) theory of
the evolution of social groups. People were significantly more likely to report that
their in-group politician did not dodge any questions and appeared to distrust the out-
group politician. TDT, as well as Brewer’s (1991) optimal distinctiveness theory,
would assert that people were far more inclined to report that their in-group politician
did not dodge—even when he did—because members need to trust each other and
believe they are more honest than the out-group. Partisans are strongly attached to
each other psychologically and presume cooperation to survive as a political group.
Conclusion
This article contributes to our understanding of partisan bias and deception in politics.
In line with SIT, out-group members perceive more dodging than in-group members—
even if both contingents tend toward the truth bias. We combined SIT and TDT, find-
ing support for their linkage. People’s accuracy in detecting dodges and nondodges
was moderated by whether the politician was from their in-group or out-group. A
dodge was more likely to be detected by out-group members, while no-dodging was
more likely to be detected by in-group members.
In support of TDT, salient in-group members are susceptible to deception, and the
truth bias counteracts partisan bias as out-group members seemed to believe the politi-
cian more than suspect him of deception. Fortunately, though, most people tell the
truth most of the time (Levine, 2014). This includes politicians giving far more on-
topic answers than deceptive evasions, based on content analyses of U.S. presidential
debates and press conferences (Clementson & Eveland, 2016). Humanity’s truth bias
overriding partisan bias in politics may be a healthy mental default.
Appendix
Script From Stimuli
Reporter: Hello, and welcome. I’m [name blinded], senior political reporter for the
[name of newspaper blinded]. We are honored to be joined today by [name
blinded], a candidate for the U.S. House of Representatives. We thank him for
joining us, to answer some questions about issues important in this campaign for
the House. Welcome.
Politician: Thank you for having me.
Reporter: I’d like to ask you about the environment. What is your stance on such
key issues as our dependence on oil, renewable energy, and the continued use
and depletion of our coal resources?
Politician: Sure, well I have a plan for cleaning up the environment and protecting
our natural resources. Our nation has increased oil production to the highest
levels in 16 years. Natural gas production is the highest it’s been in decades. We
have seen increases in coal production and coal employment. But we can’t just
produce traditional sources of energy. We’ve also got to look to the future. That’s
why we need to double fuel efficiency standards on cars. We ought to double
energy production from sources like wind and solar, as well as biofuels.
Reporter: I would like next to inquire about jobs. Our economy has strengthened
across certain sectors, but employment is not near where it needs to be. For
example, the manufacturing industry continues to sustain deep cuts and layoffs.
What is your plan to bolster the workforce and create jobs?
Politician:
*****************************
***ON-TOPIC VERSION***
I was just at a manufacturing facility, where some twelve hundred people lost their jobs.
Yes, I agree that we need to bring back manufacturing to America. This is about
bringing back good jobs for the middle class Americans. And [first name of reporter
blinded], I want you to know, and your newspaper to know, that’s what I’m going to
do. I will work to create incentives to start growing jobs again in this country.
***OFF-TOPIC VERSION***
I’ve got a strategy for the Middle East. And let me say that our nation now needs to
speak with one voice during this time, to defuse tensions. Look, we're going to
face some serious new challenges, and as your Congressman I have a plan to
deal with the Middle East.
*****************************
Reporter: Let me ask you about taxes. As you run for the U.S. House, what is your
tax plan? And what would you specifically do to benefit middle-income
Americans?
Politician: My view is that we ought to provide tax relief to people in the middle class.
As you know, [first name of reporter blinded], and as has been reported in your
paper, the people who are having a hard time right now are indeed middle-income
Americans. Folks in our state have seen their income go down by forty-three hundred dollars a year. I believe that the economy works best when middle-class families are getting tax breaks so that they’ve got some money in their pockets.
Reporter: Where do you stand on gun control? Do you favor new restrictions, or do you believe that in our current climate we handle gun ownership responsibly?
Politician: I believe law-abiding citizens ought to be able to own a gun. I believe
in background checks to make sure that guns don’t get in the hands of people
that shouldn’t have them. The best way to protect our citizens from guns is to
prosecute those who commit crimes with guns. And I am a strong supporter of
the Second Amendment.
Reporter: That concludes our interview. We thank [name blinded], candidate for
the U.S. House of Representatives, for being here and taking our questions.
Politician: Thank you [first name blinded], I appreciate you having me.
Reporter: From the [name of newspaper blinded], I am [full name blinded]. Thank you for
joining us.
Authors’ Note
Data collection was courtesy of Time-Sharing Experiments for the School of Communication
(TESoC) at The Ohio State University.
Acknowledgments
The author thanks William P. Eveland Jr., Susan L. Kline, and Hillary C. Shulman, the Editor,
and anonymous reviewers, for their incisive feedback, encouragement, and suggestions for
improving this work.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship,
and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/
or publication of this article: The author received financial support for the Qualtrics Panel data
collection from the Time-Sharing Experiments for the School of Communication (TESoC) at
The Ohio State University.
Notes
1. A total of 618 participants were included in this study after filtering out those who failed attention checks, the manipulation check, and other survey filters; the survey filters are described in the Method section. Per their protocol, Qualtrics could not provide the researchers with the total number of recruited participants who were filtered out, with the exception of the number who failed the manipulation check (see Note 3). Hence, we describe the filtering mechanisms but can only report the 618 participants included in the experiment.
2. Leaners and “weak” partisans were excluded because partisan effects tend to dissolve when polls include them (Fiorina, Abrams, & Pope, 2011), suggesting that the polarized opinions of partisans are isolated to those who identify strongly as such. Those who identify weakly with a party, or Independents who lean toward a party, may not demonstrate the in-group/out-group effects that provide a valid test of TDT’s assertions about salient group perceptions.
3. Each of the four conditions had a manipulation check failure rate of about 6%. The breakdown of participants filtered out for failing the manipulation check, by treatment condition, was as follows: 6.39% of those randomly assigned to the Democratic politician Dodge condition failed, 6.95% in the Democratic politician No-Dodge condition, 6.39% in the Republican politician Dodge condition, and 5.46% in the Republican politician No-Dodge condition.
References
Abrams, D., & Emler, N. (1992). Self-denial as a paradox of political and regional social identity: Findings from a study of 16- and 18-year-olds. European Journal of Social Psychology, 22, 279-295.
Abrams, J. R., Eveland, W. P., Jr., & Giles, H. (2003). The effects of television on group vitality: Can television empower nondominant groups? In P. J. Kalbfleisch (Ed.), Communication yearbook (Vol. 27, pp. 193-219). Mahwah, NJ: Lawrence Erlbaum.
Afifi, T. D., Afifi, W. A., Morse, C. R., & Hamrick, K. (2008). Adolescents’ avoidance tendencies and physiological reactions to discussions about their parents’ relationship: Postdivorce and nondivorced families. Communication Monographs, 75, 290-317.
American Psychological Association. (2016, October 13). APA survey reveals 2016 presidential election source of significant stress for more than half of Americans. Retrieved from http://www.apa.org/news/press/releases/2016/10/presidential-election-stress.aspx
Bavelas, J. B., Black, A., Chovil, N., & Mullett, J. (1990). Equivocal communication. Newbury Park, CA: Sage.
Bennett, J. (1984). A study of Spinoza’s ethics. New York, NY: Cambridge University Press.
Berelson, B. R., Lazarsfeld, P. F., & McPhee, W. N. (1954). Voting: A study of opinion formation in a presidential campaign. Chicago, IL: University of Chicago Press.
Bradac, J., Friedman, E., & Giles, H. (1986). A social approach to propositional communication: Speakers lie to hearers. In G. McGregor (Ed.), Language for hearers (pp. 127-151). New York, NY: Pergamon Press.
Braun, M. T., Van Swol, L. M., & Vang, L. (2015). His lips are moving: Pinocchio effect and other lexical indicators of political deceptions. Discourse Processes, 52, 1-20.
Brewer, M. B. (1991). The social self: On being the same and different at the same time. Personality and Social Psychology Bulletin, 17, 475-482.
Brewer, M. B. (1999). The psychology of prejudice: In-group love or out-group hate? Journal of Social Issues, 55, 429-444.
Brewer, M. B. (2001). The many faces of social identity: Implications for political psychology. Political Psychology, 22, 115-125.
Bull, P. (2008). “Slipperiness, evasion, and ambiguity”: Equivocation and facework in noncommittal political discourse. Journal of Language and Social Psychology, 27, 333-344.
Bull, P., & Mayer, K. (1993). How not to answer questions in political interviews. Political Psychology, 14, 651-666.
Buller, D. B., & Burgoon, J. K. (1994). Deception: Strategic and nonstrategic communication. In J. A. Daly & J. M. Wiemann (Eds.), Strategic interpersonal communication (pp. 191-223). Hillsdale, NJ: Lawrence Erlbaum.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory, 6, 203-242.
Buller, D. B., Burgoon, J. K., Buslig, A. S., & Roiger, J. F. (1994). Interpersonal deception VIII. Further analysis of nonverbal and verbal correlates of equivocation from the Bavelas et al. (1990) research. Journal of Language and Social Psychology, 13, 396-417.
Buller, D. B., Burgoon, J. K., White, C. H., & Ebesu, A. S. (1994). Interpersonal deception VII. Behavioral profiles of falsification, equivocation, and concealment. Journal of Language and Social Psychology, 13, 366-395.
Burgoon, J. K., & Levine, T. R. (2010). Advances in deception detection. In S. W. Smith & S. R. Wilson (Eds.), New directions in interpersonal communication research (pp. 201-220). Thousand Oaks, CA: Sage.
Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American voter. Chicago, IL: University of Chicago Press.
Carlin, R. E., & Love, G. J. (2013). The politics of interpersonal trust and reciprocity: An experimental approach. Political Behavior, 35, 43-63.
Clayman, S., & Heritage, J. (2002). The news interview: Journalists and public figures on the air. Cambridge, England: Cambridge University Press.
Clementson, D. E. (2018). Effects of dodging questions: How politicians escape deception detection and how they get caught. Journal of Language and Social Psychology, 37, 93-113. doi:10.1177/0261927X17706960
Clementson, D. E., & Eveland, W. P., Jr. (2016). When politicians dodge questions: An analysis of presidential press conferences and debates. Mass Communication and Society, 19, 411-429.
Duck, J. M., Hogg, M. A., & Terry, D. J. (1995). Me, us and them: Political identification and the third-person effect in the 1993 Australian federal election. European Journal of Social Psychology, 25, 195-215.
Dunbar, N. E. (2017). Intergroup deception. In H. Giles & J. Harwood (Eds.), Oxford encyclopedia of intergroup communication. New York, NY: Oxford University Press. doi:10.1093/acrefore/9780190228613.013.486
Dunbar, N. E., Gangi, K., Coveleski, S., Adams, A., Bernhold, Q., & Giles, H. (2016). When is it acceptable to lie? Interpersonal and intergroup perspectives on deception. Communication Studies, 67, 129-146.
Ehrlich, G. A., & Gramzow, R. H. (2015). The politics of affirmation theory: When group-affirmation leads to greater in-group bias. Personality and Social Psychology Bulletin, 41, 1110-1122.
Ekman, P. (2009). Telling lies: Clues to deceit in the marketplace, politics, and marriage (4th ed.). New York, NY: W.W. Norton.
Ekström, M. (2009). Announced refusal to answer: A study of norms and accountability in broadcast political interviews. Discourse Studies, 11, 681-702.
Fiorina, M. P., Abrams, S. J., & Pope, J. C. (2011). Culture war? The myth of a polarized America (3rd ed.). Columbus, OH: Longman.
Gallup. (2016). Honesty/ethics in professions. Retrieved from http://www.gallup.com/poll/1654/honesty-ethics-professions.aspx
Gerber, A. S., Huber, G. A., & Washington, E. (2010). Party affiliation, partisanship, and political beliefs: A field experiment. American Political Science Review, 104, 720-744.
Gilbert, D. T. (1993). The assent of man: The mental representation and control of belief. In D. M. Wegner & J. W. Pennebaker (Eds.), Handbook of mental control (pp. 57-87). Englewood Cliffs, NJ: Prentice Hall.
Gilbert, D. T., Krull, D. S., & Malone, P. S. (1990). Unbelieving the unbelievable: Some problems in the rejection of false information. Journal of Personality and Social Psychology, 59, 601-613.
Gilbert, D. T., Tafarodi, R. W., & Malone, P. S. (1993). You can’t not believe everything you read. Journal of Personality and Social Psychology, 65, 221-233.
Green, D., Palmquist, B., & Schickler, E. (2002). Partisan hearts and minds: Political parties and the social identities of voters. New Haven, CT: Yale University Press.
Greene, S. (1999). Understanding party identification: A social identity approach. Political Psychology, 20, 393-403.
Greene, S. (2004). Social identity theory and party identification. Social Science Quarterly, 85, 136-153.
Grice, H. P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
Groenendyk, E. W., & Banks, A. J. (2014). Emotional rescue: How affect helps partisans overcome collective action problems. Political Psychology, 35, 359-378.
Harris, S. (1991). Evasive action: How politicians respond to questions in political interviews. In P. Scannell (Ed.), Broadcast talk (pp. 76-99). Newbury Park, CA: Sage.
Harwood, J. (2014). Easy lies. Journal of Language and Social Psychology, 33, 405-410.
Holan, A. D. (2015, December 11). All politicians lie: Some lie more than others. The New York Times. Retrieved from https://nyti.ms/2jGR2JJ
Huddy, L. (2001). From social to political identity: A critical examination of social identity theory. Political Psychology, 22, 127-156.
Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.
Kelly, C. (1988). Intergroup differentiation in a political context. British Journal of Social Psychology, 27, 319-332.
Lau, R. R., & Redlawsk, D. P. (2001). Advantages and disadvantages of cognitive heuristics in political decision making. American Journal of Political Science, 45, 951-971.
Lazarsfeld, P. F., Berelson, B., & Gaudet, H. (1944). The people’s choice: How the voter makes up his mind in a presidential campaign. New York, NY: Columbia University Press.
Leonardelli, G. J., Pickett, C. L., & Brewer, M. B. (2010). Optimal distinctiveness theory: A framework for social identity, social cognition, and intergroup relations. Advances in Experimental Social Psychology, 43, 63-113.
Levine, T. R. (2014). Truth-Default Theory (TDT): A theory of human deception and deception detection. Journal of Language and Social Psychology, 33, 378-392.
Levine, T. R., Kim, R. K., & Blair, J. P. (2010). (In)accuracy at detecting true and false confessions and denials: An initial test of a projected motive model of veracity judgments. Human Communication Research, 36, 81-101.
Levine, T. R., Kim, R. K., Park, H. S., & Hughes, M. (2006). Deception detection accuracy is a predictable linear function of message veracity base-rate: A formal test of Park and Levine’s probability model. Communication Monographs, 73, 243-260.
Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies: Documenting the “veracity effect.” Communication Monographs, 66, 125-144.
Levine, T. R., Serota, K. B., & Shulman, H. C. (2010). The impact of Lie to Me on viewers’ actual ability to detect deception. Communication Research, 37, 847-856.
Masip, J., Garrido, E., & Herrero, C. (2004). Defining deception. Anales de Psicologia, 20, 147-171.
McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59, 1-16.
McCornack, S. A., Morrison, K., Paik, J. E., Wisner, A. M., & Zhu, X. (2014). Information Manipulation Theory 2: A propositional theory of deceptive discourse production. Journal of Language and Social Psychology, 33, 348-377.
McCroskey, J. C., & Teven, J. J. (1999). Goodwill: A reexamination of the construct and its measurement. Communication Monographs, 66, 90-103.
Mullen, B., Brown, R., & Smith, C. (1992). In-group bias as a function of salience, relevance, and status: An integration. European Journal of Social Psychology, 22, 103-122.
Munro, G. D., Lasane, T. P., & Leary, S. P. (2010). Political partisan prejudice: Selective distortion and weighting of evaluative categories in college admissions applications. Journal of Applied Social Psychology, 40, 2424-2462.
Park, H. S., & Levine, T. R. (2001). A probability model of accuracy in deception detection experiments. Communication Monographs, 68, 201-210.
Pew Research Center. (2016). Partisanship and political animosity in 2016: Highly negative views of the opposing party—and its members. Retrieved from http://www.people-press.org/2016/06/22/partisanship-and-political-animosity-in-2016/
Rahn, W. M. (1993). The role of partisan stereotypes in information processing about political candidates. American Journal of Political Science, 37, 472-496.
Rogers, T., & Norton, M. I. (2011). The artful dodger: Answering the wrong question the right way. Journal of Experimental Psychology: Applied, 17, 139-147.
Romaniuk, T. (2013). Pursuing answers to questions in broadcast journalism. Research on Language and Social Interaction, 46, 144-164.
Sears, D. O., & Funk, C. L. (1999). Evidence of the long-term persistence of adults’ political predispositions. Journal of Politics, 61, 1-28.
Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The prevalence of lying in America: Three studies of self-reported lies. Human Communication Research, 36, 2-25.
Spinoza, B. (1982). Ethica (S. Shirley, Trans.). In S. Feldman (Ed.), The ethics and selected letters (pp. 31-233). Indianapolis, IN: Hackett. (Original work published 1677)
Tajfel, H. (1981). Human groups and social categories. New York, NY: Cambridge University Press.
Tajfel, H., & Turner, J. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33-47). Monterey, CA: Brooks.
Tajfel, H., & Turner, J. (1986). Social identity theory of intergroup behavior. In S. Worchel & W. G. Austin (Eds.), Psychology of intergroup relations (pp. 7-24). Chicago, IL: Nelson-Hall.
Verschuere, B., & Shalvi, S. (2014). The truth comes naturally! Does it? Journal of Language and Social Psychology, 33, 417-423.
Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14, pp. 1-59). New York, NY: Academic Press.
Author Biography
David E. Clementson (PhD, The Ohio State University) is an assistant professor in the Department of Communication Studies at California State University, Sacramento. His research examines public figures dodging questions. Formerly a professional journalist, political campaign manager, and public relations director, he has run successful campaigns for Democrats and Republicans.
The concept of vitality was first introduced to account for factors affecting language use in the late 1970s. Today, vitality has developed into a broader theory addressing issues related to ethnicity, gender, age, and intergroup communication. Theorists propose that the more vitality a group has, the more likely that group will survive as an entity in an intergroup context. Intergroup researchers claim that perceptions of vitality may be influenced by mass media. This relationship has yet to be explored in detail. Based on mass media theory, we offer a number of contrasting propositions about how television might function to impact subjective group vitality and, ultimately, intergroup communication. The integration of relevant intergroup and mass communication literature reflects the extent to which television empowers minority groups as well as how levels of empowerment are manifested in nondominant groups’ behaviors.