LOW AUDIO QUALITY AND CREDIBILITY 1
Sound and Credibility in the Virtual Court: Low Audio Quality Leads to Less
Favourable Evaluations of Witnesses and Lower Weighting of Evidence
Elena Bild1, Annabel Redman1, Eryn J. Newman1, Bethany R. Muir1, David Tait2,
and Norbert Schwarz3
1 Research School of Psychology, The Australian National University, ACT 3600, Australia
2 School of Humanities and Communication Arts, Western Sydney University, NSW 2751,
Australia.
3 Mind and Society Center, University of Southern California, CA 90089-1061, USA.
Law & Human Behavior, in press
This manuscript has been accepted for publication in Law & Human Behavior. It is peer-reviewed
but has not yet undergone copy editing.
Author Note. Eryn J. Newman https://orcid.org/0000-0001-8663-7173; Bethany R. Muir
https://orcid.org/0000-0002-1981-6945; Norbert Schwarz https://orcid.org/0000-0002-8868-7067.
Elena Bild is now at the School of Psychology, University of Queensland, QLD 4072, Australia.
We have no known conflict of interest to disclose. Open Science and Pre-registration link:
https://osf.io/vf34x/?view_only=3cffa073590749e69415a32878be7146
Correspondence concerning this article should be addressed to Eryn J. Newman, Research School
of Psychology, The Australian National University, Canberra, ACT 2600, Australia. Email:
eryn.newman@anu.edu.au
Abstract
Objectives: Recent virtual court proceedings have seen a range of technological challenges, producing not only trial interruptions but also cognitive interruptions in processing evidence. Very little empirical
research has focused on how the subjective experience of processing evidence affects evaluations of
trial participants and trial decisions. Metacognitive research shows that the subjective experience of
ease or difficulty of processing information can affect evaluations of people, belief in information, and
how a given piece of information is weighted in decision making. Hypotheses: We hypothesised that
when people experience technological challenges while listening to eyewitness accounts, such as poor-
quality audio, the difficulty in processing evidence would lead them to evaluate a witness more
negatively, influence memory for key facts, and lead people to weigh that evidence less in final trial
judgments. Method: Across three experiments (total N = 593), participants listened to audio clips of
witnesses describing an event, with one presented in high-quality audio and one presented in low-
quality audio. Results: When people heard witnesses present evidence in low-quality audio, they rated the witnesses as less credible, reliable, and trustworthy (Experiment 1, d = 0.32; Experiment 3, d = 0.55); had poorer memory for key facts presented by the witness (Experiment 2, d = 0.44); and weighted witness evidence less in final guilt judgments (Experiment 3, ηp² = .05). Conclusion: These results show that audio quality influences perceptions of witnesses and their evidence. Because these variables can contribute to trial outcomes, audio quality warrants consideration in trial proceedings.
Keywords: virtual courtrooms; audio quality; cognitive fluency; witness credibility
Public Significance Statement
Low audio quality led people to evaluate witnesses more negatively and reduced memory
performance for presented evidence. Even when memory for key facts was held constant, poor
audio quality led people to weigh evidence less heavily in final guilt judgments. These findings
suggest several policy implications regarding the conditions under which remote testimony is
presented in the courtroom and the critical role of controlling for a high-quality acoustic
experience in the physical courtroom more generally.
Sound and Credibility in the Virtual Court: Low Audio Quality Leads to Less
Favorable Evaluations of Witnesses and Lower Weighting of Evidence
From intermittent internet, to video freezing, to full audio dropout: these are just some of the reasons trials have been paused, witnesses recalled, and remote hearings abandoned in virtual court settings (e.g., Lapinski et al., 2020; Todd et al., 2020). These interruptions have obvious consequences for human judgment and cognition: if the internet drops out, an eyewitness presenting evidence may end up repeating themselves, and if the audio disappears, jury members will be missing out on critical evidence. But more insidious technological
glitches may also affect trial proceedings. An echo or static on the audio may be enough to
influence jury decision making, even when the jury is not actually missing out on content and
merely experiences some difficulty in listening to evidence. Such smaller glitches are rarely enough to pause a trial or to have a witness recalled, but they may nonetheless systematically affect human judgment and warrant consideration by the courts.
Over the last several years, technology has been increasingly incorporated into the criminal justice system. One great benefit is that technology allows various actors to contribute remotely when they cannot attend the court in person. In 2020, courts all over the world rapidly
turned to remote testimony, with over 60 countries implementing some form of virtual court
during the COVID-19 restrictions (Remote Courts Worldwide, 2020). In England and Wales,
for instance, shortly after lockdown procedures were in place in early 2020, there was a 500% increase in audio hearings and a 340% increase in video hearings across all courts and tribunals
(Ryan et al., 2020). There are many advantages to this remote or virtual court shift, including
lowered costs, more flexibility, and greater access to resources for rural and disadvantaged
communities (e.g., Gourdet et al., 2020). But remote attendance changes the courtroom
experience for all involved and its impact on decision making is yet to be well understood
(McIntyre et al., 2020; Rossner et al., 2021). Moreover, policy and procedures are in their
infancy, with significant variations in how remote procedures are conducted (e.g., Scotland
using cinemas to accommodate juries who watched the trial on the large screen; Barrie, 2020;
also see Norton Rose Fulbright, 2020). Responses to disruptions in audio visual links are also
likely to differ between trials, such as the procedure for asking parties to disconnect and
reconnect when their audio connection is poor (Byrom et al., 2020). While Audio Visual Links (AVL) and Zoom platforms allow for the use of both video and audio inputs, we focus on audio here because it is also an alternative form of court attendance: one may join remotely using audio only in some jurisdictions (e.g., Australia: Federal Circuit Court of Australia, 2020; UK: Byrom et al., 2020; USA: Totenberg, 2020), and one can also turn off video when a connection is poor, but audio evidence is critical in contributing content in a case. Anecdotal examples capture the
challenges with low-quality audio:
On repeated occasions throughout the proceedings that afternoon one or more of the
parties “dropped out,” necessitating a communication between them and my tipstaff
advising of the steps they should take to “dial back in”. Reconnection was successful
on each occasion, although not without interruption to the course of the proceedings.
From time to time counsel were also difficult to hear and on other occasions their
submissions were fractured or time delayed. Despite the valiant endeavours of the court
reporters, the integrity of the transcript suffered as a result. (Boys & Sams, 2020, “The
effectiveness of the virtual courtroom” section)
Indeed, the third author experienced these audio issues when recently presenting expert
evidence remotely. While the video connection was intact, the quality of audio in the physical
courtroom was such that evidence was repeated and clarified several times over (see also
Rowden & Wallace, 2019, for an extended critique of the complexities involved in remote
expert testimony). Indeed beyond the context of a worldwide pandemic, courts rely on remote
testimony in a number of contexts. Witnesses can join hearings remotely via AVL for good
cause for example (for a discussion see Fobes, 2020), and when children present evidence in a
closed-circuit context, they are also at the mercy of the quality of court technology in the
physical courtroom. Complete audio dropout and prolonged freezing are strong diagnostic cues to a judge that information is being lost, with potentially significant effects on human judgment. Smaller glitches in audio, however, such as some static or some echo, may go undetected or may seem of little concern even when detected. Whether there is a smooth connection is not
easily controlled by a speaker, who is usually at the mercy of their Wi-Fi and network, but this
variable may have significant consequences for how a speaker is perceived and how their
evidence is received by a jury. Audio quality is also relevant in in-person settings, where the architecture, technology in the court, and distance can all bear on the quality of the listener’s experience. Although audio quality is independent of the reliability of a given witness and the content of a witness report, it may nonetheless affect a listener’s perceptions of a witness, what they say, and the extent to which their evidence is used in decision making. The present research
tests this possibility.
Metacognitive Experiences
A large body of research in social and cognitive psychology shows that processing the
content of a message is accompanied by subjective experiences of ease or difficulty, which can
shape how much people trust the communicator, agree with the message, remember its details,
and rely on it when making a decision (for reviews, see Alter & Oppenheimer, 2009; Schwarz,
2015; Schwarz et al., 2021). These metacognitive experiences of ease or difficulty are often
referred to as fluency experiences, denoting how “fluently” some cognitive operation can be
executed. The inferences that people draw from their metacognitive experiences are often
warranted. For example, convoluted messages are more difficult to process than logical ones
and arguments that are at odds with one’s knowledge are more difficult to follow than
arguments that are not (Schwarz, 2015). But, unfortunately, people are more sensitive to their own fluency experience than to the source of that experience: they notice that something is easy or difficult, but do not realize where that difficulty comes from. For example, is a message difficult to process because its content makes no sense or because the audio is difficult to understand? In most cases, people attribute their difficulty to the content they are focused on rather than to the influence of incidental background variables. Hence, numerous incidental
variables can influence recipients’ evaluation of the content. For example, in one study people
saw claims printed in easy to perceive high colour contrast fonts or in difficult to perceive low
colour contrast fonts. When participants were asked to rate the truth of those claims, people
were less likely to believe a given claim when it was presented in low colour contrast font than
in high colour contrast font (Reber & Schwarz, 1999). Similar effects are found with phonetic
or audio-perceptual experiences. Accents are one example: when audio information is delivered in a foreign accent, people rate the information as less likely to be true (Lev-Ari & Keysar, 2010), provide harsher sentences to a defendant (Romero-Rivas et al., 2021), and find eyewitnesses less credible (Frumkin, 2007). Other research shows that people evaluate
academic conference talks more negatively when there is a (simulated) slight echo on the
microphone (Newman & Schwarz, 2018; see also, Fiechter et al., 2018). In short, whenever
information is difficult to perceive, understand, or imagine, when processing is clunky or
strained, recipients evaluate the substantive content of the information more negatively.
The same holds for the impressions people form of a speaker. For example, people evaluate speakers with a difficult-to-pronounce name more negatively than speakers with an easy-to-pronounce name (even from the same world region): those with complicated names are rated as less trustworthy, less familiar, and more dangerous (Laham et al., 2012; Newman et
al., 2014). In one study, people were more concerned about making a purchase when an online seller had a complicated or difficult-to-pronounce username: they were less confident that the product description was accurate, wondered whether the seller would honor the return policy, and found the seller overall less trustworthy (Silva et al., 2017). These effects of
pronunciation and usernames are robust and hold even when people have access to more
objective information about the seller’s reputation. Similarly, faces that have been seen less
often, and are thus less easy to process compared to repeated faces, seem relatively less sincere
and honest (Brown et al., 2002; see also Weisbuch & Mackie, 2009). And even scientific
experts are perceived to be less competent when there is background noise in a radio interview
(Newman & Schwarz, 2018). Taken together, incidental variables that produce cognitive difficulty in processing information about people, events, and products can have systematic consequences for human judgment: when processing is difficult, people arrive at more negative evaluations. This holds across all modes of information presentation, from the auditory
presentations discussed here to the readability of written material (for reviews, see Alter &
Oppenheimer, 2009; Schwarz et al., 2021).
Fluency in the Courtroom
Assessments of credibility, reliability, and perceptions of trust are crucial factors within
the context of the criminal justice system (Martire et al., 2020). Indeed a vast literature has
examined the psychological variables that influence these assessments for instance, whether
a trial participant is perceived to have high or low SES (Espinoza & Willis-Esqueda, 2008),
what clothing they are wearing (Fontaine & Kiger, 1978), or whether they are perceived to be
in custody (Rossner et al., 2017), to name just a few. This existing research shows that relatively
tangential attributes of a witness can reliably influence perceived witness credibility (e.g.,
McKimmie et al., 2013; see also Koehler et al., 2016). But the metacognitive research reviewed
above suggests that the same will hold for variables that are even more distal from the witness,
such as the audio quality in which a witness statement is recorded or the print font in which it
is transcribed. We provide a first test of this possibility by focusing on a variable that is
important in physical court spaces and can vary depending on technology, but is particularly
relevant in the virtual courtrooms that proliferated in response to the COVID-19 pandemic,
namely audio quality.
In Experiment 1, we asked people to listen to children giving testimony about an
innocuous experience (going to the doctor or going to the movies) and then asked each
participant to rate the credibility, reliability and trustworthiness of the witness. Half of the
time, the audio was altered such that there was either a slight delay creating an echo when the
witness spoke or the audio was enhanced such that the witness was particularly easy to hear.
In Experiment 2, we used the same materials and instead of testing perceptions of the
eyewitness, we tested people’s memory for the evidence. Finally, in Experiment 3, we created
new materials from adult eyewitnesses and extended the design such that people first read a full trial summary and estimated the guilt of the defendant. Subsequently, they encountered additional eyewitness testimony (in high- or low-quality audio) and were asked to evaluate
the witness and estimate the guilt of the defendant for a second time. We expected that across
all experiments, audio quality would influence evaluations of the witness and their evidence:
specifically that people would evaluate testimony less favorably when it was presented in low
rather than high audio quality. In addition, we expected that low audio quality might reduce people’s memory for the content of the evidence, and that, even when we controlled for memory, low audio quality might still reduce the extent to which people weighted that evidence in their impressions of guilt.
Experiment 1
In this first experiment, the primary research question was whether audio quality would
influence impressions about an eyewitness and their testimony. We expected that the same
testimony, provided by the same witness, would be evaluated less favorably when presented in
low rather than high-quality audio. Note that audio quality is not only irrelevant to the content
of the testimony, but also a technical variable over which the witness has little control. The key
dependent variables were ratings of credibility, trust and reliability of the eyewitness.
Method
Participants and Design
We posted 200 slots on Amazon Mechanical Turk, requiring a 90% approval rate; 195 participants fully completed the study (70 female; age range 18-67, M = 34.57, SD = 10.67). We manipulated audio quality (high or low) within participants, who
heard descriptions of two (unrelated) target events. Key dependent variables were ratings of
credibility, reliability, and trust. We preregistered our N as 200, assuming a small effect size, power of .80, and an alpha level of .05. We also include sensitivity analyses for each experiment. With alpha set at .05 and N = 195, we had 80% power to detect a d of 0.20. With more conservative exclusions, applied to be more confident that people had listened carefully to the testimony (a reduced sample of 157 after 38 exclusions), we had 80% power to detect a d of 0.23. As
noted below, we find the same significant pattern of results with these participants included or
excluded. Note all pre-registrations, materials and data can be found:
https://osf.io/vf34x/?view_only=3cffa073590749e69415a32878be7146.
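The sensitivity values above can be approximated from the standard power formula for a paired t-test. The sketch below uses a normal approximation; the helper `sensitivity_d` is illustrative, and its output may differ from dedicated power software in the second decimal place.

```python
from scipy.stats import norm

def sensitivity_d(n, alpha=0.05, power=0.80):
    """Smallest Cohen's d detectable in a two-sided paired t-test with n
    pairs, via the normal approximation (z_(1-alpha/2) + z_power) / sqrt(n)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value
    z_power = norm.ppf(power)
    return (z_alpha + z_power) / n ** 0.5

d_full = sensitivity_d(195)     # ~0.20, the full-sample value reported above
d_reduced = sensitivity_d(157)  # ~0.22-0.23 for the reduced sample
```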
Materials and Procedure
In this experiment, participants were asked to act as though they were jurors, and
imagine they were listening to a child’s testimony in a courtroom. They were told that they
“will hear some short snippets of testimony from children describing everyday events.
Children's testimony is often presented via an audio link or video link, so what you hear today
is in a similar format to what jurors might encounter in a real courtroom setting.” The key
manipulation was that one of the audio clips was presented in high-quality audio and one was
presented in low-quality audio. In developing the audio clips, two male American children aged
ten and eleven years old were recorded on an iPhone reading a script of two everyday events:
a trip to the movies and a routine doctor’s check-up. Both clips were just over 2 minutes in
length. The recordings were edited using GarageBand software to create high- and low-quality
versions. The low-quality version was created using a ‘delay’ effect which increased the echo,
as though the speaker were in a large room. The high-quality version was created using a ‘noise
gate’ effect which removed the background noise and distortion, and a ‘bright vocal’ effect
which enhanced the child’s voice. The recordings were pilot-tested with a few volunteers who
listened to the audio recordings and were able to accurately transcribe (allowing pauses for
writing) the low-quality testimony, suggesting that the audio manipulations did not obscure the
content. More importantly, we tested this directly in the study when we asked people to provide
a short description of the witnessed event. Across counterbalanced conditions, participants
either heard about (1) a doctor’s check-up in high-quality audio and a movie visit in low-quality
audio, or (2) a movie visit in high-quality audio and a doctor’s check-up in low-quality audio.
The order of these clips was also counterbalanced between participants. Participants were
randomly assigned to conditions via Qualtrics.
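For illustration, the kind of 'delay' manipulation described above (an added echo, as though the speaker were in a large room) can be approximated digitally. This sketch is hypothetical and is not the GarageBand processing applied to the actual stimuli:

```python
import numpy as np

def add_echo(signal, sr=44100, delay_s=0.25, decay=0.5):
    """Mix a signal with a delayed, attenuated copy of itself (a simple
    single-tap echo), then renormalize so the peak does not clip."""
    signal = np.asarray(signal, dtype=float)
    d = max(1, int(delay_s * sr))  # delay expressed in samples
    out = signal.copy()
    out[d:] += decay * signal[:-d]  # add the attenuated, delayed copy
    return out / np.max(np.abs(out))
```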
After listening to each clip, participants were asked to evaluate the child using five-point rating scales in response to three key questions: (1) How credible was the child? (2) How reliable was the child? (3) How trustworthy was the child? (e.g., 1 ‘Not credible at all’ to 5 ‘Very credible’). While we have combined these ratings into a general impression of the witness for high-quality and low-quality audio (both Cronbach’s α > .92), we find the same significant pattern of results when we analyse credibility, reliability, and trust separately.
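The internal-consistency statistic reported above (Cronbach's alpha) can be computed from the three ratings per participant. A minimal sketch, with made-up example data rather than the study's data:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_participants x n_items) array:
    k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_var_sum = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical credibility/reliability/trust ratings for four participants:
example = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]]
alpha = cronbach_alpha(example)  # high alpha: the three items move together
```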
Participants were also asked to rate how easy it was to understand the child and to provide a brief description of the child’s testimony. The manipulation check confirmed that testimony in high-quality audio was rated as much easier to understand (M = 4.07, SD = 0.95) than testimony in low-quality audio (M = 2.01, SD = 1.12), t(194) = 21.64, p < .001, Hedges’s g_av, to correct for bias in d (as reported throughout): d = 1.98, 95% CI [1.72, 2.25]. Similar sound effects were used across studies; this manipulation check of audio quality was used only in Experiment 1.
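The bias-corrected effect size for this paired design can be reproduced from the reported means and SDs. The sketch below assumes the d_av formulation (mean difference over the average SD, times the Hedges small-sample correction); the exact variant the authors used is not stated in the text, so treat it as illustrative:

```python
def hedges_g_av(m1, sd1, m2, sd2, n):
    """Paired-design standardized mean difference: Cohen's d_av
    (mean difference / average SD) with Hedges's bias correction."""
    d_av = (m1 - m2) / ((sd1 + sd2) / 2)
    correction = 1 - 3 / (4 * (n - 1) - 1)  # small-sample bias correction
    return d_av * correction

# Manipulation-check means from Experiment 1 (N = 195):
g = hedges_g_av(4.07, 0.95, 2.01, 1.12, 195)  # ~1.98, matching the reported d
```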
Results and Discussion
The primary research question in this study was whether audio quality influenced
people’s perceptions of a witness statement and impressions of a witness. As Figure 1 shows, the answer is ‘yes’: when the audio was difficult to hear, participants rated the witness less favorably than when the audio quality was clear. At the item level of analysis, this audio effect was found in both counterbalancing conditions.
To ensure data quality, we examined whether people could describe the nature of each testimony. Of the 195 participants, 38 were classified as not being able to explain, in a short textbox, what the testimony was about (for at least one of the witness accounts). We excluded those participants from the analysis presented here, though we find the same significant pattern of results when we examine the full sample. People evaluated witnesses more favorably when they heard the witness present testimony in high-quality audio (M = 4.07, SD = 0.84) than in low-quality audio (M = 3.72, SD = 0.89), t(156) = 5.44, p < .001, d = 0.32, 95% CI [0.25, 0.56].
Replicating the pattern above, when we included witness combination as a between-subjects variable (whether people heard the doctor description or the movie description in high-quality audio), a 2 (audio quality: high, low) × 2 (witness combination: doctor high/movie low, doctor low/movie high) mixed ANOVA showed the expected main effect of audio quality (as in the paired t-test above), F(1, 155) = 31.49, p < .001, ηp² = .17, 90% CI [.09, .25], and an interaction with witness combination, F(1, 155) = 4.54, p = .035, ηp² = .03, 90% CI [.001, .08]. Follow-up analyses showed that the audio effect held within both witness combination conditions, but the effect of audio was larger for the fluent doctor/disfluent movie combination. This may be because the doctor event was relatively more schematic than the movie event, but such a
conclusion warrants further investigation. People evaluated the high audio quality witness (high-quality audio doctor, M = 4.15, SD = 0.79; high-quality audio movie, M = 4.00, SD = 0.88) more favourably than the low audio quality witness (low-quality audio movie, M = 3.65, SD = 0.83; low-quality audio doctor, M = 3.78, SD = 0.95). Paired comparisons support the pattern observed in the means (fluent doctor/disfluent movie: t(73) = 5.09, p < .001, d = 0.61, 95% CI [0.36, 0.87]; disfluent doctor/fluent movie: t(82) = 2.65, p = .010, d = 0.24, 95% CI [0.04, 0.44]). This effect of audio quality appeared to be more robust for the fluent doctor/disfluent
movie condition, as seen in the means, effect sizes, and paired comparisons. There was also no
main effect for the witness combination condition, F < 1.00.
In sum, evaluations of witnesses were dependent on the technical quality of the audio.
This effect was independent of the content of the testimony, with a similar pattern being found
when the child was testifying about a visit to the movies or to the doctor. These results highlight
the importance of audio fluency for perceptions of witnesses.
Figure 1
Mean ratings of witnesses across high and low audio quality conditions. The top panel is collapsed across witnessed event; the lower panel presents means at an item-level analysis for the doctor and movie witness statements. All figures present the data with exclusions applied; however, data patterns and significance are the same with the full or reduced sample.
Note. Error bars represent 95% CIs.
Experiment 2
In Experiment 2 we explored the possibility that audio quality may not only influence
people’s perceptions of a witness, but also how they remember facts presented in testimony.
Some research suggests that difficulty in processing can act as a ‘problem signal’ that conveys
that something may be off and require closer attention (Oppenheimer, 2008; Pieger et al., 2017;
Song & Schwarz, 2008). This kind of signal could produce the pattern of results we found in
Experiment 1, with participants rating the witness less favorably. However, this problem signal
can also influence the processing approach that people apply in a given task. Indeed, an
experience of disfluency can lead people to adopt a more analytical processing strategy. For
instance, in one study, participants were better at detecting misinformation in questions when those questions were presented in a disfluent format (e.g., a difficult-to-read font), being less likely to rely on intuition (Song & Schwarz, 2008; see also Alter et al., 2007; Liu et
al., 2020). Arguably, this more analytical processing style should result in a better memory for
materials presented in a disfluent format, as has been found in some studies in educational
contexts (for a review, see Oppenheimer & Alter, 2014). While there is some support for
superior memory for disfluent information, other studies do not find this memory advantage
(Eitel et al., 2014; Rummer et al., 2016). Indeed, a negative influence of disfluency on the
perceived credibility of a witness (as observed in Experiment 1) may even undermine the
listener’s motivation to pay close attention to the details. If so, memory for the testimony should
be worse under disfluent processing. We address these possibilities in Experiment 2, examining
whether disfluency influences participants’ memory of the testimony, and if so, which direction
this influence takes. Jurors’ memory for witness testimony is crucial to trial verdicts and indeed,
trials may be paused or testimony restarted when audio completely drops out or freezes (e.g.,
Byrom et al., 2020). But smaller background disfluency may not trigger such procedural interventions in practice. Here we consider whether such interventions may also be warranted, given the consequences of more general audio disfluency (rather than complete dropout or freezing) for memory for testimony.
Method
Participants and Design
We posted 200 slots on Amazon Mechanical Turk, requiring a 90% approval rate; 193 participants fully completed the study (82 female; age range 18-69, M = 33.02, SD = 9.19). Participants who had completed Experiment 1 were unable to participate in Experiment 2. We preregistered our N as 200,
assuming a small effect size. We also conducted a sensitivity analysis: with alpha set at .05 and N = 193, we had 80% power to detect a d of 0.20. With more conservative exclusions, applied to be more confident that people had listened to the full testimony (by requiring that, for both witness accounts, people clicked to the next page only after the audio clip had finished), leaving a reduced sample of 128 (after 65 exclusions), we had 80% power to detect a d of 0.25. We find the
same significant pattern of results for memory with these participants included or excluded.
The audio quality was manipulated within participants as in Experiment 1. The key dependent
variable was memory of factual evidence, measured as discrimination (d’) and response bias
(c) using signal detection analysis (Stanislaw & Todorov, 1999).
Materials and Procedure
Experiment 2 was identical to Experiment 1 with the following exceptions. After
listening to the audio clip, instead of completing ratings about the eyewitnesses, participants
were asked to complete a recognition test consisting of 20 statements about the testimony,
designed to test their memory for key facts. Test items consisted of statements such as “The
doctor took the boy’s height.” Participants had to decide whether a statement was ‘old’ (an
accurate description of content included in the testimony) or ‘new’ (a related but inaccurate
description of content included in the testimony, such as “The doctor listened to the boy’s
heart”). There were 10 ‘old’ and 10 ‘new’ statements included in the recognition test, all
provided in supplementary materials.
Results and Discussion
The primary research question in this study was whether audio quality influences people’s
memory for information presented in testimony. The answer was ‘yes’, with participants being
significantly better at discriminating between old and new test items when the audio quality
was high than when it was low. That is, disfluency did not lead to better memory for testimony,
but rather, reduced memory for key facts.
To test this research question, signal detection theory parameters were calculated to
measure participants’ sensitivity on the recognition tests: their ability to accurately discriminate
between old and new items. Responses were classified as ‘hits’, where an item included in the testimony was correctly identified as ‘old’, or ‘false alarms’, where an item not included in the testimony was incorrectly identified as ‘old’ at test. From the existing literature,
it was unclear whether participants would have higher or lower discrimination after listening
to low-quality audio stimuli. In line with the fluency literature more generally, however, it was expected that participants would show an ‘old’ bias (a familiarity bias) for high-quality audio and a ‘new’ bias for low-quality audio.
As Figure 2 shows, a paired-samples t-test between the high- and low-quality audio conditions showed that participants had higher discrimination when they listened to testimony in high-quality audio, t(127) = 3.81, p < .001, d = 0.44, 95% CI [0.21, 0.68]. This pattern held when we included witness combination (whether the doctor description or the movie description was presented in high-quality audio) as a factor in the analysis. A 2 (audio quality: high, low) x 2 (witness combination: doctor high/movie low, doctor low/movie high) mixed ANOVA, with witness combination as a between-subjects variable, showed the expected main effect of audio quality, F(1, 126) = 14.29, p < .001, ηp² = .10, 90% CI [.03, .19], and no main effect of witness combination, F(1, 126) = 2.32, p = .130, ηp² = .02, [.00, .07]. The interaction with witness combination did not reach significance, F(1, 126) = 1.66, p = .200, ηp² = .01, [.00, .06], indicating that the effect of audio quality held regardless of which witnessed event appeared in high- or low-quality audio.
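For readers who want to reproduce this style of analysis, the paired comparison above can be sketched in a few lines of Python. This is an illustrative sketch with made-up placeholder scores, not the study's data:

```python
import numpy as np
from scipy import stats

def paired_comparison(high, low):
    """Paired-samples t-test plus Cohen's d computed on the difference scores."""
    high, low = np.asarray(high, float), np.asarray(low, float)
    diff = high - low
    t, p = stats.ttest_rel(high, low)   # paired t-test on the two conditions
    d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired (within-subject) data
    return t, p, d

# Hypothetical d' scores for four participants (illustration only)
high = [2.0, 4.0, 6.0, 8.0]
low = [1.0, 2.0, 3.0, 4.0]
t, p, d = paired_comparison(high, low)
```

Note that this d is the within-subjects standardization (mean difference over the SD of the difference scores); other standardizations exist and give different values.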
Closer inspection revealed that while high-quality audio led to higher hits (M = 0.72,
SD = 0.24) than low-quality audio (M = 0.66, SD = 0.20), t(127) = 2.75, p = .010, d = 0.27,
95% CI [0.08, 0.47], low-quality audio led to more false alarms (M = 0.38, SD = 0.19) than
high-quality audio (M = 0.32, SD = 0.19), t(127) = 2.87, p = .005, d = 0.31, [0.11, 0.52]. In
short, poor audio quality impaired memory performance on both measures by reducing the
correct identification of old items and increasing the erroneous acceptance of new items as
previously seen. Participants’ bias to respond ‘new’ or ‘old’ (as captured by c values) did not differ significantly when the audio quality was low (M = -0.06, SD = 0.64) versus high (M = -0.07, SD = 0.72), t(127) = 0.11, p = .911.
Figure 2
Mean discrimination for testimony across high and low audio quality conditions. The top panel
is collapsed across witnessed event, the lower panel presents means at an item-level analysis
for the doctor and movie witness statements.
Note. Error bars represent 95% CIs.
In sum, the results of Experiment 2 show that participants had better memory for
testimony when it was presented in high-quality audio. This is consistent with the robust finding that disfluency impairs the perceived trustworthiness of messages, as reviewed in Schwarz et al. (2021) and observed in Experiment 1, as well as in Newman and Schwarz (2018), for manipulations of audio quality. Hence, participants may have attended less to witnesses who
seemed to be less credible, reliable and accurate, resulting in reduced memory. A negative
impact of disfluent processing on memory performance has been observed in several studies
(Eitel et al., 2014; Kühl et al., 2014; Rummer et al., 2016), whereas others observed improved
memory under conditions of disfluency (Besken & Mulligan, 2014; Diemand-Yauman et al.,
2011; French et al., 2013; Sungkhasettee et al., 2011). At present, the conditions that determine
these diverging effects of fluency on memory are poorly understood (for discussions, see Alter
& Oppenheimer, 2014; Schwarz, 2015). We suggest that, in the absence of explicit motivating conditions to attend to a message (such as in educational settings), people are less likely to attend to messages of questionable credibility, resulting in poorer memory when disfluency hurts message perception (as in the present studies). Compatible with this conjecture, we found
a systematic effect across counterbalances in Experiment 2. When we reran the key audio comparison as a repeated measures analysis including audio quality and counterbalance as factors, we detected an interaction with counterbalance in Experiment 2, F(3, 124) = 3.66, p = .014, ηp² = .08, 90% CI [.01, .15] (not observed in Experiment 1, F(3, 153) = 2.11, p = .101, ηp² = .04, [.00, .09]). Disfluency (poor audio in this case) had the most robust negative effect on memory performance on the first task, when people did not know that a memory test would follow; it was less influential on the second task, where people already knew that a memory test would be administered. Specifically, d′ performance across counterbalances shows that conditions with disfluent testimony first, where participants had no knowledge of an upcoming memory test, had a raw mean difference of M = 0.90, 95% CI [0.59, 1.21]; in contrast, conditions with disfluent testimony second, where participants could anticipate an upcoming memory test, had a raw mean difference of M = 0.12, [-0.41, 0.17].
Experiment 3
In Experiments 1 and 2, people evaluated short witness descriptions with little contextual trial information. In Experiment 3, we aimed to replicate the results of Experiment 1, but incorporated an initial trial summary so we could capture people’s impressions of a case before introducing audio testimony. Extending beyond Experiments 1 and 2, this adapted
paradigm allowed us to examine the degree to which people updated their impressions of a case and, in particular, how variation in audio quality influenced how participants weighed
witness testimony in making assessments of guilt. To date, there is limited evidence regarding
the extent to which processing fluency guides evidence weighting in decision making. One
series of studies that looked directly at this question considered how people used fluency as a
cue to resolve contrasting information about a judgement target in a marketing context (Shah
& Oppenheimer, 2007). The key hypothesis was that fluency may influence which information
seems right: fluent information is more likely to be judged as true and should hence be more
heavily weighted in people’s judgements. Across studies, Shah and Oppenheimer found that
fluency affected the extent to which people relied on information in decision making, with
people making decisions in line with the fluent cues. For example, when participants were
exposed to conflicting cues, one fluent and one disfluent, participants used information from
the fluent cue more when making final judgments (Shah & Oppenheimer, 2007, Exp. 2). That
is, fluency indirectly affected judgments and operated as a kind of domain-general basis for cue
weighting (see Schwarz, 2015, for a similar discussion of truth).
In the courtroom, reliance on one of two conflicting accounts could shape belief in a particular story or, inevitably, assessments of guilt. Thus, in Experiment 3 we test not only
whether the audio effect on witness impressions occurs against a backdrop of a more
contextually rich trial description, but also whether audio quality influences the extent to which
testimony is weighed in assessments of guilt.
Method
Participants and Design
Two hundred and five Prolific workers fully completed the study (103 female; age range 18-68, M = 31.46, SD = 12.45). As in Experiments 1 and 2, we manipulated audio quality within subjects. The key dependent variables were witness impressions (reliability, trust, and accuracy) and ratings of guilt before and after hearing the audio testimony. We preregistered our N as 200, assuming a small-to-medium effect size. We also conducted a sensitivity analysis: with an alpha level of .05 and an N of 205, we had 80% power to detect a d of 0.20. With the reduced sample of 118 (after later manipulation checks), at an alpha level of .05 we had 80% power to detect a d of 0.26. Note that we used Prolific as the online data collection platform for Experiment 3, given the nature of the trial and the specific metric measurements we used, which allowed us to restrict recruitment to a UK sample. It also allowed a replication outside of the MTurk platform, with different materials (Prolific has been used as a core online data collection platform in experimental psychological studies and yields high data quality; see, e.g., Peer et al., 2017).
Materials and Procedure
Initial Trial Summary and Initial Guilt Ratings.
Participants completed the study via Qualtrics, where they were asked to act as though
they were jurors assessing a case. After being welcomed to the study, participants first read a
trial transcript. We used a transcript that depicted a case in which a child was hit outside a school at pick-up time by a car driven by one of the other parents at the school. The transcript was 2,147 words and should have taken about 5 minutes to read. After reading through the transcript,
participants were asked to assess the extent to which the driver was guilty of negligent driving
and/or dangerous driving. Negligent driving was defined as not paying proper care or attention
to their driving and dangerous driving was defined as driving that was dangerous OR under the
influence of alcohol or drugs. We included two different charges in line with plausible charges
in this particular case.
Immediately after reading the trial summary, participants provided initial ratings on
how guilty they found the defendant in response to the charges of negligent driving and
dangerous driving using two separate slider scales from 0 to 100, with higher numbers
indicating higher guilt.
Eyewitness Statements and Subsequent Guilt Ratings.
The key contention in the case was how far away the defendant was from the child when
the child ran out to the road. Thus, after reading the initial case description, participants then
heard two witnesses (one from the defense and one from the prosecution) who spoke to this
issue of distance in brief audio clips.
Two female undergraduate students were recorded reading each witness account, thus
allowing us to counterbalance for witness voice. The recordings were edited using iMovie
10.1.14 to create high- and low-quality audio versions. The high-quality version was unedited,
and the low-quality version was created using the ‘large room’ filter, which increases the echo
and decreases the clarity of the speaker. As in Experiment 1, the recordings were pilot-tested
on a small sample (N = 19), naive to the hypothesis, and volunteers were able to accurately transcribe the audio, showing that the crucial content (distance from the child) was not lost with depleted audio quality. More importantly, we also tested this with our study participants.
Across conditions, participants heard either (1) the prosecution witness in high-quality audio and the defense witness in low-quality audio, or (2) the defense witness in high-quality audio and the prosecution witness in low-quality audio. The order of the witness statements was counterbalanced. Participants were randomly assigned to conditions via Qualtrics.
After listening to each witness statement, participants were asked to evaluate (1) How
accurate do you think the witness was? (2) How reliable did you feel the witness was? (3)
How much do you trust the witness? (e.g., 1 ‘Not very accurate at all’ to 5 ‘Very accurate’).
While we have combined these ratings into a general impression of the witness for high-quality and low-quality audio (both Cronbach’s α > .87), we find the same significant pattern of results when we analyse accuracy, reliability, and trust separately. To check that participants had
encoded the crucial information from each witness, participants were also asked to report the
distance the witness said the defendant’s car was from the child. This process was repeated
with the second witness statement. Finally, participants were asked to make final guilt ratings
on both charges using the scales described above.
Results and Discussion
As in Experiments 1 and 2, audio quality influenced evaluations of the testimony. When
the audio quality was high, participants evaluated the witness more favorably (top panel of
Figure 3). At an item level, the effect of audio quality was consistent for both the prosecution
and defense witness statement combinations (lower panel of Figure 3). Further, audio quality
influenced evidence weighting, with participants shifting their guilt ratings towards the witness
statement that was processed fluently (Figure 4). In the following analyses, we only included
the participants (N = 118) who were able to accurately state the distance reported in each of the
witness accounts, an exclusion that is particularly important for the shift-in-guilt analysis,
where we had to ensure that people had encoded the critical information from each witness
account.
Witness Ratings
As above, a paired-samples t-test of the overall ratings of the high- and low-quality audio witnesses showed that participants rated the high-quality audio witness (M = 3.60, SD = 0.72) higher than the low-quality audio witness (M = 3.18, SD = 0.78), t(117) = 4.54, p < .001, d =
0.55, 95% CI [0.31, 0.81]. Notably, we find the same significant pattern with the full sample
included.
This pattern held when we included witness combination (whether people heard the prosecution description or the defense description in high-quality audio) as a factor in the analysis. A 2 (audio quality: high, low) x 2 (witness combination: prosecution high-quality/defense low-quality, prosecution low-quality/defense high-quality) mixed ANOVA, with witness combination as the between-subjects variable, showed the expected main effect of audio quality (as in the paired t-test above), F(1, 116) = 20.51, p < .001, ηp² = .15, [.06, .25]. The interaction with witness combination did not reach significance, F(1, 116) = 3.83, p = .053, ηp² = .03, [.00, .12], suggesting that the effect of audio quality held regardless of which witness description appeared in high- or low-quality audio (see Figure 3). There was also no main effect of witness combination, F < 1.00.
Figure 3
Mean ratings of witnesses across high and low audio quality conditions. The top panel is
collapsed across the witness event, the lower panel presents means at an item-level analysis
for the prosecution and defense witness statements.
Note. Error bars represent 95% CIs.
Guilt Ratings
After reading the initial case summary, people rated the defendant on the 0-100 “how guilty” slider scale as very close to the midpoint for the negligent driving charge (M = 53.80, 95% CI [49.07, 58.52]), but they were less willing to conclude that the driver was guilty of dangerous driving (M = 36.34, [31.38, 41.31]).
To address the question of whether fluent evidence was weighed more heavily in final guilt ratings, we measured the change from initial guilt ratings (after reading the trial summary) to final guilt ratings (after listening to both witness accounts). If fluent evidence was weighed more heavily, people’s guilt ratings would update in line with the fluent evidence: those who heard the prosecution evidence in high-quality audio would shift towards higher estimates of guilt, and those who heard the defense evidence in high-quality audio would shift towards lower estimates of guilt. This is what we found, although as Figure 4 shows, the shift largely occurred in the defense condition. Given that each participant encountered exactly the same content, and reported it correctly for each witness, this shift cannot be attributed to the influence of content; instead, it suggests that people took their metacognitive experience of ease or difficulty into account when weighing the evidence.
As Figure 4 shows, people updated their guilt ratings in line with the witness who had
high-quality audio. Hearing from the defense in fluent audio led people to reduce their guilt
assessments, relative to those listening to the prosecution in fluent audio. A 2 (witness combination: prosecution high/defense low, prosecution low/defense high) x 2 (charge: negligence, dangerous) repeated measures ANOVA showed a main effect of witness combination: people reduced their impressions of guilt when they heard the defense in high-quality audio, relative to people who heard the prosecution in high-quality audio, F(1, 116) = 5.85, p = .017, ηp² = .05, 90% CI [.00, .12] (Figure 4). There was also a main effect of charge, such that people tended to shift to lower ratings of guilt for the negligent driving charge, relative to the dangerous driving charge, F(1, 116) = 4.32, p = .040, ηp² = .04, 90% CI [.00, .12]. There was no interaction between charge and witness combination, F < 1.00, suggesting the same pattern of results for both charges (though see the separate analyses in the supplementary materials, comparing initial and final guilt scores for each charge separately, which suggest more robust effects for negligence than for dangerous driving).
Figure 4
Mean change in initial to final guilt ratings for between-subject conditions of witness statements
presented in high audio quality (prosecution high/defense low; prosecution low/defense high).
Higher scores indicate a shift towards higher final guilt estimates, lower scores indicate a shift
towards lower final guilt estimates. The left panel shows negligent driving guilt ratings and the
right panel shows dangerous driving guilt ratings.
Note. Error bars represent 95% CIs.
Witness Ratings and Final Guilt Ratings
Next we examined final guilt ratings and the extent to which the influence of audio
quality on guilt ratings was explained by people’s impressions of the prosecution and defense
witnesses. We conducted separate mediation analyses for the prosecution and defense witnesses and expected that impressions of the witnesses would partially, if not fully, mediate the relationship between audio condition and guilt. That is, when witness impressions were added as a mediator, we expected that the direct relationship between audio quality and guilt ratings would be significantly reduced.
We used hierarchical regression for the negligent driving charge (there was no direct relationship between audio condition and dangerous driving guilt ratings, so we ran the analysis only for negligent driving). The outcome variable for both the prosecution and defense analyses was final negligent driving guilt ratings. The predictor variable for both analyses was witness audio quality (whether the prosecution or defense witness was presented in high- or low-quality audio). Witness ratings (collapsed across accuracy, reliability, and trust) of the prosecution and defense witnesses were entered as mediators in the two models. As Figure 5 shows, for both the prosecution and defense analyses, witness ratings fully mediated the relationship between fluency condition and final guilt ratings for the negligent driving charge.
Prosecution
As reported above, participants who heard the prosecution witness in high-quality audio rated the prosecution witness more favorably, whereas participants who heard the prosecution witness in low-quality audio rated the prosecution witness less favorably. The mediation analysis shows that the effect of audio quality on guilt was fully mediated by impressions of the witness: Participants who rated the prosecution witness more favourably gave higher final negligent driving guilt ratings than participants who rated the prosecution witness lower.
As shown in the top panel of Figure 5, the regression predicting final negligent driving guilt ratings from fluency condition was significant, β = .226, p = .014. The regression predicting overall prosecution ratings from audio condition was significant, β = .274, p = .003. Overall prosecution ratings, controlling for audio, significantly predicted final negligent driving guilt ratings, β = .263, p = .005, whereas the relationship between witness audio quality condition and final negligent driving guilt ratings became non-significant once overall prosecution ratings were entered into the model, β = .154, p = .094. A Sobel test confirmed full mediation in the model (z = 2.04, p = .041).
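For readers unfamiliar with the Sobel test, it asks whether the indirect path a·b (audio quality → witness ratings → guilt) differs reliably from zero, using z = ab / √(b²·SEa² + a²·SEb²). The coefficients below are hypothetical placeholders for illustration, not the study's estimates:

```python
import math
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """Sobel z for an indirect effect a*b, given each path's standard error."""
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = 2 * (1 - norm.cdf(abs(z)))  # two-tailed p-value under a normal approximation
    return z, p

# Hypothetical values: a = audio -> witness ratings, b = ratings -> guilt (illustration only)
z, p = sobel_test(a=0.5, se_a=0.1, b=0.4, se_b=0.15)
```

The Sobel test assumes the indirect effect is normally distributed, which is conservative in small samples; bootstrapped confidence intervals are a common alternative.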
Defense
Replicating the results presented above, participants hearing the defense witness in low-quality audio rated the defense witness less favourably, whereas participants who heard the defense witness in high-quality audio rated the defense witness more favourably. Our results from the mediation analysis show that the effect of audio quality on guilt was fully mediated by impressions of the witness: The more favorably participants rated the defense witness, the lower their final guilt rating for negligent driving.
As shown in the bottom panel of Figure 5, the regression predicting final negligent driving guilt ratings from fluency condition was significant, β = .226, p = .014. The regression predicting overall defense ratings from witness audio condition was significant, β = -.262, p = .004. Overall defense ratings, controlling for audio, significantly predicted final negligent driving guilt ratings, β = -.289, p = .002, whereas the relationship between witness audio condition and final negligent driving guilt ratings became non-significant once overall defense ratings were entered into the model, β = .150, p = .098. A Sobel test confirmed full mediation in the model (z = 2.10, p = .035).
In sum, audio quality influences perceptions of testimony and witnesses even when rich background context, such as a trial summary, is available. Further, participants weighed fluently processed evidence more heavily in evaluating guilt, adjusting their assessments in line with the fluent testimony. Our findings also suggest that the shift in guilt ratings towards the witness presented in high-quality audio was driven by impressions of that witness’s accuracy, reliability, and trustworthiness.
Figure 5
Mediation analyses for final negligent driving guilt ratings, with witness audio quality as the predictor and overall prosecution and defense witness ratings as mediators.
General Discussion
While courts adapt to incorporate new forms of technology, and obvious errors are
addressed by procedural interventions, more insidious tech glitches may hurt the integrity of a
trial by systematically biasing people’s impressions of witnesses and how they use their
evidence. Across three experiments, we found that low-quality audio systematically leads to
less favorable evaluations of witnesses, poorer memory for factual evidence, and reduced weighting of evidence in decision making. These findings highlight that audio quality is a technological feature of evidence that warrants procedural consideration.
Our findings are consistent with the literature on cognitive fluency and extend that
literature to decision making in the courtroom. While many studies have examined how the content
of evidence or attributes of a witness can influence people’s judgements, less empirical research
has focused on how the phenomenological experience of processing information can alter
people’s decisions (see Newman et al., 2019). In our studies, participants listened to the same
witness making the same statement, yet arrived at different impressions of the witness and
different guilt ratings, depending on whether the statement was played to them in high or low
audio quality. The audio quality also influenced their memory for what the witness said and
their weighting of the evidence. These effects were observed even though the speaker had no
control over the quality of the audio delivery and reflect that people are sensitive to their
metacognitive experience of ease or difficulty when processing information, but insensitive to
the variables that cause the difficulty (for reviews, see Schwarz, 2015; Schwarz et al., 2021).
People typically misread processing difficulty as a sign that something is wrong with the
content they process, which results in less favorable impressions of the trustworthiness of the
communicator (e.g., Newman et al., 2014) and the credibility and truth of the message (e.g.,
Reber & Schwarz, 1999).
As seen in Experiment 3, the observed influence of audio quality is robust, even when
participants have other relevant and diagnostic information to use in their decisions. Indeed,
people’s initial assessments of guilt, provided after reading case information and prior to
hearing witnesses, were important in shaping their final judgments, as observed in earlier work
(Scurich & John, 2018). Nevertheless, the ease of processing the witness audio influenced their
final ratings of guilt (see supplemental analyses). These findings square with research on
cognitive fluency and repetition where both prior beliefs or declarative content and an
experience of processing fluency can contribute to decisions (e.g., Fazio et al., 2015; Unkelbach
& Greifeneder, 2018). Following this line of logic, in a given individual case, the extent to
which audio variations influence case outcomes may operate in concert with more declarative
variables, such as the strength of the evidence and whether people are close to the threshold for switching verdicts. In such scenarios, small experiential changes in the fluency of processing
evidence may be particularly consequential.
The influence of processing difficulty is usually most pronounced when its source is subtle, and it decreases when people become aware that processing may be difficult solely due to some incidental influence, which usually requires that the source of disfluency be explicitly brought to their attention (Schwarz et al., 2021). Future research may examine whether bringing the audio variation to people’s attention moderates the size of these effects. Further, one may also consider the extent to which explicit cues, delivered in the moment, about technological disruption are required to effectively reduce the effect of audio quality on people’s impressions of witnesses and their evidence. These are empirical questions worthy of future research.
In the fluency literature more generally, there has been a debate regarding whether
difficulty in processing can aid or impede learning and memory. In Experiment 2, we tested
whether disfluency increased memory for information as in more educational settings (Alter et
al., 2007), or led to reduced memory for evidence perhaps because people tuned out,
experienced strained cognitive resources, or did not trust the witness (e.g., Exp 1; Eitel et al.,
2014; Kühl et al., 2014). Our results supported the latter notion, with participants performing
worse on a memory test after hearing the witness testify in low-quality audio. The cognitive or
social-cognitive mechanisms producing this effect may be interesting avenues for future
research, but the decreased number of correct answers in the disfluent condition suggests that
audio disfluency in the courtroom may produce cognitive consequences that reach beyond the
evaluation of a given witness. Notably, the effects we found for this difference in memory
performance were most robust when people did not know that a memory test would follow
disfluent audio, suggesting that motivation to remember may moderate this effect. This is an
interesting question to pursue in future research.
Our findings on evidence weighting extend prior work by Shah and Oppenheimer (2007): People relied more heavily on fluently processed information in decision making, with impressions of guilt influenced by audio quality. This pattern fits with previous findings; however, our paradigm extended this previous research in two ways. First, because we gave participants an initial trial summary, we could measure how decisions were revised in light of new evidence. Second, the mediation analysis provides insight into why participants shifted their judgments in the direction of the fluent information. Our mediation analyses suggest that participants rated the prosecution and defense witnesses higher or lower on assessments of accuracy, reliability, and trust depending on which witness had the high-quality recording, and shifted their ratings of guilt in the direction of the side they rated more favourably, suggesting that fluency can operate via social cues that feed into final evidence evaluation.
Of course, there are several possible sources of audio disfluency in the courtroom and
in virtual court contexts. Wilson and Sasse (2000) identified at least five other major sources
of audio distortions along with echoes that may influence the experience of the listener, with
excess loudness, bad microphone and significant dropout generally having the greatest effect
on both subjective assessments (e.g., ratings of comfort and fatigue) and physiological
responses compared to a condition with only minor dropout (Wilson & Sasse, 2000). While
existing research shows that audio disruptions influence perceptions of others even when
presented with video content (Newman & Schwarz, 2018), this study did not examine the role
of visual displays. While visual cues can enhance conceptual processing in legal contexts and
influence judgements (e.g., Derksen et al., 2020; Sanson et al., 2020), visual discrepancies can
be a source of cognitive disfluency. For instance, research subjects find it harder to detect
emotions when two visual distortions are introduced: reductions in image refreshment rate and
pixel resolution (Wallbott, 1992). These additional forms of distortion pose interesting
questions for future research and are important for policy development going forward. While
more pronounced disruptions might cause trial participants to talk over each other, repeated
testimony and general confusion, judges presiding over such trials will likely intervene
(Lapinski et al., 2020). Having empirically derived estimations of how more nuanced
technological features of evidence, such as low-quality audio, may affect decision making in
the court will help to inform conditions under which procedural interventions are necessary.
Trials in the common law tradition rely largely on oral testimony from defendants or eyewitnesses, such as that used in this study. It is therefore important that decision makers can focus on the content of the information without having to adjust for audio distortion, or deal with the extra stress involved in listening to poor-quality sound (see Wilson & Sasse, 2000). The issue
is relevant not just to juries, but to judges who may be hearing applications for bail or passing
sentence. It may also affect the ability of witnesses to tell their story in a considered way: if the voice of the lawyer asking them questions is distorted, the witness could feel less confident that they know what they are being asked, and appear less credible in their response.
The implications of this research also stretch to the socio-cultural domain: It is possible
that the testimony that is affected by poor internet connection may be from more disadvantaged
trial participants who have less access to up-to-date technology. There is evidence that some
trial participants have been joining from remote locations and using Zoom on their phones
(Raczynski, 2020). A possible solution is to ensure that all participants have the same
technology, such as court-appointed devices, but also installing new Wi-Fi routers to boost
their internet connections would be an expensive step. This is relevant to remote witnesses, who will continue
to take part in court hearings from home or other remote sites now that the pandemic has
highlighted its potential (including cost savings). It is unlikely that remote juries will be used
in the long term, except in a very limited number of civil jury matters. However, audio
problems can also arise from the physical attributes and arrangements of regular court rooms,
particularly older ones, which may be more salient for older jurors, and sometimes judges. So
the issues this paper raises are potentially relevant for in-person hearings in physical
courtrooms. Echoes on calls from remote witnesses may, this study suggests, make the witness
less credible; but audio distortions from other participants within the courtroom may also have
an impact on how the speaker is perceived.
While long pauses and disconnections may produce significant procedural interruptions
in trial proceedings, the experiments reported here suggest that low-quality audio may warrant
similar interventions often used with cases of complete dropout or long delays. Procedures in
the courtroom are carefully attuned to content, but these experiments suggest that experienced
ease of processing information is a subtle feature of evidence that can bias juror decision
making.
References
Alter, A. L., & Oppenheimer, D. M. (2009). Uniting the tribes of fluency to form a
metacognitive nation. Personality and Social Psychology Review, 13(3), 219-235.
https://doi.org/10.1177/1088868309341564
Alter, A. L., Oppenheimer, D. M., Epley, N., & Eyre, R. N. (2007). Overcoming intuition:
Metacognitive difficulty activates analytic reasoning. Journal of Experimental Psychology:
General, 136(4), 569-576. https://doi.org/10.1037/0096-3445.136.4.569
Barrie, D. (2020, August 6). Coronavirus in Scotland: Cinemas could be the key for a virtual jury.
The Times. https://www.thetimes.co.uk/article/coronavirus-in-scotland-cinemas-could-be-the-key-for-a-virtual-jury-dznmfcth0
Besken, M., & Mulligan, M. W. (2014). Perceptual fluency, auditory generation, and metamemory:
Analysing the perceptual fluency hypothesis in the auditory modality. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 40(2), 429-440.
https://doi.org/10.1037/a0034407
Boys, T. & Sams, A. (2020). COVID-19 and the virtual courtroom is technology a friend or foe?
[Web Article]. https://www.holdingredlich.com/covid-19-and-the-virtual-courtroom-is-technology-a-friend-or-foe
Brown, A. S., Brown, L. A., & Zoccoli, S. L. (2002). Repetition-based credibility enhancement
of unfamiliar faces. The American Journal of Psychology, 115(2), 199-209.
https://doi.org/10.2307/1423435
Byrom, N., Beardon, S., & Kendrick, A. (2020). Rapid review: The impact of COVID-19 on
the Civil Justice System. Civil Justice Council and the Legal Education Foundation.
https://www.judiciary.uk/wp-content/uploads/2020/06/FINAL-REPORT-CJC-4-June-
2020.v2-accessible.pdf
Derksen, D. G., Giroux, M. E., Connolly, D. A., Newman, E. J., & Bernstein, D. M. (2020).
Truthiness and law: Nonprobative photos bias perceived credibility in forensic contexts. Applied
Cognitive Psychology, 34(6), 1335-1344. https://doi.org/10.1002/acp.3709
Diemand-Yauman, C., Oppenheimer, D. M., & Vaughan, E. B. (2011). Fortune favors
the bold (and the italicised): Effects of disfluency on educational outcomes.
Cognition, 118(1), 114-118. https://doi.org/10.1016/j.cognition.2010.09.012
Eitel, A., Kühl, T., Scheiter, K., & Gerjets, P. (2014). Disfluency meets cognitive load in
multimedia learning: Does harder-to-read mean better-to-understand? Applied
Cognitive Psychology, 28(4), 488-501. https://doi.org/10.1002/acp.3004
Espinoza, R. K. E. & Willis-Esqueda, C. (2008). Defendant and defense attorney
characteristics and their effect on juror decision-making and prejudice against Mexican-
Americans. Cultural Diversity and Ethnic Minority Psychology, 14(4), 364-371.
https://doi.org/10.1037/a0012767
Fazio, L. K., Brashier, N. M., Payne, B. K., & Marsh, E. J. (2015). Knowledge does not protect
against illusory truth. Journal of Experimental Psychology: General, 144(5), 993-
1002. https://doi.org/10.1037/xge0000098
Federal Circuit Court of Australia. (2020, April 22). Practitioner and litigant guide to virtual
hearings and Microsoft Teams. shorturl.at/gnD28
Fiechter, J. L., Fealing, C., Gerrard, R., & Kornell, N. (2018). Audiovisual quality impacts
assessments of job candidates in video interviews: Evidence for an AV quality bias.
Cognitive Research: Principles and Implications, 3(47), 2-5. https://doi.org/10.1186/s41235-018-
0139-y.
Fobes, C. (2020). Rule 43 (A): Remote Witness Testimony and a Judiciary Resistant to Change.
Lewis & Clark Law Review, 24(1), 299-324.
Fontaine, G. & Kiger, R. (1978). The effects of defendant dress and supervision on judgments of
simulated jurors: An exploratory study. Law and Human Behavior, 2, 63-71.
https://doi.org/10.1007/BF01047503
French, M. M. J., Blood, A., Bright, N. D., Futak, D., Grohmann, M. J., Hasthorpe, A.,
Heritage, J., Poland, R. L., Reece, S., & Tabor, J. (2013). Changing fonts in education: How
the benefits vary with ability and dyslexia. The Journal of Educational Research, 106(4),
301-304. https://doi.org/10.1080/00220671.2012.736430
Gourdet, C., Witwer, A. R., Langton, L., Banks, D., Planty, M. G., Woods, D., & Jackson, B. A.
(2020). Court appearances in criminal proceedings through telepresence. RAND Corporation.
https://doi.org/10.7249/RR3222
Koehler, J. J., Schweitzer, N. J., Saks, M. J., & McQuiston, D. E. (2016). Science, technology, or
the expert witness: What influences jurors’ judgments about forensic science testimony?
Psychology, Public Policy, and Law, 22(4), 401-413. https://doi.org/10.1037/law0000103
Kühl, T., Eitel, A., Damnik, G., & Körndle, H. (2014). The impact of disfluency, pacing, and
students’ need for cognition on learning with multimedia. Computers in Human Behavior, 35,
189-198. https://doi.org/10.1016/j.chb.2014.03.004
Laham, S. M., Koval, P., & Alter, A. L. (2012). The name-pronunciation effect: Why people like
Mr. Smith more than Mr. Colquhoun. Journal of Experimental Social Psychology, 48, 752-756.
https://doi.org/10.1016/j.jesp.2011.12.002
Lapinski, J., Hirschhorn, R., & Blue, L. (2020, September 29). Zoom jury trials: The idea
vastly exceeds the technology. Texas Lawyer.
https://www.law.com/texaslawyer/2020/09/29/zoom-jury-trials-the-idea-vastly-exceeds-the-technology/
Lev-Ari, S., & Keysar, B. (2010). Why don’t we believe non-native speakers? The influence
of accent on credibility. Journal of Experimental Social Psychology, 46, 1093-1096.
https://doi.org/10.1016/j.jesp.2010.05.025
Liu, Y., Liu, R., Star J. R., Wang, J. & Tong, H. (2020). The effect of perceptual fluency on
overcoming the interference of the More A-More B intuitive rule among primary school students in a
perimeter comparison task: the perspective of cognitive load. European Journal of Psychology of
Education, 35, 357-380. https://doi.org/10.1007/s10212-019-00424-w
Martire, K. A., Edmond, G., & Navarro, D. (2020). Exploring juror evaluations of expert
opinions using the expert persuasion expectancy framework. Legal and Criminological
Psychology, 25(2), 90-110. https://doi.org/10.1111/lcrp.12165
McIntyre, J., Olijnyk, A., & Pender, K. (2020). Civil courts and COVID-19: Challenges and
opportunities in Australia. Alternative Law Journal, 45(3), 195-201.
https://doi.org/10.1177/1037969X20956787
McKimmie, B. M., Newton, S. A., Schuller, R. A., & Terry, D. J. (2013). It’s not what she says,
it’s how she says it: The influence of language complexity and cognitive load on the
persuasiveness of expert testimony. Psychiatry, Psychology and Law, 20, 578-589.
https://doi.org/10.1080/13218719.2012.727068
Newman, E. J., Jalbert, M., & Feigenson, N. (2019). Cognitive fluency in the courtroom. In
R. Bull & I. Blandón-Gitlin (Eds.), The Routledge International Handbook of Legal and
Investigative Psychology (pp. 102-115). Routledge.
https://doi.org/10.4324/9780429326530
Newman, E. J., Sanson, M., Miller, E. K., Quigley-McBride, A., Foster, J. L., Bernstein, D.
M., & Garry, M. (2014). People with easier to pronounce names promote truthiness of claims.
PloS ONE, 9(2), e88671. https://doi.org/10.1371/journal.pone.0088671
Newman, E. J. & Schwarz, N. (2018). Good sound, good research: How audio quality
influences perceptions of the research and the researcher. Science Communication, 40(2), 246-257.
https://doi.org/10.1177/1075547018759345
Norton Rose Fulbright. (2020). COVID-19 and the global approach to further court
proceedings, hearings. shorturl.at/bpyMQ
Oppenheimer, D. M. (2004). Spontaneous discounting of availability in frequency judgement
tasks. Psychological Science, 15(2), 100-105. https://doi.org/10.1111/j.0963-
7214.2004.01502005.x
Oppenheimer, D. M. (2008). The secret life of fluency. Trends in Cognitive Sciences, 12(6),
237-241. https://doi.org/10.1016/j.tics.2008.02.014
Oppenheimer, D. M., & Alter, A. L. (2014). The search for moderators in disfluency research.
Applied Cognitive Psychology, 28(4), 502-504. https://doi.org/10.1002/acp.3023
Pieger, E., Mengelkamp, C., & Bannert, M. (2017). Fostering analytic metacognitive processes
and reducing overconfidence by disfluency: The role of contrast effects. Applied Cognitive
Psychology, 31(3), 291-301. https://doi.org/10.1002/acp.3326
Raczynski, J. (2020, July 22). The current status of the (virtual) courts. Thomson Reuters.
https://www.legalexecutiveinstitute.com/virtual-courts/
Reber, R., & Schwarz, N. (1999). Effects of perceptual fluency on judgments of truth.
Consciousness and Cognition, 8(3), 338-342. https://doi.org/10.1006/ccog.1999.0386
Remote Courts Worldwide. (2020). https://remotecourts.org/
Rossner, M., Tait, D., & McCurdy, M. (2021). Justice reimagined: Challenges and opportunities
with implementing virtual courts. Current Issues in Criminal Justice. Advance online publication.
https://doi.org/10.1080/10345329.2020.1859968
Rossner, M., Tait, D., McKimmie, B., & Sarre, R. (2017). The dock on trial: Courtroom
design and the presumption of innocence. Journal of Law and Society, 44(3), 317- 344.
https://doi.org/10.1111/jols.12033
Rowden, E., & Wallace, A. (2019). Performing expertise: The design of audiovisual links and
the construction of the remote expert witness in court. Social & Legal Studies, 28(5),
698-718.
Rummer, R., Schweppe, J., & Schwede, A. (2016). Fortune is fickle: Null-effects of
disfluency on learning outcomes. Metacognition and Learning, 11(1), 57-70.
https://doi.org/10.1007/s11409-015-9151-5
Ryan, M., Harker, L., & Rothera, S. (2020). Remote hearings in the family justice system: A
rapid consultation. Nuffield Family Justice Observatory.
https://www.nuffieldfjo.org.uk/resource/remote-hearings-rapid-consultation
Sanson, M., Crozier, W. E., & Strange, D. (2020). Court case context and fluency-promoting
photos inflate the credibility of forensic science. Zeitschrift für Psychologie, 228(3), 221-
225. https://doi.org/10.1027/2151-2604/a000415
Schwarz, N. (2015). Metacognition. In M. Mikulincer, P.R. Shaver, E. Borgida, & J. A.
Bargh (Eds.), APA Handbook of Personality and Social Psychology: Attitudes and
Social Cognition (pp. 203-229). American Psychological Association.
https://doi.org/10.1037/14341-006
Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991).
Ease of retrieval as information: Another look at the availability heuristic. Journal of
Personality and Social Psychology, 61(2), 195-202. https://doi.org/10.1037/0022-
3514.61.2.195
Schwarz, N., Jalbert, M.C., Noah, T., & Zhang, L. (2021). Metacognitive experiences as
information: Fluency in consumer judgment and decision making. Consumer Psychology
Review, 4(1), 4-25. https://doi.org/10.1002/arcp.1067
Scurich, N., & John, R. S. (2018). Jurors’ presumption of innocence. Journal of Legal
Studies, 46, 187-206.
Shah, A. K., & Oppenheimer, D. M. (2007). Easy does it: The role of fluency in cue
weighting. Judgment and Decision Making, 2(6), 371-379. shorturl.at/hpIMV
Silva, R. R., Chrobot, N., Newman, E., Schwarz, N., & Topolinski, S. (2017). Make it short
and easy: Username complexity determines trustworthiness above and beyond objective
reputation. Frontiers in Psychology, 8, 1-21. https://doi.org/10.3389/fpsyg.2017.02200
Song, H., & Schwarz, N. (2008). Fluency and the detection of misleading questions: Low
processing fluency attenuates the Moses illusion. Social Cognition, 26(6), 791-799.
https://doi.org/10.1521/soco.2008.26.6.791
Stanislaw, H., & Todorov, N. (1999). Calculation of signal detection theory measures.
Behavior Research Methods, Instruments, & Computers, 31(1), 137-149.
https://doi.org/10.3758/BF03207704
Sungkhasettee, V. W., Friedman, M. C., & Castel, A. D. (2011). Memory and metamemory for
inverted words: Illusions of competency and desirable difficulties. Psychonomic Bulletin
& Review, 18(5), 973-976. https://doi.org/10.3758/s13423-011-0114-9
Todd, R., Lim, W., Cheeseman, J., Strachan, J., Talas, T., & Kearney, M. (2020, April 20). The
remote courtroom: Tips and tricks for online hearings. Ashurst. https://www.ashurst.com/en/news-
and-insights/legal-updates
Totenberg, N. (2020, May 4). Supreme court arguments resume but with a twist. National
Public Radio. https://www.npr.org/
Unkelbach, C., & Greifeneder, R. (2018). Experiential fluency and declarative advice jointly
inform judgments of truth. Journal of Experimental Social Psychology, 79, 78-86.
https://doi.org/10.1016/j.jesp.2018.06.010
Wallbott, H. G. (1992). Effects of distortion of spatial and temporal resolution of video
stimuli on emotion attributions. Journal of Nonverbal Behavior, 16(1), 5-20.
https://doi.org/10.1007/BF00986876
Weisbuch, M. & Mackie, D. (2009). False fame, perceptual clarity, or persuasion? Flexible fluency
attribution in spokesperson familiarity effects. Journal of Consumer Psychology,
19(1), 62-72. https://doi.org/10.1016/j.jcps.2008.12.009.
Wilson, G. M., & Sasse, M. A. (2000). Investigating the impact of audio degradations on
users: Subjective vs objective assessment methods. CHISIG: the Computer Human
Interaction Special Interest Group of the Ergonomics Society of Australia.
https://core.ac.uk/download/pdf/1785691.pdf.