Psychology, Crime & Law
ISSN: 1068-316X (Print) 1477-2744 (Online) Journal homepage: https://www.tandfonline.com/loi/gpcl20
Seeking or controlling the truth? An examination
of courtroom questioning practices by Canadian lawyers
Christopher J. Lively, Laura Fallon, Brent Snook & Weyam Fahmy
To cite this article: Christopher J. Lively, Laura Fallon, Brent Snook & Weyam Fahmy (2019):
Seeking or controlling the truth? An examination of courtroom questioning practices by Canadian
lawyers, Psychology, Crime & Law, DOI: 10.1080/1068316X.2019.1669595
To link to this article: https://doi.org/10.1080/1068316X.2019.1669595
Published online: 09 Oct 2019.
Seeking or controlling the truth? An examination of
courtroom questioning practices by Canadian lawyers
Christopher J. Lively , Laura Fallon, Brent Snook and Weyam Fahmy
Department of Psychology, Memorial University of Newfoundland, St. John’s, NL, Canada
ABSTRACT
The questioning practices of Canadian lawyers were examined. Courtroom examinations (N = 91) were coded for the type of utterance, the assumed purpose of the utterance, and the length of utterance. Results showed that approximately one-fifth of all utterances were classified as productive for gathering reliable information (i.e. open-ended, probing); less than one percent of all utterances were open-ended. Direct examinations contained more closed yes/no, probing, and open-ended questions. Cross-examinations contained more leading and clarification questions, and opinions. Moreover, cross- (vs. direct) examinations contained more questions with a 'challenging the witness' purpose. The longest utterances were opinions, followed by multiple and forced-choice questions. The longest answers were in response to open-ended questions, followed by multiple and probing questions. Implications for the truth-seeking function of the judiciary are discussed.
ARTICLE HISTORY Received 26 March 2019; Accepted 26 August 2019
KEYWORDS question types; justice
A central objective of the criminal justice system is to establish the truth about crimes that have been committed. To achieve this goal, triers of fact use evidence – from forensic and human sources – to make decisions about culpability. Few would dispute that the evidence used to make such decisions should be uncontaminated from the time the crime is initially reported until the trier(s) of fact render(s) a final verdict. The failure of the police to handle evidence properly may lead lawyers to ask for contaminated evidence to be ruled inadmissible. One predominant type of evidence that lawyers challenge is the statements provided by witnesses, suspects, and victims, with a focus on how police officers ask questions. Lawyers who are aware of the frailties of memory (e.g. it is malleable and reconstructive), and of how questions can influence memory quality, sometimes raise these memory issues to challenge the admissibility of a statement (Loftus, 1979; Neath & Surprenant, 2003; Simons & Chabris, 2011; see R. v. Forbes, 2006; R. v. Klaus, 2017; R. v. Morgan, 2013; R. v. Sterling, 1995). Beyond objections from the opposing lawyers, there are no checks or balances in place to account for the influence of lawyer questioning on the quality of witness evidence. Very little is known about how lawyers
© 2019 Informa UK Limited, trading as Taylor & Francis Group
CONTACT Christopher J. Lively firstname.lastname@example.org Department of Psychology, Science Building, Memorial University
of Newfoundland, St. John’s, NL, Canada
*This research has been presented, in part, at the 78th Canadian Psychological Association Convention (Toronto, ON,
Canada), the 1st Forensic Psychology in Canada Conference (Ottawa, ON, Canada), and the 19th Aldrich Multidisciplinary
Graduate Research Conference (St. John’s, NL, Canada).
gather information from witnesses. As a consequence, the goal of the current study is to quantify the questioning practices of lawyers during courtroom examinations of adult witnesses in order to gauge the extent to which evidence is being contaminated during those examinations.

Questioning and its effects on memory recall and information production
Given that asking questions is arguably the sine qua non of the truth-seeking function in
the justice system, much research has examined the eﬀect of questioning practices on
memory performance (Milne & Bull, 2003). The general consensus in the scientiﬁc literature
is that productive questions are those that maximise the completeness and accuracy of
information extracted. Speciﬁcally, open-ended questions (i.e. those that start with tell,
explain, or describe) and follow-up questions that probe the account (i.e. those that
start with who, what, when, where, why, and how) are thought to be the most productive
question types (Fisher & Geiselman, 1992; Griﬃths & Milne, 2006; Milne & Bull, 2003). Open-
ended questions require respondents to be active in the information provision process,
whereby the onus is put on the respondents to recall information in a free and uninhibited
manner (e.g. no interruptions or prompting from the questioner, no time constraints).
Letting a witness have control over their memory retrieval process reduces the likelihood
that the questioner will influence the information that is provided. In terms of the amount of information generated from open-ended questions, a study by Snook, Luther, Quinlan, and Milne (2012) found that, on average, suspects provided almost 100 words in response to this question type – over five times more than the next highest response length (i.e. multiple questions).
Asking probing questions – although resulting in the extraction of information that is narrower in scope than the information gained through open-ended questions – allows respondents to engage in cued recall (Griffiths & Milne, 2006). Such questions are meant to help interviewers achieve clarity and comprehension of information that was provided in response to open-ended questions. As well, probing questions are used to explore new lines of inquiry with respondents, and – similar to open-ended questions – have minimal influence on the answers provided by the respondent, and generate
additional information. In terms of the response length to probing questions, Snook
et al. (2012) reported that asking probing questions resulted in suspects providing, on
average, 16 words per response (also see Oxburgh, Myklebust, & Grant, 2010, for a
review on productive questions).
By contrast, unproductive questions (Griﬃths & Milne, 2006; Oxburgh et al., 2010) are
those that inhibit the memory retrieval process by, for example, encouraging short
answers, introducing confusion, and contaminating responses (Fisher, 1995). Four main
unproductive question types are closed yes/no, forced-choice, multiple, and
leading questions. Closed yes/no questions generally refer to those that elicit a response in a yes/no format. Such questions can lead to guessing and acquiescence, and can prevent the respondent from providing unsolicited, but important, information (Fisher & Geiselman, 1992). Forced-choice questions provide the respondent with two (or more) options to choose from in formulating their response. Ultimately, this type of question forces the respondent to choose one of the provided response options, when it is possible that the correct answer may not be among them – potentially
resulting in the provision of inaccurate information. Asking multiple questions at once is
also considered a problematic questioning technique, as it can create confusion when
the respondent has to decide which question to respond to ﬁrst and has to multitask to
remember and respond to the remaining questions. Moreover, if the respondent provides
an answer to the string of questions, it is not always clear to which question the given
answer is linked. Leading questions directly suggest or imply a specific response to the respondent – one that may or may not be correct. In other words, the interviewer is providing the respondent with the answer they want to receive. Asking leading questions
strays from the fundamental interviewing task of listening to what the witness knows
and shifts towards telling the witness what they know (see Loftus, 2005; Zaragoza, Belli, & Payment, 2006, for overviews of how post-event misinformation presented within leading questions has the potential to affect a person's memory; i.e. the misinformation effect). As for the response length to the aforementioned unproductive
questions, Snook et al. (2012) found that closed yes/no, forced choice, and leading ques-
tions all resulted in, on average, less than 13 words per response, while multiple questions
elicited an average of 17 words per response from suspects.
Two other question types identiﬁed within the literature (Griﬃths & Milne, 2006; Milne &
Bull, 2003; Oxburgh et al., 2010; Snook et al., 2012) are re-asked and clariﬁcation questions.
Re-asked questions have been deemed by some researchers as concerning because a
respondent may feel as though their original answer to the question did not satisfy the
questioner, and may subsequently change their response; that is, the practice may
appear coercive to witnesses (e.g. Brock, Fisher, & Cutler, 1999; Gilbert & Fisher, 2006;
Henkel, 2014; Poole & White, 1991). In contrast, other researchers have found that re-
asked questions do not have this negative eﬀect, and can even result in additional infor-
mation from witnesses (e.g. Scrivner & Safer, 1988; Turtle & Yuille, 1994). Clariﬁcation ques-
tions refer to reciting verbatim what the respondent has answered, or paraphrasing the
response back to the respondent in the form of a question. Clariﬁcation questions could
be considered a productive type of question as they may serve to enhance the ques-
tioner’s comprehension (e.g. Oxburgh et al., 2010). However, some experts have argued
that, rather than clarifying information mid-interview, it would be better to perform a
summary of the reported information with the respondent after all of the information
gathering questions have been asked on the topic of interest (Fisher & Geiselman, 1992;
Milne & Bull, 2003). Given the limited and conflicting research available on re-asked and clarification questions, it is currently unclear whether they should be categorized as productive or unproductive question types. From the point of view of how much information
is obtained by these types of questions, researchers have reported that re-asked and clar-
iﬁcation questions generate, on average, 14 and 11 words per response, respectively
(Snook et al., 2012).
Other important utterances that emerge during an interview – but are not questions – are opinions and facilitators. Posing a personal belief or viewpoint to the respondent has the potential to sway the respondent's answer and can result in the respondent adopting the questioner's opinion into his or her answer. Similar to the concerns related to unproductive question types, uncertainty regarding the origin of the information is a factor to consider when the questioner offers an opinion. Facilitators are verbal indicators or encouragements uttered during the interview. Given the difficulty of defining them in the interview setting (Oxburgh et al., 2010), facilitators could be viewed as either helpful or
harmful during the questioning process, depending on the context, tone, and expression
of the questioner. On the one hand, muttering 'mmhmm' or 'yes, okay' may simply be the questioner's way of displaying their engagement and may encourage the respondent to continue providing information. On the other hand, however, such encouragements or acknowledgements may be interpreted by the respondent as an indication that he or she is providing the 'correct' answer; such interpretations can become risky when trying to collect accurate information. Snook et al. (2012) reported the amount of information obtained from suspects via opinions as being approximately 12 words per response; however, in their study, opinions and statements were collapsed into one questioning category. Snook et al. did not report on facilitators, and to our knowledge, no data appear to be available from any other studies to suggest the typical response length to facilitators.
Research on questioning practices in the criminal justice system
Given that accessing memory is a fundamental piece of the information gathering process, questioners who are seeking the truth aim to avoid contaminating the memory retrieval process – much the same way that police officers are taught to secure a crime scene (see St-Yves, 2014). Therefore, questioning should be conducted in a way that preserves the witnesses' memory; it is imperative that police do not ask questions that contaminate memories or impede recall. Unfortunately, research on police interviewing practices has shown that untrained officers tend to use unproductive questions frequently (e.g. Clifford & George, 1996; Davies, Westcott, & Horan, 2000; Fisher, Geiselman, & Raymond, 1987; Myklebust & Alison, 2000; Snook et al., 2012; Snook & Keating, 2011; Wright & Alison, 2004). This body of empirical literature suggests that police interviewers often do not follow best practices for gathering information, despite some officers having received training in proper interviewing techniques (Lamb, Hershkowitz, Orbach, & Esplin, 2008).
Unsurprisingly, defence lawyers sometimes raise faults in police interviews during trial proceedings, because such faults speak to the quality of the evidence provided by human sources and the integrity of the investigation. Given what is known about the quality of police interviewing practices (e.g. the use of unproductive questioning techniques), this legal strategy is warranted. Pointing out faults in interviewing quality during court proceedings has the potential to result in evidence (e.g. inculpatory statements) being dismissed or given little weight – both of which can impact decisions about culpability. Even worse, it may be the case that police questioning practices are so ineffectual that they verge on being negligent, thereby resulting in legal arguments that such police malpractice ought to warrant the acquittal of a defendant and result in civil proceedings (see e.g. Hill v. Hamilton-Wentworth Regional Police Services Board, 2007).
It has not escaped some researchers that the criticisms directed toward police questioning practices – often by lawyers themselves – are also applicable to courtroom examinations. Although the purpose of a courtroom examination is seemingly different from a police interview (e.g. the aim of the police is to construct an accurate account of the events that transpired, whereas lawyers aim to convince triers of fact to accept their version of the facts; Kebbell, Deprez, & Wagstaff, 2003; Westera, Zydervelt, Kaladelfos, & Zajac, 2017), the information gathering process is identical. Witnesses in both scenarios are asked to recall a past experience, and the information obtained serves as evidence upon which consequential decisions are made. Thus, the identified concerns surrounding the impact of asking unproductive questions on witness evidence are directly applicable to the courtroom context.
Some research has shown that lawyers are very similar to untrained police oﬃcers in
the way they gather evidence from human sources in criminal proceedings (e.g. Kebbell
et al., 2003). Existing data suggests that lawyer questioning practices have the potential
to taint the evidence used by triers of fact. To date, much of the research on courtroom
questioning practices pertains to the eﬀect of confusing courtroom language on wit-
nesses (i.e. legal terminology and vocabulary; e.g. Hanna & Henderson, 2018; Kebbell &
Johnson, 2000; Perry et al., 1995; see Gibbons, 2003), and how lawyers examine vulner-
able witnesses (e.g. children and witnesses with intellectual disabilities; Kebbell et al.,
2003; Zajac & Cannan, 2009). Most of what is known about how lawyers specifically ask questions in the courtroom pertains to the examination of children in New Zealand, Scotland, and the US (Andrews & Lamb, 2016, 2019; Andrews, Lamb, & Lyon, 2015; Hanna, Davies, Crothers, & Henderson, 2012; Klemfuss, Quas, & Lyon, 2014; Zajac & Cannan, 2009; Zajac, Gross, & Hayne, 2003). The general finding from this body of research is that lawyers mostly ask closed yes/no questions, and that open-ended questions are asked rarely.
Regarding diﬀerences between lawyers, data suggests that prosecutors use more pro-
ductive questions with children than defence lawyers: prosecutors are more likely to ask
open-ended and probing questions, whereas defence lawyers are more likely to ask suggestive (i.e. leading) questions. It is important to note, however, that a large proportion of the questions asked by prosecutors are closed yes/no and leading questions (e.g. Klemfuss et al., 2014). Of course, these findings need to be considered within the context of whether or not the witness under examination is 'friendly' to the prosecutor or the defence, and how the witness' testimony is helpful to the goal of each lawyer. It is logical that lawyers may be more apt to use favourable questions during their witness examination if the witness was called to the stand by their side.
To our knowledge, three published studies have quantified lawyers' questioning practices of adult witnesses. In one of the first studies, Kebbell et al. (2003) computed the frequency of question types asked during courtroom examinations of six rape trials. The
researchers were interested in comparing the types of questions asked to complainants
and defendants during the direct and cross-examinations. No meaningful diﬀerences
were found with respect to the types of questions asked to the complainant or defendant,
but diﬀerences did emerge for question type asked as a function of examination type.
Speciﬁcally, for direct examinations of complainants, approximately 49% of questions
asked were closed yes/no, 27% probing, 22% open-ended, 3% multiple, and 1% heavily
leading. By contrast, for cross-examinations of complainants, approximately 82% of ques-
tions were closed yes/no, 15% heavily leading, 10% probing, 9% multiple, and only 6%
were open-ended. When the witness was a defendant, for direct (vs. cross-) examinations,
50% (vs. 78%) of questions were closed yes/no, 27% (vs. 10%) were probing, 20% (vs. 12%)
were open-ended, 3% (vs. 7%) were multiple, and 1% (vs. 14%) were heavily leading.
In a similar vein, Kebbell, Hatton, and Johnson (2004) conducted a follow-up study that
compared how lawyers asked questions to witnesses from the general population with
how questions were asked to witnesses with intellectual disabilities, as a function of
PSYCHOLOGY, CRIME & LAW 5
examination type. For general population witnesses, Kebbell et al. (2004) found that direct
(vs. cross-) examinations contained a higher proportion of open-ended (36% vs. 16%) and
probing questions (15% vs. 5%), whereas cross-examinations contained more closed yes/
no (83% vs. 46%), leading (30% vs. 9%), multiple (3% vs. 1%), and re-asked questions (2%
vs. 0.5%) than direct examinations; similar trends emerged in the sample of witnesses with
intellectual disabilities (see Kebbell et al., 2004). In sum, Kebbell and colleagues' (2003, 2004) work suggests that the majority of questions asked by lawyers – particularly during cross-examination – are suboptimal for the purpose of obtaining accurate and complete information.
In a subsequent study, Zajac and Cannan (2009) extended the work of Kebbell and
colleagues (2003,2004) by examining how lawyers asked questions to adult witnesses
and compared the types of questions asked by prosecutor and defence lawyers.
Among others, the types of questions considered by the researchers were what they referred to as open, closed, and leading questions (see Zajac & Cannan, 2009, for the remaining coding categories). They found that prosecutors asked more open (45%) and closed questions (40%) than defence lawyers (11% and 23%, respectively). Conversely, defence lawyers asked more leading questions (66%) compared to prosecutors. In line with the work of Kebbell et al. (2003, 2004), these results indicate
that the overall quality of lawyers’questioning practices is substandard. Similar to
the consensus of the research on child questioning discussed above, Zajac and
Cannan (2009) also showed that defence lawyers ask a greater proportion of unproduc-
tive questions than prosecutors when gathering information from adult witnesses. As
mentioned earlier, it is important to note that all of these findings need to be considered within the context of the witness' allegiance (i.e. whether they were called by the prosecution or the defence).
The generalizability of the findings from the aforementioned studies is limited because the data pertain to only two countries (the UK and New Zealand), are based on small sample sizes (N = 6, N = 32, and N = 30, respectively), and utilise relatively homogenous samples
(i.e. only sexual assault trials). The goal of the current study was to expand on the existing
research by assessing the types of questions asked to adult witnesses in the Supreme
Court of Newfoundland and Labrador in Canada, and to examine whether or
not lawyers adhere to best practices when engaging in a truth-seeking function.
Building on the approaches used by Kebbell and colleagues (2003,2004), who con-
sidered courtroom questions as a function of examination type (i.e. direct vs. cross-exam-
ination), and Zajac and Cannan (2009), who considered courtroom questions as a function
of lawyer type (i.e. prosecutor vs. defence lawyer), the current study aimed to explore how
questions asked in the courtroom are related to examination type and lawyer type com-
bined. Based on the data reported previously, we predict:
Hypothesis 1A: Direct examinations will contain signiﬁcantly more probing and open-ended
questions than cross-examinations.
Hypothesis 1B: Cross-examinations will contain significantly more leading questions than direct examinations.
Hypothesis 1C: Regardless of examination type, open-ended questions will be asked infrequently
(i.e. less than 5%).
Hypothesis 2A: Prosecutors will ask significantly more open-ended and probing questions than defence lawyers.

Hypothesis 2B: Defence lawyers will ask significantly more closed yes/no and leading questions than prosecutors.

Due to the lack of data on the use of other possible types of questions asked by lawyers (e.g. forced-choice, multiple, re-asked, clarification, opinion, facilitator), we refrained from making predictions about these utterance types.
In addition to the aforementioned goal of the current study, we were also interested in exploring the assumed purpose of each utterance (e.g. administrative, information gathering, challenging the witness' account, unknown). Since no prior studies appear to have considered the purpose of each utterance spoken by lawyers during courtroom examinations, we refrained from making predictions about the expected outcomes of assumed purpose type as a function of examination type and lawyer type. Rather, our objective in exploring this avenue was to contribute novel findings to the field of courtroom questioning.

Previous research has well established that asking productive questions (i.e. open-ended, probing) produces much longer responses – a fact that has been documented in police interviewing (e.g. Snook et al., 2012) and courtroom questioning studies (e.g. Kebbell et al., 2004). Based on the data previously reported, we predict:
Hypothesis 3A: Productive questions (i.e. open-ended, probing) will produce the longest witness responses.

Hypothesis 3B: Unproductive questions (i.e. closed yes/no, forced-choice, multiple, leading) will
produce the shortest witness responses.
The only study that we are aware of that included data regarding adults' response lengths to other utterance types (e.g. re-asked, clarification, opinion) is that of Snook and colleagues (2012), albeit facilitators were not included. Although the current study considers
these utterances in an arena diﬀerent from a police interview (i.e. courtroom), we could not
think of any theoretical reason as to why witness response lengths to the aforementioned
utterances would diﬀer in the courtroom as compared to the police interview room. Con-
sequently, we also predict:
Hypothesis 3C: The remaining utterance types (i.e. re-asked, clariﬁcation, opinion) will produce
responses that are similar in length to one another.
Given the lack of existing data on response length to facilitators, we elected not to oﬀer
any prediction for this utterance type.
Method

Verbatim transcriptions of 12 criminal cases, heard between 1991 and 2014, were obtained from the Supreme Court of Newfoundland and Labrador (Trial Division) in St. John's, Canada. Crimes were categorized into one of four author-constructed groups,
namely person (e.g. aggravated assault, sexual assault, domestic violence), property (e.g.
break and enter, embezzlement of money, arson), hybrid (i.e. a combination of person
and property crimes; e.g. fraud, drug traﬃcking, driving under the inﬂuence), or
unknown crimes (i.e. crime information not provided or unable to be determined
from court transcript). In this sample, only two crime types emerged; six (50.0%)
were person crimes and six (50.0%) were hybrid crimes. In terms of lawyer composition,
ten trials (83.33%) had one prosecutor and one defence lawyer, one trial (8.33%) had
two prosecutors and one defence lawyer, and one trial (8.33%) had two prosecutors
and two defence lawyers. A total of 25 diﬀerent lawyers were involved in the cases;
two (8.0%) of those lawyers were advocates in more than one case (i.e. two cases
each). Sixteen (64.0%) of the lawyers were men.
A convenience sample of 91 witness testimony examinations (henceforth referred to as
examinations) were extracted from the 12 cases. On average, 7.58 (SD = 5.04, Range =1–
18) examinations were extracted from a single court case ﬁle. Testimony was provided
by 47 diﬀerent witnesses; all witnesses (100%) underwent a direct examination, while
44 (93.62%) also underwent a cross-examination. Of the 47 witnesses, 26 (55.32%)
testiﬁed on behalf of the prosecution and 21 (44.68%) testiﬁed on behalf of the
defence. In terms of witness type, 23 (48.93%) were eyewitnesses, 15 (31.91%) were
police oﬃcers, ﬁve (10.64%) were defendants, three (6.38%) were victims, and one
(2.13%) was a character witness. Twenty-nine (61.70%) of the witnesses were men. The
mean number of examinations conducted by each of the 25 lawyers was 3.64 (SD =
2.86, Range =1–9). Of the 91 examinations, 47 (51.65%) were conducted by a prosecutor
and 44 (48.35%) were conducted by a defence lawyer.
A 24-item coding guide and associated content dictionary was author-constructed (a copy can be obtained by visiting https://osf.io/ab3w7/, or by contacting the corresponding author). The following seven trial/demographic variables were coded: examination type (1 = direct, 2 = cross), lawyer type (1 = prosecutor, 2 = defence), lawyer gender (1 = male, 2 = female), witness type (1 = victim, 2 = eyewitness, 3 = police officer, 4 = accused, 5 = character, 6 = specified other), witness gender (1 = male, 2 = female), crime type (1 = person, 2 = property, 3 = hybrid, 4 = unknown), and the year that the trial took place. Every utterance in each examination was assigned an identification
number and classiﬁed, mutually exclusively, as one of 13 utterance types and as one
of four purpose types (see Table 1). It is important to note that of the 13 utterance
types, three were not considered to be questions (i.e. statements, commands, and incompletes). However, we included these utterance types in our classification system to ensure that every utterance in our transcripts could be sorted into an appropriate category. We chose not to go into detail about these three utterance types because they were removed from the subsequent data analyses and are not discussed any further. Utterances were only coded if the witness was actively testifying under oath
on the stand. That is, if the witness was asked to stand down while the lawyers and
judge entered into a voir dire, then the utterances spoken during the voir dire proceed-
ings were not coded.
Table 1. Descriptions and examples of utterance and purpose types coded.

Utterance types:
- Open-ended: Invite the witness to recall answers freely from memory; allow for a wide range of responses; typically start with 'tell,' 'explain,' or 'describe.' Example: 'Tell me about the party you attended.'
- Probing: Tap into cued recall memory; answers are narrower in scope than those from open-ended questions; the goal is to obtain additional information from the witness; start with 'who,' 'what,' 'why,' 'where,' 'when,' or 'how.' Examples: 'How many people were at the party?' 'When did you first notice the fight?'
- Closed yes/no: Tap into recognition; answered with a 'yes' or 'no' response. Example: 'Did you drink alcohol?'
- Forced-choice: Offers the witness a limited number of response options (usually two). Example: 'Did you kick or punch the man?'
- Multiple: Asking several questions at once without giving the witness a chance to respond. Example: 'How did you get there? What did you do inside? Did you say anything to anyone?'
- Leading: Suggests/implies an answer to the witness; the desired answer is embedded in the question. Example: 'You were drunk, right?'
- Re-asked: Asking a question that was already asked and answered previously. Example: Questioner: 'Where did you go last night?' Respondent: 'Nowhere. I' Questioner: 'Okay, come on Joe, where did you go last night?' (emphasis added to illustrate re-asked utterance)
- Clarification: Duplicate or paraphrase the answer that the witness has given; provides the questioner with a better understanding of what the witness said. Example: Respondent: 'John said he went to a party.' Questioner: 'Okay, so John went to a party?' Respondent: 'That's right.' (emphasis added to illustrate clarification utterance)
- Opinion: Provide a personal opinion or belief related to the allegations before the court. Example: 'I think you assaulted Kirk when you saw him.'
- Facilitator: Verbal utterance that encourages the flow of conversation. Examples: 'Um-hmm', 'Yes', 'Okay'
- Statement: Statement of fact not in the form of a question. Example: 'That water is there for you, Mr. Barron'
- Command: Giving a directive or telling the witness to do something. Example: 'Speak up now.'
- Incomplete: Questioner was interrupted or cut off by another speaker or witness; questioner did not complete the thought; utterances transcribed as '(inaudible)' or '(unintelligible)' in the transcript. Example: 'So how often would you –'

Purpose types:
- Administrative: Pertaining to procedural aspects of the courtroom; unrelated to the case or charges. Example: 'The microphone does not amplify your voice, but rather is there for'
- Case-Based: Information Gathering: Unique details about the crime. Example: 'Describe what happened when you saw the body on the floor.'
- Case-Based: Challenge: Challenge the reliability of the witness account. Example: 'An officer asked whether you punched him, and you said that you didn't remember. Now you're saying you never punched him. Do you see any difference in that?'
- Purpose Unknown: Does not fit within other purpose types; used for facilitators or incomplete utterance types.
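The utterance-type definitions in Table 1 lean heavily on each question's opening words, which suggests a simple first-pass heuristic for sorting utterances. The sketch below is purely illustrative and is not the authors' coding guide (which, per the article, is available at https://osf.io/ab3w7/): the starter-word lists and the `rough_utterance_type` function are our own assumptions, and types such as leading, re-asked, and clarification require human judgment of conversational context.

```python
# Illustrative starter-word lists (assumptions, not the study's content dictionary).
OPEN_STARTERS = ("tell", "explain", "describe")
PROBING_STARTERS = ("who", "what", "why", "where", "when", "how")
YES_NO_STARTERS = ("did", "do", "does", "was", "were", "is", "are",
                   "have", "had", "can", "could", "would")

def rough_utterance_type(utterance: str) -> str:
    """Hypothetical first-pass label based on the opening word of an utterance."""
    words = utterance.lower().strip(" ?.!").split()
    if not words:
        return "incomplete"
    # A string of several questions posed at once is coded as 'multiple'.
    if utterance.count("?") > 1:
        return "multiple"
    first = words[0]
    if first in OPEN_STARTERS:
        return "open-ended"
    if first in PROBING_STARTERS:
        return "probing"
    if first in YES_NO_STARTERS:
        # An either/or construction signals a forced-choice question.
        return "forced-choice" if " or " in utterance.lower() else "closed yes/no"
    return "unclassified"  # leading, re-asked, clarification, etc. need context

print(rough_utterance_type("Tell me about the party you attended."))  # open-ended
print(rough_utterance_type("Did you kick or punch the man?"))         # forced-choice
print(rough_utterance_type("How many people were at the party?"))     # probing
```

In real coding, a heuristic like this could only pre-sort utterances for a human coder; it cannot detect, for example, that 'You were drunk, right?' embeds its desired answer.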
All 91 examinations were coded by the first author. Reliability of the data was measured by having the fourth author code 19 (20.88%) randomly selected examinations: eight direct and 11 cross-examinations. After receiving training on the coding process, the structure and content of the coding guide, and the content dictionary, the fourth author practiced coding two examinations (not included in the current study sample) prior to commencing the inter-rater coding duties. During the inter-rater reliability task, the fourth author was blind to the nature and conditions of the study and was not privy to any of the hypotheses or expected outcomes. Substantial to almost perfect inter-rater agreement was achieved for the classification of both utterance type (κ = .70) and purpose type (κ = .83; Cohen, 1960; Landis & Koch, 1977).
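As an illustrative sketch only (not the authors' actual tooling), chance-corrected agreement of the kind reported above can be computed with Cohen's (1960) kappa; the utterance labels and codings below are hypothetical:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codings of eight utterances by two independent coders
coder_1 = ["closed", "probing", "leading", "closed", "probing", "closed", "leading", "probing"]
coder_2 = ["closed", "probing", "leading", "closed", "closed", "closed", "leading", "probing"]
kappa = cohens_kappa(coder_1, coder_2)   # ≈ .81 for this toy data
```

On the Landis and Koch (1977) benchmarks, values of .61–.80 are 'substantial' and .81–1.00 'almost perfect', which is how the study's κ = .70 and κ = .83 are characterized above.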
Following the example of previous studies (e.g. Kebbell et al., 2003, 2004; Snook et al., 2012; Zajac & Cannan, 2009), the questioning practices of lawyers in the current study were quantified as a proportion (i.e. mean percent) of the questions asked during each examination. Frequency analyses were conducted to determine the number of unique utterances. Descriptive analyses of the proportion of utterance types were conducted for utterance types overall, and also as a function of examination type and lawyer type; these descriptive analyses were repeated for the proportion of purpose types overall, and as a function of examination type and lawyer type. An analysis of variance test was conducted using each unique utterance type and purpose type (i.e. dependent variables) to examine any differences between their use in direct and cross-examinations (examination type) and their use by prosecutors and defence lawyers (lawyer type; i.e. independent variables).
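The proportion-based quantification described above (percentages computed within each examination, then averaged across examinations) can be sketched as follows; the utterance labels and data are hypothetical, not drawn from the study's transcripts:

```python
def mean_percent_by_type(examinations):
    """examinations: list of lists of utterance-type labels, one list per
    examination. Returns the mean percentage of each type, averaged across
    examinations (the examination is the unit of analysis)."""
    all_types = sorted({t for exam in examinations for t in exam})
    per_exam = []
    for exam in examinations:
        per_exam.append({t: 100.0 * exam.count(t) / len(exam) for t in all_types})
    n = len(per_exam)
    return {t: sum(p[t] for p in per_exam) / n for t in all_types}

# Two hypothetical examinations coded into utterance types
exams = [
    ["closed", "closed", "probing", "leading"],  # e.g. a direct examination
    ["probing", "probing"],                      # e.g. a cross-examination
]
props = mean_percent_by_type(exams)
# probing: mean of 25% and 100% = 62.5%
```

Averaging per-examination percentages (rather than pooling all utterances) prevents long examinations from dominating the means.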
To display the magnitude of any significant differences found, we presented effect sizes as Cohen’s d (Cohen, 1988) using a within-participants design effect size calculator. Cohen’s d indicates whether comparative results differ meaningfully. For ease of interpretation, Cohen proposed four levels of effect size: no effect (d < 0.19; no practical significance); a small effect (0.20 < d < 0.49; low practical significance); a medium effect (0.50 < d < 0.79; moderate practical significance); and a large effect (d > 0.80; high practical significance).
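The text does not specify the exact within-participants formula used; one common variant divides the mean of the paired differences by the standard deviation of those differences. A minimal sketch under that assumption, with the interpretive bands as stated above (data hypothetical):

```python
from statistics import mean, stdev

def cohens_d_paired(x, y):
    """Within-participants d: mean difference / SD of the difference scores.
    One common variant; the study's exact calculator is not specified."""
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs) / stdev(diffs)

def interpret_d(d):
    """Interpretive bands as described in the text."""
    d = abs(d)
    if d < 0.20:
        return "no effect"
    if d < 0.50:
        return "small"
    if d < 0.80:
        return "medium"
    return "large"

# Hypothetical per-examination values under two paired conditions
direct = [5, 7, 6, 9]
cross = [4, 5, 6, 7]
d = cohens_d_paired(direct, cross)   # ≈ 1.31, a "large" effect
```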
Given the numerous pre-existing (i.e. nuisance) variables contained within the provided court transcripts, we decided to also conduct unplanned step-wise regression analyses to explore whether utterance type (and purpose type) could be predicted by various qualities or characteristics of the specific court case. The purpose of these analyses was to determine whether additional information could help explain the findings revealed by the analysis of variance testing, and to contribute additional knowledge to the literature. Prior to conducting the regression analyses, multicollinearity was assessed by conducting a variance inflation factor (VIF) analysis.
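For the special case of two predictors, the variance inflation factor reduces to 1/(1 − r²), where r is the predictors' Pearson correlation. A small illustrative sketch (the variable values below are hypothetical, not from the study):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

def vif_two_predictors(x, y):
    """VIF_j = 1 / (1 - R_j^2); with two predictors, R^2 is simply r^2."""
    r = pearson_r(x, y)
    return 1.0 / (1.0 - r ** 2)

# Hypothetical predictor columns (e.g. coded trial year and case length)
year_code = [1, 2, 3, 4, 5]
case_len = [3, 1, 4, 2, 5]
vif = vif_two_predictors(year_code, case_len)   # ≈ 1.33, low multicollinearity
```

With more than two predictors, each VIF comes from regressing one predictor on all the others; the single-pair version shown here is the simplest case.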
To determine the total number of words spoken by lawyers and witnesses during each examination, the word count feature in Microsoft Word 2016 was used to calculate the total number of words in each utterance and response per examination. Mean word length for each utterance type was calculated and compared across types, and was further examined as a function of examination type and lawyer type; mean response length was likewise calculated, compared across types, and examined as a function of examination type and lawyer type. The magnitude of any significant effects was reported as Cohen’s d (Cohen, 1988).
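The word-length analysis amounts to counting whitespace-delimited tokens per utterance and averaging within each utterance type. A sketch of that idea (not the authors' actual pipeline; Word's counting rules can differ slightly around punctuation, and the utterances below are hypothetical):

```python
def mean_words_by_type(utterances):
    """utterances: list of (utterance_type, text) pairs.
    Returns the mean whitespace-delimited word count per utterance type."""
    totals, counts = {}, {}
    for utype, text in utterances:
        n_words = len(text.split())
        totals[utype] = totals.get(utype, 0) + n_words
        counts[utype] = counts.get(utype, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

sample = [
    ("open-ended", "Tell me everything that happened that night."),
    ("closed", "Did you see him?"),
    ("closed", "Were you at the party on Friday night?"),
]
means = mean_words_by_type(sample)
```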
A total of 8,312 utterances spoken by courtroom questioners (e.g. lawyers, judges, clerks, unidentified speakers) were extracted from the 91 examinations. Since the main focus of the study was to analyze lawyers’ questioning practices, 1,038 questioner utterances were removed because they were spoken by a judge (n = 983), a clerk (n = 54), or an unidentified speaker (n = 1). An additional 1,116 utterances were removed prior to analyses because they were classified as statements (n = 880), incomplete phrases (n = 173), or commands (n = 63). The total number of lawyer utterances comprising the final sample was 6,158, which corresponded to a total of 5,911 witness response utterances.
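The filtering step described above can be sketched as a simple exclusion pass; the record structure and field names here are hypothetical, while the excluded speaker roles and utterance types are those reported in the text:

```python
def filter_lawyer_questions(utterances):
    """Keep only lawyer utterances of analysable question types, dropping
    other speakers (judge, clerk, unidentified) and the statement,
    incomplete, and command utterance types."""
    excluded_speakers = {"judge", "clerk", "unidentified"}
    excluded_types = {"statement", "incomplete", "command"}
    return [u for u in utterances
            if u["speaker"] not in excluded_speakers
            and u["type"] not in excluded_types]

sample = [
    {"speaker": "lawyer", "type": "closed"},
    {"speaker": "judge", "type": "probing"},
    {"speaker": "lawyer", "type": "statement"},
]
kept = filter_lawyer_questions(sample)   # only the first utterance survives
```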
The average number of lawyer utterances per examination was 67.66 (SD = 47.99, N = 91, Range = 14–220, 95% CI = 57.67, 77.65). The distribution of utterance types is contained in Table 2. As can be seen in the first column, closed yes/no questions were the most frequently asked, followed by probing and leading questions. Open-ended
Table 2. Mean percentage of utterance type as a function of examination type and lawyer type (N = 6,158).
Utterance Type Overall
Open-ended 0.44 (0.99)
Probing 21.59 (14.19)
Closed yes/no 28.43 (11.13)
Forced-choice 2.14 (2.83)
Multiple 7.09 (4.23)
Leading 20.14 (16.00)
Re-asked 0.41 (1.12)
Clarification 10.38 (7.54)
Opinion 0.19 (0.82)
Facilitator 9.19 (10.31)
Note: Standard deviations are contained within the round brackets.
questions comprised less than one percent of all utterances, rendering support for Hypothesis 1C. Taken together, open-ended and probing questions accounted for 22.03% of all utterances, while closed yes/no, forced-choice, multiple, and leading questions accounted for 57.80% of all utterances; the remaining utterance types collectively accounted for 20.17% of all utterances.
When split by examination type, 3,107 (50.45%) of the utterances occurred during a direct examination and the remaining 3,051 (49.55%) during a cross-examination. Direct examinations contained, on average, 66.09 utterances (SD = 46.36, 95% CI = 52.47, 79.70), and cross-examinations contained, on average, 69.34 utterances (SD = 50.16, 95% CI = 54.09, 84.59); the effect of examination type on the total number of utterances spoken was not practically significant, d = 0.05. As can be seen in Table 2, direct examinations contained a larger proportion of probing (d = 1.41), closed yes/no (d = 0.71), and open-ended (d = 0.59) questions than cross-examinations, thereby supporting Hypothesis 1A. Hypothesis 1B was supported by the finding that cross-examinations contained a greater proportion of leading questions (d = 1.63) and clarifications (d = 0.60) than direct examinations. All other effect sizes were small (d < 0.33).
When split by lawyer type, 3,459 (56.18%) of the total utterances were spoken by prosecutors and the remaining 2,699 (43.82%) by defence lawyers; the average number of utterances spoken by prosecutors was 72.40 (SD = 43.82, 95% CI = 59.54, 85.27) and by defence lawyers 62.60 (SD = 52.11, 95% CI = 46.75, 78.43); the effect of lawyer type on the total number of utterances spoken was not practically significant, d = 0.15. The effect sizes for the differences in the proportion of utterance types between prosecutors and defence lawyers were all small (d < 0.31; see Table 2); consequently, Hypotheses 2A and 2B were not supported.
The distribution of purpose types can be found in Table 3. As shown in the first column, information gathering was the most frequent purpose type, followed by unknown, challenging the account, and administrative. Overall, 89.73% of the utterances were used to obtain information relevant to the case (SD = 10.62, 95% CI = 87.52, 91.94).
The effect sizes for the differences in the proportion of purpose types as a function of examination type and lawyer type were small or had no practical significance
Table 3. Mean percentage of purpose type as a function of examination type and lawyer type (N = 6,158).
Purpose Type Overall
Administrative 1.06 (1.89)
Information Gathering 87.92 (10.51)
Challenge 1.81 (3.56)
Unknown 9.21 (10.33)
Note: Standard deviations are contained within the round brackets.
(all ds < 0.21), with the exception of challenges to the witness’ account with regard to examination type; challenges were more frequent during cross-examinations, as compared to direct examinations (d = 0.72; see Table 3 for the distributions of purpose types).
Unplanned regression analyses
To examine the impact of the nuisance variables on the data, a step-wise regression was performed on each of the ten utterance types retained in the sample using eleven available predictor variables (a Bonferroni-corrected alpha of .005 was used; i.e. α = .05/10). There were no concerns regarding multicollinearity (all VIFs < |2.6|; rs < 0.67). Examination type emerged as the only significant predictor of the proportion of open-ended questions asked, F(1, 89) = 13.25, β = −.36, p = .001, R² = .13. Lawyers tended to ask open-ended questions more during direct examinations, thereby further supporting Hypothesis 1A. For probing questions, two predictors (examination type and year) explained 40% of the variance, F(2, 88) = 28.93, p = .001. Lawyers asked more probing questions during direct examinations (β = −.58, p = .001), rendering further support for Hypothesis 1A, and probing questions were less likely in more recent trials (β = −.27, p = .002). For closed yes/no questions, examination type was a significant predictor, F(1, 89) = 14.23, β = −.37, p = .001, R² = .14. Direct examinations contained more closed yes/no questions than did cross-examinations. Examination type was also a significant predictor of the proportion of leading questions asked, F(1, 89) = 56.83, β = .62, p = .001, R² = .39. Specifically, more leading questions were asked during cross-examinations, which strengthened support for Hypothesis 1B.
Examination type was a significant predictor of the proportion of clarification questions asked, F(1, 89) = 11.81, p = .001, β = .34, R² = .12. Lawyers were more likely to ask clarifying questions during cross-examinations. As for facilitators, year emerged as a significant predictor, F(1, 89) = 13.53, β = .36, p = .001, R² = .13. Lawyers tended to use facilitators more often in more recent cases. The step-wise regression analyses for the remaining utterance types (i.e. forced-choice, multiple, re-asked, opinion) did not produce any significant predictors (all ps > .005). Since lawyer type did not emerge as a significant predictor for any utterance type in the regression analyses, these results further confirmed the lack of support for Hypotheses 2A and 2B.
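The Bonferroni adjustment applied to the ten utterance-type regressions above is simply the nominal alpha divided by the number of tests; a one-line sketch (the list of p-values is illustrative only):

```python
def bonferroni(alpha, n_tests):
    """Per-test alpha that controls the familywise error rate."""
    return alpha / n_tests

# Ten utterance-type regressions, as above: .05 / 10 = .005
threshold = bonferroni(0.05, 10)
# Hypothetical p-values screened against the corrected threshold
significant = [p for p in (0.001, 0.002, 0.014, 0.12) if p < threshold]
```

Under this screen, a predictor with p = .014 (significant at the conventional .05 level) would not survive the corrected .005 threshold.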
Similarly, regression analyses were conducted for the purpose types using the same independent variables listed above. In a series of step-wise regressions, all of the available independent variables were entered as predictors while each purpose type was individually assigned as the outcome variable. A step-wise regression using the challenging the account/details purpose type as the dependent variable revealed that three predictors (examination type, victim as witness, and defendant as witness) explained 31% of the variance for utterances that were challenging in nature, F(3, 87) = 13.06, p = .015. Specifically, cross-examinations (β = .41, p < .001, R² = .18), victim as a witness (β = .31, p = .001, R² = .09), and defendant as a witness (β = .22, p = .015, R² = .05) significantly predicted lawyers’ tendency to speak challenging utterances toward witnesses on the stand. A step-wise regression using the administrative purpose type as the dependent variable revealed that crime type explained 7% of the variance for utterances that were administrative in nature, F(1, 89) = 6.34, p = .014. A step-wise regression using the information gathering purpose type as the dependent variable revealed that the year the case went to court explained 10% of the variance for utterances that were information gathering in nature, F(1, 89) = 9.32, p = .003. We did not conduct a regression analysis for the unknown purpose type because it was a miscellaneous category designed to capture utterance types where the speaker’s intention was unclear (i.e. facilitator, incomplete).
The average length of lawyer utterances is shown in Figure 1. As can be seen, opinions contained the most words on average, followed by multiple and forced-choice questions. Facilitators contained the fewest words, with clarification questions containing the next fewest.

Figure 1. Average number of words spoken per utterance type overall by questioner and respondent, and associated 95% confidence interval, per examination (N = 91).

When split by examination type, open-ended questions (d = 1.08) and opinions (d = 1.79) contained more words during direct examinations as compared to cross-examinations. Re-asked questions (d = 0.30) contained more words during cross-examinations (vs. direct examinations). No other meaningful differences were found in the length of the remaining utterance types (all other ds < 0.17). When split by lawyer type, re-asked questions (d = 0.69) were found to contain more words when asked by a defence lawyer (vs. prosecutor). No other meaningful differences in length were found for the remaining utterance types (all other ds < 0.11; see Table 4 for a further breakdown).
The average length of response to each utterance type is also shown in Figure 1. Open-ended questions resulted in the longest responses, followed by multiple and probing questions, thereby supporting Hypothesis 3A. The shortest replies were provided in response to leading and clarification questions, followed by opinions, rendering support for the prediction regarding leading questions, but not closed yes/no questions (i.e. Hypothesis 3B). As can be seen in Table 5, opinions, re-asked, and clarification questions were all found to have similar response lengths, thereby supporting Hypothesis 3C. When split by examination type, responses to open-ended (d = 0.52) and closed yes/no questions (d = 0.23), and opinions (d = 0.62), contained more words when these replies occurred during direct examinations (vs. cross-examinations). No other meaningful differences in response length emerged for the remaining utterance types with respect to examination type (all other ds < 0.20). When split by lawyer type, responses to re-asked questions (d = 0.25) and opinions (d = 0.49) contained more words when elicited by a defence lawyer (vs. prosecutor). No other meaningful differences were found with respect to lawyer type (all other ds < 0.17; see Table 5 for a further breakdown).
Table 4. Mean number of words spoken by questioner per utterance type as a function of examination type and lawyer type.
Utterance Type Overall
Open-ended 13.48 (7.93)
Probing 11.35 (8.64)
Closed yes/no 16.92 (14.29)
Forced-choice 18.45 (13.78)
Multiple 26.67 (18.37)
Leading 17.25 (14.23)
Re-asked 11.92 (9.60)
Clarification 11.13 (8.82)
Opinion 60.53 (58.49)
Facilitator 1.24 (0.67)
Note: Standard deviations are contained within the round brackets.
The goal of the current study was to quantify the utterance types used by Canadian lawyers during witness examinations, as a proxy for the extent to which evidence is likely being contaminated during courtroom proceedings. Our results revealed that the majority of utterances ran counter to recommended practices for gathering complete and accurate information, confirming findings reported in previous courtroom questioning studies (e.g. Kebbell et al., 2003). When analyzing examination type, we found that more productive questions (e.g. open-ended) were asked during direct examinations and more unproductive questions (e.g. leading) were asked during cross-examinations. We also found that the type of lawyer (i.e. prosecutor vs. defence lawyer) did not have an effect on the types of utterances spoken; this is contrary to the findings of previous studies (e.g. Zajac & Cannan, 2009). In terms of the assumed purpose of each utterance, the majority of utterances were oriented toward the gathering of information. When looking at the amount of information provided in response to each utterance type, open-ended questions were followed by the provision of much more information than all other utterance types; leading questions produced the least information. The findings pertaining to the relationship between utterance type and response length match results reported previously (e.g. Kebbell et al., 2004; Snook et al., 2012). Broadly, the findings from the current study raise concerns about the way Canadian lawyers ask questions when seemingly trying to establish the truth of the matter before the court.
Our analyses show that lawyers tend to ask questions that inhibit the ability of witnesses to speak and provide information freely in court. As reported above, the most common utterance type overall was the closed yes/no question, followed by probing and leading questions. Decades of research have confirmed that asking open-ended
Table 5. Mean number of words spoken by witness in response to utterance type as a function of examination type and lawyer type.
Utterance Type Overall
Open-ended 71.96 (96.76)
Probing 20.51 (31.32)
Closed yes/no 16.92 (14.29)
Forced-choice 12.98 (17.35)
Multiple 22.36 (34.80)
Leading 17.25 (14.23)
Re-asked 12.20 (13.84)
Clarification 8.08 (15.33)
Opinion 11.63 (17.26)
Facilitator 1.24 (0.67)
Note: Standard deviations are contained within the round brackets.
questions, along with follow-up probing questions, is the best way of helping witnesses directly, and uninhibitedly, provide information of their own volition (Fisher & Geiselman, 1992; Milne & Bull, 2003; Powell, Fisher, & Wright, 2005). The use of open-ended questions also protects against the questioner tainting the reported information, either directly or indirectly. In line with Hypothesis 1C, we found that open-ended questions were used rarely (less than one percent of all utterances) by lawyers in a courtroom setting. Moreover, we found that probing questions comprised approximately one-fifth of all utterances, which is similar to what has been reported by other researchers for probing-style questions in both the police and courtroom arenas (Kebbell et al., 2003, 2004; Snook et al., 2012; Snook & Keating, 2011). Collectively, these data suggest that lawyers do not adhere to the guidelines that they themselves expect of expert questioners (i.e. police interviewers) in their quest for obtaining high-quality evidence. These findings are somewhat intriguing, and rather ironic, considering that police questioning practices have come under scrutiny by lawyers as a factor in some miscarriages of justice (see Lamer, 2006). The findings in the current study suggest that the very individuals who sometimes make a case against the police for using unproductive questioning practices (i.e. lawyers) are apparently no better at performing the same task. Moreover, these findings raise concerns about the quality and quantity of information being afforded to triers of fact who are tasked with making consequential decisions.
Comparing across examination type, we found our results to be comparable to the findings reported by Kebbell and colleagues (2003, 2004). In our study, more closed yes/no, probing, and open-ended questions were asked during direct examinations than cross-examinations, while cross-examinations contained more leading questions, clarification questions, and opinions than direct examinations. In other words, the data provide support for Hypotheses 1A and 1B. Given that direct examinations are supposed to be relatively open in nature, it is interesting to observe that closed yes/no questions were employed more frequently by direct examiners. We suspect that this was done purposefully to exhibit control over the lawyer’s own witness, thereby regulating the quality of information that is elicited from the witness before the court, information that can then be tested during the cross-examination (see Morley, 2009, for advice given to lawyers regarding questioning practices).
The two largest effects were for probing and leading utterance types, and in opposite directions to one another. The finding that more probing utterances were spoken in direct (vs. cross-) examinations is not entirely surprising, since these questions help achieve the goal of the direct examination (i.e. obtaining information in a relatively open manner). Likewise, the greater use of leading questions during cross- (vs. direct) examinations is likely related to the strategy of trying to get the witness to adopt an alternative explanation for the case at hand or a revised version of their account. Although leading questions are thought by interviewing experts to be the bane of all question types, proponents of courtroom questioning recognize the leading question as a main tenet and strength of a cross-examination (Evans, 1995; Glissan, 1991; Morley, 2009; Stone, 1995; Wellman, 1997).
Open-ended questions were asked more frequently during direct examinations than cross-examinations. Although this is an encouraging finding, it is important to keep in mind that the proportion of open-ended questions asked during direct examinations was minuscule. Over 6,000 utterances were coded in this study, and only 29 were identified as open-ended questions. Specifically, 24 of these were asked during a direct examination while only five were asked during a cross-examination. A lack of use of this productive utterance type during the truth-seeking process would likely be viewed by some as ineffectual, and suggests that courtroom questioners are deviating from what is recognized as the gold standard of questioning practices. Moreover, considering the positive outcomes associated with open-ended questions (e.g. removing questioner bias, producing uncontaminated and longer responses), a failure to ask questions in this way deprives triers of fact of the complete and accurate information needed to help them render a verdict.
Contrary to Hypotheses 2A and 2B, lawyer type did not predict any of the utterance types. These findings run counter to research that has reported differences between prosecutors and defence lawyers in the questions they ask witnesses (Zajac et al., 2003; Zajac & Cannan, 2009). However, the lack of an effect in this study may be due to differences in the operationalization and classification of question types between studies. Nevertheless, our finding is logical when one considers that all lawyers in Canada receive comparable guidance on courtroom examination strategies. It might also be the case that what matters is not the type of lawyer or the side they are on, but rather the goal they are trying to achieve; that is, whether they are trying to present a story (direct examination) or unravel a story (cross-examination).
Our analyses of utterance and response lengths illustrate the effect that utterance type has on the amount of information obtained, and are similar to the findings of Snook and colleagues (2012) but different from those reported by Kebbell and colleagues (2004). This is likely due to the difference in operationalization and classification systems between the current study and Kebbell et al. (2004; see notes #1 and #2). Our data show that opinions contained the most words when spoken by lawyers, but the responses to those opinions were relatively short. In contrast, but in line with Hypothesis 3A, open-ended questions contained relatively few words and elicited the longest responses. In fact, open-ended questions elicited responses that were, on average, nearly 50 words longer than responses given to multiple questions (i.e. the second-highest response length). Considering that responding to multiple questions would logically require a longer response (i.e. more words) than would a single question, this finding demonstrates the effect of asking open-ended questions on the amount of information obtained. Moreover, in line with Hypothesis 3B, leading questions elicited the shortest responses; a finding that illustrates yet another reason why such questions are ineffective for gathering information. Not only do leading questions have the potential to contaminate the witness’ memory and introduce misinformation (e.g. Loftus, 2005), but they also do not elicit much information. Hypothesis 3C was supported, as opinions, re-asked, and clarification questions all yielded similar, and minimal, response lengths. Interestingly, facilitators were found to elicit slightly longer responses than opinions, re-asked, and clarification questions. One explanation may be that facilitators encourage the respondent to keep talking.
We also found that the greatest proportion of questions was asked for the purpose of gathering information; nearly 90% of the utterances were dedicated to this purpose. At the very least, such a finding suggests that courtroom questioners are focusing their pursuits toward gathering information for their case. Having said that, this finding needs to be considered in light of the types of questions being asked to achieve this purpose. Approximately 20% of the utterances deemed to be for the purpose of gathering information were of the productive utterance types (i.e. open-ended, probing).
Police organizations have been under scrutiny for decades regarding the way they gather information from human sources, and more specifically how their questioning practices impact the quality of evidence being presented in court. Historically, police interviewers arguably played the role of both prosecutor and defence lawyer, attempting both to gather a story and to break down any resistance from the accused person. In response to those concerns, some police organizations around the world have developed, implemented, and maintained ethical, science-based interviewing protocols and practices (e.g. PEACE; see Milne & Bull, 2003). More contemporary police organizations have established protocols that are focused on using high-quality questions (i.e. open-ended, followed by probing questions) to gather as much complete and accurate information as possible from all interviewees (i.e. victims, suspects). Once all information has been gathered, interviewers then proceed to dissect the quality of the reported information and seek to clarify discrepancies in the information provided. By ensuring that high-quality information is gathered and discordances are resolved, it is possible for officers to make safer decisions (e.g. laying charges). We have yet to learn of a satisfactory reason why Canadian lawyers could not follow similar protocols when presenting and examining witness evidence. If such a change towards the use of science-based interviewing protocols is suitable for police officers, it seems reasonable to expect that such an approach would also be suitable for lawyers.
Proponents who endorse the standard questioning practices of lawyers will no doubt object to our arguments. Critics may argue that lawyers are allowed to ask these unproductive types of questions because it is their job to represent their client well and by any means necessary. Furthermore, the current rules of engagement in Canadian courtrooms allow for such questioning practices to occur (e.g. leading questions during the cross-examination). Yet, if this same logic were applied to police officers, who arguably have the job of representing the community, then a new question about the justice system needs to be addressed: why is it acceptable to call foul on the police for asking unproductive questions, but viewed as necessary practice for lawyers?
Limitations and Future Directions
The findings presented in the current study need to be considered in light of some limitations. One limitation pertains to the generalizability of lawyer questioning practices, because all lawyers in the current sample were practicing in the province of Newfoundland and Labrador, Canada. It would have been preferable to use a larger sample that included a representation of lawyers from across Canada; logistical constraints related to obtaining and coding such a sample require researchers to study the issue on a more local basis (i.e. per-province analyses that culminate in a meta-analytic study).
Related to the localized nature of the sample is the courtroom environment itself. All of the criminal cases in this sample were tried by judge alone. Perhaps the presence of a jury may render a slight change in lawyers’ questioning strategies and practices. In a jury trial, not only do lawyers need to ask certain questions in order to elicit information from the witnesses, but lawyers also need to guide the jury members, who presumably have limited legal expertise, along strategically to accept their version of the case facts (e.g. story model; Pennington & Hastie, 1986, 1988; but see Simon, 2004). The effect of jury presence on courtroom questioning practices remains to be tested and should be incorporated in future replications.
Another concern pertains to the dependency issue in our sample, whereby some lawyers conducted more than one examination. We attempted to address this dependency issue by conducting identical analyses with a subsample that contained only one examination per lawyer. We found minimal differences in the descriptive data and effect sizes between the samples (i.e. N = 91 vs. N = 25; see note #5). Given that it was not possible to conduct inferential statistics to compare the two samples, it is imperative that replications are conducted using larger and completely independent samples.
The current study provided new knowledge with respect to questioning practices in the courtroom, particularly lawyers’ questioning of witnesses on the stand. Future studies may also want to explore the questioning practices of judges. In a criminal bench trial, it is not uncommon for a judge to ask questions of the witness during the trial. To our knowledge, no data have been published directly on the questioning practices of judges toward witnesses on the stand. Research examining how judges question witnesses would also complete the examination of all stages of questioning throughout the justice-seeking process (e.g. police questioning, lawyer questioning, judge questioning). At the very least, an analysis of the courtroom questioning practices of judges toward witnesses is called for in both adversarial and inquisitorial models of justice.
Our results lead to the provisional conclusion that the way Canadian lawyers ask questions tends to run counter to recommended practice for gathering complete and accurate information. The data raise interesting questions about the extent to which this part of the adversarial system may need to be reformed and amended in order to protect the quality of evidence being used by triers of fact when making consequential decisions. The best way to ensure that the tenets of Lady Justice (e.g. impartiality, fairness, and objectivity) are upheld is to use evidence-based practices from the outset of an investigation until the verdict is rendered.
1. Zajac and Cannan’s (2009) definition of an ‘open’ question is comparable to what the current paper defines as a probing question. Additionally, Zajac and Cannan’s definition of a ‘closed’ question is comparable to a combination of the closed yes/no and forced-choice questions as outlined in the current paper.
2. It is important to note that in both Kebbell et al. (2003) and Zajac and Cannan (2009), question types were not mutually exclusive; a single question might have been coded as both a closed and a leading question.
3. Although the number of examinations conducted by a prosecutor (i.e. 47) and a defence lawyer (i.e. 44) match the number of direct examinations (i.e. 47) and cross-examinations (i.e. 44) that occurred, these numbers are not one and the same or linked to each other. Rather, it was mere coincidence that the numbers matched.
4. The total number of lawyer utterances (n = 6,158) and witness response utterances (n = 5,911) are not equal because, in some cases, other questioners interrupted before the witness could provide a response utterance (e.g. the opposing lawyer objected to the question before the witness could respond).
5. A total of 17 of the 25 lawyers in our sample conducted more than one examination. As such,
there were concerns about independence – that is, having multiple examinations conducted
by the same lawyer(s) may have skewed the results. To address this dependency issue, a
single examination was selected randomly for each lawyer who conducted more than one
examination, leaving a subsample of 25 examination transcripts upon which the same
analyses as for the main sample were conducted. Across all utterance types, there was, on
average, a 1.04% (SD = 0.74) difference in the proportion of utterance types asked per
examination between the means of the two samples (i.e. N = 91 vs. N = 25). Specifically, the
mean proportions for the subsample and the absolute differences in proportion between the
samples (Mdiff) are as follows: open-ended (M = 0.58, SD = 1.32, 95% CI = 0.03, 1.12,
Mdiff = 0.14), probing (M = 23.65, SD = 16.45, 95% CI = 16.86, 30.44), closed yes/no
(M = 27.73, SD = 11.95, 95% CI = 22.80, 32.66, Mdiff = 0.70), forced-choice (M = 1.64,
SD = 2.68, 95% CI = 0.53, 2.74, Mdiff = 0.50), multiple (M = 5.70, SD = 3.82, 95% CI = 4.12, …,
Mdiff = 0.39), leading (M = 17.75, SD = 14.95, 95% CI = 11.97, 23.53, Mdiff = 0.39), re-asked
(M = 0.19, SD = 0.59, 95% CI = 0.00, 0.43, Mdiff = 0.22), clarification (M = 11.80, SD = 10.10,
95% CI = 7.63, 15.97, Mdiff = 1.42), opinion (M = 0.38, SD = 1.36, 95% CI = 0.00, 0.94), and
facilitator (M = 10.60, SD = 10.91, 95% CI = 6.10, 15.10).
The difference in the effect sizes for the difference between direct and cross-examinations
for each utterance type was negligible. The average difference in d-values between the two
samples was 0.20 (SD = 0.13). The effect sizes for each utterance type as a function of
examination type in the subsample, and the differences in the size of the d-values between
the two samples (ddiff), are as follows: open-ended (d = 0.62, ddiff = 0.03), probing (d = 1.88,
ddiff = 0.47), closed yes/no (d = 0.35, ddiff = 0.36), forced-choice (d = 0.47, ddiff = 0.27),
multiple (d = 0.12), leading (d = 1.50, ddiff = 0.13), re-asked (d = 0.05, ddiff = 0.27),
clarification (d = 0.80, ddiff = 0.20), opinion (d = 0.41, ddiff = 0.01), and facilitator (d = 0.09).
The difference in the effect sizes for the difference between prosecutors and defence
lawyers for each utterance type was negligible. The average difference in d-values between
the original sample and the subsample was 0.13 (SD = 0.09). The effect sizes for each utterance
type as a function of lawyer type in the subsample, and the differences in the size of the
d-values between the two samples (ddiff), are as follows: open-ended (d = 0.29, ddiff = 0.18),
probing (d = 0.11, ddiff = 0.02), closed yes/no (d = 0.01, ddiff = 0.29), forced-choice (d = 0.22),
multiple (d = 0.35, ddiff = 0.29), leading (d = 0.18, ddiff = 0.10), re-asked (d = 0.00,
ddiff = 0.03), clarification (d = 0.11, ddiff = 0.10), opinion (d = 0.13, ddiff = 0.07), and
facilitator (d = 0.26).
Likewise, the trends for purpose type in the subsample largely remained the same; it
was found that 88.06% of the utterances were used to obtain information relevant to the case
(SD = 11.57, 95% CI = 83.28, 92.83). Across all purpose types, there was, on average, a 0.84%
(SD = 0.43) difference in the proportion of purpose types classified between the means of
the two samples. Specifically, the means for the subsample and the absolute differences in
proportion between the samples (Mdiff) are as follows: administrative (M = 1.34, SD = 1.94,
95% CI = 0.54, …, Mdiff = 0.28), information gathering (M = 87.32, SD = 11.70, 95% CI = 82.49,
92.15, Mdiff = 0.60), challenge (M = 0.74, SD = 1.63, 95% CI = 0.06, 1.41, Mdiff = 1.07), and
purpose unknown (M = 10.60, SD = 10.91, 95% CI = 6.10, 15.10).
The difference in the effect sizes for the difference between direct and cross-examinations
for each purpose type was negligible; the average difference in d-values between the two
samples was 0.04 (SD = 0.02). The effect sizes for each purpose type as a function of
examination type in the subsample, and the differences in the size of the d-values between
the two samples (ddiff), are as follows: administrative (d = 0.10, ddiff = 0.04), information
gathering (d = 0.16, ddiff = 0.04), challenge (d = 0.73, ddiff = 0.01), and purpose unknown
(d = 0.09).
Negligible differences were found in the proportions of purpose types as a function of lawyer
type, with the exception of challenges. That is, prosecutors in the subsample asked more chal-
lenging questions than defence lawyers in the subsample; however, it is important to note
that the difference in the quantity of questions challenging the witness was minimal
between the original sample (prosecutors – 1.70%; defence lawyers – 1.92%) and the sub-
sample (prosecutors – 1.12%; defence lawyers – 0.32%). The average difference in d-values
between the larger and smaller samples was 0.11 (SD = 0.15). The effect sizes for each purpose
type as a function of lawyer type in the subsample, and the differences in the size of the
d-values between the two samples (ddiff), are as follows: administrative (d = 0.10, ddiff = 0.00),
information gathering (d = 0.20, ddiff = 0.01), challenge (d = 0.40, ddiff = 0.36), and
purpose unknown (d = 0.26).
Given the similarity in findings between the main variables of interest (i.e. examination
and lawyer types) for the N = 91 and N = 25 samples, the secondary analyses conducted to
explore the impact of any dependency issues, as described here, were not included in the
results section in order to avoid redundancy and confusion. The conclusions drawn based
on the subsample data remained the same.
Acknowledgements
Thank you to the clerks at the Supreme Court of Newfoundland and Labrador (Trial Division) court-
house in St. John’s, NL, Canada, for providing the researchers with the dataset. Portions of this
research were completed as part of the first author’s master of science thesis at Memorial University
of Newfoundland, St. John’s, NL, Canada.
Disclosure statement
No potential conflict of interest was reported by the authors.
ORCID
Christopher J. Lively http://orcid.org/0000-0002-8702-7542
References
Andrews, S. J., & Lamb, M. E. (2016). How do lawyers examine and cross-examine children in
Scotland? Applied Cognitive Psychology, 30, 953–971. doi:10.1002/acp.3286
Andrews, S. J., & Lamb, M. E. (2019). Lawyers’ question content and children’s responses in Scottish
criminal courts. Psychology, Crime and Law. Advance online publication. doi:10.1080/1068316X.
Andrews, S. J., Lamb, M. E., & Lyon, T. D. (2015). Question types, responsiveness and self-contradic-
tions when prosecutors and defense attorneys question alleged victims of child sexual abuse.
Applied Cognitive Psychology,29, 253–261. doi:10.1002/acp.3103
Brock, P., Fisher, R. P., & Cutler, B. C. (1999). Examining the cognitive interview in a double test
paradigm. Psychology, Crime and Law, 5, 29–45. doi:10.1080/10683169908414992
Cliﬀord, B. R., & George, R. (1996). A ﬁeld evaluation of training in three methods of witness/victim
investigative interviewing. Psychology, Crime and Law,2, 231–248. doi:10.1080/
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological
Measurement, 20, 37–46.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Davies, G. M., Westcott, H. L., & Horan, N. (2000). The impact of questioning styles on the content of
investigative interviews with suspected child abuse victims. Psychology, Crime and Law, 6, 81–97.
Evans, K. (1995). Advocacy in court. A beginner’s guide. London, UK: Blackstone Press Limited.
Fisher, R. P. (1995). Interviewing victims and witnesses of crime. Psychology, Public Policy, and Law,1,
Fisher, R. P., & Geiselman, R. E. (1992). Memory enhancing techniques for investigative interviewing: The
cognitive interview. Springﬁeld, IL: Thomas.
Fisher, R. P., Geiselman, R. E., & Raymond, D. S. (1987). Critical analysis of police interview techniques.
Journal of Police Science and Administration,15, 177–185.
Gibbons, J. (2003). Forensic linguistics: An introduction to language in the justice system. Oxford, UK:
Blackwell Publishing Ltd.
Gilbert, J. A. E., & Fisher, R. P. (2006). The eﬀects of varied retrieval cues on reminiscence in eyewitness
memory. Applied Cognitive Psychology,20, 723–739. doi:10.1002/acp.1232
Glissan, J. L. (1991). Cross-examination practice and procedure: An Australian perspective (2nd ed.).
Sydney, Australia: Butterworths.
Griﬃths, A., & Milne, R. (2006). Will it all end in tiers? Police interviews with suspects in Britain. In T. A.
Williamson (Ed.), Investigative interviewing: Rights, research, regulation (pp. 167–189). Devon: Willan.
Hanna, K., Davies, E., Crothers, C., & Henderson, E. (2012). Questioning child witnesses in New
Zealand’s criminal justice system: Is cross-examination fair? Psychiatry, Psychology and Law,19,
Hanna, K., & Henderson, E. (2018). ‘[expletive], that was confusing, wasn’t it?’ Defence lawyers’ and
intermediaries’ assessment of the language used to question a child witness. International
Journal of Evidence and Proof, 22, 411–427. doi:10.1177/1365712718796527
Henkel, L. A. (2014). Do older adults change their eyewitness reports when re-questioned? The
Journals of Gerontology Series B: Psychological Sciences and Social Sciences,69, 356–365. doi:10.
Hill v. Hamilton-Wentworth Regional Police Services Board. (2007). 3 SCR 129.
Kebbell, M. R., Deprez, S., & Wagstaff, G. F. (2003). The direct and cross-examination of complainants
and defendants in rape trials: A quantitative analysis of question type. Psychology, Crime and Law,
Kebbell, M. R., Hatton, C., & Johnson, S. D. (2004). Witnesses with intellectual disabilities in court: What
questions are asked and what inﬂuence do they have? Legal and Criminological Psychology,9,23–
Kebbell, M. R., & Johnson, S. D. (2000). Lawyers’questioning: The eﬀect of confusing questions on
witness conﬁdence and accuracy. Law and Human Behavior,24, 629–641. doi:10.1023/
Klemfuss, J. Z., Quas, J. A., & Lyon, T. D. (2014). Attorneys’ questions and children’s productivity in
child sexual abuse criminal trials. Applied Cognitive Psychology, 28, 780–788. doi:10.1002/acp.3048
Lamb, M. E., Hershkowitz, I., Orbach, Y., & Esplin, P. W. (2008). Tell me what happened: Structured inves-
tigative interviews of child victims and witnesses. Hoboken, NJ: John Wiley & Sons Inc.
Lamer, A. (2006). The Lamer commission of inquiry pertaining to the cases of: Ronald Dalton, Gregory
Parsons and Randy Druken. St. John’s, NL: Oﬃce of the Queen’s Printer.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data.
Biometrics, 33, 159–174. doi:10.2307/2529310
Loftus, E. F. (1979). Eyewitness testimony. Cambridge, MA: Harvard University Press.
Loftus, E. F. (2005). Planting misinformation in the human mind: A 30-year investigation of the mal-
leability of memory. Learning and Memory,12, 361–366. doi:10.1101/lm.94705
Milne, R., & Bull, R. (2003). Investigative interviewing: Psychology and practice. Chichester: Wiley.
Morley, I. (2009). The devil’s advocate: A short polemic on how to be seriously good in court. London:
Sweet and Maxwell.
Myklebust, T., & Alison, L. J. (2000). The current state of police interviews with children in Norway:
How discrepant are they from models based on current issues in memory and communication?
Psychology, Crime and Law,6, 331–351. doi:10.1080/10683160008409810
Neath, I., & Surprenant, A. M. (2003). Human memory (2nd ed.). Toronto: Thomson Wadsworth.
Oxburgh, G., Myklebust, T., & Grant, T. (2010). The question of question types in police interviews: A
review of the literature from a psychological and linguistic perspective. International Journal of
Speech, Language, and the Law: Forensic Linguistics, 17, 45–66. doi:10.1558/ijsll.v17i1.45
Pennington, N., & Hastie, R. (1986). Evidence evaluation in complex decision making. Journal of
Personality and Social Psychology, 51, 242–258. doi:10.1037/0022-3514.51.2.242
Pennington, N., & Hastie, R. (1988). Explanation-based decision making: Eﬀects of memory structure
on judgement. Journal of Experimental Psychology: Learning, Memory, and Cognition,14, 521–533.
Perry, N., McAuliﬀ, B., Tam, P., Claycomb, L., Dostal, C., & Flanagan, C. (1995). When lawyers question
children: Is justice served? Law and Human Behavior,19, 609–629. doi:10.1007/BF01499377
Poole, D. A., & White, L. T. (1991). Effects of question repetition on the eyewitness testimony of
children and adults. Developmental Psychology, 27, 975–986. doi:10.1037/0012-1649.27.6.975
Powell, M. B., Fisher, R. P., & Wright, R. (2005). Investigating interviewing. In N. Brewer, & K. Williams
(Eds.), Psychology and law: An empirical perspective (pp. 11–42). New York, NY: Guilford.
R. v. Forbes.(2006). TCC 377 (CanLII).
R. v. Klaus.(2017). ABQB 537 (CanLII).
R. v. Morgan.(2013). ONSC 6462,  O.J. No. 5827.
R. v. Sterling.(1995). CanLII 4037 (SK CA).
Scrivner, E., & Safer, M. A. (1988). Eyewitnesses show hypermnesia for details about a violent event.
Journal of Applied Psychology,73, 371–377. doi:10.1037/0021-9010.73.3.371
Simon, D. (2004). A third view of the black box: Cognitive coherence in legal decision making.
University of Chicago Law Review,71, 511–586.
Simons, D. J., & Chabris, C. F. (2011). What people believe about how memory works: A representative
survey of the U.S. population. PLoS One, 6, e22757. Retrieved from http://journals.plos.org/
Snook, B., & Keating, K. (2011). A ﬁeld study of adult witness interviewing practices in a Canadian
police organization. Legal and Criminological Psychology,16, 160–172. doi:10.1348/
Snook, B., Luther, K., Quinlan, H., & Milne, R. (2012). Let ‘em talk! A ﬁeld study of police questioning
practices of suspects and accused persons. Criminal Justice and Behavior,39, 1328–1339. doi:10.
St-Yves, M. (2014). Investigative interviewing: The essentials. Toronto: Thomson Reuters Canada
Stone, M. (1995). Cross-examination in criminal trials (2nd ed.). London: Butterworths.
Turtle, J. W., & Yuille, J. C. (1994). Lost but not forgotten details: Repeated eyewitness recall leads to
reminiscence but not hypermnesia. Journal of Applied Psychology,79, 260–271. doi:10.1037/0021-
Wellman, F. L. (1997). The art of cross examination. New York, NY: Touchstone.
Westera, N., Zydervelt, S., Kaladelfos, A., & Zajac, R. (2017). Sexual assault complainants on the stand:
A historical comparison of courtroom questioning. Psychology, Crime and Law,23,15–31. doi:10.
Wright, A. M., & Alison, L. J. (2004). Questioning sequences in Canadian police interviews:
Constructing and conﬁrming the course of events? Psychology, Crime and Law,10, 137–154.
Zajac, R., & Cannan, P. (2009). Cross-examination of sexual assault complainants: A developmental
comparison. Psychiatry, Psychology and Law,16, S36–S54. doi:10.1080/13218710802620448
Zajac, R., Gross, J., & Hayne, H. (2003). Asked and answered: Questioning children in the courtroom.
Psychiatry, Psychology and Law,10,199–209. doi:10.1375/pplt.2003.10.1.199
Zaragoza, M. S., Belli, R. S., & Payment, K. E. (2006). Misinformation eﬀects and the suggestibility of
eyewitness memory. In M. Garry, & H. Hayne (Eds.), Do justice and let the sky fall: Elizabeth
F. Loftus and her contributions to science, law, and academic freedom (pp. 35–63). Hillsdale, NJ:
Lawrence Erlbaum Associates.