Assessment
18(4) 428–441
© The Author(s) 2011
Reprints and permission:
sagepub.com/journalsPermissions.nav
DOI: 10.1177/1073191111400280
http://asm.sagepub.com
Psychometric Properties of the Miranda
Rights Comprehension Instruments
With a Juvenile Justice Sample
Naomi E. Sevin Goldstein1, Christina L. Riggs Romaine1, Heather Zelle1,2,
Rachel Kalbeitzer3, Constance Mesiarik4, and Melinda Wolbransky1,2
Abstract
This article describes the psychometric properties of the Miranda Rights Comprehension Instruments, the revised version of
Grisso’s Miranda instruments. The original instruments demonstrated good reliability and validity in a normative sample.
The revised instruments updated the content of the original instruments and were administered to a sample of 183 youth
in pre- and postadjudication facilities. Analyses were conducted to establish the psychometric properties of the revised
instruments and included similar analyses to those conducted by Grisso, as well as additional calculations (e.g., standard
errors of measurement, intraclass correlation coefficients, Kappa coefficients). Results revealed sound psychometric
properties, similar to those observed for the original instruments.
Keywords
Miranda rights, juvenile justice, reliability, validity, adolescent, psychometrics, forensic assessment
To inform public policy, in the 1970s, Grisso designed four
instruments to evaluate juveniles’ comprehension of Miranda
rights (Grisso, 1998). Since that time, psychologists have
widely adopted these instruments as clinical tools (Archer,
Buffington-Vollum, Stredny, & Handel, 2006; Lally, 2003;
Ryba, Brodsky, & Shlosberg, 2007). This article describes
the recent revisions to Grisso’s original instruments, presents
the psychometric properties of the updated instruments with
a juvenile justice sample, and compares them with the psy-
chometric properties of the original instruments.
In 1966, the U.S. Supreme Court established procedural
safeguards to protect suspects in custodial interrogations
from making self-incriminating statements unknowingly or
as the result of police coercion (Miranda v. Arizona, 1966).
The Miranda Court, recognizing the intimidating nature of
interrogations and that those interrogations can undermine
the right against self-incrimination, established that a sus-
pect’s statements are only admissible in court if the suspect
waived his rights knowingly, intelligently, and voluntarily.
In other words, the suspect must understand the vocabulary
in the warning and the basic meaning of the rights, appreci-
ate the consequences of waiving those rights, and provide
the waiver free from police coercion or intimidation (Grisso,
1981). In the decisions of Kent v. United States (1966) and In
re Gault (1967), the due process rights of criminal proceedings
were extended to juveniles, thereby extending Miranda pro-
tections to youthful suspects.
Since these decisions, three decades of research have
raised questions about whether juveniles are able to benefit
from the protections that the Miranda warnings were inten-
ded to provide. Approximately 80% of all suspects waive
their rights during police interrogations (Cassell & Hayman,
1996; Leo, 1996), with juveniles waiving the rights to coun-
sel even more frequently (e.g., Abramovitch, Peterson-Badali,
& Rohan, 1995; Ferguson & Douglas, 1970; Grisso &
Pomicter, 1977; Viljoen, Klaver, & Roesch, 2005). In addi-
tion, juveniles frequently fail to understand the nature of
their rights and to appreciate the consequences of waiving
those rights (Abramovitch et al., 1995; Colwell et al., 2005;
N. E. S. Goldstein, Condie, Kalbeitzer, Osman, & Greier,
2003; Grisso, 1981). As a result, many of the suppression
hearings that challenge the admissibility of incriminating
statements under Miranda involve juvenile defendants.
1Drexel University, Philadelphia, PA, USA
2Villanova Law School
3Essex County Juvenile Court, Lynn, MA, USA
4Mission Kids, Child Advocacy Center of Montgomery County,
Blue Bell, PA, USA
Corresponding Author:
Naomi E. Sevin Goldstein, Department of Psychology, Drexel University,
MS 626, 245 N. 15th Street, Philadelphia, PA 19102-1192, USA
Email: neg23@drexel.edu
Grisso’s instruments were originally designed to assess
juveniles’ Miranda understanding and appreciation, although
his research also included adult participants (Grisso, 1981).
Consequently, to compare the psychometric properties of
the revised instruments with those of the original, we, too,
focused on juvenile justice youth, a population at great risk for
failing to benefit from the Miranda protections (Abramovitch
et al., 1995; N. E. S. Goldstein et al., 2003; Viljoen & Roesch,
2005). Nevertheless, recent research demonstrated that adult
defendants, as well as college students, hold a number of
misconceptions about the meaning of the different Miranda
rights (Rogers et al., 2010). These misconceptions point to
a need to also provide updated information about the use of
the revised instruments with adult samples, and such studies
are currently in development.
Original Instruments for Assessing
Understanding and Appreciation of
Miranda Rights
To evaluate juveniles’ comprehension of the Miranda
warnings, Grisso, with the assistance of a panel of psychol-
ogists and lawyers, developed a set of instruments (Grisso,
1998). The instruments included two understanding measures
(Comprehension of Miranda Rights [CMR] and Comprehen-
sion of Miranda Rights–Recognition [CMR-Recognition]),
a vocabulary measure (Comprehension of Miranda Vocabu-
lary [CMVocabulary]), and an appreciation measure (Function
of Rights in Interrogation [FRI]). Grisso based the scoring
of each of these instruments on structured criteria created by
the expert panel. Each instrument represented a separate facet
of understanding or appreciation; therefore, scores on the
instruments were not designed to be combined or averaged,
and there was no total score across instruments (Grisso,
1981, 1998). The norms were established by administering
the instruments to youth in St. Louis, Missouri, in the 1970s.
The psychometric properties of the Instruments for Assess-
ing Understanding and Appreciation of Miranda Rights were
established by Grisso (1998) and reevaluated by Colwell
et al. (2005). Grisso (1998) examined test–retest reliability
for only the CMR (Pearson r = .84). Interrater reliability
was examined for the CMR (r = .92-.96, across pairs of
scorers), CMVocabulary (r = .97-.98), and FRI (r = .94-.96;
Grisso, 1998). Colwell et al. (2005) calculated interrater
reliability in their study and found intraclass correlation
coefficients of .86, .69, and .71 for the CMR, CMVocabulary,
and FRI, respectively. They also calculated internal consis-
tency for the CMR (α = .44), CMVocabulary (α = .66), and
FRI (α = .41). Construct validity was demonstrated for the
instruments through correlations with factors theoretically
related to Miranda comprehension, including IQ (Grisso:
r = .47-.59; Colwell et al.: r = .43-.59) and age (Grisso: r =
.19-.34; Colwell et al.: r = .26-.44).
Revised Miranda
Instruments: Miranda Rights
Comprehension Instruments
Although Grisso designed his instruments as research tools
to inform public policy, they have since been adopted by
forensic psychologists to evaluate juvenile and adult defen-
dants’ capacities to understand and appreciate the Miranda
warnings in the context of waiver validity challenges. They
were published as clinical forensic tools (Grisso, 1998) and are
the recommended tool for forensic evaluations involving cha-
llenges to Miranda rights waivers (Oberlander & Goldstein,
2001), with widespread use in both juvenile (Oberlander &
Goldstein, 2001) and adult (Cooper & Zapf, 2007; Lally,
2003) cases.
Although Grisso’s instruments are well respected among
judges, attorneys, and psychologists (Archer et al., 2006;
Lally, 2003; Ryba et al., 2007), the instruments and associ-
ated findings may be outdated. The instruments were developed
and normed in the 1970s and reflect one county’s out-of-
date wording of the warnings and may not represent modern
youths’ comprehension of Miranda rights. Standards of test
evaluation and documentation advise that tests should be
“revised when new research data, significant changes in the
domain represented, or newly recommended conditions of
test use may lower the validity of test score interpretations”
(American Educational Research Association, American Psy-
chological Association, & National Council on Measurement
in Education [AERA, APA, & NCME], 1999, p. 48).
Since the instruments' creation, most jurisdictions have sim-
plified the language of the warnings (e.g., using “questioning”
in place of “interrogation,” “lawyer” instead of “attorney”)
and added a fifth warning (Oberlander & Goldstein, 2001)
informing suspects of the continued privilege to exercise
their rights (Oberlander, 1998). There remains wide vari-
ability between versions of the warnings, however, because
jurisdictions are free to draft the warnings as they see fit, as
long as the warnings incorporate the elements outlined by
the Court in Miranda v. Arizona (Rogers, Harrison, Shuman,
Sewell, & Hazelwood, 2007).
Case law also suggested that the instruments would be
improved through revision. For example, in People v. Cole,
24 A.D.3d 1021 (2005), a New York appellate court upheld
exclusion of expert testimony about the original instru-
ments because the warnings in the instruments differed
dramatically from the warnings delivered to the defendant.
Testimony about and results from the original instruments
have also been excluded on the grounds that the instruments
do not meet the Frye v. United States (1923) general accep-
tance standard for admissibility [e.g., Carter v. State,
697 So.2d 529 (1997); People v. Rogers, 247 App. Div.
2d 765, 766, 669 N.Y.S.2d 678, appeal denied, 91 N.Y.2d
976, 695 N.E.2d 725, 672 N.Y.S.2d 856 (1998)], although
it appears that inadequate testimony by experts led to these
courts’ conclusion that the original instruments were not
generally accepted. Nonetheless, renorming of the original
instruments without making revisions to account for
changes in Miranda practice (e.g., inclusion of a fifth warn-
ing, simpler vocabulary) could decrease acceptance of the
instruments in the field and, therefore, jeopardize their
admissibility.
In a related vein, Rogers (2008) recognized the need for
continued Miranda research and is currently developing a
new set of Miranda measures that address Miranda warning
vocabulary, comprehension, and reasoning (Rogers, 2008).
Although his instruments are not yet published, Rogers et al.
(2009) have published data on the initial validation of the
Miranda Vocabulary Scale with an adult population, and his
instruments appear likely to provide a well-developed,
complementary approach to Miranda assessment.
To maintain the utility of the instruments in forensic
evaluations of cases involving challenges to Miranda rights
waivers, N. E. S. Goldstein, Zelle, and Grisso (2011b)
revised Grisso’s (1998) assessment tools (now titled
Miranda Rights Comprehension Instruments). The revised
instruments use a modern version of the Miranda warning
(Oberlander & Goldstein, 2001), which includes the now
common fifth warning about the continuing nature of the
rights, as well as simplified warning language, minor revi-
sions to clarify scoring criteria, and 10 additional words on
the vocabulary measure.1,2
The revised instruments maintain the same format of using
a single version of the Miranda warnings. A review of
Miranda warnings from approximately 65 police depart-
ments3 was conducted. The warnings generated for the
revised instruments are an amalgam of the subset of the
collected warnings with the simplest vocabulary, simplest
grammar, most straightforward presentations, and fewest words
(Goldstein, Zelle, et al., 2011b; Oberlander & Goldstein,
2001). The warning selected for use in the instruments con-
tains 87 words, compared with the average of 99 words
(SD = 24.5, range = 21-231) found across a national sample
of warnings (Rogers, Hazelwood, Sewell, Harrison, &
Shuman, 2008). It also has a Flesch-Kincaid reading level
of 6.2 (reading levels of the individual warnings ranged
from 2.3 to 11.7) as compared with an average of 6.8 (SD =
2.08, range = 2-18, average reading levels of the individual
warnings ranged from 3.16 to 10.16) found in the national
sample (Rogers et al., 2008).
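As a rough illustration of how such readability figures are computed, the sketch below applies the standard Flesch-Kincaid grade-level formula; the syllable counter is a crude vowel-group heuristic, and the warning text shown is hypothetical rather than the actual MRCI wording.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels; dedicated readability tools
    # use dictionaries or more careful rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Grade level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

# Hypothetical warning text, not the MRCI wording.
warning = ("You have the right to remain silent. Anything you say can be "
           "used against you in court. You have the right to a lawyer.")
print(round(flesch_kincaid_grade(warning), 1))
```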
To maintain the utility of the revised instruments and
their admissibility in court, the psychometric properties of
the Miranda Rights Comprehension Instruments (MRCI)
needed to be evaluated. In a commentary about the original
instruments, Rogers, Jordan, and Harrison (2004) recom-
mended specific reliability and validity analyses to improve
the credibility and interpretability of the instruments, such
as inclusion of standard errors of measurement, interrater
reliability between individuals with less intensive training,
and longer intervals between collection of test and retest
data. Some suggestions were inapp licable, however,
because of the instruments’ defined purpose of measuring
understanding and appreciation of Miranda rights—not
“competency-to-confess” (Grisso, 2004). For example,
testing criterion validity by comparing instrument scores to
legal outcomes (judicial determinations of waiver validity)
was inappropriate for two primary reasons. First, measure-
ment of Miranda comprehension is only one element of a
comprehensive assessment that should be completed by
the evaluator. Second, Miranda decisions involve a totality
of circumstances approach; courts may choose to heavily
emphasize cognitive factors in determining the validity of
defendants’ Miranda waivers or they may choose to
emphasize other factors not measured by the Miranda
instruments. Many of Rogers and colleagues’ suggestions
are addressed in this article, along with other reliability and
validity analyses that were important to establish the psy-
chometric properties of the revised instruments. The
current article reviews these psychometric results in accor-
dance with the standards of test evaluation and
documentation presented in the Standards for Educational
and Psychological Testing (AERA, APA, & NCME, 1999).
In addition, the psychometric properties of the MRCI are
compared with the psychometric properties of the original
instruments established by Grisso (1981, 1998) and Col-
well et al. (2005).
Method
Participants
Inclusion eligibility was subject only to admission to the
designated facilities and was not based on gender, race, eth-
nicity, first language, or health status. Youths were excluded
if they had florid mental health symptoms or severe
developmental disabilities, or if they were unable to speak English fluently.
No youth met the exclusion criteria.
Participants were 183 youth (140 boys, 43 girls) from
three sites: a residential postadjudication facility in Massa-
chusetts (n = 55), a Philadelphia detention center (n = 112),
and a short-term postadjudication center for youth awaiting
placement in the Philadelphia area (n = 16). Participants
across the three sites ranged in age from 12 to 19 years (M =
16.45, SD = 1.72). Attempts were made to recruit multiple
youth of each age. However, youth at the lower and upper
ends of the age range were rarely placed at the designated
facilities. The final sample included the following numbers
of youth: Age 12, 1; Age 13, 14; Age 14, 27; Age 15, 28;
Age 16, 38; Age 17, 39; Age 18, 21; and Age 19, 15. Par-
ticipants were from diverse racial and ethnic backgrounds;
46.4% were African American, 15.8% Caucasian, 15.8%
Hispanic, and 1.6% Asian American. In addition, 11.5%
of the participants identified as being of another ethnicity
(including biracial), and 8.7% did not report ethnicity. Most
youth (86.3%) reported English as their primary language;
4.9% reported another language as primary, and 8.7% did
not report a primary language.
Because facility procedures at the Philadelphia detention
center required that the mental health screening instrument
(Massachusetts Youth Screening Instrument–Second Ver-
sion [MAYSI-2]) be administered by facility staff, mental
health information was available to the researchers for only
23% (n = 43) of the sample. This tool identifies youth fall-
ing in the “caution” (i.e., would likely score in the clinically
significant range on other specialized assessment tools) and
“warning” (i.e., scale scores fall within the 90th percentile
of the normative sample) ranges. Of the data available, 43%
of youth scored in the caution range on the Somatic Com-
plaints scale, 35% on the Angry/Irritable scale, 35% on the
Depressed/Anxious scale, 21% on the Suicidal Ideation
scale, and 28% on the Alcohol/Drug Use scale; in addition,
38% of the male participants who completed the MAYSI-2
scored in the caution range on the Thought Disturbance scale
(this scale is not psychometrically sound for use with girls;
Grisso & Barnum, 2000). As expected, smaller numbers of
youth scored in the warning ranges on these scales; 5% to 11%
of youth met this criterion on each scale, with the greatest per-
centage of youth scoring in the warning range on the Alcohol/
Drug Use (11%) and Depressed/Anxious (11%) scales.
The Massachusetts Department of Youth Services provi-
ded participation consent for all youth in the post adjudication
facility at which the research was conducted.4 Parents were
also contacted by mail and given the opportunity to deny
participation. Assent was obtained from all youth partici-
pants. In Massachusetts, no parent denied participation, and
all youth agreed to participate. Youths at the Pennsylvania
facilities were represented by the public defenders’ office.5
All participants were placed at the designated facilities and
had no open cases involving confessions. Parental/guardian6
consent was sought for youths aged 18 years and younger.
Of parents/guardians reached, less than 8% declined partici-
pation.7 If a parent/guardian could not be reached, consent
was waived and youth were assented in the presence of a
participant advocate.8
Youth assent procedures were the same across all three
sites, with one exception. In Pennsylvania facilities, assent
was obtained in the presence of the participant advocate. In
the Massachusetts facility, the participant advocate cleared
each youth for participation but was not present at the time
assent was obtained. Youth were informed about the study
and assented before participating. Informed consent was
obtained from youths aged 18 and 19 years.
Measures
Miranda Rights Comprehension Instruments. The MRCI is
composed of four separate measures. First, Comprehension
of Miranda Rights–II (CMR-II) assesses general understand-
ing of Miranda rights by asking individuals to paraphrase
each of the five Miranda warnings. Second, Comprehension
of Miranda Rights–Recognition-II (CMR-Recognition-II)
assesses general understanding without reliance on verbal
expressive abilities, skills with which youthful offenders
frequently demonstrate difficulty (Grisso, 1981). Individuals
are asked to compare three preconstructed sentences to each
Miranda warning and indicate whether the statements are
semantically identical. Because there are three questions asked
for each of the five warnings, the CMR-Recognition-II is made
up of five subscales that reflect the understanding of each
warning. Third, Comprehension of Miranda Vocabulary–II
(CMVocabulary-II) tests examinees’ comprehension of
16 words commonly used in Miranda warnings. The exam-
iner reads each word aloud, uses it in a sentence, and asks
the examinee to define the word. Fourth, Function of Rights
in Interrogation (FRI) assesses examinees’ appreciation of
the significance of the warnings. The examiner presents to
the examinee four visual stimuli with accompanying brief
vignettes about legal proceedings. The examiner then asks
15 standardized questions about the vignettes. The FRI
has three subscales that reflect youths’ understanding of
the Nature of Interrogation, Right to Counsel, and Right to
Silence. Additional information about the instruments and their
development is available in N. E. S. Goldstein et al. (2003).
Three MRCI instruments (CMR-II, FRI, and CMVocabulary-II)
require evaluators to judge the quality of
responses, with structured scoring guidance from the
manual. As in Grisso’s original instruments, responses are
scored as adequate (2 points), questionable (1 point), or
inadequate (0 points), with higher scores on each measure
suggesting better comprehension. Only the CMR-II and
CMVocabulary-II have substantially revised scoring crite-
ria. The few changes made to the FRI scoring criteria
involved clarifying example responses. A panel of psycho-
logical and legal professionals developed the items and
scoring criteria for all the original instruments (Grisso,
1998), and scoring for new items in the revised instruments
was generated based on the original criteria and was
reviewed by psychological and legal experts on Miranda
issues (i.e., four clinical forensic psychologists who used
the original instruments regularly; one forensic psychiatrist
who focuses on competency-related issues; four attorneys,
including two defense attorneys, one assistant DA, and one
academically based attorney; and a judge with expertise in
both criminal and juvenile court). The experts were first
consulted about the meaning of the fifth warning. Scoring
criteria were then drafted using the same scoring format
used with the first four warnings in the original instruments,
including example responses. The experts reviewed the
drafted criteria, along with all manual revisions, and minor
modifications were made based on their suggestions.
Scoring for the CMR-Recognition-II requires no judg-
ment by the evaluator, as each preconstructed sentence is
either semantically identical to or semantically different from
a Miranda warning statement. Correct responses are assigned
1 point, and incorrect responses are assigned 0 points. Scores
for the CMR-Recognition-II are represented as subtotals for
each of the five Miranda warnings.
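A minimal scoring sketch is given below to summarize the two response formats described above; the variable names and responses are illustrative assumptions, and no MRCI item content is reproduced.

```python
# CMR-II: five paraphrase items rated 2 (adequate), 1 (questionable), or
# 0 (inadequate), summed to a total out of 10. Ratings here are hypothetical.
cmr_ii_item_scores = [2, 1, 2, 0, 2]
cmr_ii_total = sum(cmr_ii_item_scores)  # 0-10

# CMR-Recognition-II: three same/different judgments per warning, scored
# 1 (correct) or 0 (incorrect) and reported as five per-warning subtotals.
recognition_responses = {  # hypothetical correctness indicators
    "warning_1": [1, 1, 0],
    "warning_2": [1, 0, 1],
    "warning_3": [1, 1, 1],
    "warning_4": [0, 1, 1],
    "warning_5": [1, 1, 1],
}
recognition_subtotals = {w: sum(items) for w, items in recognition_responses.items()}
recognition_total = sum(recognition_subtotals.values())  # 0-15

print(cmr_ii_total, recognition_subtotals, recognition_total)
```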
Wechsler Abbreviated Scale of Intelligence (WASI). The WASI
is a standardized, psychometrically sound measure of intel-
lectual functioning (The Psychological Corporation, 1999).
It consists of two subtests that measure verbal intellectual
functioning and two subtests that measure performance intel-
lectual functioning. The two verbal subtests (Vocabulary and
Similarities) were administered in the current study because
verbal capacities are most relevant to Miranda understanding
and appreciation (Colwell et al., 2005).
Massachusetts Youth Screening Instrument–Second Version
(MAYSI-2). The MAYSI-2 is a screening tool that identifies
youth who may have substance use and/or mental health
problems (Grisso & Barnum, 2000). It was designed spe-
cifically for use in juvenile justice settings and has sound
psychometric properties that are comparable to more com-
prehensive measures of child psychopathology (Grisso &
Barnum, 2000).
Demographic questionnaire. A brief, structured interview
was used for this study, which included questions about
general demographic information (e.g., age, history of spe-
cial education), social environment (e.g., number of parents
living at home; number of relatives, friends, or acquaintances
in juvenile detention or adult jail/prison), legal history (e.g.,
age at first arrest, delinquencies that resulted in commit-
ment), and Miranda history (e.g., recollection of discussing
Miranda warning with lawyer).
Procedure
The protocol required approximately 3 hours to complete.
Data were collected during two 1.5-hour sessions. During
the first session, participants individually completed the four
measures of the MRCI9 and WASI (verbal scales). During
the second session, participants completed the Gudjonsson
Suggestibility Scale 2 (Gudjonsson, 1997), Wechsler Individ-
ual Achievement Test (verbal scales; 1992), and demo graphic
questionnaire.10 Pennsylvania youth also were administered
the MAYSI-2 by facility staff at the time of admission to
the facilities. Each youth received a gift certificate to a local
music store for participating.
To assess test–retest reliability, 47 youth were administered
three of the MRCI measures (CMR-II, CMR-Recognition-II,
and FRI). Because of the length of time required to
administer both the CMVocabulary-II and the supplemental
measure of waiver behaviors, youth were only administered
one of these two instruments on retest (n = 24 for the
CMVocabulary-II). Time between test and retest ranged
from 0 to 56 days (M = 8.02, SD = 10.72); 83% of the par-
ticipants completed retesting within 2 weeks of their first
testing session, and 96% completed retesting within 4 weeks.
Stability of scores was unrelated to the time span (i.e.,
number of days) between test and retest (CMR-II: b = -.02,
SE = .03, p = .36, R2 = .02, R2Adj = -.003; CMR-Recognition-II:
b = .001, SE = .02, p = .97, R2 = .00, R2Adj = -.02; FRI:
b = .03, SE = .05, p = .51, R2 = .01, R2Adj = -.01;
CMVocabulary-II: b = .04, SE = .08, p = .67, R2 = .01, R2Adj = -.04).
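One way to implement such a stability check is to regress score change on the length of the test–retest interval; the sketch below does so with hypothetical data, and the dependent variable used in the study may have been operationalized differently.

```python
import numpy as np
from scipy import stats

# Hypothetical data: change in CMR-II total score (retest minus test) and
# number of days between sessions for ten retested participants.
days_between = np.array([0, 3, 5, 7, 8, 12, 14, 21, 28, 56])
score_change = np.array([1, 0, 2, -1, 1, 0, 2, 1, -2, 1])

# Simple linear regression of score change on the interval; a near-zero slope
# with a nonsignificant p value would mirror the finding that stability was
# unrelated to the length of the interval.
result = stats.linregress(days_between, score_change)
print(f"b = {result.slope:.3f}, SE = {result.stderr:.3f}, "
      f"p = {result.pvalue:.2f}, R2 = {result.rvalue ** 2:.3f}")
```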
Trained graduate and postbachelor research assistants
(RAs; n = 15) administered the testing battery. All RAs
completed rigorous training in the use and scoring of the
instruments, which included the following: (a) formal didac-
tic training by the primary investigator (PI) in the use and
administration of the instruments, (b) practice administra-
tion of the instruments, (c) scoring of 15 sample protocols,
(d) comparison of the sample scores with those of the PI to
identify scoring errors, (e) observation of a senior
research team member administering at least two protocols to
participants, and (f) prior to testing participants alone,
observation by a senior research team member of each RA administering at
least one protocol to confirm that the RA accurately followed
the designated administration and scoring procedures.
Method of Analysis
Reliability of the MRCI was assessed by calculating the
internal consistency, interrater reliability, test–retest reliabil-
ity, and standard error of measurement for each instrument.
For internal consistency, Cronbach’s alpha and Pearson
correlations were generated to determine the relationship
between scores for each of the items within that instrument
(interitem correlations). Item–total correlations were also
obtained to determine the relationship between scores for
each item and total scores for that instrument.
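For readers less familiar with these statistics, the sketch below computes Cronbach's alpha and item–total correlations for a hypothetical examinee-by-item score matrix; the data are invented, and whether each item is removed from the total before correlating is an implementation choice not specified here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def item_total_correlations(items: np.ndarray) -> np.ndarray:
    """Pearson correlation of each item with the instrument total score."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1]
                     for j in range(items.shape[1])])

# Hypothetical 0-2 ratings for eight examinees on five items.
scores = np.array([
    [2, 2, 1, 2, 2],
    [1, 0, 1, 2, 1],
    [2, 1, 2, 2, 2],
    [0, 1, 0, 1, 0],
    [2, 2, 2, 1, 2],
    [1, 1, 0, 0, 1],
    [2, 2, 1, 2, 1],
    [0, 0, 1, 1, 0],
])
print(round(cronbach_alpha(scores), 2))
print(np.round(item_total_correlations(scores), 2))
```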
For interrater reliability, 15 trained raters independently
scored all 183 CMR-II, FRI, and CMVocabulary-II proto-
cols, with two raters scoring each protocol. Intraclass
correlation coefficients (ICC), representing total score
variability, were calculated, as were Kappa coefficients,
representing variability within each item. Kappa was used
because it provides a conservative estimate of the reliability
of the item and is appropriate for categorical data. Although
the 0, 1, 2 scale of the CMR-II, FRI, and CMVocabulary-II
can be considered continuous, it may also be conceptual-
ized as representing different categories of understanding.
Interrater reliability was not necessary for the CMR-Recog-
nition-II because this instrument does not require the
evaluator’s judgment.
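A brief sketch of both statistics follows: chance-corrected item-level agreement via Cohen's kappa and total-score agreement via a one-way ICC(1,1). The rater data are hypothetical, and the specific ICC model used in the study is an assumption here.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical item-level ratings (0/1/2) from two independent raters on
# the same twelve protocols.
rater_a = np.array([2, 1, 2, 0, 1, 2, 2, 0, 1, 2, 1, 2])
rater_b = np.array([2, 1, 2, 1, 1, 2, 2, 0, 1, 2, 0, 2])
print(round(cohen_kappa_score(rater_a, rater_b), 2))  # item-level agreement

# Hypothetical total scores from the same two raters on eight protocols;
# a one-way ICC(1,1) from the usual between-/within-target mean squares.
totals = np.array([[8, 8], [5, 6], [9, 9], [3, 2],
                   [10, 10], [7, 7], [6, 7], [9, 9]], dtype=float)
n, k = totals.shape
grand_mean = totals.mean()
ms_between = k * ((totals.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
ms_within = ((totals - totals.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc_1_1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(round(icc_1_1, 2))  # total-score agreement
```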
For test–retest reliability, Pearson correlations were gen-
erated to examine the relationship between scores at Time 1
and Time 2 on both individual items and instrument total
scores.
Standard error of measurement was calculated for each
instrument using each instrument’s standard deviation and
reliability coefficient. In addition, 90% and 95% confidence
intervals were calculated for each instrument.
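The standard error of measurement and its confidence bands follow directly from the formula SEM = SD × sqrt(1 − reliability); the sketch below uses an illustrative standard deviation and reliability value, not the sample statistics.

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    # SEM = SD * sqrt(1 - reliability coefficient)
    return sd * math.sqrt(1 - reliability)

def confidence_band(observed: float, sem: float, z: float) -> tuple:
    # Symmetric band around an observed score (z = 1.645 for 90%, 1.96 for 95%).
    return observed - z * sem, observed + z * sem

# Illustrative values only; the SD shown is not the sample SD.
sem_value = standard_error_of_measurement(sd=2.5, reliability=0.58)
print(round(sem_value, 2))
print(confidence_band(observed=7, sem=sem_value, z=1.645))  # 90% band
print(confidence_band(observed=7, sem=sem_value, z=1.96))   # 95% band
```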
Content validity was established through examination of
MRCI item language and content. For concurrent validity,
Pearson correlations were calculated to examine the rela-
tionship between age,11 IQ, and scores on each of the MRCI
measures. Convergent validity was established by examin-
ing the correlations between the component instruments.
To determine whether the psychometric properties of the
revised instruments were consistent with those obtained for
the original instruments, we compared internal consistency,
interrater reliability, and test–retest reliability estimates for
the MRCI with those estimates obtained by Grisso (1981,
1998) and by Colwell et al. (2005). In addition, results of
correlational analyses establishing construct validity for the
revised instruments were compared with those obtained by
Grisso (1998).
Results
Comprehension of Miranda Rights-II
A Cronbach's α of .58 was obtained for the CMR-II. Inter-
item correlations ranged from .11 to .38 (mean correlation =
.23), and item–total correlations ranged from .54 to .66 (mean
correlation = .61; see Table 1 for item–total correlation coeffi-
cients for specific items).
An ICC of .95 was obtained for the CMR-II, and the
average Kappa coefficient for the five CMR-II items was
.68 (see Table 1 for specific Kappa coefficients for indi-
vidual items). Pearson r coefficients ranged from .68 to
1.00 (p < .01 for all) for individual CMR-II items and
equaled .90 (p < .01) for total CMR-II scores.
The Pearson r coefficient for test–retest reliability of
total CMR-II scores was .68 (p < .01). Test–retest reliability
for each of the Miranda warnings is presented in Table 1.
CMR-II scores for youth who completed test and retest
ranged from 0 to 10 (out of a possible 10); on average, indi-
viduals’ scores increased by .91 points (SD = 1.79) from the
first administration to the second. See Table 2 for specific
changes in scores from the first to second administrations.
The standard error of measurement for the CMR-II was
1.66. Confidence intervals are presented in Table 3.12
Comprehension of Miranda
Rights–Recognition-II
A Cronbach's α of .54 was obtained for the CMR-
Recognition-II. Subtotal–subtotal correlations (e.g., correlation
between Subscale I total scores and Subscale II total scores)
for the CMR-Recognition-II ranged from -.02 to .35 (mean
correlation = .18), and subtotal–total correlations ranged
from .35 to .69 (mean correlation = .58; see Table 1 for
subtotal–total correlations for each Miranda warning).
The Pearson r coefficient for test–retest reliability was
.75 (p < .01). Test–retest reliability of each Miranda warn-
ing statement on the CMR-Recognition-II is presented in
Table 1. CMR-Recognition-II scores for youth who com-
pleted test and retest ranged from 6 to 15 (out of a possible
15); on average, scores increased by .21 points (SD = 1.53)
from the initial administration. See Table 2 for specific
changes in scores from the first administration to the second.
Inspection of CMR-Recognition-II scores revealed mod-
erate test–retest reliability for all subscales, except Subscale
II (“Anything you say can be used against you in court”),
which produced a lower reliability estimate. To better
understand the stability of Subscale II, we examined the
reliability of each of the three items within the subscale.
Item 1 (“What you say might be used to prove you are
guilty”) demonstrated low reliability, r = .16, p = .29, as did
Item 2 (“If you won’t talk to the police, then that will be
used against you in court”), r = .05, p = .77. Item 3 (“If you
tell the police anything, it can be repeated in court”) demon-
strated perfect reliability (r = 1.00, p < .01). Despite these
values, the large majority of youth answered Items 1 (94%)
and 2 (81%) identically on test and retest, suggesting fairly
stable performance across time. The standard error of mea-
surement for the CMR-Recognition-II was 1.43 (see Table 3
for confidence intervals).
Function of Rights in Interrogation
A Cronbach's α of .54 was obtained for the FRI as a whole.
Cronbach’s alphas and interitem correlations were calcu-
lated for items within each subscale. Within the Nature of
Interrogation (NI) subscale, α was .20, and interitem cor-
relations ranged from -.08 to .17 (mean correlation = .06).
Item–NI subscale total correlations ranged from .31 to .73
(mean correlation = .52). Within the Right to Counsel (RC)
subscale, α was .22, and interitem correlations ranged from
-.04 to .15 (mean correlation = .05). Item–RC subscale
total correlations ranged from .37 to .67 (mean correlation =
.48). Within the Right to Silence (RS) subscale, α was .53,
interitem correlations ranged from -.02 to .39 (mean cor-
relation = .19), and item–RS subscale total correlations
ranged from .50 to .65 (mean correlation = .59). Correla-
tions between subscale total scores and FRI total scores
ranged from .44 to .84 (mean correlation = .64; see Table 1
for subscale–total correlation coefficients).
For interrater reliability, an ICC of .87 (p < .01) was
obtained for the FRI (see Table 1 for specific ICC values for
each subscale). The average Kappa coefficient was .71
(range = .50-.89) for individual items. The Pearson r coef-
ficients ranged from .49 to .93 (p < .01 for all) for individual
items, .63 to .90 (p < .01 for all) for individual subscales,
and equaled .77 (p < .01) for FRI total scores.
The Pearson r coefficient for test–retest reliability was
.53 (p < .01). Test–retest reliability for each of the subscales
on the FRI is presented in Table 1. Test–retest reliability
for the NI subscale is misleadingly low, given that percent
agreement from test to retest ranges from 78% (NI-1) to
93% (NI-2). Test–retest reliability was lowest for items
NI-1 (r = -.07) and NI-2 (r = -.03). The low reliability
observed on these two items appears to be because of the
limited variability in scores. The vast majority of youth
received 2 point scores on these items at both test and retest
(81% and 96%, respectively);13 thus, a ceiling effect may
have occurred, and skew may have produced unrepresenta-
tively low correlations. The limited range of possible scores,
combined with the limited variance observed, may have
deflated the correlation and magnified the differences that
occurred.
Table 1. Reliability of Scores for Subscales and Individual Items
Internal Consistency Interrater Test–Retest
CMR-II (item–total)
I r = .66** Kappa = .71** r = .63**
II .57** .67** .17
III .66** .49** .48**
IV .54** .70** .24
V .64** .82** .21
CMR-Recognition-II (subscale–total)
I r = .66** r = .67**
II .35** — .14
III .69** — .71**
IV .66** — .54**
V .56** — .52**
FRI (subscale–total)
Nature of interrogation r = .44** ICC = .76** r = .12
Right to counsel .64** .92** .49**
Right to silence .84** .95** .64**
CMVocabulary-II (item–total)
Consult r = .44** Kappa = .81** r = .48*
Attorney .49** .78** .12
Questioning .24** 1.00** a
Used against .39** .53** .65**
Right .48** .62** .46*
Lawyer .43** .59** .50*
Statement .47** .86** .37
Entitled .62** .72** .43*
Afford .31** .59** -.11
Advice .39** .54** .02
Interrogation .54** .80** .80**
Remain .39** .70** a
Appoint .46** .71** .60**
Present .31** .71** .26
Confession .59** .87** .54**
Represent .68** .68** .39
Note. CMR-II = Comprehension of Miranda Rights–II; FRI = Function of Rights in Interrogation; CMVocabulary-II = Comprehension of Miranda
Vocabulary–II; ICC = intraclass correlation coefficient.
a. Raters did not assign the full range of scores (0-2) for these items, preventing the calculation of test–retest reliability.
*p < .05. **p < .01.
FRI total scores for youth who completed this instrument at test and retest
ranged from 12 to 30 (out of a possible 30). Individuals' total scores increased an average
of 1.16 points (SD = 3.24) from the first administration to
the second. See Table 2 for specific changes in scores from
the first administration to the second.
The standard error of measurement for the FRI was 2.52.
The standard error of measurement for the NI subscale was
1.00; it was 1.38 for the RC subscale and 1.79 for the RS
subscale (see Table 3 for confidence intervals).14
Comprehension of Miranda Vocabulary–II
A Cronbach's α of .75 was obtained for the CMVocabulary-II.
Item–item correlations for the CMVocabulary-II ranged
from .00 to .43 (mean correlation = .15), and item–total cor-
relations ranged from .24 to .68 (mean correlation = .45; see
Table 1 for item–total correlations for individual words).
Interrater reliability analyses produced an ICC of .96 for
the CMVocabulary-II. The average Kappa coefficient for
CMVocabulary-II items was .72 (see Table 1 for Kappa
coefficients for individual words). Pearson r values ranged
from .67 to 1.00 (p < .01 for all items) for individual
CMVocabulary-II items and equaled .93 (p < .01) for total
CMVocabulary-II scores.
The Pearson r coefficient for test–retest reliability was
.84 (p < .01). Test–retest reliability of Miranda vocabulary
is presented in Table 1. Test–retest reliability could not be
calculated for questioning and remain because nearly all the
participants obtained the same score during the first and
second administrations, so there was virtually no variability
from which to estimate a correlation.
both test and retest ranged from 6 to 31 (out of a possible
32). On average, scores decreased by .79 points (SD = 3.06)
from the initial administration of the test to the second admin-
istration. See Table 2 for specific changes in scores from the
first administration to the second. The standard error of mea-
surement for the CMVocabulary-II was 2.59 (see Table 3 for
confidence intervals).
Validity
Content, construct, and concurrent validity were established
for all the original Miranda comprehension instruments on
which the revised instruments are based. Content of the
CMR, CMR-Recognition, and CMVocabulary was based
on language in actual Miranda warnings (Grisso, 1998). The
content validity of the revised instruments has been maintained
by updating the language in the instruments and incor porating
the fifth Miranda warning statement to reflect the changes
in wording and delivery of Miranda warnings in many juri-
sdictions. Scenarios presented in the FRI describe actual
situations that may arise during police interrogations. FRI
items assess appreciation in three areas: nature of interroga-
tion, right to silence, and right to counsel. The FRI’s content
validity is established by the fact that its subscales assess
appreciation of consequences related to waiving the rights
that underlie the Miranda warnings, as well as appreciation
of the primary context in which the rights arise (i.e., custo-
dial interrogation).
To establish concurrent validity, performance on an instru-
ment should correlate with factors that the instrument is
intended to represent. Comprehension, in general, is related
to intelligence (Kaufman & Lichtenberger, 2002), and the
CMR-II, CMR-Recognition-II, FRI, and CMVocabulary-II
assess comprehension of Miranda rights and the understand-
ing of words pertaining to those rights. Therefore, scores on
these measures should correlate at least moderately with
intelligence. The verbal subtests of the WASI were used to
measure intelligence because Verbal IQ correlates signifi-
cantly with general intelligence (Kaufman & Lichtenberger,
2002), and it was more strongly associated with Miranda
comprehension than was Full Scale IQ or Performance
IQ in previous research (Colwell et al., 2005).
Table 2. Changes in Scores From Test to Retest
Changes in Scores Frequency (%)
CMR-II
Improved 6 points 1 (2.1)
Improved 3 or 4 points 5 (10.6)
Improved 1 or 2 points 19 (40.4)
No change 15 (31.9)
Worsened 1 or 2 points 6 (12.8)
Worsened 3 points 1 (2.1)
CMR-Recognition-II
Improved 3 points 4 (8.5)
Improved 1 or 2 points 14 (29.8)
No change 16 (34.0)
Worsened 1 or 2 points 11 (23.4)
Worsened 3 or 4 points 2 (4.3)
FRI
Improved 5 or more points 8 (18.2)
Improved 3 or 4 points 5 (11.4)
Improved 1 or 2 points 8 (18.2)
No Change 9 (20.5)
Worsened 1 or 2 points 8 (18.2)
Worsened 3 or 4 points 6 (13.6)
CMVocabulary-II
Improved 5 or more points 1 (4.2)
Improved 3 or 4 points 2 (8.3)
Improved 1 or 2 points 7 (29.2)
No change 2 (8.3)
Worsened 1 or 2 points 5 (20.8)
Worsened 3 or 4 points 4 (16.7)
Worsened 5 or more points 3 (12.5)
Note. CMR-II = Comprehension of Miranda Rights–II; CMR-Recognition-
II = Comprehension of Miranda Recognition–II; FRI = Function of
Rights in Interrogation; CMVocabulary-II = Comprehension of Miranda
Vocabulary–II.
In addition, construct validity was evaluated by examining the relation-
ships between performance on each MRCI measure and
age, assuming better performance at older ages.15 Table 4
presents the specific correlations between scores on each
instrument and IQ and age.
To obtain convergent validity, performance on a test
(e.g., CMR-II) should correlate with performance on other
tests (e.g., CMR-Recognition-II) that are associated with
the same construct (e.g., Miranda understanding). CMR-II
scores were significantly correlated with scores on the
CMR-Recognition-II (r = .57, p < .01), CMVocabulary-II
(r = .62, p < .01), and FRI (r = .45, p < .01). Scores on the
CMR-Recognition-II were similarly correlated with
CMVocabulary-II (r = .50, p < .01) and FRI (r = .39, p <
.01) scores. Finally, a significant correlation was observed
between scores on the CMVocabulary-II and FRI (r = .48,
p < .01). Moderate relationships among all the MRCI instru-
ments and between the instruments and intelligence provide
evidence of convergent validity.
Comparison of Psychometrics Between
Original and Revised Instruments
We compared internal consistency of the original and revised
CMR and CMVocabulary measures. Item–item and item–
total correlations were similar on the two instruments. As
one might expect, item–item and item–total correlations of
the first four CMR-II items (i.e., the items included in the
original CMR) were more similar to those calculated by
Grisso (1998) than were item–total correlations for the entire
revised instruments (i.e., including the fifth warning), which
were lower than in Grisso’s original instruments (see
Table 5 for specific ranges of correlation coefficients and
comparison data).
Item–item correlations for the CMVocabulary-II were
slightly higher than those obtained for the CMVocabulary,
and item–total correlations were slightly lower (see Table 5
for specific correlation coefficients). Changes are largely
attributable to the addition of the 10 new words on the CMVocabulary-II.
Table 3. Standard Errors of Measurement for MRCI Instruments
Instrument/Subscale SEM 90% Confidence Interval 95% Confidence Interval
CMR-II 1.66 Examinee’s score ± 2.73 Examinee’s score ± 3.23
CMR-Recognition-II 1.43 Examinee’s score ± 2.36 Examinee’s score ± 2.80
FRI 2.52 Examinee’s score ± 4.15 Examinee’s score ± 4.93
Nature of interrogation 1.00 Examinee’s score ± 1.66 Examinee’s score ± 1.97
Right to counsel 1.38 Examinee’s score ± 2.27 Examinee’s score ± 2.70
Right to silence 1.79 Examinee’s score ± 2.95 Examinee’s score ± 3.50
CMVocabulary-II 2.59 Examinee’s score ± 4.27 Examinee’s score ± 5.08
Note. MRCI = Miranda Rights Comprehension Instruments; CMR-II = Comprehension of Miranda Rights–II; CMR-Recognition-II = Comprehension of
Miranda Recognition–II; FRI = Function of Rights in Interrogation; CMVocabulary-II = Comprehension of Miranda Vocabulary–II. Although full confidence
intervals are provided, scores’ upper limits are truncated by the instruments’ ceilings.
Table 4. Correlations Between the Instruments and IQ and Age
Instrument IQ Age
CMR
Original r = .47 r = .19
Revised .49, p < .01 .14, p = .06
CMR-Recognition
Original .45 .21
Revised .50, p < .01 .13, p = .08
FRI
Original Data not available Data not available
Revised .36, p < .01 .11, p = .14
CMVocabulary
Original .59 .34
Revised .62, p < .01 .08, p = .28
Note. CMR = Comprehension of Miranda Rights; CMR-Recognition =
Comprehension of Miranda Recognition; FRI = Function of Rights in
Interrogation; CMVocabulary = Comprehension of Miranda Vocabulary.
Table 5. Comparison of Internal Consistency and Reliability Between the Original and Revised Instruments
Instrument Item–Item Correlation Item–Total Correlation
CMR
Grisso r = .12 to .32 r = .55 to .73
Revised (first 4 prongs) .11 to .38 .60 to .70
Revised (all 5 prongs) .11 to .38 .54 to .66
CMVocabulary
Grisso r = .14 to .37 r = .51 to .72
Revised (six original words) -.02 to .40 .48 to .70
Revised (all 16 words) .00 to .43 .24 to .68
Note. CMR = Comprehension of Miranda Rights; CMVocabulary = Comprehension of Miranda Vocabulary.
Most words produced moderate item–total
correlations (i.e., .44-.68), but three new words demon-
strated poorer relationships with total scores, although the
relationships were still significant (questioning: r = .24, p <
.01; afford: r = .31, p < .01; advice: r = .39, p < .01). In addi-
tion, to directly compare findings between the original and
revised instruments, we calculated item–item and item–total
correlations on the CMVocabulary-II using only those six
words contained in the original instrument. Using the parallel
items, we found similar item–item and item–total correlations
to those produced by Grisso with the original CMVocabulary.
See Table 5 for comparative correlation data.
Internal consistency of the CMR-Recognition and FRI
was not reported in Grisso’s original findings, so item–item
and item–total correlations could not be compared. Com-
pared with Colwell et al.’s (2005) low estimates of internal
consistency on the CMR, FRI, and CMVocabulary, the
Cronbach’s alphas and item–total correlations calculated with
the revised instruments were notably better, suggesting mod-
erate to good internal consistency.
For interrater reliability, Grisso (1998) calculated Pearson
correlation coefficients, and Colwell et al. (2005) calculated
ICC values. Interrater correlations of the original and revised
instruments were similar for the CMR-II and CMVocabulary-
II but were lower for the revised FRI (see Table 6 for specific
ranges in correlation coefficients). ICC values for the CMR-
II, CMVocabulary-II, and FRI were higher than the ICC
values obtained by Colwell et al. (2005) with the original
instruments.
Test–retest reliability could only be compared between
the CMR and CMR-II because reliability data for the origi-
nal CMR-Recognition, CMVocabulary, and FRI are not
available. The CMR test–retest correlation (.84) found by
Grisso (1998) was higher than the CMR-II correlation (.68).
This difference appears to be due to the greater number of
participants improving from test to retest on the CMR-II
(53.2%), compared with the CMR (37.5%); the frequency of
decreasing scores from test to retest was similar on the two
versions (CMR-II: 14.9%; CMR: 12.5%).
With the exception of the FRI, for which no construct
validity data were reported by Grisso (1998), the correla-
tions between the MRCI instruments and IQ were very
similar to those obtained by Grisso and by Colwell et al.
(2005) (associated p values were not reported for the origi-
nal instruments; see Table 4 for specific correlations).
Unexpectedly, the correlations between the instruments
and age were lower than those calculated by Grisso (1998).
In particular, the correlation between the original CMVo-
cabulary and age was much stronger than was found with the
revised instrument. This discrepancy can be attributed to
the fact that age appears to be more closely related to under-
standing of some Miranda vocabulary (i.e., those six words
included in the original instrument) than others (i.e., some
of the words added to the revised instrument). When we
examined the correlation between age and the six words
included in both the original and revised instruments, the
correlation produced was higher (r = .17, p < .05) than that
observed between age and the 10 new words added in the
CMVocabulary-II (r = .01, p = .94) and higher than the cor-
relation between age and CMVocabulary-II total scores
(sum of the 16 items; r = .09, p = .24). Nonetheless, this
correlation was still substantially lower than the .34 correla-
tion that Grisso observed between age and the identical six
words.
Discussion
Findings suggest that the revised instruments have similar
validity estimates to those obtained with Grisso’s original
instruments (Colwell et al., 2005; Grisso, 1998) but slightly
lower reliability estimates. Nevertheless, findings support
the overall psychometric quality of the instruments.
Although some reliability estimates for the MRCI were
lower than those reported by Grisso (1998), they were com-
parable with or better than those reported for the original
instruments in a more recent, independent study (Colwell
et al., 2005). Estimates of internal consistency were generally
moderate, and Cronbach’s alphas and item–total correlations
suggested stronger relationships than did the low alphas
obtained by Colwell et al. (2005). The stronger, but still
moderate, alpha values were expected because the instru-
ments’ items are based on the actual Miranda warnings that
were written by government officials who were not concerned
with interitem homogeneity (Grisso, 2004). The low internal
consistency does not indicate that an examiner cannot use
the instruments as a standardized, reliable method of assigning
evaluative scores to individuals' Miranda comprehension (Grisso, 2004).
Table 6. Comparisons of Interrater Reliability Between the Original and Revised Instruments
Instrument Interrater Reliability Between Scores on Items Interrater Reliability Between Total Scores
CMR
Original r = .80 to .97 r = .96
Revised .68 to 1.00 .90
CMVocabulary
Original .89 to .98 .98
Revised .67 to 1.00 .93
FRIa
Original .72 to 1.00 .96
Revised .63 to .90 .77
Note. CMR = Comprehension of Miranda Rights; CMVocabulary = Comprehension of Miranda Vocabulary; FRI = Function of Rights in Interrogation.
a. Interrater reliability on the FRI was calculated between subscale scores and total scores.
The Pearson r interrater reliability estimates obtained
using the revised instruments were high and similar to the
estimates obtained using the original instruments, with the
exception of the revised FRI. The ICC interrater reliability
estimates were slightly higher than those found by Colwell
et al. (2005) for the CMR, CMVocabulary, and FRI. There-
fore, the revised instruments appear to maintain good interrater
reliability.
The test–retest reliability estimate for the CMR-II (Pearson
r coefficients) was moderate, although lower than the esti-
mate obtained for the CMR. The current study also provided
test–retest reliability estimates for the CMR-Recognition-
II, CMVocabulary-II, and FRI, which were moderate to
good, and notably, the test–retest interval did not affect the
performance on the measures. Although it appears that prac-
tice had little effect on performance, Miranda comprehension
instruments are not measures of comprehension stability
(Grisso, 2004); therefore, test–retest results should be inter-
preted as indicants of test error.
The current study is the first to present standard errors of
measurement for the Miranda instruments. Standard error
of measurement presents a reliability estimate of the degree
to which an observed score varies from a true score; the
greater the reliability of a measure, the lower the standard
error of measurement and the smaller the confidence inter-
vals. The current study found standard errors of measurement
that resulted in somewhat wide confidence intervals; how-
ever, this was predictable given the expected low internal
consistency of the instruments. Cronbach’s alpha is used in
computing standard errors of measurement, and as noted,
alpha should be low for instruments, such as the MRCI, that
contain items without a unitary underlying construct. The
resultant confidence intervals for the MRCI should not dis-
courage use, however. The instruments were designed to be
used as one tool in a broader forensic evaluation of capacity
to waive Miranda rights. Interpretation of scores should be
made with caution and within the context of other informa-
tion gathered during such an evaluation (A. M. Goldstein &
Goldstein, 2010).
The content validity of the instruments was improved by
updating the language of the warnings and adding the items
that test understanding of the fifth Miranda warning state-
ment so that the instruments reflect more typical versions of
the warnings used today.
Concurrent validity remains difficult to demonstrate
because an established measure of Miranda comprehension
is not available for comparison. Nonetheless, the relation-
ship between the MRCI and IQ provides some support
because a comprehension measure, such as the MRCI,
should be related to measures of general intellectual capac-
ity (Grisso, 2004). Correlations between the revised
measures and IQ were similar to both Grisso’s (1981) origi-
nal estimates and Colwell et al.’s (2005) estimates,
providing support for the construct validity of the instru-
ments. However, correlations between the revised measures
and age were lower than for the original instruments.
The weaker correlation between age and total score of the
additional 10 words in the CMVocabulary-II suggests that age
may be more closely related to understanding the vocabulary
words used in the original CMVocabulary than in the revised
version. The added words in the CMVocabulary-II are sim-
pler and may, therefore, be less strongly related to age;
nevertheless, these words may be just as important to decipher-
ing individuals' Miranda (mis)understanding. The observed
correlation between age and the original six words (from the
CMVocabulary) is lower than the correlation found with
Grisso’s original sample. Similarly, lower correlations
between age, CMR-II, and CMR-Recognition-II scores
were observed. The reason for this discrepancy is unclear,
but it may be related to the fact that data were available for
only a small number of 13-year-olds, one 12-year-old, and
no 10- or 11-year-olds, substantially restricting the range of
this variable. As detailed elsewhere (N. E. S. Goldstein et al.,
2003), age remains a significant predictor of CMR-II and
CMR-Recognition-II scores when controlling for IQ, with
older youth demonstrating better understanding of the
Miranda warnings than younger youth.
Finally, convergent validity was demonstrated through
cross-test comparisons. Cross-test comparisons are appro-
priate in certain circumstances, such as when a comparable
measure of the same criterion is not available (AERA, APA,
& NCME, 1999; Grisso, 2004). Results from the current
study indicate that moderate, statistically significant relation-
ships exist among the MRCI instruments, supporting the
convergent validity of the instruments.
Overall, the consistency of estimates between instrument
versions suggests that simplification of the language used
in the warnings and the addition of the fifth Miranda warn-
ing maintain the utility of the instruments while reflecting
standard legal practice at the beginning of the 21st century.
The similarities in psychometric and normative findings
(N. E. S. Goldstein et al., 2003) between the original and revised instru-
ments suggest that the original instruments are appropriate
for continued use until the publication of the MRCI.
Limitations
Although it might appear that the lack of predictive valid-
ity analyses would limit the psychometric soundness of
the instruments, such analyses are a poor fit for the MRCI
(Grisso, 2004). Waiver validity is a legal determination
based on consideration of the totality of circumstances,
which are a dynamic set of variables that can vary from case
to case; therefore, a judge may or may not consider Miranda
comprehension results when making a waiver validity
determination. The MRCI were designed to assess only
understanding and appreciation of the Miranda warnings at
the time of evaluation. They do not provide a comprehensive
assessment of the many variables that are present during
questioning and, therefore, do not provide a sound basis for
predicting the legal outcome. Moreover, both Rogers and
Grisso have noted that legal determinations provide inade-
quate data for measuring criterion (specifically, predictive)
validity (Grisso, 1986, 2003, 2004; Rogers, Tillbrook, &
Sewell, 2004). Analyses would be complicated further by the
fact that courts vary in the levels of comprehension they require
for valid rights waivers (Viljoen, Zapf, & Roesch, 2007).
Conclusions
The current study provided updated and extended psychomet-
ric data for the revised Miranda comprehension instruments
using a juvenile justice sample. It appears that revisions to
the instruments maintain their utility while reflecting modern
legal practice. Construct validity was not assessed through
factor analysis in the current study, but initial results sup-
port the two hypothetical dimensions of understanding and
appreciation (Zelle et al., 2008). Future research should col-
lect data from community youth, incarcerated adult, and
community adult samples to provide a broader evaluation
of the MRCI’s psychometric properties (AERA, APA, &
NCME, 1999). Future research also should compare the
MRCI with other psychometrically sound measures of
Miranda comprehension, when such measures become available,
to evaluate concurrent validity.
Acknowledgments
The authors would like to thank Tom Grisso, PhD, for his review
of an early version of this article; Lois Oberlander Condie, PhD,
for her role in initiating the revision of the Miranda instruments
and her early work on the project; Robert Listenbee, JD, and the
Philadelphia Defender Association for assistance with participant
recruitment; and the many research assistants who helped with
data collection and entry.
Declaration of Conflicting Interests
The authors declared no conflicts of interest with respect to the
authorship and/or publication of this article.
Funding
The authors received no financial support for the research and/or
authorship of this article.
Notes
1. The final version of the CMV-II contains 16 items.
N. E. S. Goldstein et al. (2003) described the initial version
of the CMV-II with 18 items. However, the CMV-II was
refined further after that publication. Item analysis resulted
in removal of two items (i.e., silent and talk to) from the instru-
ment. The final CMV-II contains 16 items, and data from
these 16 items were used in all relevant statistical analyses
in this article.
2. A new instrument, Perceptions of Coercion during the Hold-
ing and Interrogation Process (P-CHIP), was included with
the original assessment battery. Whereas the other four instru-
ments seek to assess examinees’ capacities in the two domains
of understanding and appreciation, the P-CHIP seeks to
assess waiver behavior. Therefore, the P-CHIP is considered
a supplement to the MRCI battery, and psychometric data are
not presented in this article.
3. Sample warnings were collected from departments in Arizona,
Connecticut, Florida, Massachusetts, Michigan, New Hampshire,
New York, and Virginia.
4. Fifty-seven youth provided assent at the Massachusetts facil-
ity, but two were transferred from the facility before data
collection.
5. The Philadelphia Defender Association represents approxi-
mately 70% of the youth charged and housed in Philadelphia
detention centers (S. Simkins, JD, personal communication,
June 6, 2006).
6. In Pennsylvania, adults who have legal custody of youth are
referred to as “custodians,” not “guardians.” Nonetheless, for
consistency and simplicity, the term guardian is used through-
out this article to refer to all types of nonparental custodians.
7. An estimate of the number of youth who declined participa-
tion is unavailable. The first contact with youth at facilities
was through line staff. Because of human subjects protec-
tions, researchers were prohibited from learning the reasons
that youth were unavailable (e.g., declined, no longer at the
facility, placed on the medical unit).
8. Approximately 6% of the total sample was obtained with
guardian consent, and 94% of the sample was obtained through
youth assent with a participant advocate.
9. Items 4 and 5 of the Nature of Interrogation (NI) subscale
were not administered to participants in Massachusetts for
two reasons. First, instructions in the original instrument
dictated that the items should automatically receive 2 points
if an examinee received 2 points on NI Items 1, 2, and 3.
Second, Items 4 and 5 assessed examinees’ appreciation of
the emotional elements of the interrogation situation, and the
addition of the P-CHIP, which assesses emotional elements
of interrogation, rendered the items redundant. However, for
the sake of thoroughness, these items were added to the admin-
istration procedures for all participants in Pennsylvania.
10. Data from the WIAT, MAYSI-2, and GSS2 are being used to
address separate research questions and were not included in
the current study.
11. To compare findings with Grisso’s, analyses examined par-
ticipants' age using whole-number values (e.g., 13 years); they
did not include months (e.g., 13 years, 6 months).
12. It should be noted that CMR-II scores were nonnormally
distributed, as would be expected given that most people do
well on most of the items. The calculated confidence inter-
vals should be interpreted with caution because nonnormal
distributions result in less precise confidence intervals. The
same caveat applies to the FRI, which also had a nonnormal
distribution of scores.
13. On the rare occasions that youth scored differently at test and
retest, they scored the full two points higher or lower. Thus,
differences that were observed between test and retest were
dramatic.
14. Massachusetts participants who were not administered NI
Items 4 and 5 are excluded from analyses involving the FRI
total and NI subscale calculations because item scores are
necessary to calculate Cronbach’s alpha and standard errors
of measurement.
15. Rogers et al. (2004) suggested assessing the relationship
between the Miranda instruments and measures of reading
and listening comprehension, such as those in the WIAT.
Theoretically, reading and listening comprehension should
relate to Miranda comprehension; however, such analyses ap-
pear to present a secondary level of construct validity and are
not presented in the current article. The relationship between
reading and listening comprehension and the Miranda instru-
ments is being evaluated elsewhere (N. E. S. Goldstein, Riggs
Romaine, & Zelle, 2011-a; N. E. S. Goldstein, Zelle, et al.,
2011-b).
References
Abramovitch, R., Peterson-Badali, M., & Rohan, M. (1995).
Young people’s understanding and assertion of their rights to
silence and legal counsel. Canadian Journal of Criminology,
37, 1-18.
American Educational Research Association, American Psycho-
logical Association, & National Council on Measurement in
Education. (1999). Standards for educational and psychologi-
cal testing. Washington, DC: American Educational Research
Association.
Archer, R. P., Buffington-Vollum, J. K., Stredny, R. V., &
Handel, R. W. (2006). A survey of psychological test use pat-
terns among forensic psychologists. Journal of Personality
Assessment, 87, 84-94.
Cassell, P. G., & Hayman, S. B. (1996). Police interrogation in the
1990s: An empirical study of the effects of Miranda. UCLA
Law Review, 43, 840-931.
Colwell, L. H., Cruise, K. R., Guy, L. S., McCoy, W. K.,
Fernandez, K., & Ross, H. H. (2005). The influence of psy-
chosocial maturity on male juvenile offenders’ comprehension
and understanding of the Miranda warning. Journal of the
American Academy of Psychiatry and Law, 33, 444-454.
Cooper, V. G., & Zapf, P. A. (2008). Psychiatric patients’ com-
prehension of Miranda rights. Law and Human Behavior, 32,
390-405.
Ferguson, A. B., & Douglas, A. C. (1970). A study of juvenile
waiver. San Diego Law Review, 7, 39-54.
Frye v. United States, 293 F. 1013 (App. D.C., 1923).
Goldstein, A. M., & Goldstein, N. E. S. (2010). Evaluating capac-
ity to waive Miranda rights (Volume in Best Practices in
Forensic Mental Health Assessment). New York, NY: Oxford
University Press.
Goldstein, N. E. S., Condie, L. O., Kalbeitzer, R., Osman, D., &
Greier, J. L. (2003). Juvenile offenders’ Miranda rights com-
prehension and self-reported likelihood of offering false con-
fessions. Assessment, 10, 359-369.
Goldstein, N. E. S., Riggs Romaine, C. L., & Zelle, H. (2011-a).
Juveniles’ Miranda comprehension: Understanding, appre-
ciation, and totality of circumstances factors. Manuscript in
preparation.
Goldstein, N. E. S., Zelle, H., & Grisso, T. (2011-b). The Miranda
Rights Comprehension Instruments–II. Sarasota, FL: Profes-
sional Resource Press. Manuscript in preparation.
Grisso, T. (1981). Juveniles’ waiver of rights: Legal and psycho-
logical competence. New York, NY: Plenum Press.
Grisso, T. (1986). Evaluating competencies: Forensic assessments
and instruments. New York, NY: Plenum.
Grisso, T. (1998). Instruments for assessing understanding and
appreciation of Miranda Rights. Sarasota, FL: Professional
Resources Press.
Grisso, T. (2003). Evaluating competencies: Forensic assessments
and instruments (2nd ed.). New York, NY: Kluwer.
Grisso, T. (2004). Reply to “A critical review of competency-to-
confess measures.” Law and Human Behavior, 28, 719-724.
Grisso, T., & Barnum, R. (2000). The Massachusetts Youth Screen-
ing Instrument–Second Version. Sarasota, FL: Professional
Resource Press.
Grisso, T., & Pomicter, C. (1977). Interrogation of juveniles: An
empirical study of procedures, safeguards, and rights waiver.
Law and Human Behavior, 1, 321-342.
Gudjonsson, G. H. (1997). The Gudjonsson Suggestibility Scale
manual. Hove, England: Psychology Press.
In re Gault, 387 U.S. 1 (1967).
Kaufman, A. S., & Lichtenberger, E. O. (2002). Assessing ado-
lescent and adult intelligence (2nd ed.). Boston, MA: Allyn
& Bacon.
Kent v. United States, 383 U.S. 541 (1966).
Lally, S. J. (2003). What tests are acceptable for use in forensic
evaluations? A survey of experts. Professional Psychology:
Research & Practice, 34, 491-498.
Leo, R. A. (1996). Inside the interrogation room. Journal of Crim-
inal Law and Criminology, 86, 266-276.
Miranda v. Arizona, 384 U.S. 436 (1966).
Oberlander, L. B. (1998). Miranda comprehension and confes-
sional competence. Expert Opinion, 2, 11-12.
Oberlander, L. B., & Goldstein, N. E. (2001). A review and update
on the practice of evaluating Miranda comprehension. Behav-
ioral Sciences and the Law, 19, 453-471.
The Psychological Corporation. (1992). Wechsler Individual
Achievement Test. San Antonio, TX: Author.
The Psychological Corporation. (1999). Wechsler Abbreviated
Scale of Intelligence. San Antonio, TX: Author.
Rogers, R. (2008). A little knowledge is a dangerous thing: Emerg-
ing Miranda research and professional roles for psychologists.
American Psychologist, 63, 776-787.
Rogers, R., Harrison, K. S., Shuman, D. W., Sewell, K. W., &
Hazelwood, L. L. (2007). An analysis of Miranda warnings
and waivers: Comprehension and coverage. Law and Human
Behavior, 31, 177-192.
Rogers, R., Hazelwood, L. L., Sewell, K. W., Blackwood, H. L.,
Rogstad, J. E., & Harrison, K. S. (2009). Development and
initial validation of the Miranda Vocabulary Scale. Law and
Human Behavior, 33, 381-392.
Rogers, R., Hazelwood, L. L., Sewell, K. W., Harrison, K. S., &
Shuman, D. W. (2008). The language of Miranda warnings in
American jurisdictions: A replication and vocabulary analysis.
Law and Human Behavior, 32, 124-136.
Rogers, R., Jordan, M. J., & Harrison, K. S. (2004). A critical
review of published competency-to-confess measures. Law
and Human Behavior, 28, 707-718.
Rogers, R., Rogstad, J. E., Gillard, N. D., Drogin, E. Y.,
Blackwood, H. L., & Shuman, D. W. (2010). “Everyone
knows their Miranda rights”: Implicit assumptions and
countervailing evidence. Psychology, Public Policy, and
Law, 16, 300-318.
Rogers, R., Tillbrook, C., & Sewell, K. (2004). Evaluation of
Competency to Stand Trial–Revised (ECST-R): Professional
manual. Lutz, FL: Psychological Assessment Resources.
Ryba, N. L., Brodsky, S. L., & Shlosberg, A. (2007). Evaluations
of capacity to waive Miranda rights: A survey of practitioners’
use of the Grisso instruments. Assessment, 14, 300-309.
Viljoen, J. L., Klaver, J., & Roesch, R. (2005). Legal decisions of
preadolescent and adolescent defendants: Predictors of confes-
sions, pleas, communication with attorneys, and appeals. Law
and Human Behavior, 29, 253-277.
Viljoen, J. L., & Roesch, R. (2005). Competence to waive inter-
rogation rights and adjudicative competence in adolescent
defendants: Cognitive development, attorney contact, and psy-
chological symptoms. Law and Human Behavior, 26, 481-506.
Viljoen, J. L., Zapf, P. A., & Roesch, R. (2007). Adjudicative
competence and comprehension of Miranda rights in adoles-
cent defendants: A comparison of legal standards. Behavioral
Sciences and the Law, 25, 1-19.
Zelle, H., Goldstein, N. E. S., Riggs Romaine, C. L., Serico, J. M.,
Kemp, K., & Taormina, S. (2008, March). Factor structure of
the Miranda Rights Comprehension Instruments–II. Paper pre-
sented at the Annual Conference of the American Psychology-
Law Society, Jacksonville, FL.