Applied Neuropsychology: Adult
ISSN: 2327-9095 (Print) 2327-9109 (Online) Journal homepage: https://www.tandfonline.com/loi/hapn21
Short-term effects of cannabis consumption on cognitive performance in medical cannabis patients

To cite this article: Phillip Olla, Nicholas Rykulski, Jessica L. Hurtubise, Stephen Bartol, Rachel Foote, Laura Cutler, Kaitlyn Abeare, Nora McVinnie, Alana G. Sabelli, Maurissa Hastings & Laszlo A. Erdodi (2019): Short-term effects of cannabis consumption on cognitive performance in medical cannabis patients, Applied Neuropsychology: Adult

To link to this article: https://doi.org/10.1080/23279095.2019.1681424

Published online: 02 Dec 2019.
Short-term effects of cannabis consumption on cognitive performance in medical cannabis patients

Phillip Olla(a), Nicholas Rykulski(b), Jessica L. Hurtubise(c), Stephen Bartol(d), Rachel Foote(a), Laura Cutler(c), Kaitlyn Abeare(c), Nora McVinnie(e), Alana G. Sabelli(c), Maurissa Hastings(c), and Laszlo A. Erdodi(c)

(a) Audacia Bioscience, Windsor, ON, Canada; (b) College of Human Medicine, Michigan State University, Lansing, MI, USA; (c) Department of Psychology, University of Windsor, Windsor, ON, Canada; (d) School of Medicine, Wayne State University, Detroit, MI, USA; (e) Brain-Cognition-Neuroscience Program, University of Windsor, Windsor, ON, Canada
ABSTRACT
This observational study examined the acute cognitive effects of cannabis. We hypothesized that cognitive performance would be negatively affected by acute cannabis intoxication. Twenty-two medical cannabis patients from Southwestern Ontario completed the study. The majority (n = 13) were male. Mean age was 36.0 years, and mean level of education was 13.7 years. Participants were administered the same brief neurocognitive battery three times during a six-hour period: at baseline ("Baseline"), once after they consumed a 20% THC cannabis product ("THC"), and once again several hours later ("Recovery"). The average self-reported level of cannabis intoxication prior to the second assessment (i.e., during THC) was 5.1 out of 10. Contrary to expectations, performance on neuropsychological tests remained stable or even improved during the acute intoxication stage (THC; d: .49–.65, medium effect), and continued to increase during Recovery (d: .45–.77, medium-large effect). Interestingly, the failure rate on performance validity indicators increased during THC. Contrary to our hypothesis, there was no psychometric evidence for a decline in cognitive ability following THC intoxication. There are several possible explanations for this finding but, in the absence of a control group, no definitive conclusion can be reached at this time.

KEYWORDS: Cannabis; cognitive functioning; performance validity; repeated testing
Background
Cannabis sativa contains plant-based cannabinoids that mimic the homeostatic functions of endogenous neurotransmitters, including motor control, pain and pleasure, immune system, temperature, mood, and memory (Di Marzo, 2001; Mills & Brawley, 1972). Memory impairment is the most commonly observed cognitive symptom in the acute stages of THC consumption. Mechoulam and Parker (2013) found that long-term memory retrieval remained intact following cannabis use despite apparent declines in working memory and memory consolidation. Functional disruptions in attention, emotional processing, and chronoagnosia have also been reported (Englund et al., 2013; Englund et al., 2016; Hindocha et al., 2015; Schoedel et al., 2011; Wade, Robson, House, Makela, & Aram, 2003).
In contrast, some investigators have found no significant impairments in attention, processing speed, or executive functioning (Bhattacharyya et al., 2010; Roser et al., 2008; Winton-Brown et al., 2011). As proposed by Colizzi and Bhattacharyya (2017), the inconsistency across studies may reflect variance in THC concentration, method of consumption, idiosyncratic metabolic responses, and interactions associated with polysubstance use. Tolerance also has a significant impact (Desrosiers, Ramaekers, Chauchard, Gorelick, & Huestis, 2015; Colizzi & Bhattacharyya, 2018), but its precise mechanism is poorly understood in medical cannabis users.
The historical classification of cannabis as an illicit drug inhibited research on its cognitive effects (Hall, 2018). Furthermore, beyond individual differences in physiology, premorbid emotional and cognitive functioning, personality traits, consumption history, and research designs are thought to moderate THC symptom expression (Mills & Brawley, 1972). To reduce the confounds involved in determining the acute cognitive effects of cannabis on performance, this study used a pragmatic clinical trial design in a cohort of medical cannabis patients. Based on previous reports of deficits in neuropsychological functioning associated with cannabis use in clinical patients (Honarmand, Tierney, O'Connor, & Feinstein, 2011), we hypothesized that cognitive performance would be negatively affected by acute cannabis intoxication.
Method
Participants
The majority (59.1%) of the 22 community volunteers were male. Mean age was 36.0 years (SD = 9.4). The mean level of education was 13.7 years (SD = 1.7). Inclusion criteria were: 24 years of age or older, native speaker of English, medical marijuana license, medically stable, a history of regular cannabis use ≥6 months, and peripheral veins suitable for repeated venipuncture. Exclusion criteria were pregnancy, allergy to any cannabinoid or marijuana smoke, and taking opioids or any other medication deemed to interact with cannabis during the medical screening for study eligibility. The most common reason for a medical marijuana prescription was psychiatric disorder (n = 15 or 68.2%), followed by musculoskeletal (n = 4 or 18.2%), (auto)immune (n = 2 or 9.1%), and respiratory (n = 1 or 4.5%) illnesses. The majority of the sample (n = 12 or 54.5%) identified pain management as one of the reasons for which medical marijuana was prescribed. Average self-reported cannabis consumption was 3.2 grams/day (SD = 1.5, range: 1–14). Adding a control group to the study was logistically prohibitive. Detailed participant information is provided in Table 1.

CONTACT: Laszlo A. Erdodi, lerdodi@gmail.com, Department of Psychology, University of Windsor, 168 Chrysler Hall South, 401 Sunset Ave, Windsor, ON N9B 3P4, Canada.
© 2019 Taylor & Francis Group, LLC
Materials
Given the logistical complexity of the overall study, the time frame for psychometric testing was too narrow for a thorough assessment of cognitive functioning. Therefore, test selection was optimized to balance the competing demands of brevity and comprehensiveness. The final battery consisted of brief measures of neuropsychological functioning that are known to be sensitive to diffuse cognitive deficits (Axelrod, Fichtenberg, Liethen, Czarnota, & Stucky, 2001; Donders & Strong, 2015; Henry & Crawford, 2005; Lynch, Dickerson, & Denney, 2010) and that present examinees with tasks of varying difficulty levels.
Moreover, tests were carefully selected to cover the main neurocognitive domains (language, attention, working memory, processing speed, and executive function). However, due to the time constraints, other relevant domains (visual and auditory memory, verbal and non-verbal reasoning, visual-spatial-perceptual skills, sustained attention, concept formation) could not be sampled. Table 2 lists the tests and provides a brief description of each task and the underlying cognitive construct it was designed to measure. All instruments included in the study are established, well-validated tests of their target construct (Lezak, Howieson, Bigler, & Tranel, 2012; Strauss, Sherman, & Spreen, 2006). Identical test batteries were administered three times without deviation.
In addition to measuring cognitive ability, the credibility of the response set was continuously monitored using embedded validity indicators. Administering multiple performance validity tests (PVTs) throughout the assessment is consistent with established guidelines in clinical neuropsychology (Boone, 2009; Bush, Heilbronner, & Ruff, 2014; Schutte, Axelrod, & Montoya, 2015), and continues to be supported by recent empirical evidence (Critchfield et al., 2019; Erdodi, Tyson et al., 2018; Lichtenstein et al., 2018a; Schroeder, Olsen, & Martin, 2019). Free-standing PVTs were designed explicitly to evaluate the credibility of a response set and, therefore, have been considered the gold standard instruments. In contrast, embedded PVTs are "after-market" modifications (in structure or function) of existing measures of cognitive ability that were independently calibrated to differentiate between valid and invalid response sets (Boone, 2013; Larrabee, 2012; Lichtenstein et al., 2018b; Rai, An, Charles, Ali, & Erdodi, 2019).

Although free-standing PVTs generally tend to have superior classification accuracy (Jelicic, Ceunen, Peters, & Merckelbach, 2011), embedded PVTs provide cost-effective alternatives in settings where assessors operate under significant time and volume pressures (Erdodi, 2019; Lichtenstein, Erdodi, & Linnea, 2017). Occasionally, embedded PVTs have demonstrated classification accuracy equivalent (Bashem et al., 2014; Reese, Suhr, & Riddle, 2012; Webber & Soble, 2018) or even superior (Roye, Calamia, Bernstein, De Vito, & Hill, 2019; Tyson et al., 2018) to their free-standing counterparts. Therefore, trends in test usage are shifting toward integrating free-standing and embedded PVTs (Martin, Schroeder, & Odland, 2015).
The importance of symptom and performance validity assessment is broadly recognized in North America (Boone, 2009; Chafetz et al., 2015; Schutte et al., 2015) and Europe (Becke et al., 2019; Dandachi-FitzGerald, Ponds, & Merten, 2017; von Helvoort, Merckelbach, & Merten, 2019; Merten & Merckelbach, 2013). Repeated PVT failures render the overall neurocognitive profile invalid and, hence, clinically uninterpretable (Boone, 2013; Larrabee, 2012). The commonly accepted forensic standard for operationalizing global non-credible responding is ≥2 PVT failures (Boone, 2013; Larrabee, 2014). Although an isolated PVT failure is considered insufficient evidence to render an entire neurocognitive profile invalid, some argue that even a single PVT failure raises concerns about the veracity of the overall response set (Erdodi, Hurtubise et al., 2018; Lichtenstein et al., 2019; Lippa, 2018; Proto et al., 2014). A strong (Green, Rohling,
Table 1. Sample characteristics: BMI, self-reported daily amount of cannabis use, duration of use, and medical condition for which cannabis was prescribed.

ID  BMI  Cannabis Usage/Day (g)  Sex  Medical condition
1 28.1 3 Male Back pain
2 29.2 2 Female Back pain
3 31.3 3.5 Male Chronic pain
4 19.7 1 Female Degenerative disc disorder
5 26.6 2 Female Anxiety
6 22.4 2.5 Female Depression
7 44.3 5 Male Anxiety
8 - 1 Male Anxiety
9 46.7 2 Female Immune
10 34.7 2 Female Knee pain
11 22.8 3 Male Anxiety
12 17.8 2 Male Arthritis
13 19.6 2 Male Scoliosis
14 20.9 2 Male Osteoarthritis
15 34.1 1.25 Male Lower back pain
16 35.6 1.5 Female Anxiety
17 30.6 14 Male Spinal dysraphism
18 29.1 9 Male Anxiety
19 27.7 3.5 Female Sleep apnea
20 18.7 3 Male Back pain
21 51.6 1.5 Male Back pain
22 41.5 3 Male Chronic pain
ID: Participant identifier; BMI: Body Mass Index.
Table 2. Components of the neuropsychological battery.

Animal Fluency: A measure of semantic fluency and executive control. Examinee is asked to name as many animals as possible. Time limit: 60 seconds. Scale: T. Norms: Heaton, Miller, Taylor, and Grant (2004): n = 1,148; M(Age) = 50.0, M(Educ) = 13.5.

BNT-15: A measure of expressive language through a confrontation naming task. Examinee is asked to name the objects on 15 black-and-white single-line drawings. Time limit: 20 seconds per picture. Scale: raw. Norms: An et al. (2019): n = 40 students, M(BNT-15) = 13.9, SD = 1.2, M(Age) = 22.9, M(Educ) = 14.6; Goodglass et al. (2001): n = 15, M(BNT-15) = 14.1, SD = 0.8.

Coding (WAIS-IV): A measure of attention and visuomotor processing speed, based on a symbol substitution task. Examinee is asked to fill in a blank matrix as fast as possible following a key (digits 1 through 9 paired with nonsense symbols). Time limit: 120 seconds. Scale: ACSS. Norms: Wechsler (1997): n = 2,450, stratified random sample of the US population (age, gender, education, race & region).

Digit Span (WAIS-III): A measure of auditory attention (Digits Forward) and working memory (Digits Backward) based on the digit repetition paradigm. Examinee is asked to repeat random number sequences of increasing length. Scale: ACSS, z. Norms: Wechsler (1997): n = 2,450, stratified random sample of the US population (age, gender, education, race & region).

Stroop (D-KEFS) Color Naming: A measure of attention and speed of information processing through a speeded naming task. Examinee is asked to name 50 colored patches as fast as possible. Scale: ACSS. Norms: Delis, Kaplan, and Kramer (2001): n = 1,750, stratified random sample of the US population (age, gender, education, race & region).

Stroop (D-KEFS) Word Reading: A measure of attention and speed of information processing through a speeded reading task. Examinee is asked to read 50 color names printed in black ink as fast as possible. Scale: ACSS. Norms: Delis et al. (2001): n = 1,750, stratified random sample of the US population (age, gender, education, race & region).

Stroop (D-KEFS) Interference: A measure of inhibition and cognitive flexibility. Examinee is asked to name the color of the ink for 50 color names printed in incongruent ink color as fast as possible. Scale: ACSS. Norms: Delis et al. (2001): n = 1,750, stratified random sample of the US population (age, gender, education, race & region).

TMT-A: A measure of simple attention, visual scanning and processing speed. Examinee is asked to draw a line to sequentially connect 25 encircled numbers. Scale: T. Norms: Heaton et al. (2004): n = 1,212; M(Age) = 46.6, M(Educ) = 13.6.

TMT-B: A measure of visual scanning, divided attention and cognitive flexibility. Examinee is asked to draw a line connecting an alternating sequence of numbers (increasing order) and letters (alphabetical order). Scale: T. Norms: Heaton et al. (2004): n = 1,212; M(Age) = 46.6, M(Educ) = 13.6.

BNT-15: Boston Naming Test – Short Form; WAIS-IV: Wechsler Adult Intelligence Scale – Fourth Edition; WAIS-III: Wechsler Adult Intelligence Scale – Third Edition; D-KEFS: Delis-Kaplan Executive Function System; TMT: Trail Making Test; ACSS: age-corrected scaled scores (M = 10, SD = 3).
Lees-Haley, & Allen, 2001), negative linear (Abeare et al., 2019; Berger et al., 2019; Erdodi, Abeare et al., 2018) relationship between PVT failures and cognitive test scores has long been established, emphasizing the need for objective, ongoing monitoring of performance validity during cognitive testing (Boone, 2009). Table 3 provides the cutoffs, and the corresponding references, for the PVTs embedded within the test battery used in this study.
Procedure
Participants were recruited via social media from the medical cannabis community in Southwestern Ontario, Canada. Three hundred participants registered for the study, of whom 30 completed a medical interview via a telemedicine service to verify eligibility. Twenty-three participants reported to the study, but one withdrew early due to adverse effects following cannabis consumption. In total, 22 participants completed the full design. The study was conducted on a single day, from 8:30 AM to 3:00 PM. The project was approved by the University Research Ethics Board, and ethical guidelines regulating research involving human participants were followed throughout the study.
Participants were followed throughout the duration of the study to monitor potential adverse effects. The first round of cognitive testing (Baseline) was administered after obtaining informed consent. Next, participants consumed one gram of Cannabis sativa (20% THC) via vapes, cannabis cigarettes ("joints"), or dabs over 10 minutes, and were then asked to report their subjective level of intoxication on a visual analog scale. After 30 minutes of relaxation, the second round of cognitive testing was performed (THC). Finally, the third round of cognitive testing (Recovery) was performed 2.5–3 hours later, after which participants were discharged from the study.

The same neuropsychological battery (Table 2) was administered three times (Baseline, THC, and Recovery) to all participants. Testing sessions took 15–20 minutes to complete. Psychometric testing was administered by research assistants (RAs) under the on-site supervision of a licensed clinical neuropsychologist. Given that participants were medical patients, it was deemed unethical to request that they abstain from consuming cannabis prior to their enrollment in the study.
Data analysis
Descriptive statistics were reported for each of the three test administrations. The main inferential statistics were repeated-measures ANOVAs, with time of administration as the independent variable and the various neuropsychological tests as the dependent variables. Post hoc contrasts were calculated using uncorrected within-group t-tests. Independent t-tests were performed to compare performance in the current study to that of normative samples. All contrasts were two-tailed, at the .05 significance level. Effect size estimates were partial eta squared (η²p) and Cohen's d, respectively.
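The analytic pipeline described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' actual code: it implements a one-way repeated-measures ANOVA with partial eta squared, plus an uncorrected post hoc paired t-test; the Cohen's d convention used here (mean difference over the pooled SD of the two sessions) is an assumption, since the article does not state its exact formula.

```python
import numpy as np
from scipy import stats

def rm_anova(X):
    """One-way repeated-measures ANOVA on an (n_subjects, k_conditions)
    array. Returns F, p, and partial eta squared."""
    n, k = X.shape
    grand = X.mean()
    ss_cond = n * ((X.mean(axis=0) - grand) ** 2).sum()    # between-conditions
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()    # between-subjects
    ss_err = ((X - grand) ** 2).sum() - ss_cond - ss_subj  # residual
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    p = stats.f.sf(F, df_cond, df_err)
    return F, p, ss_cond / (ss_cond + ss_err)

def cohens_d(a, b):
    """Cohen's d: mean difference over the pooled SD of the two sessions
    (one common convention; assumed here)."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / pooled

# Illustrative data: 22 participants x 3 sessions (Baseline, THC, Recovery)
rng = np.random.default_rng(0)
scores = rng.normal(10, 3, size=(22, 3))
F, p, eta_p2 = rm_anova(scores)
t, p_13 = stats.ttest_rel(scores[:, 0], scores[:, 2])  # uncorrected post hoc
d_13 = cohens_d(scores[:, 0], scores[:, 2])
```

For k = 2 conditions, the F statistic from this decomposition equals the square of the paired t statistic, which is a quick sanity check on the sums-of-squares arithmetic.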
Results
The average self-reported level of cannabis intoxication prior
to the second assessment was 5.1 out of 10. To facilitate the
visual comparison of scores across time, tests sharing the
same scale of measurement (raw scores, ACSS, T-scores and
z-scores) were grouped during the construction of tables and figures. However, the narrative description of the results is structured around cognitive domains. Additionally, the credibility of the response sets is evaluated throughout by reporting the outcomes of the embedded PVTs within each cognitive domain discussed.
Language
A significant difference associated with a large overall effect emerged on the BNT-15 accuracy raw scores. The only significant post hoc pairwise contrast was between Baseline and Recovery (medium effect). However, as shown in Table 4, participants performed near the ceiling at all times (M: 14.4–14.6), with the majority (54.5–63.6%) obtaining perfect scores (15). No one scored <13, the threshold for non-credible performance. The proportion of the sample
Table 3. Performance validity indicators embedded within the neuropsychological tests.

Animal Fluency: T ≤ 33. Sugarman and Axelrod (2015).
BNT-15: ACC ≤ 12. An et al. (2019); Erdodi, Hurtubise et al. (2018). T2C ≥ 85. An et al. (2019).
Coding (WAIS-IV): ACSS ≤ 5. Erdodi, Abeare et al. (2017); Erdodi and Lichtenstein (2017); Kim et al. (2010).
Digit Span (WAIS-III): RDS ≤ 7. Greiffenstein, Baker, and Gola (1994); Critchfield et al. (2019); Schroeder et al. (2012). LDF ≤ 4. Babikian, Boone, Lu, and Arnold (2006); Heinly, Greve, Bianchini, Love, and Brennan (2005). LDB ≤ 2. Heinly et al. (2005).
Stroop (D-KEFS) Color Naming: ACSS ≤ 6. Erdodi, Sagar et al. (2018).
Stroop (D-KEFS) Word Reading: ACSS ≤ 6. Erdodi, Sagar et al. (2018).
Stroop (D-KEFS) Interference: ACSS ≤ 6. Erdodi, Sagar et al. (2018).
Trail Making Test A: T ≤ 37. Abeare et al. (2019); Erdodi and Lichtenstein (2019).
Trail Making Test B: T ≤ 35. Abeare et al. (2019); Erdodi and Lichtenstein (2019).

BNT-15: Boston Naming Test – Short Form; WAIS-IV: Wechsler Adult Intelligence Scale – Fourth Edition; WAIS-III: Wechsler Adult Intelligence Scale – Third Edition; D-KEFS: Delis-Kaplan Executive Function System; T: T-score (M = 50, SD = 10); ACC: accuracy score (number of correct answers out of 15); T2C: time to completion (sum of response latencies across the 15 items, in seconds); ACSS: age-corrected scaled scores (M = 10, SD = 3); RDS: reliable digit span; LDF: longest digit span forward; LDB: longest digit span backward.
with the lowest observed score (13) declined with each subsequent administration: 13.6%, 9.1%, and 4.5%, respectively. Participants outperformed the control group of 40 Canadian undergraduate students enrolled in the study by An et al. (2019), the most appropriate normative data available. In addition, participants scored higher than the normative sample in the technical manual (Goodglass, Kaplan, & Barresi, 2001). All contrasts were significant and associated with medium-large effects (d: 0.39–0.72).
Similarly, a very large effect was observed on the animal fluency test, driven by the lower mean performance during Baseline (still in the Average range). A significant improvement associated with medium-large effects was observed across the two subsequent administrations (Table 5). During the last two administrations, participants also outperformed the normative sample (M = 50), producing medium-large effects (d: 0.50–0.70). The base rate of failure (BR_Fail) on the validity cutoff (T ≤ 33; Sugarman & Axelrod, 2015) was low (4.5% during Baseline, 0.0% during subsequent testing).
Simple attention and processing speed
There was no change in performance across time on the overall Digit Span age-corrected scaled score (Table 6). BR_Fail on the validity cutoff (Reliable Digit Span ≤ 7) was low (4.5%) during Baseline and THC, and zero during Recovery. Participants consistently outperformed the normative sample (M = 10) at all times, with medium effect sizes (d: 0.50–0.67). The negative findings extended to age-corrected z-scores for Longest Digits Forward (Table 7). As before, participants consistently outperformed the normative sample (M = 0.0), with large effects (d: 0.72–0.82). BR_Fail on the validity cutoff (≤4 raw score) was consistently zero.
On the two measures of graphomotor speed (Trails A and Coding), there was a large main effect. Following Average range performances during Baseline and THC, participants produced a Trails A mean in the High Average range during Recovery (Table 5), a score that was superior to both previous assessments (d: 0.46–0.85, medium and large effects). In addition, the Recovery mean was significantly higher than the normative average (M = 50), and associated with a large effect (d = 0.85). In contrast, none of the three means on Coding was significantly different from the normative sample (M = 10; Table 6). However, the mean during Recovery was superior to the one during Baseline (d = 0.56, medium effect). BR_Fail on the Coding validity cutoff (≤5) was consistently low (4.5%) throughout the assessments. BR_Fail was higher on the validity cutoff for Trails A (T ≤ 37): 14.3% during Baseline, 13.6% during THC, and 0.0% during Recovery.
Similarly, a large main effect emerged on the Color Naming subtest of the D-KEFS, driven by the High Average range performance during Recovery, which was superior to both of the previous administrations (d: 0.46–0.51, medium effects). Performance during Baseline and Recovery (but not THC) was significantly higher than the normative mean (M = 10) and was associated with medium-large effects (d: 0.37–0.69). In contrast, the main effect was non-significant on the Word Reading subtest of the D-KEFS (Table 6). Similarly, none of the post hoc pairwise contrasts reached statistical significance. However, as with Color Naming, performance during Baseline and Recovery (but not THC) was significantly higher than the normative mean (d: 0.58–0.62, medium effects). BR_Fail was consistently zero on the Color Naming validity cutoff (≤6). On Word Reading, BR_Fail on the same validity cutoff was zero
Table 4. Changes in raw scores across time on neurocognitive testing.

BNT-15 Accuracy: Time 1 M = 14.4* (SD = 0.73), Time 2 M = 14.5* (0.67), Time 3 M = 14.6* (0.69); F = 3.64(a), p = .035, η²p = .148; sig. post hocs: 1-3 (d = .46).
BNT-15 T2C: Time 1 M = 32.9 (SD = 15.5), Time 2 M = 27.7* (13.4), Time 3 M = 23.0* (11.0); F = 8.29, p = .001, η²p = .283; sig. post hocs: 1-2 (d = .49), 1-3 (d = .77), 2-3 (d = .45).

ANOVA: analysis of variance; sig. post hocs: post hoc uncorrected pairwise t-tests with p < .05; BNT-15: Boston Naming Test – Short Form; T2C: time to completion (seconds).
(a) Mauchly's test of sphericity significant at p < .05.
* Mean significantly different from the mean of the control group in the study by An et al. (2019) (p < .05).
Table 5. Changes in T-scores (M = 50, SD = 10) across time on neurocognitive testing.

Animals: Time 1 M = 49.8 (SD = 10.3), Time 2 M = 55.6* (12.1), Time 3 M = 57.4* (11.0); F = 5.98, p = .005, η²p = .222; sig. post hocs: 1-2 (d = .65), 1-3 (d = .74).
TMT-A: Time 1 M = 48.6 (SD = 12.1), Time 2 M = 52.8 (14.5), Time 3 M = 58.4* (9.8); F = 7.46, p = .002, η²p = .272; sig. post hocs: 1-3 (d = .85), 2-3 (d = .46).
TMT-B: Time 1 M = 48.1 (SD = 10.6), Time 2 M = 51.0 (12.1), Time 3 M = 54.8* (10.7); F = 4.57, p = .016, η²p = .186; sig. post hocs: 1-3 (d = .77).

ANOVA: analysis of variance; sig. post hocs: post hoc uncorrected pairwise t-tests with p < .05; TMT: Trail Making Test.
* Mean significantly different from the mean of the normative sample (p < .05).
Table 6. Changes in age-corrected scaled scores (M = 10, SD = 3) across time on neurocognitive testing.

Coding: Time 1 M = 9.7 (SD = 2.6), Time 2 M = 10.1 (2.8), Time 3 M = 10.9 (3.0); F = 4.28, p = .020, η²p = .169; sig. post hocs: 1-3 (d = .56).
Digit Span: Time 1 M = 11.9* (SD = 3.1), Time 2 M = 11.9* (2.7), Time 3 M = 11.4* (2.6); F = 0.63, p = .641, η²p = .029; sig. post hocs: none.
D-KEFS Color Naming: Time 1 M = 10.9* (SD = 1.7), Time 2 M = 10.7 (2.1), Time 3 M = 11.7* (1.8); F = 3.60, p = .036, η²p = .146; sig. post hocs: 1-3 (d = .46), 2-3 (d = .51).
D-KEFS Word Reading: Time 1 M = 11.5* (SD = 1.7), Time 2 M = 10.8 (2.5), Time 3 M = 11.5* (2.1); F = 2.53, p = .092, η²p = .107; sig. post hocs: none.
D-KEFS Stroop: Time 1 M = 11.1* (SD = 2.4), Time 2 M = 11.7* (2.7), Time 3 M = 12.5* (1.8); F = 2.68, p = .081, η²p = .113; sig. post hocs: 1-3 (d = .49).

ANOVA: analysis of variance; sig. post hocs: post hoc uncorrected pairwise t-tests with p < .05; D-KEFS: Delis-Kaplan Executive Function System.
* Mean significantly different from the mean of the normative sample (p < .05).
during Baseline, increased to 9.1% during THC and dropped
to 4.5% during Recovery.
Lastly, a very large main effect emerged on the time-to-completion variable for the BNT-15 (i.e., the sum of response latencies for the 15 items, in seconds). All post hoc pairwise contrasts were significant (d: 0.45–0.77, medium-large effects). Mean time-to-completion was lower (i.e., participants responded faster) during THC and Recovery (but not Baseline) compared to the control group in the An et al. (2019) study (M = 43.4 s), with large effects (d: 0.72–0.96). Further details are displayed in Table 4. BR_Fail was consistently zero on the validity cutoff (≥85 s).
Executive function
A large main effect emerged on Trails B (Table 5). The only significant post hoc pairwise contrast was between Baseline and Recovery (d = 0.77, large effect). Performance during Recovery was the only one superior to the normative mean (M = 50), with a medium effect (d = 0.46). BR_Fail on the validity cutoff (T ≤ 35) was higher during Baseline (13.6%), 9.5% during THC, and 0.0% during Recovery.

Similarly, a large main effect was observed on the Stroop task within the D-KEFS (Table 6). As with Trails B, the only significant post hoc pairwise contrast was between Baseline and Recovery (d = 0.49, medium effect). Participants outperformed the normative sample (M = 10) during all three administrations (d: 0.40–1.01, medium-large effects). BR_Fail was low (4.5% during Baseline and THC; zero during Recovery) on the Stroop task using the same benchmark for non-credible responding applied to Color Naming and Word Reading (≤6). Finally, the main effect was non-significant on the Longest Digit Span Backward (Table 7). Likewise, none of the post hoc pairwise contrasts reached statistical significance, nor was performance different from the normative mean (M = 0.0) during any of the three assessments. BR_Fail on the validity cutoff (≤2 raw score) was consistently zero.
Performance validity
As is apparent from the results above, BR_Fail on univariate cutoffs was consistently low (0.0–14.3%), with zero being the modal value. Examining cumulative PVT failures did not change the overall outcome. During Baseline, the majority (72.7%) of participants passed all PVTs; 18.2% failed only one, and 9.1% failed at least two. Similarly, during THC, 72.7% of the sample passed all PVTs; 9.1% failed one, and 18.2% failed at least two. Finally, during Recovery, 90.9% of participants passed all PVTs; 9.1% failed one, while no one failed more than one. Figure 1 provides a visual summary of the distribution of PVT failures over time.
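The tallying procedure above is simple to operationalize. The sketch below is illustrative only (the variable names and score dictionary are hypothetical): it counts failures against a few Table 3 style "score at or below cutoff" rules and applies the ≥2-failure threshold for a non-credible profile.

```python
# Illustrative cutoffs in the spirit of Table 3: a score at or below the
# cutoff counts as a PVT failure. (The BNT-15 time-to-completion cutoff,
# where *higher* is worse, would need a separate rule.)
CUTOFFS = {
    "animal_fluency_T": 33,
    "bnt15_accuracy": 12,
    "coding_acss": 5,
    "reliable_digit_span": 7,
    "stroop_color_acss": 6,
    "tmt_a_T": 37,
    "tmt_b_T": 35,
}

def count_failures(scores):
    """Count embedded PVT failures for one participant's session."""
    return sum(scores[name] <= cutoff
               for name, cutoff in CUTOFFS.items() if name in scores)

def profile_invalid(scores, threshold=2):
    """Apply the common >= 2 PVT-failure criterion for global invalidity."""
    return count_failures(scores) >= threshold
```

Run per participant and per session (Baseline, THC, Recovery), this yields exactly the pass/fail-one/fail-two-or-more breakdown reported above.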
Discussion
This study was designed to examine the effect of cannabis on cognitive functioning using an observational pragmatic clinical trial. Medical marijuana users were administered a brief neurocognitive battery three times in six hours: at Baseline, after they smoked marijuana, and again several hours later. There was no psychometric evidence for a decline in performance on cognitive testing following THC intoxication.

The results may have several possible explanations, all of which are limited by the absence of a control group and, therefore, are inherently speculative. Nevertheless, they are outlined below to provide a list of possible factors to consider in future research. In our opinion, the most plausible explanation for the negative findings is that THC suppressed the normative learning effects, although alternative explanations cannot be ruled out. If the deleterious effects of cannabis on cognition are demonstrated in future studies, this would have clinical and practical significance, suggesting that chronic cannabis use may interfere with treatment adherence and adaptive functioning (driving, work, child-care).
Experimental confounds
First, all participants were chronic users of marijuana. As such, the intervention was not a novel experience. The mean self-reported subjective sense of intoxication substantiates this concern: participants indicated that they experienced being about "half-way high." This may be partly due to the 10-min restriction on cannabis consumption during
Table 7. Changes in z-scores (M = 0.0, SD = 1.0) across time on neurocognitive testing.

Test                      Time   M      SD     F      p      ηp²    Sig. post hocs
Longest Digits Forward     1     0.72*  0.82   0.13   .876   .006   None
                           2     0.70*  0.93
                           3     0.79*  0.92
Longest Digits Backward    1     0.26   1.06   0.58   .565   .027   None
                           2     0.17   1.18
                           3     0.01   0.70

ANOVA: analysis of variance; Sig. post hocs: post hoc uncorrected pairwise t-tests with p-value < .05.
* Mean is significantly different from the mean of the normative sample (p < .05).
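The repeated-measures ANOVAs summarized in Table 7 can be reproduced from a raw score matrix with a short routine. The sketch below is a minimal illustration, assuming a complete-case (n subjects × k time points) layout; the function name and toy data are ours, not taken from the study:

```python
import numpy as np
from scipy import stats

def rm_anova(scores):
    """One-way repeated-measures ANOVA.

    scores: (n_subjects, k_conditions) array, e.g., z-scores at
    Baseline, THC, and Recovery. Returns (F, p, partial eta squared).
    """
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_time = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between time points
    ss_subj = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_error = ((x - grand) ** 2).sum() - ss_time - ss_subj
    df_time, df_error = k - 1, (n - 1) * (k - 1)
    f_stat = (ss_time / df_time) / (ss_error / df_error)
    p = stats.f.sf(f_stat, df_time, df_error)
    eta_p2 = ss_time / (ss_time + ss_error)
    return f_stat, p, eta_p2

# Toy data: 5 subjects x 3 time points (illustrative values only).
demo = np.array([[0.5, 0.6, 0.9],
                 [0.2, 0.1, 0.4],
                 [0.8, 0.9, 1.1],
                 [0.1, 0.3, 0.2],
                 [0.6, 0.5, 0.8]])
F, p, eta = rm_anova(demo)
```

With two conditions, the resulting F reduces to the square of the paired-samples t statistic, which offers a convenient cross-check of the implementation.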
Figure 1. The distribution of failures on PVTs (performance validity tests) across Baseline, THC, and Recovery (y-axis: percent of the sample, 0-100; series: 0 PVT failures and ≥2 PVT failures). Failing two or more PVTs is the commonly used threshold for deeming a given cognitive profile invalid (Boone, 2013; Larrabee, 2014; Lippa, 2018).
6 P. OLLA ET AL.
the study imposed by the Research Ethics Board to manage
the risk of potential adverse effects. Specifically, there were
safety concerns with regards to allowing participants to self-
titrate dosing of cannabis. Naturally, this attenuated both
the magnitude of the intervention and the ecological validity
of the research design.
Second, practice effects may have been a major experi-
mental confound. A fundamental assumption underlying
neuropsychological testing is the novelty of the task (Lezak
et al., 2012). However, by the time the second and most crit-
ical assessment (THC) was performed, the participants
already had exposure to the entire battery. While THC elic-
its acute brain-behavior changes best captured through rapid
serial testing, practice effects might neutralize them and
result in a net-zero effect. Although variable (ranging from
small to very large), the "retest effect" itself can be quite
robust and, as such, it may inadvertently mask the main
effect of interest (Basso, Bornstein, & Lang, 1999; Beglinger
et al., 2005; Calamia, Markon, & Tranel, 2012; Heilbronner
et al., 2010; McCaffrey, Ortega, & Haase, 1993; Zuccato,
Tyson, & Erdodi, 2018).
Moreover, the magnitude of practice effects can vary as a
function of examinee characteristics, instruments (type and
version of the test), cognitive domains, and scheduling
(number of test administrations and time elapsed between
two subsequent testings). To further complicate matters,
some studies found interactions among these variables
(Erdodi, Lajiness-O'Neill, & Saules, 2010; Erdodi &
Lajiness-O'Neill, 2014), even on instruments previously thought to
be immune to order effects (Conners, 2004). Among the
scant rapid serial testing studies found, the strongest practice
effects were reported between the first and second adminis-
tration for measures of psychomotor speed, learning and
memory (Theisen, Rapport, Axelrod, & Brines, 1998;
Wagner, Helmreich, Dahmen, Lieb, & Tadić, 2011). Overall,
results from previous studies substantiate concerns about
practice effects potentially masking the transient deleterious
effects of acute cannabis intoxication. The converging evi-
dence suggests that the normative outcome of repeat testing
(especially from the first to second administration) is a sig-
nificant increase in test performance.
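The masking arithmetic can be made concrete with a toy calculation; the magnitudes below are assumed purely for illustration and are not estimates from this study:

```python
# Toy illustration of practice effects masking an acute THC decrement.
# Magnitudes are assumed for illustration only (in SD units).
practice_gain = [0.0, 0.5, 0.8]   # cumulative retest effect at Times 1-3 (assumed)
thc_effect = [0.0, -0.5, 0.0]     # acute decrement at the THC session only (assumed)

observed = [round(g + t, 2) for g, t in zip(practice_gain, thc_effect)]
print(observed)  # [0.0, 0.0, 0.8]
```

If the retest gain and the acute decrement are of comparable size, the observed profile is flat from Baseline to THC and spikes at Recovery, even though a genuine drug effect is present.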
Third, the individual variability in the pattern of cannabis
use (overall amount, purity, time since last use) and the
resulting heterogeneity in tolerance to THC may be another
mechanism by which the main effect was weakened. For
example, Schuster, Hoeppner, Evins, and Gilman (2016)
found no difference between controls and late-onset mari-
juana users while early-onset users demonstrated significant
deficits in auditory verbal learning and memory.
Coincidentally, the one participant who withdrew from the
study because of adverse effects had reported a notably
lower rate of marijuana consumption than the rest of the
participants.
Fourth, the sample was cognitively high functioning.
Even during Baseline, participants scored above the norma-
tive sample on several of the neuropsychological tests.
Cognitive reserve is a well-established confounding variable
that complicates the detection of cognitive deficits during
psychometric testing (Erdodi, Shahein et al., 2019; O'Bryant
et al., 2008; Stern et al., 1992). In other words, a high level
of baseline cognitive functioning may enable subjects to
compensate for subtle acquired cognitive deficits, thereby
precluding detection of the deleterious effects of acute can-
nabis intoxication on performance during neuropsycho-
logical testing.
Fifth, instrumentation artifacts may have contributed to
negative findings. Although many of the tests selected are
sensitive to diffuse cognitive deficits (Axelrod et al., 2001;
Curtis, Greve, Bianchini, & Brennan, 2006; Donders &
Strong, 2015; Lange et al., 2005; Spreen & Benton, 1965;
Stuss et al., 1985), and/or invalid performance (An et al.,
2019; Erdodi, Abeare et al., 2017; Erdodi, Sagar et al., 2018;
Schroeder, Twumasi-Ankrah, Baade, & Marshall, 2012), they
were specifically designed for clinical settings where the
detection of frank impairment is often the ultimate goal. As
such, these tests may lack the sensitivity necessary for meas-
uring more subtle fluctuations in cognitive functioning
(Abeare et al., 2018) induced by cannabis intoxication.
Finally, since half of the participants reported pain as one
of the reasons for cannabis use, consumption prior to the
second cognitive test administration may have altered pain
perception and thus improved cognitive performance.
Conversely, the relative state of deprivation during Baseline
testing may have interfered with demonstrating their max-
imal ability level. In these participants, increased THC levels
during the second testing may have paradoxically enhanced
cognitive functioning by alleviating pain, a variable
associated with underperformance on neuropsychological testing
(Bar-On, Gal, Shorer, & Ablin, 2016; Henry et al., 2018;
Suhr, 2003; Suhr & Spickard, 2012). This explanation is con-
sistent with previous reports on short-term normalizing
effects of cannabis in patients with schizophrenia (Fischer
et al., 2014).
Given both the potentially powerful confounding varia-
bles outlined above and the absence of a control group, the
present study precludes us from reaching a definitive con-
clusion about the short-term effect of marijuana on cogni-
tive functioning. However, there is circumstantial evidence
to suggest that negative findings during THC may, in fact,
reflect the suppressing effect of cannabis on cognitive func-
tioning: (1) The best performance typically occurred during
Recovery, with the largest effect sizes between Baseline and
Recovery; (2) On D-KEFS Color Naming and Word
Reading, mean scores were significantly higher than the nor-
mative mean during Baseline and Recovery, but not during
THC; (3) On Trails A and B, performance flat-lined from
Baseline to THC, only to spike during Recovery.
Paradoxically, the strongest evidence of a subtle suppres-
sion of cognitive performance comes from PVT profiles: the
number of invalid response sets increased during THC,
followed by a BR_Fail of zero during Recovery. Although the base
rate of non-credible responding in this study is consistent with
broad-based estimates in clinical (Young, 2015; Young, Roper,
& Arentsen, 2016) and research settings (An et al., 2017;
Santos, Kazakov, Reamer, Park, & Osmon, 2014; Silk-Eglit,
Stenclik, Miele, Lynch, & McCaffrey, 2015), it was quite low
overall. Given that PVTs are specifically designed to be robust
to the effect of genuine impairment (Boone, 2013; Erdodi,
Nussbaum, et al., 2017; Erdodi & Roth, 2017; Erdodi, Seke,
et al., 2017; Larrabee, 2012), it is surprising that they were the
most sensitive indicators of the suppressing effect of marijuana
on cognitive functioning.
THC tolerance
Previous studies on the development of tolerance to the
effects of THC on neurocognitive testing provide further
insight into the lack of cognitive impairment detected in
this particular study. The effects of cannabis consumption
on cognitive function, feelings of intoxication, cardiac func-
tion, and psychotomimetic effects have been shown to correl-
ate with a history of consumption (Colizzi & Bhattacharyya,
2018). Psychomotor impairment from cannabis consumption
is greater in occasional users (Desrosiers et al., 2015). Some
studies have even shown complete tolerance to the cognitive
effects of cannabis consumption in frequent users (Colizzi &
Bhattacharyya, 2018). Considering that participants in the
present study reported being subjectively only "half-way high,"
and that frequent users develop greater tolerance to cognitive
impairment than to subjective intoxication, tolerance offers
another explanation for the lack of detectable impairment from
acute cannabis consumption.
Conclusion
Even without the ability to draw definitive conclusions as to
the reason for a lack of detectable cognitive decline, the pre-
sent observational study is an important early step in under-
standing the short-term effects of cannabis on cognitive
functioning in medical cannabis users. More research is
needed to determine the cognitive sequelae of THC intoxica-
tion. Future studies may benefit from conducting a random-
ized clinical trial, administering larger doses of THC,
analyzing participant data in subgroups based on prior use
patterns and expected tolerance levels, performing the base-
line testing several days or weeks before the THC trial, and
using more comprehensive assessment batteries engineered
to detect subtle deficits, such as tests of auditory verbal
and visual learning and sustained attention, which were beyond
the scope of the present study due to logistical constraints.
In light of well-documented memory deficits
during the acute stage of THC intoxication (Mechoulam &
Parker, 2013; Mills & Brawley, 1972), the rate of acquisition
and delayed recall of novel information should be included
as an outcome measure in future research.
With the advent of the legalization of cannabis in North
America and Europe, the notion of a critical threshold (akin
to the legal limit on blood alcohol level for driving) is likely
to become a contentious issue with far-reaching implications
for policy-making and law enforcement. Although the
persistent negative findings in terms of the effect of cannabis
on cognition contradicted our a priori predictions and can-
not be fully explained within the methodological confines of
the present study, they identified important new challenges
in cannabis research: instrumentation artifacts (learning
effects) and tolerance in regular users. As such, despite its
limitations, the study makes potentially valuable contribu-
tions to cannabis research by informing the design of future
investigations.
In addition, the negative results may be representative of
habitual users of cannabis. If replicated by subsequent stud-
ies, our findings could help extend the understanding of the
link between the amount of cannabis use, acute intoxication
and its immediate effect on neuropsychological performance.
Providing empirically-based, objective data on the effects of
cannabis on cognitive functioning can inform key decision-
makers. Concurrently, practical guidelines on risk management
during acute cannabis intoxication could help inform the public.
References
Abeare, C. A., Sabelli, A., Taylor, B., Holcomb, M., Dumitrescu, C., Kirsch, N. L., & Erdodi, L. A. (2019). The importance of demographically adjusted cutoffs: Age and education bias in raw score cutoffs within the Trail Making Test. Psychological Injury and Law, 12(2), 170–182. doi:10.1007/s12207-019-09353-x
Abeare, C., Messa, I., Whitfield, C., Zuccato, B., Casey, J., & Erdodi, L. (2019). Performance validity in collegiate football athletes at baseline neurocognitive testing. Journal of Head Trauma Rehabilitation, 34(4), 20–31. doi:10.1097/HTR.0000000000000451
An, K. Y., Charles, J., Ali, S., Enache, A., Dhuga, J., & Erdodi, L. A. (2019). Re-examining performance validity cutoffs within the Complex Ideational Material and the Boston Naming Test-Short Form using an experimental malingering paradigm. Journal of Clinical and Experimental Neuropsychology, 41(1), 15–25. doi:10.1080/13803395.2018.1483488
An, K. Y., Kaploun, K., Erdodi, L. A., & Abeare, C. A. (2017). Performance validity in undergraduate research participants: A comparison of failure rates across tests and cutoffs. The Clinical Neuropsychologist, 31(1), 193–206. doi:10.1080/13854046.2016.1217046
Axelrod, B. N., Fichtenberg, N. L., Liethen, P. C., Czarnota, M. A., & Stucky, K. (2001). Performance characteristics of postacute traumatic brain injury patients on the WAIS-III and WMS-III. The Clinical Neuropsychologist, 15(4), 516–520. doi:10.1076/clin.15.4.516.1884
Babikian, T., Boone, K. B., Lu, P., & Arnold, G. (2006). Sensitivity and specificity of various Digit Span scores in the detection of suspect effort. The Clinical Neuropsychologist, 20(1), 145–159. doi:10.1080/13854040590947362
Bar-On, K. T., Gal, G., Shorer, R., & Ablin, J. N. (2016). Cognitive functioning in fibromyalgia: The central role of effort. Journal of Psychosomatic Research, 87, 30–36. doi:10.1016/j.jpsychores.2016.06.004
Bashem, J. R., Rapport, L. J., Miller, J. B., Hanks, R. A., Axelrod, B. N., & Millis, S. R. (2014). Comparison of five performance validity indices in bona fide and simulated traumatic brain injury. The Clinical Neuropsychologist, 28(5), 851–875. doi:10.1080/13854046.2014.927927
Basso, M. R., Bornstein, R. A., & Lang, J. M. (1999). Practice effects of commonly used measures of executive function across twelve months. The Clinical Neuropsychologist, 13(3), 283–292. doi:10.1076/clin.13.3.283.1743
Becke, M., Fuermaier, A. B. M., Buehren, J., Weisbrod, M., Aschenbrenner, S., Tucha, O., & Tucha, L. (2019). Utility of the Structured Interview of Reported Symptoms (SIRS-2) in detecting feigned adult attention-deficit/hyperactivity disorder. Journal of Clinical and Experimental Neuropsychology, 41(8), 786–802. doi:10.1080/13803395.2019.1621268
Beglinger, L., Gaydos, B., Tangphao-Daniels, O., Duff, K., Kareken, D., Crawford, J., … Siemers, E. (2005). Practice effects and the use of alternative forms in serial neuropsychological testing. Archives of Clinical Neuropsychology, 20(4), 517–529. doi:10.1016/j.acn.2004.12.003
Berger, C., Lev, A., Braw, Y., Elbaum, T., Wagner, M., & Rassovsky, Y. (2019). Detection of feigned ADHD using the MOXO-d-CPT. Journal of Attention Disorders. Advance online publication. doi:10.1177/1087054719864656
Bhattacharyya, S., Morrison, P. D., Fusar-Poli, P., Martin-Santos, R., Borgwardt, S., Winton-Brown, T., … McGuire, P. K. (2010). Opposite effects of delta-9-tetrahydrocannabinol and cannabidiol on human brain function and psychopathology. Neuropsychopharmacology, 35(3), 764–774. doi:10.1038/npp.2009.184
Boone, K. B. (2009). The need for continuous and comprehensive sampling of effort/response bias during neuropsychological examination. The Clinical Neuropsychologist, 23(4), 729–741. doi:10.1080/13854040802427803
Boone, K. B. (2013). Clinical practice of forensic neuropsychology. New York, NY: Guilford.
Bush, S. S., Heilbronner, R. L., & Ruff, R. M. (2014). Psychological assessment of symptom and performance validity, response bias, and malingering: Official position of the Association for Scientific Advancement in Psychological Injury and Law. Psychological Injury and Law, 7(3), 197–205. doi:10.1007/s12207-014-9198-7
Calamia, M., Markon, K., & Tranel, D. (2012). Scoring higher the second time around: Meta-analyses of practice effects in neuropsychological assessment. The Clinical Neuropsychologist, 26(4), 543–570. doi:10.1080/13854046.2012.680913
Chafetz, M. D., Williams, M. A., Ben-Porath, Y. S., Bianchini, K. J., Boone, K. B., Kirkwood, M. W., Larrabee, G. J., & Ord, J. S. (2015). Official position of the American Academy of Clinical Neuropsychology Social Security Administration policy on validity testing: Guidance and recommendations for change. The Clinical Neuropsychologist, 29(6), 723–740. doi:10.1080/13854046.2015.1099738
Colizzi, M., & Bhattacharyya, S. (2017). Does cannabis composition matter? Differential effects of delta-9-tetrahydrocannabinol and cannabidiol on human cognition. Current Addiction Reports, 4(2), 62–74. doi:10.1007/s40429-017-0142-2
Colizzi, M., & Bhattacharyya, S. (2018). Cannabis use and the development of tolerance: A systematic review of human evidence. Neuroscience and Biobehavioral Reviews, 93, 1–25. doi:10.1016/j.neubiorev.2018.07.014
Conners, K. C. (2004). Conners' Continuous Performance Test (CPT II). Version 5 for Windows. Technical guide and software manual. North Tonawanda, NY: Multi-Health Systems.
Critchfield, E., Soble, J. R., Marceaux, J. C., Bain, K. M., Chase Bailey, K., … Webber, T. A. (2019). Cognitive impairment does not cause invalid performance: Analyzing performance patterns among cognitively unimpaired, impaired, and noncredible participants across six performance validity tests. The Clinical Neuropsychologist, 33(6), 1083–1101. doi:10.1080/13854046.2018.1508615
Curtis, K. L., Greve, K. W., Bianchini, K. J., & Brennan, A. (2006). California Verbal Learning Test indicators of malingered neurocognitive dysfunction: Sensitivity and specificity in traumatic brain injury. Assessment, 13(1), 46–61. doi:10.1177/1073191105285210
Dandachi-FitzGerald, B., Merckelbach, H., & Ponds, R. W. H. M. (2017). Neuropsychologists' ability to predict distorted symptom presentation. Journal of Clinical and Experimental Neuropsychology, 39(3), 257–264. doi:10.1080/13803395.2016.1223278
Delis, D. C., Kaplan, E. F., & Kramer, J. H. (2001). Delis-Kaplan Executive Function System. San Antonio, TX: Psychological Corporation.
Desrosiers, N. A., Ramaekers, J. G., Chauchard, E., Gorelick, D. A., & Huestis, M. A. (2015). Smoked cannabis' psychomotor and neurocognitive effects in occasional and frequent smokers. Journal of Analytical Toxicology, 39(4), 251–261. doi:10.1093/jat/bkv012
Di Marzo, V. (2001). The endocannabinoid system: Can it contribute to cannabis. Journal of Cannabis Therapeutics, 1(1), 43–46. doi:10.1300/J175v01n01_04
Donders, J., & Strong, C. A. H. (2015). Clinical utility of the Wechsler Adult Intelligence Scale-Fourth Edition after traumatic brain injury. Assessment, 22(1), 17–22. doi:10.1177/1073191114530776
Englund, A., Atakan, Z., Kralj, A., Tunstall, N., Murray, R., & Morrison, P. (2016). The effect of five day dosing with THCV on THC-induced cognitive, psychological and physiological effects in healthy male human volunteers: A placebo-controlled, double-blind, crossover pilot trial. Journal of Psychopharmacology, 30(2), 140–151. doi:10.1177/0269881115615104
Englund, A., Morrison, P. D., Nottage, J., Hague, D., Kane, F., Bonaccorso, S., … Kapur, S. (2013). Cannabidiol inhibits THC-elicited paranoid symptoms and hippocampal-dependent memory impairment. Journal of Psychopharmacology, 27(1), 19–27. doi:10.1177/0269881112460109
Erdodi, L. A. (2019). Aggregating validity indicators: The salience of domain specificity and the indeterminate range in multivariate models of performance validity assessment. Applied Neuropsychology: Adult, 26(2), 155–172. doi:10.1080/23279095.2017.1384925
Erdodi, L. A., & Lajiness-O'Neill, R. (2014). Time-related changes in Conners' CPT-II scores: Replication study. Applied Neuropsychology: Adult, 21(1), 43–50. doi:10.1080/09084282.2012.724036
Erdodi, L. A., & Lichtenstein, J. D. (2017). Invalid before impaired: An emerging paradox of embedded validity indicators. The Clinical Neuropsychologist, 31(6-7), 1029–1046. doi:10.1080/13854046.2017.1323119
Erdodi, L. A., & Lichtenstein, J. D. (2019). Information processing speed tests as PVTs. In K. B. Boone (Ed.), Assessment of feigned cognitive impairment: A neuropsychological perspective. New York, NY: Guilford.
Erdodi, L. A., & Roth, R. M. (2017). Low scores on BDAE Complex Ideational Material are associated with invalid performance in adults without aphasia. Applied Neuropsychology: Adult, 24(3), 264–274. doi:10.1080/23279095.2017.1298600
Erdodi, L. A., Abeare, C. A., Lichtenstein, J. D., Tyson, B. T., Kucharski, B., Zuccato, B. G., & Roth, R. M. (2017). WAIS-IV processing speed scores as measures of non-credible responding: The third generation of embedded performance validity indicators. Psychological Assessment, 29(2), 148–157. doi:10.1037/pas0000319
Erdodi, L. A., Abeare, C. A., Medoff, B., Seke, K. R., Sagar, S., & Kirsch, N. L. (2018). A single error is one too many: The Forced Choice Recognition trial on the CVLT-II as a measure of performance validity in adults with TBI. Archives of Clinical Neuropsychology, 33(7), 845–860. doi:10.1093/arclin/acx110
Erdodi, L. A., Hurtubise, J. L., Charron, C., Dunn, A., Enache, A., McDermott, A., & Hirst, R. B. (2018). The D-KEFS Trails as performance validity tests. Psychological Assessment, 30(8), 1082–1095. doi:10.1037/pas0000561
Erdodi, L. A., Lajiness-O'Neill, R., & Saules, K. K. (2010). Order of Conners' CPT-II administration within a cognitive test battery influences ADHD indices. Journal of Attention Disorders, 14(1), 43–51. doi:10.1177/1087054709347199
Erdodi, L. A., Nussbaum, S., Sagar, S., Abeare, C. A., & Schwartz, E. S. (2017). Limited English proficiency increases failure rates on performance validity tests with high verbal mediation. Psychological Injury and Law, 10(1), 96–103. doi:10.1007/s12207-017-9282-x
Erdodi, L. A., Sagar, S., Seke, K., Zuccato, B. G., Schwartz, E. S., & Roth, R. M. (2018). The Stroop Test as a measure of performance validity in adults clinically referred for neuropsychological assessment. Psychological Assessment, 30(6), 755–766. doi:10.1037/pas0000525
Erdodi, L. A., Seke, K. R., Shahein, A., Tyson, B. T., Sagar, S., & Roth, R. M. (2017). Low scores on the Grooved Pegboard Test are associated with invalid responding and psychiatric symptoms. Psychology and Neuroscience, 10(3), 325–344. doi:10.1037/pne0000103
Erdodi, L. A., Tyson, B. T., Abeare, C. A., Zuccato, B. G., Rai, J. K., Seke, K. R., … Roth, R. M. (2018). Utility of critical items within the Recognition Memory Test and Word Choice Test. Applied Neuropsychology: Adult, 25(4), 327–339. doi:10.1080/23279095.2017.1298600
Erdodi, L., Shahein, A., Fareez, F., Rykulski, N., Sabelli, A., & Roth, R. M. (2019). Increasing the cutoff on the MMSE and DRS-2 improves clinical classification accuracy in highly educated older adults. Psychology and Neuroscience. Advance online publication.
Fischer, A. S., Whitfield-Gabrieli, S., Roth, R. M., Brunette, M. F., & Green, A. I. (2014). Impaired functional connectivity of brain reward circuitry in patients with schizophrenia and cannabis use disorder: Effects of cannabis and THC. Schizophrenia Research, 158(1-3), 176–182. doi:10.1016/j.schres.2014.04.033
Goodglass, H., Kaplan, E., & Barresi, B. (2001). Boston Diagnostic Aphasia Examination (3rd ed.). Philadelphia, PA: Lippincott Williams & Wilkins.
Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M. (2001). Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain Injury, 15(12), 1045–1060. doi:10.1080/02699050110088254
Greiffenstein, M. F., Baker, W. J., & Gola, T. (1994). Validation of malingered amnesia measures with a large clinical sample. Psychological Assessment, 6(3), 218–224. doi:10.1037//1040-3590.6.3.218
Hall, W. (2018). How should we respond to cannabis-impaired driving? Drug and Alcohol Review, 37(1), 3–5. doi:10.1111/dar.12651
Heaton, R. K., Miller, S. W., Taylor, M. J., & Grant, I. (2004). Revised comprehensive norms for an expanded Halstead-Reitan battery: Demographically adjusted neuropsychological norms for African American and Caucasian adults. Lutz, FL: Psychological Assessment Resources.
Heilbronner, R. L., Sweet, J. J., Attix, D. K., Krull, K. R., Henry, G. K., & Hart, R. P. (2010). Official position of the American Academy of Clinical Neuropsychology on serial neuropsychological assessments: The utility and challenges of repeat test administrations in clinical and forensic contexts. The Clinical Neuropsychologist, 24(8), 1267–1278. doi:10.1080/13854046.2010.526785
Heinly, M. T., Greve, K. W., Bianchini, K., Love, J. M., & Brennan, A. (2005). WAIS Digit-Span-based indicators of malingered neurocognitive dysfunction: Classification accuracy in traumatic brain injury. Assessment, 12(4), 429–444. doi:10.1177/1073191105281099
Henry, G. K., Heilbronner, R. L., Suhr, J., Gornbein, J., Wagner, E., & Drane, D. L. (2018). Illness perceptions predict cognitive performance validity. Journal of the International Neuropsychological Society, 24(7), 735711. doi:10.1017/S1355617718000218
Henry, J., & Crawford, J. R. (2005). A meta-analytic review of verbal fluency deficits in depression. Journal of Clinical and Experimental Neuropsychology, 27(1), 78–101. doi:10.1080/138033990513654
Hindocha, C., Freeman, T. P., Schafer, G., Gardener, C., Das, R. K., Morgan, C. J., & Curran, H. V. (2015). Acute effects of delta-9-tetrahydrocannabinol, cannabidiol and their combination on facial emotion recognition: A randomized, double-blind, placebo-controlled study in cannabis users. European Neuropsychopharmacology, 25(3), 325–334. doi:10.1016/j.euroneuro.2014.11.014
Honarmand, K., Tierney, M. C., O'Connor, P., & Feinstein, A. (2011). Effects of cannabis on cognitive function in patients with multiple sclerosis. Neurology, 76(13), 1153–1160. doi:10.1212/WNL.0b013e318212ab0c
Jelicic, M., Ceunen, E., Peters, M. J., & Merckelbach, H. (2011). Detecting coached feigning using the Test of Memory Malingering (TOMM) and the Structured Inventory of Malingered Symptomatology (SIMS). Journal of Clinical Psychology, 67(9), 850–855. doi:10.1002/jclp.20805
Kim, N., Boone, K. B., Victor, T., Lu, P., Keatinge, C., & Mitchell, C. (2010). Sensitivity and specificity of a Digit Symbol recognition trial in the identification of response bias. Archives of Clinical Neuropsychology, 25(5), 420–428. doi:10.1093/arclin/acq040
Lange, R. T., Iverson, G. L., Zakrzewski, M. J., Ethel-King, P. E., & Franzen, M. D. (2005). Interpreting the Trail Making Test following traumatic brain injury: Comparison of traditional time scores and derived indices. Journal of Clinical and Experimental Neuropsychology, 27, 897–906. doi:10.1080/1380339049091290
Larrabee, G. J. (2012). Performance validity and symptom validity in neuropsychological assessment. Journal of the International Neuropsychological Society, 18(4), 625–630.
Larrabee, G. J. (2014). False-positive rates associated with the use of multiple performance and symptom validity tests. Archives of Clinical Neuropsychology, 29(4), 364–373. doi:10.1093/arclin/acu019
Lezak, M. D., Howieson, D. B., Bigler, E. D., & Tranel, D. (2012). Neuropsychological assessment. New York, NY: Oxford University Press.
Lichtenstein, J. D., Erdodi, L. A., & Linnea, K. S. (2017). Introducing a forced-choice recognition task to the California Verbal Learning Test-Children's Version. Child Neuropsychology, 23(3), 284–299. doi:10.1080/09297049.2015.1135422
Lichtenstein, J. D., Erdodi, L. A., Rai, J. K., Mazur-Mosiewicz, A., & Flaro, L. (2018a). Wisconsin Card Sorting Test embedded validity indicators developed for adults can be extended to children. Child Neuropsychology, 24(2), 247–260. doi:10.1080/09297049.2016.1259402
Lichtenstein, J. D., Greenacre, M. K., Cutler, L., Abeare, K., Baker, S. D., Kent, K. J., … Erdodi, L. A. (2019). Geographic variation and instrumentation artifacts: In search of confounds in performance validity assessment in adults with mild TBI. Psychological Injury and Law, 12(2), 127–145. doi:10.1007/s12207-019-09354-w
Lichtenstein, J. D., Holcomb, M., & Erdodi, L. A. (2018b). One-minute PVT: Further evidence for the utility of the California Verbal Learning Test-Children's Version Forced Choice Recognition Trial. Journal of Pediatric Neuropsychology, 4(3-4), 94–104. doi:10.1007/s40817-018-0057-4
Lippa, S. M. (2018). Performance validity testing in neuropsychology: A clinical guide, critical review, and update on a rapidly evolving literature. The Clinical Neuropsychologist, 32(3), 391–421. doi:10.1080/13854046.2017.1406146
Lynch, S. G., Dickerson, K. J., & Denney, D. R. (2010). Evaluating processing speed in multiple sclerosis: A comparison of two rapid serial processing measures. The Clinical Neuropsychologist, 24(6), 963–976. doi:10.1080/13854046.2010.502128
Martin, P. K., Schroeder, R. W., & Odland, A. P. (2015). Neuropsychologists' validity testing beliefs and practices: A survey of North American professionals. The Clinical Neuropsychologist, 29(6), 741–746. doi:10.1080/13854046.2015.1087597
McCaffrey, R. J., Ortega, A., & Haase, R. F. (1993). Practice effects in repeated neuropsychological assessments. Archives of Clinical Neuropsychology, 8(6), 519–524. doi:10.1093/arclin/8.6.519
Mechoulam, R., & Parker, L. A. (2013). The endocannabinoid system and the brain. Annual Review of Psychology, 64(1), 21–47. doi:10.1146/annurev-psych-113011-143739
Merten, T., & Merckelbach, H. (2013). Symptom validity in somatoform and dissociative disorders: A critical review. Psychological Injury and Law, 6(2), 122–137. doi:10.1007/s12207-013-9155-x
Mills, M. A., & Brawley, P. (1972). The psychopharmacology of "cannabis sativa": A review. Agents and Actions, 2(5), 201–215. doi:10.1007/bf02087044
O'Bryant, S. E., Gavett, B. E., McCaffrey, R. J., O'Jile, J. R., Huerkamp, J. K., Smitherman, T. A., & Humpreys, J. D. (2008). Clinical utility of Trial 1 of the Test of Memory Malingering (TOMM). Applied Neuropsychology, 15, 113–116. doi:10.1080/09084280802083921
Proto, D. A., Pastorek, N. J., Miller, B. I., Romesser, J. M., Sim, A. H., & Linck, J. M. (2014). The dangers of failing one or more performance validity tests in individuals claiming mild traumatic brain injury-related postconcussive symptoms. Archives of Clinical Neuropsychology, 29, 614–624. doi:10.1093/arclin/acu044
Rai, J., An, K. Y., Charles, J., Ali, S., & Erdodi, L. A. (2019). Introducing a forced choice recognition trial to the Rey Complex Figure Test. Psychology and Neuroscience. Advance online publication. doi:10.1037/pne0000175
Reese, C. S., Suhr, J. A., & Riddle, T. L. (2012). Exploration of malingering indices in the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span subtest. Archives of Clinical Neuropsychology, 27(2), 176–181. doi:10.1093/arclin/acr117
Roser, P., Juckel, G., Rentzsch, J., Nadulski, T., Gallinat, J., &
Stadelmann, A. M. (2008). Effects of acute oral delta9-tetrahydro-
cannabinol and standardized cannabis extract on the auditory P300
event-related potential in healthy volunteers. European
Neuropsychopharmacology,18(8), 569577. doi:10.1016/j.euroneuro.
2008.04.008
Roye, S., Calamia, M., Bernstein, J. P., De Vito, A. N., & Hill, B. D.
(2019). A multi-study examination of performance validity in under-
graduate research participants. The Clinical Neuropsychologist,33(6),
11381155. doi:10.1080/13854046.2018.1520303
Santos, O. A., Kazakov, D., Reamer, M. K., Park, S. E., & Osmon,
D. C. (2014). Effort in college undergraduate is sufficient on the
Word Memory Test. Archives of Clinical Neuropsychology,29(7),
609613. doi:10.1093/arclin/acu039
Schoedel, K. A., Chen, N., Hilliard, A., White, L., Stott, C., Russo, E.,
Sellers, E. M. (2011). A randomized, double-blind, placebo-
controlled crossover study to evaluate the subjective abuse potential
and cognitive effects of nabiximols oromucosal spray in subjects
with a history of recreational cannabis use. Human
Psychopharmacology,26(3), 224236. doi:10.1002/hup.1196
Schroeder, R. W., Olsen, D. H., & Martin, P. K. (2019). Classification
accuracy rates of four TOMM validity indiced when examined inde-
pendently and jointly. Advance online publication. The Clinical
Neuropsychologist, 1. doi:10.1080/13854046.2019.1619839
Schroeder, R. W., Twumasi-Ankrah, P., Baade, L. E., & Marshall, P. S.
(2012). Reliable digit span: A systematic review and cross-validation
study. Assessment,19(1), 2130. doi:10.1177/1073191111428764
Schuster, R. M., Hoeppner, S. S., Evins, A. E., & Gilman, J. M. (2016). Early onset marijuana use is associated with learning inefficiencies. Neuropsychology, 30(4), 405–415. doi:10.1037/neu0000281
Schutte, C., Axelrod, B. N., & Montoya, E. (2015). Making sure neuropsychological data are meaningful: Use of performance validity testing in medicolegal and clinical contexts. Psychological Injury and Law, 8(2), 100–105. doi:10.1007/s12207-015-9225-3
Silk-Eglit, G. M., Stenclik, J. H., Miele, A. S., Lynch, J. K., & McCaffrey, R. J. (2015). Rates of false-positive classification resulting from the analysis of additional embedded performance validity measures. Applied Neuropsychology: Adult, 22(5), 335–347. doi:10.1080/23279095.2014.938809
Spreen, O., & Benton, A. L. (1965). Comparative studies of some psychological tests for cerebral damage. The Journal of Nervous and Mental Disease, 140(5), 323–333. doi:10.1097/00005053-196505000-00002
Stern, Y., Andrews, H., Pittman, J., Sano, M., Tatemichi, T., Lantigua, R., & Mayeux, R. (1992). Diagnosis of dementia in a heterogeneous population. Archives of Neurology, 49(5), 453–460. doi:10.1001/archneur.1992.00530290035009
Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A compendium of
neuropsychological tests. New York, NY: Oxford University Press.
Stuss, D. T., Ely, P., Hugenholtz, H., Richard, M. T., LaRochelle, S., Poirier, C. A., & Bell, I. (1985). Subtle neuropsychological deficits in patients with good recovery after closed head injury. Neurosurgery, 17(1), 41–47. doi:10.1227/00006123-198507000-00007
Sugarman, M. A., & Axelrod, B. N. (2015). Embedded measures of performance validity using verbal fluency tests in a clinical sample. Applied Neuropsychology: Adult, 22(2), 141–146.
Suhr, J. A. (2003). Neuropsychological impairment in fibromyalgia: Relation to depression, fatigue, and pain. Journal of Psychosomatic Research, 55(4), 321–329. doi:10.1016/S0022-3999(02)00628-1
Suhr, J., & Spickard, B. (2012). Pain-related fear is associated with cognitive task avoidance: Exploration of the cogniphobia construct in a recurrent headache sample. The Clinical Neuropsychologist, 26(7), 1128–1141. doi:10.1080/13854046.2012.713121
Theisen, M. E., Rapport, L. J., Axelrod, B. N., & Brines, D. B. (1998). Effects of practice in repeated administrations of the Wechsler Memory Scale–Revised in normal adults. Assessment, 5(1), 85–92.
Tyson, B. T., Baker, S., Greenacre, M., Kent, K. J., Lichtenstein, J. D., Sabelli, A., & Erdodi, L. A. (2018). Differentiating epilepsy from psychogenic nonepileptic seizures using neuropsychological test data. Epilepsy and Behavior, 87, 39–45. doi:10.1016/j.yebeh.2018.08.010
von Helvoort, D., Merckelbach, H., & Merten, T. (2019). The Self-Report Symptom Inventory (SRSI) is sensitive to instructed feigning, but not to genuine psychopathology in male forensic inpatients: An initial study. The Clinical Neuropsychologist, 33(6), 1069–1082. doi:10.1080/13854046.2018.1559359
Wade, D. T., Robson, P., House, H., Makela, P., & Aram, J. (2003). A preliminary controlled study to determine whether whole-plant cannabis extracts can improve intractable neurogenic symptoms. Clinical Rehabilitation, 17(1), 21–29. doi:10.1191/0269215503cr581oa
Wagner, S., Helmreich, I., Dahmen, N., Lieb, K., & Tadić, A. (2011). Reliability of three alternate forms of the Trail Making Tests A and B. Archives of Clinical Neuropsychology, 26(4), 314–321. doi:10.1093/arclin/acr024
Webber, T. A., & Soble, J. R. (2018). Utility of various WAIS-IV Digit Span indices for identifying noncredible performance validity among cognitively impaired and unimpaired examinees. The Clinical Neuropsychologist, 32(4), 657–670. doi:10.1080/13854046.2017.1415374
Wechsler, D. (1997). Wechsler Adult Intelligence Scale (3rd ed.). San
Antonio, TX: The Psychological Corporation.
Winton-Brown, T. T., Allen, P., Bhattacharyya, S., Borgwardt, S. J., Fusar-Poli, P., Crippa, J. A., … McGuire, P. K. (2011). Modulation of auditory and visual processing by delta-9-tetrahydrocannabinol and cannabidiol: An fMRI study. Neuropsychopharmacology, 36(7), 1340–1348. doi:10.1038/npp.2011.17
Young, G. (2015). Malingering in forensic disability-related assessments. Psychological Injury and Law, 8(3), 188–199. doi:10.1007/s12207-015-9232-4
Young, J. C., Roper, B. L., & Arentsen, T. J. (2016). Validity testing and neuropsychology practice in the VA healthcare system: Results from recent practitioner survey. The Clinical Neuropsychologist, 30(4), 497–514. doi:10.1080/13854046.2016.1159730
Zuccato, B. G., Tyson, B. T., & Erdodi, L. A. (2018). Early bird fails the PVT? The effects of timing artifacts on performance validity tests. Psychological Assessment, 30(11), 1491–1498. doi:10.1037/pas0000596
... However, the studies are characterized by different patient groups, cannabis products, and administration routes in combination. 12,27,28 This study aimed to assess cognitive changes in a subgroup of Danish patients with advanced cancer who are scheduled for initiation with a standardized dronabinol regimen as adjuvant pain-relieving therapy in conjunction with conventional palliative care. ...
... [39][40][41][42] However, our findings are supported by three other recent studies that found improved cognition and general health among patients using cannabis products. 12,27,43 These studies included different patient groups (none were receiving palliative care) receiving cannabis by different routes of administration (smoked, inhaled, and oil), and treatment did not follow a titration regimen. Some studies have suggested that the route of administration and dosing titration of cannabis may have an influence on the risk of cognitive impairment. ...
Article
Full-text available
Background: Cannabis may offer therapeutic benefits to patients with advanced cancer not responding adequately to conventional palliative treatment. However, tolerability is a major concern. Impaired cognitive function is a potential adverse reaction to tetrahydrocannabinol-containing regimens. The aim of this study was to test cognitive function in patients being prescribed dronabinol as an adjuvant palliative therapy. Methods: Adult patients with advanced cancer and severe related pain refractory to conventional palliative treatment were included in this case-series study. Patients were examined at baseline in conjunction with initiation of dronabinol therapy and at a two-week follow-up using three selected Wechsler Adult Intelligence Scale-III neurocognitive tests: Processing Speed Index (PSI), Perceptual Organization Index (POI), and Working Memory Index (WMI). Patients were also assessed using a pain visual analog scale, the Major Depression Inventory, and the Brief Fatigue Inventory. Results: Eight patients consented to take part in the study. Two patients discontinued dronabinol therapy, one due to a complaint of dizziness and the other due to critical progression of the cancer. The remaining six patients were successfully treated with a daily dosage of 12.5 mg dronabinol (p = 0.039), with improvements on PSI (p = 0.020), POI (p = 0.034), and WMI (p = 0.039). Conclusions: Cognitive function improved in this group of patients with advanced cancer in conjunction with low-dose dronabinol therapy. The cause is likely multifactorial including reported relief of cancer-associated symptoms. Further clinical investigation is required.
... Ironically, a study on the short-term cognitive effects of cannabis consumption found that non-student community participants recruited from the same geographic region performed within the normal range on the category fluency test (M T-score = 49.8) during the acute state of medically verified cannabis intoxication (Olla, Rykulski, et al., 2021; Olla, Abumeeiz, et al., 2021). Similarly, the NSE sample performed below the young adult sample (community volunteers) of Elgamal, Roy, and Sharratt (2011) on the category fluency test (medium effect). ...
... First, the sample is demographically and geographically constrained: All participants were recruited from a single university from a mid-sized Canadian city. Thus, results may not generalize to other populations (e.g., less educated, non-student community members, or patients with neuropsychiatric disorders), given that past research shows geographic and demographic differences in performance on cognitive tests even within the same population (Hurtubise et al., 2020; Lichtenstein et al., 2019; McDaniel, 2006; Olla, Rykulski, et al., 2021; Olla, Abumeeiz, et al., 2021), and across cultures/ethnicities (Bezdicek et al., 2012, 2016; Fernandez & Marcopulos, 2008). ...
Article
Full-text available
Objective The objective of the present study was to examine the neurocognitive profiles associated with limited English proficiency (LEP). Method A brief neuropsychological battery including measures with high (HVM) and low verbal mediation (LVM) was administered to 80 university students: 40 native speakers of English (NSEs) and 40 with LEP. Results Consistent with previous research, individuals with LEP performed more poorly on HVM measures and equivalent to NSEs on LVM measures—with some notable exceptions. Conclusions Low scores on HVM tests should not be interpreted as evidence of acquired cognitive impairment in individuals with LEP, because these measures may systematically underestimate cognitive ability in this population. These findings have important clinical and educational implications.
... Spindle et al. (2018) demonstrated that vaporised cannabis produces stronger effects and higher peak THC concentrations than oral consumption, emphasising the influence of consumption mode on cognitive effects. Additionally, studies such as Eadie et al. (2021) and Olla et al. (2019) indicate less severe cognitive impairments in medical cannabis users, highlighting the differences between recreational and medical usage. Importantly, the persistence and recovery of cognitive functions, particularly in verbal memory and attention, may extend beyond acute intoxication in long-term users (Broyd et al., 2016). ...
Article
Full-text available
A randomised, placebo-controlled, double-blind, crossover trial on the effect of a 20:1 cannabidiol:Δ9-tetrahydrocannabinol medical cannabis product on neurocognition, attention, and mood.
... First, PVTs provided an empirically based justification for making participation credit contingent upon performance. Second, there is a wealth of empirical evidence supporting the PVTs' ability to detect suboptimal effort during cognitive testing (Abeare, Messa, et al. 2019; Boone 2013; Green et al. 2001; Larrabee 2012; Olla et al. 2021). In fact, PVTs used to be called "effort tests" (Boone 2009; Constantinou et al. 2005; Green 2007; Merten, Bossink, and Schmand 2007; Sharland and Gfeller 2007) prior to a subsequent revision in terminology (Larrabee 2012). ...
Article
Full-text available
Objective: The present study was designed to replicate previous research on the relationship between class attendance and test scores in higher education, as well as to differentiate the effect of attendance (i.e., being physically present) and engagement (i.e., active learning). Method: Data were collected from 613 undergraduate students enrolled in a third-year psychology course on tests and measurement. Attendance was operationalized as the proportion of classes for which students were present. Engagement was operationalized using psychometric evidence of the level of cognitive effort demonstrated during randomly administered in-class assignments. Learning outcome was measured with an objectively scored midterm and a cumulative final exam, as well as a subjectively scored written assignment. Results: There was a significant positive correlation between attendance (.25-.40), engagement (.26-.43), and academic achievement. Once the analyses were restricted to the two tails of the distribution (i.e., best and worst attendance/engagement), the correlation coefficients increased (.35-.62). Attendance and engagement explained a higher proportion of the variance in the midterm and final exam (16-28% vs 22-38%) than the written assignment (18% and 12%, respectively). The contrast between the two tails of the distribution was associated with a larger effect for engagement compared to attendance, and the final compared to the midterm exam. Conclusions: Attendance fails to capture important aspects of student behavior that predict academic outcome. Physical presence in the classroom should not be equated with active learning. Grade inflation may mask individual differences relevant to the accurate assessment of student performance. Key words: class attendance, student engagement, effort, motivation, grade inflation
... Therefore, a VI-9 ≤ 1 was considered valid (Boone, 2013; Lippa, 2018; Victor et al., 2009), whereas a VI-9 ≥ 3 was considered invalid (Larrabee et al., 2019; Medici, 2013; Odland et al., 2015). VI-9 values of 2 were excluded from analyses requiring dichotomous (Pass/Fail) outcomes to maintain the purity of criterion groups (Ashendorf, 2019; Axelrod et al., 2014; Erdodi, 2021; Olla et al., 2021). The VI-9 produced a good combination of sensitivity (0.50) and specificity (0.94) against the PVT-3, correctly classifying 84% of the sample. ...
Article
Full-text available
The Conners’ Continuous Performance Test – Second Edition (CPT-II) has demonstrated utility as a performance validity test (PVT). Early research also identified several benefits of repeat administration. This study was designed to evaluate the potential of repeat administrations of the CPT-II to enhance its clinical utility in detecting ADHD or non-credible responding. Data were collected from a consecutive case sequence of 100 patients (MAge = 41.5; MEducation = 13.8) referred for neuropsychological assessment. Performance validity was psychometrically defined using a combination of free-standing and embedded PVTs. The CPT-II was administered twice to all patients: once in the morning and once at the end of the testing appointment. Data supported previously identified validity cutoffs for the CPT-II, with the notable exception of Commission errors. Patients with ADHD were less likely to fail validity cutoffs than those without ADHD, suggesting that ADHD alone does not explain failure on PVTs. At Time 1, the CPT-II was insensitive to a clinical diagnosis of ADHD; at Time 2, only one significant contrast emerged. Self-report measures more effectively differentiated between patients with and without ADHD. Test-retest reliability was generally higher in patients with valid performance (0.33-0.83) compared to those with invalid performance (0.38-0.77), with notable variation across scales. Two sets of CPT-II scores increased confidence in the ADHD diagnosis and the remainder of the neurocognitive profile. The CPT-II is self-administered and automatically scored, making routine double administration more practical than might first be thought.
Poster
Full-text available
Based on the 2021 publication "Duration of Neurocognitive Impairment With Medical Cannabis Use: A Scoping Review". For full publication, visit: https://www.frontiersin.org/articles/10.3389/fpsyt.2021.638962/full
Article
Full-text available
Cognitive reserve could mask the deleterious effects of neurodegenerative disorders in older adults with high premorbid functioning, resulting in false negatives on cognitive screening tests. Failing to detect early signs of cognitive decline may result in missing a critical period of intervention with disease modifying drugs or making informed decisions about end of life issues. The Mini-Mental State Exam (MMSE) and the Dementia Rating Scale, 2nd edition (DRS-2) were compared in a sample of 113 highly educated older adults from northern New England recruited for a research study. Participants were classified as cognitively intact or impaired by a panel of experts based on neuroimaging results, psychometric testing, clinical history, and self-reported level of adaptive functioning corroborated by collateral informants. The 2 instruments produced comparably high specificity, but variable sensitivity to cognitive impairment. Surprisingly, the MMSE consistently outperformed the DRS-2 in overall classification accuracy. Raising the standard cutoffs improved the signal detection performance of both tests with minimal loss in specificity and thus, appears to be a clinically justifiable trade-off. At around 90% specificity, MMSE <28 and DRS-2 <139 correctly identified 86% and 67% of the sample. When these cutoffs were restricted to the detection of mild cognitive deficit only, sensitivity declined slightly (81% and 57%, respectively). Neuropsychological tests of memory and executive function were more sensitive to cognitive decline than measures of attention and processing speed. Findings suggest that higher cutoffs may be warranted, and perhaps necessary, in examinees with high educational achievement. At the proposed alternative cutoffs, the MMSE and DRS-2 remained sensitive to even subclinical cognitive deficits. Replication with physician-referred patients is needed to establish the generalizability of the findings to clinical settings.
Article
Full-text available
Objective: The objective of this study was to assess the MOXO-d-CPT utility in detecting feigned ADHD and establish cutoffs with adequate specificity and sensitivity. Method: The study had two phases. First, using a prospective design, healthy adults who simulated ADHD were compared with healthy controls and ADHD patients who performed the tasks to the best of their ability (n = 47 per group). Participants performed the MOXO-d-CPT and an established performance validity test (PVT). Second, the MOXO-d-CPT classification accuracy, employed in Phase 1, was retrospectively compared with archival data of 47 ADHD patients and age-matched healthy controls. Results: Simulators performed significantly worse on all MOXO-d-CPT indices than healthy controls and ADHD patients. Three MOXO-d-CPT indices (attention, hyperactivity, impulsivity) and a scale combining these indices showed adequate discriminative capacity. Conclusion: The MOXO-d-CPT showed promise for the detection of feigned ADHD and, pending replication, can be employed for this aim in clinical practice and ADHD research.
Article
Full-text available
Regional fluctuations in cognitive ability have been reported worldwide. Given perennial concerns that the outcome of performance validity tests (PVTs) may be contaminated by genuine neuropsychological deficits, geographic differences may represent a confounding factor in determining the credibility of a given neurocognitive profile. This pilot study was designed to investigate whether geographic location affects base rates of failure (BRFail) on PVTs. BRFail were compared across a number of free-standing and embedded PVTs in patients with mild traumatic brain injury (mTBI) from two regions of the US (Midwest and New England). Retrospective archival data were collected from clinically referred patients with mTBI at two different academic medical centers (nMidwest = 76 and nNew England = 84). One free-standing PVT (Word Choice Test) and seven embedded PVTs were administered to both samples. The embedded validity indicators were combined into a single composite score using two different previously established aggregation methods. The New England sample obtained a higher score on the Verbal Comprehension Index of the WAIS-IV (d = .34, small-medium). The difference between the two regions in Full Scale IQ (FSIQ) was small (d = .28). When compared with the omnibus population mean (100), the effect of mTBI on FSIQ was small (d = .22) in the New England sample and medium (d = .53) in the Midwestern one. However, contrasts using estimates of regional FSIQ produced equivalent effect sizes (d: .47–.53). BRFail was similar on free-standing PVTs, but varied at random for embedded PVTs. Aggregating individual indices into a validity composite effectively neutralized regional variability in BRFail. Classification accuracy varied as a function of both geographic region and instruments. Despite small overall effect sizes, regional differences in cognitive ability may potentially influence clinical decision making, both in terms of diagnosis and performance validity assessment. 
There was an interaction between geographic region and instruments in terms of the internal consistency of PVT profiles. If replicated, the findings of this preliminary study have potentially important clinical, forensic, methodological, and epidemiological implications.
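The aggregation approach described in the abstract above, combining several embedded validity indicators into a single composite, can be sketched as a simple failure count against per-indicator cutoffs. This is a minimal illustration only: the indicator names, cutoffs, and failure threshold below are hypothetical placeholders, not the validity composites used in the study.

```python
def validity_composite(scores, cutoffs, fail_threshold=2):
    """Count embedded validity indicators failed by one examinee.

    scores and cutoffs are dicts keyed by indicator name; a score at or
    below its cutoff counts as a failure (an illustrative convention).
    Returns the failure count and whether it meets the invalid threshold.
    """
    failures = sum(1 for name, score in scores.items()
                   if score <= cutoffs[name])
    return failures, failures >= fail_threshold

# Hypothetical examinee with five hypothetical embedded indicators
cutoffs = {"RDS": 7, "TMT_B_T": 31, "FAS_T": 33, "WCT": 42, "CD_T": 29}
scores = {"RDS": 6, "TMT_B_T": 35, "FAS_T": 30, "WCT": 45, "CD_T": 33}
n_fail, invalid = validity_composite(scores, cutoffs)  # 2 failures -> invalid
```

Aggregating this way is what neutralizes noise in any single indicator: one chance failure does not flip the classification unless it is corroborated by a second indicator.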
Article
Full-text available
This study was designed to develop validity cutoffs by utilizing demographically adjusted T-scores on the trail making test (TMT), with the goal of eliminating potential age and education-related biases associated with the use of raw score cutoffs. Failure to correct for the effect of age and education on TMT performance may lead to increased false positive errors for older adults and examinees with lower levels of education. Data were collected from an archival sample of 100 adult outpatients (MAge = 38.8, 56% male; MEd = 13.7) who were clinically referred for neuropsychological assessment at an academic medical center in the Midwestern USA after sustaining a traumatic brain injury (TBI). Performance validity was psychometrically determined using the Word Memory Test and two multivariate validity composites based on five embedded performance validity indicators. Cutoffs on the demographically corrected TMT T-scores had generally superior classification accuracy compared to the raw score cutoffs reported in the literature. As expected, the T-scores also eliminated age and education bias that was observed in the raw score cutoffs. Both T-score and raw score cutoffs were orthogonal to injury severity. Multivariate models of T-score based cutoff failed to improve classification accuracy over univariate T-score cutoffs. The present findings provide support for the use of demographically adjusted validity cutoffs within the TMT. They produced superior classification to raw score-based cutoffs, in addition to eliminating the bias against older adults and examinees with lower levels of education.
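The demographically adjusted T-score logic in the abstract above can be illustrated with a worked conversion. Because the TMT is timed, longer completion times mean worse performance, so the z-score is inverted before scaling to the T-metric (mean 50, SD 10). The normative mean and SD below are invented for illustration and are not the demographic norms used in the study.

```python
def time_to_t_score(raw_seconds, norm_mean, norm_sd):
    """Convert a completion time to a T-score (mean 50, SD 10).

    Longer times indicate worse performance, so the z-score is inverted:
    a time one SD slower than the normative mean yields T = 40.
    """
    z = (norm_mean - raw_seconds) / norm_sd
    return 50 + 10 * z

# Hypothetical norms: an examinee one SD slower than the normative mean
t = time_to_t_score(raw_seconds=95, norm_mean=75, norm_sd=20)
```

Applying a validity cutoff to T-scores rather than raw times is what removes the age and education bias: the same raw time can be unremarkable for an 80-year-old but flagged for a 25-year-old once each is compared to their own normative group.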
Article
Full-text available
Introduction: The Structured Interview of Reported Symptoms (SIRS-2) utilizes various strategies in the detection of simulated psychiatric disorders. The present study aimed to examine which of these strategies proves most useful in uncovering feigned attention deficit hyperactivity disorder (ADHD) in adulthood. Method: One-hundred seventy-one individuals instructed to feign ADHD were compared to 46 genuine patients with ADHD as well as 99 neurotypical controls in their reports provided on the SIRS-2. Results: Responses provided by simulators resembled those of genuine patients with ADHD on all SIRS-2 subscales with the exception of a supplementary scale tapping Overly Specified symptom reports, where a moderate effect emerged (d = 0.88). Classification accuracy remained low, with particularly poor sensitivity (sensitivity = 19.30%). Sensitivity was higher when the decision rules postulated in the first edition SIRS were applied instead of its successor’s decision model, yet this increase in sensitivity came at the price of unacceptably low specificity. Conclusion: The present results call for a disorder-specific instrument for the detection of simulated ADHD and offer starting points for the development of such a tool.
Article
Full-text available
This study was designed to introduce and validate a forced choice recognition trial to the Rey Complex Figure Test (FCR RCFT). Healthy undergraduate students at a midsized Canadian university were randomly assigned to the control (n = 80) or experimental malingering (n = 60) conditions. All participants were administered a brief battery of neuropsychological tests. The FCR RCFT had good overall classification accuracy (area under the curve: .79–.88) against various criterion variables. The conservative cutoff (≤16) was highly specific (.93–.96) but not very sensitive (.38–.51). Conversely, the liberal cutoff (≤18) was sensitive (.57–.72) but less specific (.88–.90). The FCR RCFT provided unique information about performance validity above and beyond the existing yes/no recognition trial. Combining multiple RCFT validity indices improved classification accuracy. The utility of previously published validity indicators embedded in the RCFT was also replicated. The FCR RCFT extends the growing trend of enhancing the clinical utility of widely used standard memory tests by developing a built-in validity check. Multivariate models were superior to univariate cutoffs. Although the FCR RCFT performed well in the current sample, replication in clinical/forensic patients is needed to establish its utility in differentiating genuine memory deficits from noncredible responding.
Article
Full-text available
Objective: The Self-Report Symptom Inventory (SRSI) is a new symptom validity test that, unlike other symptom over-reporting measures, contains both genuine symptom and pseudosymptom scales. We tested whether its pseudosymptom scale is sensitive to genuine psychopathology and evaluated its discriminant validity in an instructed feigning experiment that relied on carefully selected forensic inpatients (n = 40). Method: We administered the SRSI twice: we instructed patients to respond honestly to the SRSI (T1) and then to exaggerate their symptoms in a convincing way (T2). Results: On T1, the pseudosymptom scale was insensitive to patients’ actual psychopathology. Two patients (5%) had scores exceeding the liberal cut point (specificity = 0.95) and no patient scored above the more stringent cut point (specificity = 1.0). Also, the SRSI cut scores and ratio index discriminated well between honest (T1) and exaggerated (T2) responses (AUCs were 0.98 and 0.95, respectively). Conclusions: Given the relatively few false positives, our data suggest that the pseudosymptom scale of the SRSI is a useful measure of symptom over-reporting in forensic treatment settings.
Article
Full-text available
Objective: To assess the prevalence of invalid performance on baseline neurocognitive testing using embedded measures within computerized tests and individually administered neuropsychological measures, and to examine the influence of incentive status and performance validity on neuropsychological test scores. Setting: Sport-related concussion management program at a regionally accredited university. Participants: A total of 83 collegiate football athletes completing their preseason baseline assessment within the University's concussion management program and a control group of 140 nonathlete students. Design: Cross-sectional design based on differential incentive status: motivated to do poorly to return to play more quickly after sustaining a concussion (athletes) versus motivated to do well due to incentivizing performance (students). Main measures: Immediate Post-Concussion and Cognitive Testing (ImPACT), performance validity tests, and measures of cognitive ability. Results: Half of the athletes failed at least 1 embedded validity indicator within ImPACT (51.8%), and the traditional neuropsychological tests (49.4%), with large effects for performance validity on cognitive test scores (d: 0.62-1.35), incentive status (athletes vs students; d: 0.36-1.15), and the combination of both factors (d: 1.07-2.20) on measures of attention and processing speed. Conclusion: Invalid performance on baseline assessment is common (50%), consistent across instruments (ImPACT or neuropsychological tests) and settings (one-on-one or group administration), increases as a function of incentive status (risk ratios: 1.3-4.0) and results in gross underestimates of the athletes' true ability level, complicating the clinical interpretation of the postinjury evaluation and potentially leading to premature return to play.
Article
Objective: This study investigated sensitivity and specificity rates of four Test of Memory Malingering (TOMM) indices (Trial 1, Trial 2, Retention, and Albany Consistency Index (ACI)) and examined how classification accuracy rates change when utilizing these indices in various combinations. Method: A sample of 202 neuropsychological outpatients was utilized. Patients were categorized as valid performers if they passed all criterion performance validity tests (PVTs) and were determined to be invalid performers if they failed two or more criterion PVTs. Classification accuracy statistics were obtained for individual TOMM indices as well as combinations of TOMM indices. Results: When using only Trial 1 as a validity indicator, the TOMM identified 57% of invalidly performing individuals. When all TOMM indices were examined, the ACI demonstrated the highest sensitivity value (63%) but it also demonstrated the lowest specificity value (91%). Allowing for failure of any of the four TOMM indices provided the best overall sensitivity value (67%) while maintaining adequate specificity (90%). Finally, it was determined that failure of three or more TOMM validity indices resulted in a specificity rate of 97% and failure of four or more TOMM validity indices resulted in a specificity rate of 98%. Conclusions: Classification accuracy of TOMM validity indices is discussed in relation to positive and negative predictive values. Results suggest that clinicians can examine all four TOMM validity indices concurrently, particularly in settings where high base rates of invalidity occur.
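The joint use of several dichotomous validity indices, as in the TOMM study above, reduces to ordinary sensitivity/specificity arithmetic over a decision rule such as "fail any index." A minimal sketch, with entirely hypothetical examinee profiles (not the study's data):

```python
def sens_spec(pred, truth):
    """Sensitivity and specificity of a dichotomous validity rule."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    return tp / (tp + fn), tn / (tn + fp)

# Each profile: failures on four indices (Trial 1, Trial 2, Retention,
# ACI) plus the criterion-defined invalid status. Values are invented.
profiles = [
    ([True, False, False, True], True),
    ([False, False, False, False], True),   # a miss for the rule
    ([False, False, False, False], False),
    ([True, False, False, False], False),   # a false positive
    ([False, False, False, False], False),
]
pred = [any(indices) for indices, _ in profiles]  # "fail any index" rule
truth = [status for _, status in profiles]
sensitivity, specificity = sens_spec(pred, truth)
```

Requiring more indices to be failed (e.g., three of four) trades sensitivity for specificity in exactly this framework, which is why the abstract reports specificity rising to 97-98% under the stricter rules.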
Article
Objective: Performance validity testing is a necessary practice when conducting research with undergraduate students, especially when participants are minimally incentivized to provide adequate effort. However, the failure rate on performance validity measures in undergraduate samples has been debated with studies of different measures and cutoffs reporting results ranging from 2.3 to 55.6%. Method: The current study examined multiple studies to investigate failures on performance validity measures in undergraduate students, and how these rates are influenced by liberal and conservative cutoffs. Failure rates were calculated using standalone performance validity tests (PVTs) and embedded validity indices (EVIs) from eight studies conducted at two universities with over one thousand participants. Results: Results indicated that failure rates in standalone PVTs were up to four times greater when using liberal versus conservative cutoffs. EVI rates varied for conservative versus liberal cutoffs with some measures showing almost no difference and others showing 10 times greater failure rates. Conclusions: Findings provide further descriptive data on the base rate of validity test failure in undergraduate student samples and suggest that EVIs might be more sensitive to alterations made in cutoff scores than standalone PVTs. Overall, these results highlight the variability in failure rates across different measures and cutoffs that researchers might employ in any individual study.