A Method for Characterizing Semantic and Lexical Properties of Sentence
Completions in Traumatic Brain Injury
Matthew J. Kmiecik, Barry N. Rodgers,
David M. Martinez, and Sandra B. Chapman
Center for BrainHealth at The University of Texas at Dallas
Daniel C. Krawczyk
Center for BrainHealth at The University of Texas at Dallas and
The University of Texas Southwestern Medical Center
Clinical investigations of individuals with chronic stage traumatic brain injury (TBI) showing mild-to-
moderate levels of residual impairment largely use standardized neuropsychological assessments to
measure executive functioning. The Hayling Sentence Completion Test (HSCT) relies upon several
executive functions but detects cognitive impairments across studies inconsistently. We sought to (a)
further characterize sentence completions on the HSCT by quantifying their semantic and lexical
properties and (b) investigate cognitive components important for HSCT performance. A sample of 108
mild-to-moderate participants with TBI underwent a comprehensive neuropsychological assessment that
evaluated verbal ability, working memory, processing speed, task switching, and inhibitory control.
Multiple regression analyses suggest that these 5 cognitive components differentially contribute to
describing HSCT performance and measures of semantic and lexical properties of unconnected sentence
completions. Across all 3 measures, verbal ability was most predictive of performance, while inhibitory
control was the least predictive. Working memory capacity also predicted HSCT performance, while
processing speed and task switching ability predicted lexical measures. We present a method for
quantitatively measuring the semantic and lexical properties of words generated on the HSCT and describe
how these additional measures relate to executive functions.
Public Significance Statement
This study introduces new scoring methods for a neuropsychological assessment of high-level
cognitive functioning, known as the Hayling Sentence Completion Test, often administered to
individuals with traumatic brain injury (TBI). These new measures were related to cognitive abilities
of individuals with chronic mild-to-moderate TBI and may provide more sensitive cognitive metrics
in clinical populations or individuals with mild cognitive impairments.
Keywords: traumatic brain injury, executive functions, neuropsychology, Hayling Sentence Completion
Test, linguistics
Evaluating persistent sequelae of traumatic brain injuries (TBIs)
at chronic postinjury stages remains an important goal of cognitive
rehabilitation studies. Individuals can experience deficits across
cognitive, social, emotional, and behavioral domains depending on
the severity of the TBI (Arciniegas, Frey, Newman, & Wortzel, 2010; Cicerone, Levin, Malec, Stuss, &
Whyte, 2006; Dikmen, Machamer, Powell, & Temkin, 2003). In particular, activities that rely upon
executive functions are frequently compromised (Arciniegas et al., 2010; Cicerone et al., 2006; Coelho,
Liles, & Duffy, 1995).
Executive functions enable the formation, planning, and execution of goal-oriented behaviors that
largely comprise daily life (Alvarez & Emory, 2006; Jurado & Rosselli, 2007). Executive functions are
mediated by more distinct cognitive processes (e.g., inhibitory control, IC, and working memory, WM)
and provide the necessary framework to develop, plan, monitor, and ultimately complete internally
driven goals (Jurado & Rosselli, 2007). Impairments in these cognitive processes, such as attention and
information processing (Mathias & Wheaton, 2007), lead to differential cognitive deficits that may
separate based on different executive functioning profiles in TBI (Zimmermann et al., 2015).
Matthew J. Kmiecik, Barry N. Rodgers, David M. Martinez, and Sandra
B. Chapman, Center for BrainHealth at The University of Texas at Dallas;
Daniel C. Krawczyk, Center for BrainHealth at The University of Texas at
Dallas, and The University of Texas Southwestern Medical Center.
This work was supported by Department of Defense CDMRP grants
W81XWH-11-2-0194 (Daniel C. Krawczyk), W81XWH-11-2-0195 (Sandra
B. Chapman), and the Meadows Foundation (Daniel C. Krawczyk and Sandra
B. Chapman). We wish to express our gratitude to all the study participants and
their families. Preliminary results of this investigation were presented as a
scientific poster at the annual American Congress of Rehabilitation Medicine
conference in Dallas, Texas, in October 2015. A copy of the scientific poster
is available on ResearchGate.
Correspondence concerning this article should be addressed to Matthew
J. Kmiecik, Center for BrainHealth at The University of Texas at Dallas,
2200 West Mockingbird Lane, Dallas, TX 75235. E-mail: matthew.kmiecik@utdallas.edu
Psychological Assessment © 2017 American Psychological Association. http://dx.doi.org/10.1037/pas0000510
The relationships among various cognitive components, and how they contribute to performance on
neuropsychological assessments of executive functioning, remain poorly understood. However, factor
analyses and meta-analyses have demonstrated the importance of related, but separable, cognitive
components, including IC (Latzman & Markon, 2010; Miyake et al., 2000), task switching (TS; i.e.,
cognitive monitoring; Latzman & Markon, 2010; Miyake et al., 2000), information processing (Mathias &
Wheaton, 2007), WM (i.e., updating; Miyake et al., 2000), and language processing (Whiteside et al.,
2016). Further understanding the contributions of cognitive components to executive functioning
performance on current assessments will help shape the development and improvement of TBI
rehabilitation methods.
The Hayling Sentence Completion Test (HSCT) is a common
executive functioning assessment (Burgess & Shallice, 1996) that
is widely used in TBI investigations (e.g., Draper & Ponsford, 2008; Fonseca et al., 2012; Hewitt,
Evans, & Dritschel, 2006; Odhuba, van den Broek, & Johns, 2005; Senathi-Raja, Ponsford, &
Schönberger, 2010; Spitz, Maller, O'Sullivan, & Ponsford, 2013; Spitz, Schönberger, & Ponsford, 2013;
Wood & Liossi, 2006; Wood & Rutterford, 2006; Wood & Williams, 2008). The HSCT
consists of 30 sentences divided into two sections. The last word
from each sentence is removed, and the participant must generate
a word to complete the sentence as quickly as possible. In Section
1, participants provide sensible completions or words that logically
complete each sentence (e.g., The captain wanted to stay with the
sinking ship). In Section 2, participants are instructed to provide
unconnected completions or words that do not logically complete
each sentence (e.g., The captain wanted to stay with the sinking
banana). Successful unconnected completions require participants
to suppress bottom-up semantic search processes that prime appropriate responses and instead
implement top-down processes to provide inappropriate responses (Belleville, Rouleau, & Van der
Linden, 2006; Bielak, Mansueti, Strauss, & Dixon, 2006; Bouquet, Bonnaud, & Gil, 2003; Burgess &
Shallice, 1996; Collette et al., 2001).
2001). Logically completed sentences are categorized as Category
A errors, while sentences completed with close semantic associates
are categorized as Category B errors.
Several studies support the HSCT as an executive functioning
measure (see Strauss, Sherman, & Spreen, 2006). In a seminal
study conducted by Burgess and Shallice (1996), participants with
frontal lobe lesions generated more errors and slower response
latencies on unconnected completions compared with controls and
participants with posterior lesions (see also Burgess & Shallice,
1997). Additional support for frontal lobe involvement on the
HSCT emerged from positron emission tomography (PET) studies
that found increases in prefrontal cortex activation during response
initiation and inhibition (Collette et al., 2001; Nathaniel-James, Fletcher, & Frith, 1997). Clinical
studies have reported decreased
executive functioning abilities on the HSCT in older adults (Bielak
et al., 2006), patients with Alzheimer’s disease (Belleville et al.,
2006), and individuals with Parkinson’s disease (Bouquet et al.,
2003). Participants with TBI have shown greater susceptibility to making errors (Draper & Ponsford,
2008; Senathi-Raja et al., 2010) and longer completion times (Fonseca et al., 2012) on unconnected
completions on the HSCT compared with healthy controls. Odhuba and colleagues (2005) demonstrated
modest correlations between HSCT measures and the Dysexecutive Questionnaire (Burgess, Alderman,
Evans, Emslie, & Wilson, 1998), as well as the Community Integration Questionnaire (Willer,
Ottenbacher, & Coad, 1994), providing evidence of ecological validity
for this test. Furthermore, de Frias, Dixon, and Strauss (2006)
conducted a confirmatory factor analysis within a large cross-
sectional sample of older adults (aged between 55 and 85 years)
across four measures of executive functioning: the newer HSCT
and Brixton tests (Burgess & Shallice, 1997) and the more estab-
lished Stroop and Color Trails Test. All four measures loaded upon
a single-factor solution, suggesting a common underlying relation-
ship between these executive functioning measures and providing
evidence for construct validity.
Despite these encouraging findings, results on the HSCT in TBI
samples are sometimes inconsistent when compared across studies,
and its effectiveness in measuring executive functioning has been
called into question (Manchester, Priestley, & Jackson, 2004;
Wood & Liossi, 2006). A study by Spitz and colleagues (2013)
showed no behavioral differences between participants with mild-
to-severe TBI and control participants on HSCT measures, despite
robust correlations relating HSCT performance to white matter
integrity. In contrast to Odhuba and colleagues (2005), other
studies have shown weak correlations between the HSCT and the Dysexecutive Questionnaire in
moderate-to-severe TBI samples (Manchester et al., 2004; Wood & Liossi, 2006), suggesting that
the HSCT may have limited application for assessing everyday life
deficits after TBI.
Inconsistent HSCT results may reflect a lack of specificity in
characterizing participant responses. For instance, the HSCT does
not standardly include a characterization measure for sensible
completions. In other words, illogically or inappropriately com-
pleted sentences in Section 1 are not characterized differently from
logically completed sentences and are scored only with a response
time measure. Furthermore, assessor and cultural language differ-
ences (e.g., the word chip means different things in American vs.
British English) may bias the categorization of Category A and B
errors on unconnected completions (Strauss et al., 2006). Unclear
guidelines for scoring error responses in the HSCT manual may
result in subjective decisions by assessors when scoring semantic
errors (Strauss et al., 2006). Further characterizing sentence com-
pletions using novel quantitative methods may elucidate subtle
impairments in performance that traditional standard and error
scores fail to capture (Thiele, Quinting, & Stenneken, 2016).
The purpose of this investigation was twofold. We sought to (a)
further characterize sentence completions on the HSCT by quan-
tifying their semantic and lexical properties and (b) determine the
extent to which HSCT performance relies upon cognitive compo-
nents of executive functions. Semantic similarity ratings between
sentences and subsequent responses were measured using latent
semantic analysis (LSA; Landauer, Foltz, & Laham, 1998). LSA
allows quantitative measurement of the semantic similarity between HSCT sentences and participant
completions. These measurements importantly provide objective measures of HSCT errors and allow
hypothesis-driven analyses. Accordingly, we hypothesized that sensible and unconnected completions
would have semantically similar and dissimilar LSA values, respectively. Furthermore, we hypothesized
that the semantic similarity values on unconnected completions would be negatively related to
performance on cognitive components of executive functions. Lexical characteristics of sentence
completions were quantified using the SUBTL frequency norms from the SUBTLEX-US corpus (Brysbaert &
New, 2009).
Studies of executive functioning are often difficult to compare
given the wide range of available neuropsychological assessments
administered across investigations. Therefore, scores from selected
neuropsychological assessments were combined to form individual
cognitive components shown to be important descriptors of exec-
utive functioning performance from the above summarized factor
and meta-analyses (Latzman & Markon, 2010; Mathias & Wheaton, 2007; Miyake et al., 2000; Whiteside et
al., 2016): verbal ability (VA), WM, TS, processing speed (PS), and IC. Although
none of the previously mentioned studies suggests a five-factor
(i.e., component) solution, it is unlikely that orthogonal relation-
ships would exist between all cognitive components of executive
functioning. Therefore, the five cognitive components were chosen
to accommodate the various findings of these factor and meta-
analyses and to comprehensively model HSCT performance, as
well as the semantic and lexical measures from sensible and
unconnected completions.
Method
Participants
This investigation was part of a larger study evaluating the
effectiveness of reasoning training in chronic-phase mild-to-
moderate TBI (Krawczyk et al., 2013). Participants were recruited
throughout the Dallas/Fort Worth area via community flyers, pub-
lic radio announcements, and video advertisements at local cine-
mas. A total of 152 participants with TBI were assessed for
eligibility according to the following inclusion criteria: a Glasgow Outcome Scale Extended (GOS-E;
Wilson, Pettigrew, & Teasdale, 1998) score between four and seven inclusive; at least 6 months
post-TBI; age between 19 and 65; English proficiency; no current use of illicit drugs, including
alcohol; no self-reported preexisting medical conditions, excluding previous TBIs, that would
substantially and negatively impact cognitive measures (including cerebral palsy, mental retardation,
autism, epilepsy, pervasive developmental disorder, psychosis, or active behavioral disorder); and not
currently pregnant. Symptoms of posttraumatic stress disorder (PTSD) and depression were assessed, but
their presence was not exclusionary because of their high comorbidity rates with TBI, especially in
military veterans (Chard, Schumm, McIlvain, Bailey, & Parkinson, 2011; Pugh et al., 2016). The
participants' TBI severity and number of occurrences with loss of consciousness (LOC) were determined
using the Ohio State University TBI Identification Method Short Form (v. 12-10-08; Corrigan & Bogner,
2007). Classification into mild or moderate TBI was based on the worst injury reported, while LOC
comprised the total number of TBI-related occurrences.
Eight participants failed to meet these inclusion criteria, six partic-
ipants were excluded because of elevated scores (greater than 10), associated with harmful or
hazardous drinking, on the Alcohol Use Disorders Identification Test (AUDIT; Saunders, Aasland, Babor,
de la Fuente, & Grant, 1993), two participants did not complete the entire
cognitive battery, and an additional 28 participants were lost because
of experimental attrition. Our final sample included 108 participants
with TBI (Table 1). Participants were not excluded from behavioral
analyses if focal lesions were discovered in structural MRI scans
administered to all participants at the pretesting phase of the clinical
trial (see Krawczyk et al., 2013 for imaging protocol). Focal lesions
were observed in the frontal lobes of four participants, including one
dorsal anterior, one left anterior, one right anterior, and one bilateral
lesion, and were not observed elsewhere in the brain. Clinical neuro-
imaging scans are often inconsistently administered at the time of
mild TBI incidents, and their absence was not considered exclusion-
ary. Therefore, we were limited in attributing these focal lesions as
related or unrelated to the participants’ TBI(s). This study was con-
ducted in accordance with the Declaration of Helsinki and approved
by the institutional review boards of The University of Texas at Dallas
and The University of Texas Southwestern Medical Center.
Table 1
Demographic Information for Participants With Traumatic Brain Injury

                               Performance model   LSA, WF, and SV model   p
Variable                       n (M, SD)           n (M, SD)
N                              108                 67
Sex                                                                        .96 #
  Male                         66                  42
  Female                       42                  25
Race                                                                       .60 #
  White                        83                  55
  African American             13                  8
  Hispanic                     10                  2
  Asian                        1                   1
  Unknown                      1                   1
Status                                                                     .52 #
  Civilian                     66                  45
  Military                     42                  22
Severity                                                                   .99 #
  Mild TBI                     93                  57
  Moderate TBI                 15                  10
Injury                                                                     .99 #
  Vehicular accident           33                  21
  Blunt-force trauma           15                  10
  Fall                         19                  10
  Sport related                14                  9
  Multiple injuries            16                  11
  Blast                        11                  6
LOC (number of times)                                                      .99 #
  0                            19                  12
  1                            52                  29
  2                            23                  15
  3                            8                   7
  4-5                          2                   1
  6-10                         3                   2
  Unknown                      1                   1
Age (years)                    108 (41, 13)        67 (43, 14)             .36 ^
Education (years)              106 (16, 3)         66 (16, 3)              .79 ^
Time since last TBI (years)    106 (11, 9)         67 (11, 10)             .72 ^
GOSE                           107 (6, 1)          66 (6, 1)               .96 ^
FSE                            108 (22, 6)         67 (23, 6)              .78 ^
OSU Worst Injury Score         106 (3, 1)          65 (3, 1)               .84 ^

Note. LSA = latent semantic analysis; WF = word frequency; SV = source variety; TBI = traumatic brain
injury; LOC = loss of consciousness; GOSE = Extended Glasgow Outcome Scale; FSE = functional status
examination; OSU = Ohio State University. Regression model groups were compared using Pearson's
chi-square test (#) or Welch's two-sample t test (^).
Procedure

This investigation presents cognitive testing results obtained at pretest, prior to the administration
of cognitive training protocols, within a randomized, double-blind, controlled clinical trial. Baseline
cognitive assessments were obtained via a comprehensive cognitive battery that included self-report
measures. Refer to Krawczyk et al. (2013) for a detailed description of the experimental procedure.
The HSCT was administered to all participants in paper-and-pencil format in accordance with
administration and scoring instructions
described in the test manual (Burgess & Shallice, 1997). Response
times were measured with a stopwatch from the time when the
experimenter stopped reading the last word of each sentence to
when the participant began to state his or her response. All neu-
ropsychological tests were scored independently by two blinded
assessors. Any disagreements on scoring were discussed between
the two scorers until a final score was agreed upon.
Measures
HSCT. HSCT performance is traditionally measured using
the Hayling overall scaled score. This measure considers both
sections by summing the scaled scores for sensible completions
(SCs), unconnected completions (UCs), and errors. In an effort to
utilize raw scores across all neuropsychological assessments and
preserve variability, we created a novel performance variable from
the raw scores across all aspects of the HSCT. Raw scores of
reaction time (RT) in seconds were calculated by summing the
times across all sentence completions in each section, creating
separate sensible completion (Section 1) and unconnected comple-
tion (Section 2) time scores. Two types of error scores were
collected for unconnected completions: Category A errors are
responses closely connected to the general meaning of the sentence
(e.g., The captain wanted to stay with the sinking ship), whereas
Category B errors are responses somewhat connected to the sen-
tence (e.g., The captain wanted to stay with the sinking fish).
Category A and B error scores were calculated by summing the occurrences of each error type. We
calculated individual z-scores from the raw time scores of sensible and unconnected completions and
from the Category A and Category B error counts, and summed these z-scores to form the Performance
variable. Performance scores were multiplied by -1 so that positive scores denote better performance
and to facilitate interpretation of the regression results.
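To make this scoring concrete, the following R sketch (not the authors' code) computes the Performance
variable from hypothetical raw scores; the data frame and column names are illustrative only.

```r
# Minimal sketch, assuming a data frame `hsct` with hypothetical columns:
# summed RTs (s) per section and Category A/B error counts.
hsct <- data.frame(
  sc_rt = c(8.1, 5.2, 12.6),    # sensible completion time (Section 1)
  uc_rt = c(25.0, 41.3, 18.9),  # unconnected completion time (Section 2)
  a_err = c(0, 2, 1),           # Category A errors
  b_err = c(1, 3, 0)            # Category B errors
)

# z-score each raw measure, sum them, and flip the sign so that positive
# Performance values denote faster times and fewer errors.
z <- scale(hsct[, c("sc_rt", "uc_rt", "a_err", "b_err")])
hsct$performance <- -1 * rowSums(z)
```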
Semantic similarity. The semantic similarity between partic-
ipant responses and sentence content was measured using LSA
(Landauer et al., 1998). LSA is a computational method that applies singular value decomposition to a
large corpus of text to extract similarity ratings between entries of text and has been
shown to accurately model human behavior, such as priming
effects (Landauer et al., 1998). Semantic similarity ratings were
derived via the one-to-many comparisons function in LSA (http://
lsa.colorado.edu/). Each sentence and its associated participant
responses were entered into the main text and texts to compare
fields, respectively. General Reading up to first-year college (300
factors) was the chosen topic space, term-to-term comparisons was
the chosen comparison type, and the comparisons were completed
using the maximum number of factors.
The semantic similarity values for each participant were com-
puted by averaging the LSA values for all 15 SCs and all 15 UCs
and are referred to as SC LSA and UC LSA (see Table 2 for a
description of abbreviations). Higher LSA scores reflect increased
semantic similarity. Therefore, participants generating correct sen-
sible completions, on average, are expected to have higher LSA
scores for this section (SC LSA). In contrast, participants gener-
ating correct unconnected completions, on average, are expected to
have lower LSA scores for this section (UC LSA). In other words,
higher LSA scores on sensible and unconnected completions trans-
late to higher and lower performance, respectively. Occasionally,
LSA values were unavailable because of unique responses from
participants. Participants were not excluded if LSA ratings were
unavailable and each participant’s averaged LSA rating was com-
puted using the LSA ratings available for each section. Unavail-
able LSA ratings occurred for one participant for only one sensible
completion. Seven participants generated unconnected sentence
completions with unavailable LSA ratings, with four participants
having a maximum of one unavailable LSA value and three
participants having a maximum of two unavailable LSA values.
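A minimal R sketch of this per-participant averaging is shown below (not the authors' code); the
long-format data frame and its column names are hypothetical, and unavailable LSA ratings are simply
dropped from each average, as described above.

```r
# One row per sentence completion; lsa is the one-to-many cosine retrieved
# from lsa.colorado.edu, with NA marking an unavailable rating.
lsa_long <- data.frame(
  participant = rep(1:2, each = 4),
  section     = rep(c("SC", "SC", "UC", "UC"), times = 2),
  lsa         = c(.41, .38, .22, NA, .35, .40, .18, .25)
)

# Average the available ratings within each participant and section,
# yielding the SC LSA and UC LSA variables.
lsa_means <- aggregate(lsa ~ participant + section, data = lsa_long,
                       FUN = mean, na.rm = TRUE)
lsa_means
```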
Table 2
Names and Descriptions of Abbreviations

Abbreviation   Name                               Description
HSCT           Hayling Sentence Completion Test   Measure of participants' executive functioning
IC             Inhibitory control                 Measure of participants' inhibitory control ability
LSA            Latent semantic analysis           Technique used to derive semantic similarity ratings
PS             Processing speed                   Measure of participants' processing speed
SC             Sensible completion                Sensible completions on Section 1 of the HSCT
SV             Source variety                     SUBTLEX-US corpus word usage measure controlled for breadth of sources
TS             Task switching                     Measure of participants' task switching ability
UC             Unconnected completion             Unconnected completions on Section 2 of the HSCT
VA             Verbal ability                     Measure of participants' semantic knowledge and executive fluency
WF             Word frequency                     Measure of word usage from the SUBTLEX-US corpus
WM             Working memory                     Measure of participants' working memory span

Word frequency and source variety. We computed WF and SV ratings for each sentence completion using
the SUBTL word frequency norms from the SUBTLEX-US corpus (Brysbaert & New, 2009). The SUBTL word
frequency norms were retrieved via the supplementary materials of Brysbaert and New (2009) as a
Microsoft Excel workbook, allowing for easy analysis in most data software programs. Word frequency
(WF) ratings represent the frequency of usage per one million words and have been shown to predict
high levels of variance in lexical decision studies and to outperform previously established databases
(Brysbaert & New, 2009). In general, results from lexical decision tasks have attributed faster
recognition and production (i.e., lexical access) to high- compared with low-frequency words (e.g.,
Balota & Chumbley, 1984; Balota & Chumbley, 1985; Brysbaert & New, 2009; Jescheniak & Levelt, 1994).
In addition, concrete words (e.g., "home") and abstract words (e.g., "archaic") seem to mirror these
lexical access effects, with faster recognition and understanding of concrete than abstract concepts
(Kroll & Merves, 1986) that may depend on contextual information (Schwanenflugel, Harnishfeger, &
Stowe, 1988). Like WF, source variety (SV) measures the percentage of films or TV shows in which a
word appears (named SUBTLCD in the SUBTLEX-US corpus) and provides an additional characteristic of
usage frequency by indexing the uniqueness of a word. Words with high SV (e.g., "man") tend to be more
commonly used than those with low SV (e.g., "jackpot").
It is difficult to determine whether responding with high or low
WF and SV words translates to performance on the HSCT. Gen-
erating high-frequency words on unconnected completions may
reflect efficient lexical access, while generating low-frequency
words may reflect a more thoughtful or effortful process; both
processes can be interpreted as evidence for healthy executive
functioning. Instead, word frequency ratings may provide insight
into the strategies the participants might employ, rather than per-
formance, on the HSCT. For example, participants often name
objects around the room, rather than using a self-generation strat-
egy, during unconnected completions (Burgess & Shallice, 1997).
Measuring the WF and SV of the participants’ responses may
provide additional insights to possible strategies used across sen-
sible and unconnected completions on the HSCT.
Ratings for each participant were computed by averaging the
WF and SV ratings for all 15 sentences within each section of the
HSCT. WF and SV ratings for multiple word responses were
calculated by averaging the WF and SV ratings of each individual
word within the response. WF and SV ratings were not calculated
for a multiple word response if one of the words did not have an
associated WF and SV rating. The same averaging procedure used
for unavailable LSA values was followed for unavailable WF and
SV ratings for single and multiple word responses. Only one
participant generated an unavailable sensible completion for both
WF and SV ratings. Five participants generated unavailable un-
connected completions with four participants having a maximum
of one unavailable WF and SV ratings and one participant had a
maximum of three unavailable WF and SV ratings.
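The R sketch below (not the authors' code) illustrates the lookup-and-average rule for WF ratings; the
norms data frame and its column names are hypothetical stand-ins for the SUBTLEX-US workbook, and the
same logic applies to SV.

```r
# Hypothetical slice of the SUBTLEX-US norms (word, WF per million, SV %).
subtlex <- data.frame(word = c("sinking", "ship", "banana"),
                      wf   = c(6.5, 40.2, 10.9),
                      sv   = c(2.1, 12.4, 4.0))

response_wf <- function(response, norms) {
  words <- tolower(strsplit(response, " ")[[1]])
  wf <- norms$wf[match(words, norms$word)]
  # Scoring rule: a multiword response gets no rating if any word is missing
  # from the norms; otherwise the word-level ratings are averaged.
  if (any(is.na(wf))) return(NA_real_)
  mean(wf)
}

response_wf("sinking ship", subtlex)    # average of the two words' WF values
response_wf("purple giraffe", subtlex)  # NA: words not found in the norms
```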
VA. We assessed VA using the similarities subtest of the
Wechsler Abbreviated Scale of Intelligence (WASI; Wechsler,
1999) and the letter and category fluency subtests of the Delis
Kaplan Executive Function System (D-KEFS) Verbal Fluency test
(Delis, Kaplan, & Kramer, 2001). Measures of semantic (i.e.,
category) and phonemic (i.e., letter) fluency, as well as verbal
intelligence (VIQ), are standardly used in TBI studies. In
general, WASI measurements of verbal (VIQ) and nonverbal abil-
ity (PIQ) are highly reliable and stable in control populations, but
may lose inferential ability in clinical studies (see Strauss et al.,
2006). Verbal semantic and phonemic fluency measures are also
robust and sensitive to executive functioning deficits in TBI that
are not accounted for by VIQ (see Henry & Crawford, 2004).
Correlational analyses revealed collinearity between these measures: letter and category fluency raw
scores, r(114) = .55, p < .001; letter fluency and similarities total correct raw scores, r(114) = .23,
p = .013; and category and similarities total correct raw scores, r(114) = .28, p = .002. Therefore, to
reduce redundancy and collinearity in the regression models, we computed z-scores using the total raw
scores and total correct raw scores from the similarities and letter and category fluency subtests,
respectively, and summed these z-scores to form the VA component. This measure provides a dimensional
component of VA that captures variability from frontal-based injuries (i.e., phonemic fluency),
temporal-based injuries (i.e., semantic fluency), and premorbid bases of verbal intelligence (i.e.,
WASI similarities).
WM. WM was assessed using the digit span subtest from the
Wechsler Adult Intelligence Scale Third Edition (WAIS-III;
Wechsler, 1996) and the Daneman-Carpenter Listening Span
(Daneman & Carpenter, 1980). We computed z-scores from the
total raw scores of the digit span test (digits forward plus digits
backward) and the total score from the listening span task. Sum-
ming these z-scores formed the WM component.
PS. PS was assessed using the D-KEFS color-word interfer-
ence subtest. z-Scores were computed from raw scores of the color
naming and word reading conditions. Summing these z-scores
formed the PS component. z-Scores were multiplied by -1 to denote faster PS with positive z-scores.
IC. IC was assessed by using raw scores from the inhibition
(Condition 3) condition of the D-KEFS color-word interference
subtest. IC raw scores were multiplied by -1 to denote better IC with positive scores.
TS. TS ability was assessed by using the category switching
condition of the D-KEFS verbal fluency subtest and the inhibition-
switching condition of the D-KEFS color-word interference sub-
test. Studies utilizing factor analyses and clinical populations have
voiced support for using D-KEFS switching scores to measure
aspects of executive functioning. Latzman and Markon (2010)
conducted a factor analysis on the D-KEFS subtests and found
high loadings of category- and inhibition-switching on cognitive
monitoring and inhibition factors, respectively, that were repli-
cated in a separate sample and age invariant. In a clinical study,
Strong, Tiesma, and Donders (2011) found reductions in D-KEFS
category fluency performance in participants with mild to severe
TBI compared with controls. In addition, the category fluency
performance for the participants with TBI was predicted by injury
severity (i.e., length of coma) and not influenced by PS and
premorbid intelligence, suggesting that category fluency may cap-
ture aspects of executive functioning impacted mainly by brain
injury (Strong et al., 2011). z-Scores computed from the total
switching accuracy and inhibition/switching raw scores were sub-
tracted to form the TS component. Combining both category- and
inhibition-switching scores in this study provided a multidimen-
sional cognitive component of switching by capturing both the
manipulation of internal representations, which rely on WM (Latz-
man & Markon, 2010), and external cue switching. Because high scores in switching accuracy and
inhibition/switching denoted high and low performance, respectively, a subtraction was necessary to
combine both measurements without losing information about participant performance. TS scores were
multiplied by -1 to denote better TS with positive z-scores.
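For clarity, the R sketch below (not the authors' code) assembles the five composites from hypothetical
raw-score columns; the direction of the TS combination is our reading of the description above
(higher values = better switching) and is an assumption.

```r
# Minimal sketch of component construction, assuming a data frame `d`
# with hypothetical raw-score columns.
d <- data.frame(
  similarities = c(37, 40, 33), letter_fluency = c(41, 52, 28),
  category_fluency = c(39, 47, 30), digit_span = c(17, 20, 12),
  listening_span = c(2.5, 4, 2), color_naming = c(31, 26, 44),
  word_reading = c(23, 20, 30), inhibition = c(58, 45, 80),
  category_switching = c(14, 17, 10), inhibition_switching = c(66, 50, 90)
)
z <- function(x) as.numeric(scale(x))  # z-scoring helper

d$va <- z(d$similarities) + z(d$letter_fluency) + z(d$category_fluency)
d$wm <- z(d$digit_span) + z(d$listening_span)
d$ps <- -1 * (z(d$color_naming) + z(d$word_reading))  # faster times = higher PS
d$ic <- -1 * d$inhibition                              # raw time, sign-flipped
# Net effect of the subtraction and sign flip (assumed): higher switching
# accuracy and faster inhibition/switching completion yield higher TS.
d$ts <- z(d$category_switching) - z(d$inhibition_switching)
```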
Statistical analyses were performed in R (version 3.2.0) and
RStudio (version 0.98.1091) using the R packages base (R Core
Team, 2015), QuantPsyc (Fletcher, 2012), and pastecs (Grosjean
& Ibanez, 2014).
Results
Descriptive statistics of HSCT measures and the assessments
that comprise the cognitive components are reported in Tables 3
and 4, respectively. Seven hypothesis-driven multiple linear re-
gressions were performed using Performance, SC LSA, UC LSA,
SC WF, UC WF, SC SV, and UC SV as dependent variables and
all five cognitive components (i.e., VA, WM, PS, IC, and TS) as
predictors (zero-order correlations are depicted in Table 5). Par-
ticipants were excluded from regression analyses if they did not
have complete measurements from any one of the predictors or
dependent variables. This resulted in 108 participants included in
the Performance model and 67 participants with TBI included in
all the remaining models. These participant groups did not differ
based on demographic information (Table 1). Out of all seven
dependent variables, only the Performance, UC LSA, and UC SV variables yielded significant models
(Table 6). Variance inflation factor (VIF; Cohen, Cohen, West, & Aiken, 2003; Keith, 2015) calculations
suggest low multicollinearity among the predictors: VA (VIF = 1.7), PS (2.1), IC (2.4), WM (1.3), and
TS (2.7). These results indicate that a variety of cognitive components are important in determining
HSCT performance, especially during unconnected completions. Furthermore, VA, PS, WM, and TS
differentially predicted HSCT performance depending on how sentence completions were characterized.
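To illustrate the analysis pipeline, the R sketch below (a sketch under assumptions, not the authors'
script) fits one of the multiple regressions on simulated data, extracts standardized coefficients
with the QuantPsyc package named above, and computes VIFs directly from the predictor
intercorrelations.

```r
library(QuantPsyc)  # lm.beta(): standardized regression coefficients

set.seed(1)
dat <- data.frame(va = rnorm(108), ps = rnorm(108), ic = rnorm(108),
                  wm = rnorm(108), ts = rnorm(108))
dat$performance <- .3 * dat$va + .2 * dat$wm + rnorm(108)  # simulated DV

fit <- lm(performance ~ va + ps + ic + wm + ts, data = dat)
summary(fit)   # unstandardized b, SE, and the omnibus F test
lm.beta(fit)   # standardized betas

# VIF for each predictor: 1 / (1 - R^2) from regressing it on the others.
preds <- c("va", "ps", "ic", "wm", "ts")
vifs <- sapply(preds, function(p) {
  r2 <- summary(lm(reformulate(setdiff(preds, p), response = p),
                   data = dat))$r.squared
  1 / (1 - r2)
})
vifs
```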
Discussion
This investigation explored the HSCT by measuring the seman-
tic and lexical properties of sentence completions generated by
adults with chronic mild-moderate TBI. We also evaluated the
extent to which HSCT performance relies upon more distinct
cognitive components. Our results suggest that the five cognitive
components of VA, PS, WM, IC, and TS differentially contribute
to explaining HSCT performance, as well as the newly introduced
LSA and SV measures of unconnected completions. These results
provide increased support for the HSCT as an assessment of executive functioning because of its
reliance upon a variety of cognitive components. Each significant measure will be discussed in
turn.
HSCT Performance
Positive relationships were found between HSCT performance and all five cognitive components, with
significant correlations for VA, PS, and WM. These positive relationships suggest
that participants with TBI with better VA, faster PS, and higher
WM capacity performed better on the HSCT. In other words, these
three components were especially important for faster participant
response times for sensible and unconnected completions as well
as fewer Category A and B errors.
Traditional studies using the HSCT often attribute IC deficits as explanations for increases in errors
(Draper & Ponsford, 2008; Senathi-Raja et al., 2010) and completion time (Fonseca et al., 2012) on
unconnected completions. A positive, but nonsignificant, correlation between IC and HSCT performance
suggests congruence with these previous findings. However, our multiple regression results suggest
that VA and WM, not IC, are the most important predictors when the variability across all other
cognitive components is taken into account in an all-TBI sample. There are several explanations for
the weak contribution of IC to overall HSCT performance: IC is potentially (1) unnecessary for
successful HSCT performance, (2) unaffected in TBI, and (3) not effectively assessed.
The first point is not a likely explanation given the strong
empirical evidence supporting the need for IC during unconnected completions from both patient (e.g.,
Belleville et al., 2006; Bielak et al., 2006; Bouquet et al., 2003; Burgess & Shallice, 1996; Fonseca
et al., 2012) and neuroimaging (Collette et al., 2001; Nathaniel-James et al., 1997) studies. Moreover,
the HSCT performance variable used in our study comprised performance on both sensible and unconnected
completions; however, IC is
mainly required on unconnected completions. The global HSCT
performance variable presents a less targeted approach and may
explain the weak statistical relationship between overall HSCT
performance and the IC component. IC scores were negatively
correlated with the UC LSA measure, which provided a more
targeted approach by isolating unconnected completions, suggest-
ing participants with better IC produced less semantically similar
words on unconnected completions (i.e., better IC was associated
with increased performance on unconnected completions). Thus,
given the previous empirical support for IC on the HSCT and the
negative correlation with UC LSA, we suggest that IC is likely
involved during unconnected completions.
Second, certain aspects of IC may not be affected by TBI.
Dimoska-Di Marco, McDonald, Kelly, Tate, and Johnstone (2011)
performed a meta-analysis of mild to severe TBI studies including
tasks of both response inhibition (e.g., go/no-go, stop signal) and interference control (i.e., Stroop
Color-Word interference). This investigation revealed a small and nonsignificant effect size for Stroop
Color-Word interference control (d = 0.05) that contrasted with the much larger and significant
response inhibition effect (d = 0.5). These results suggest that Stroop interference control may
be less impaired in individuals with TBI compared with response
inhibition. In our investigation, IC was a nonsignificant predictor
in both the HSCT performance and UC LSA models, similarly suggesting that the Color-Word Interference
subtest did not meaningfully explain HSCT performance and may be relatively unimpaired in this mainly
mild TBI sample.

Table 3
Descriptive Statistics for Traditional, Semantic, and Lexical Measures of the Hayling Sentence
Completion Test

           Sensible completions                Unconnected completions
Variable   RT (s)   LSA   WF       SV      RT (s)   A Errors   B Errors   LSA   WF       SV
N          108      67    67       67      108      108        108        67    67       67
M          8.03     .36   445.72   50.18   29.82    .68        1.75       .24   186.33   20.32
SD         7.24     .02   40.17    4.49    26.73    1.21       1.90       .05   243.70   8.54

Note. RT = reaction time; LSA = latent semantic analysis; WF = word frequency; SV = source variety.
Third, traditional measures from neuropsychological assess-
ments may inadequately assess interference control. Dimoska-Di
Marco et al. (2011) noted in their meta-analysis of the Stroop
Color-Word interference control task that although larger effect
sizes were observed for “total time” on task outcome measures,
these measures are susceptible to factors unrelated to IC (e.g., PS).
While smaller, and even negative, effect sizes were observed for
“RT per trial” and “number of stimuli completed” in mild-severe
TBI samples, these measures for the Stroop Color-Word interfer-
ence subtest may better isolate interference control in TBI samples
(Dimoska-Di Marco et al., 2011). The IC component in our inves-
tigation utilized the “total time” on task outcome measure for the
D-KEFS Color-Word interference subtest and may have less spe-
cifically measured the participants’ interference control.
Furthermore, dimensionality and injury severity may explain the
importance of VA and WM over IC to HSCT performance. The IC
component may have lacked dimensionality given its single as-
sessment composition (i.e., D-KEFS color-word inhibition raw
scores), while the other cognitive components featured combina-
tions of multiple assessments. The VA and WM components were
comprised of three and two assessments, respectively, and may
have captured greater participant variability. In addition, the TBI
participant samples in comparable HSCT studies often featured a
mild to severe range of TBI severity (Draper & Ponsford, 2008;
Fonseca et al., 2012; Senathi-Raja et al., 2010), while our sample comprised mainly mild TBIs (n = 93)
with far fewer moderate TBIs (n = 15). Nevertheless, a significant model here suggests that all five
components were important, to varying degrees, in
predicting HSCT performance, thus reinforcing the HSCT’s ability
to capture executive functioning performance across a wide range
of cognitive abilities.
Semantic Similarity of Unconnected Completions
Negative relationships were found between the semantic similarity of unconnected sentence completions
and all five cognitive components, with significant correlations for VA, TS, and IC. These negative
relationships suggest that individuals with TBI with dysfunctional TS, less IC, and lower VA generated
semantically similar unconnected completions. In other words, partici-
pants with TBI with deficits in executive functions were more
likely to exhibit linguistic inflexibility on unconnected sentence
completions by responding with semantically similar words, de-
spite instructions specifying the opposite. For example, individuals
with worse IC, verbal, and TS ability were more likely to respond
with a semantically similar word (e.g., ship) than a semantically
dissimilar word (e.g., banana) for the sentence “The captain
wanted to stay with the sinking . . .” during unconnected comple-
tions. Individuals with TBI tend to make more errors on unconnected completions when utilizing
traditional scoring methods of counting the number of Category A and B errors (Draper & Ponsford,
2008; Senathi-Raja et al., 2010). These traditional scoring methods may introduce assessor bias because
participant responses occasionally do not clearly categorize into Category A and B errors. In this
case, assessors must subjectively decide the error type, especially if the HSCT manual (Burgess &
Shallice, 1997) does not provide direct examples. Our method of calculating LSA values between the
sentence and its subsequent completion removes this error categorization by human assessors, allowing
for a more quantitative approach for distinguishing Category A errors (i.e., increased semantic
similarity) from Category B and correct responses (i.e., decreased semantic similarity).

Table 4
Descriptive Statistics of Neuropsychological Assessment Raw and Age-Adjusted Scaled Scores

Component            Test           Subtest                Score    M       SD      Minimum   Maximum
Verbal ability       WASI           Similarities           Raw      37.35   4.08    28        44
                                                           T        53.97   6.73    39        68
                     D-KEFS (VF)    Letter                 Raw      40.10   9.99    12        65
                                                           Scaled   10.69   3.05    3         18
                                    Category               Raw      39.46   8.95    21        67
                                                           Scaled   10.25   3.53    3         19
Working memory       WAIS-III       Digit span             Raw      16.89   3.85    9         28
                                                           Scaled   10.04   2.62    5         18
                     DC             Listening span         Raw      2.64    .92     1         7
Processing speed     D-KEFS (CWI)   Color naming           Raw      31.74   7.99    21        60
                                                           Scaled   8.79    3.32    1         14
                                    Word reading           Raw      23.58   5.93    15        49
                                                           Scaled   9.42    3.13    1         14
Inhibitory control   D-KEFS (CWI)   Inhibition             Raw      59.27   16.93   33        127
                                                           Scaled   9.35    3.19    1         15
Task switching       D-KEFS (VF)    Category switching     Raw      13.97   2.74    8         20
                                                           Scaled   10.45   3.25    3         18
                     D-KEFS (CWI)   Inhibition/Switching   Raw      65.95   17.39   38        112
                                                           Scaled   9.34    3.22    1         15

Note. WASI = Wechsler Abbreviated Scale of Intelligence; D-KEFS = Delis Kaplan Executive Function
System; WAIS-III = Wechsler Adult Intelligence Scale Third Edition; DC = Daneman-Carpenter Listening
Span; CWI = color-word interference; VF = verbal fluency. Includes all participants (N = 108).

Table 5
Correlations Between Dependent Variables and Predictors for Significant Models

Variable             Performance   UC LSA    UC SV
Verbal ability       .30***        -.37***   -.39***
Processing speed     .19**         -.17      -.01
Inhibitory control   .15           -.25**    -.15
Working memory       .28***        -.20      -.21*
Task switching       .17*          -.35***   -.35***
N                    108           67        67

Note. UC = unconnected completion; LSA = latent semantic analysis; SV = source variety.
* p < .1. ** p < .05. *** p < .01.
Regression results indicated that VA was the strongest predictor
of unconnected completion LSA values. This suggests participants
with intact verbal knowledge may better understand similarities
and differences among concepts and, therefore, more readily apply
this ability during semantically dependent tasks. These findings
align with a factor analysis of various language and executive
functioning tasks performed by Whiteside et al. (2016). This investigation showed that word generation
tasks were predominantly explained by a language, rather than an executive functioning, factor,
suggesting that semantic knowledge plays an important role in executive functioning tasks. Therefore,
participants' language abilities
when completing semantically dependent measures of executive
functioning, such as the HSCT, may influence executive function-
ing outcomes. This influence, however, may confound and reduce
the sensitivity in consistently detecting long-term sequelae of
traumatic brain injuries, especially with aphasic TBI patients
(Frey, 2016). Characterizing the semantic properties of participant word generations, such as by using
LSA, may help clarify this differentiation between language and executive functioning ability.
Source Variety of Unconnected Completions
Negative relationships were found between the source variety of unconnected sentence completions and
all five cognitive components, with significant correlations for VA and TS. These negative
relationships suggest that individuals with TBI with dysfunctional cognitive components generated more
ubiquitously used words during unconnected completions.
However, when accounting for the variability across all predictors in the multiple regression, the
cognitive components of VA, PS, and TS differentially predicted the source variety of uncon-
nected sentence completions. While higher scores in VA and TS
predicted lower source variety ratings (i.e., less common words),
better PS predicted higher source variety ratings (i.e., more com-
mon words). Participants with higher VA and TS, but slow PS,
may rely on more uncommon words when responding to uncon-
nected completions. This pattern, however, may reverse in partic-
ipants who exhibit faster PS, but lower VA and TS scores. This
result may be explained by a coping strategy often used by par-
ticipants with TBI during the administration of the HSCT. When
generating unconnected completions, participants frequently name
objects around the room rather than provide an internally gener-
ated response. The extent to which a participant may be employing
a room-object naming strategy to avoid internal word generation
may be indexed with source variety, as objects located in rooms
are often more concrete (e.g., chair) than concepts internally gen-
erated (e.g., love). In addition, individual differences across sev-
eral neuropsychological assessments may help determine how
preserved and damaged cognitive capacities interact in adults with
TBI. These cognitive interactions may explain variability in treatment outcomes and inform future
treatment plans for chronic-phase TBI.

Table 6
Multiple Regression Results Predicting Hayling Sentence Completion Test Measures From Five Cognitive
Components

                                      95% CI
DV / Predictor         b         LL       UL      SE      β       F         df
Performance                                                       3.10**    5, 102
  Verbal ability       .35**     .04      .66     .15     .27
  Processing speed     .13       -.26     .53     .20     .09
  Inhibitory control   -.0001    -.05     .05     .02     -.001
  Working memory       .60**     .003     1.20    .30     .21
  Task switching       -.26      -.76     .26     .26     -.15
UC LSA                                                            2.57**    5, 61
  Verbal ability       -.006*    -.012    .001    .003    -.25
  Processing speed     .003      -.005    .011    .004    .13
  Inhibitory control   .00003    -.001    .001    .0005   .01
  Working memory       -.002     -.015    .012    .007    -.03
  Task switching       -.008     -.020    .003    .006    -.28
UC SV                                                             4.57***   5, 61
  Verbal ability       -1.14**   -2.20    -.09    .53     -.30
  Processing speed     1.41**    .13      2.69    .64     .35
  Inhibitory control   .063      -.10     .23     .08     .14
  Working memory       -.73      -2.88    1.42    1.07    -.09
  Task switching       -2.3**    -4.15    -.45    .93     -.46

Note. CI = confidence interval; DV = dependent variable; LL = lower limit; UL = upper limit; UC =
unconnected completion; LSA = latent semantic analysis; SV = source variety.
* p < .1. ** p < .05. *** p < .01.
In addition, VA was the most important contributing component across all three models. The letter
(i.e., phonemic) and category (i.e., semantic) fluency subtests that capture the VA component are
widely used in measuring executive functioning in TBI (Busch, McBride, Curtiss, & Vanderploeg, 2005;
Henry & Crawford, 2004; Jurado, Mataro, Verger, Bartumeus, & Junque, 2000; Kavé, Heled, Vakil, &
Agranov, 2011; Raskin & Rearick, 1996), and there is evidence that deficits in verbal fluency may lead
to disruptions in global executive functioning (see Henry & Crawford, 2004, for a review). Therefore,
it is not surprising that VA for
participants with TBI was important in explaining variations in
overall HSCT performance, as well as lexical and semantic prop-
erties of unconnected completions. A high reliance upon WM and
VA, and low reliance upon IC, suggests the HSCT may rely
heavily upon the cognitive flexibility of linguistic information,
rather than pure IC.
Limitations
Given the exploratory nature of this method and investigation in
TBI, we would like to address its limitations. This investigation
was part of a larger study examining the effectiveness of reasoning
training in TBI, leading to potential participant selection effects
(i.e., participants with persisting cognitive complaints or symp-
toms). Given our unbalanced sample size, we were unable to address differences in task performance
between participants with mild (n = 93) and moderate (n = 15) TBI. Furthermore, we
were unable to account for the presence of possible aphasias within
our TBI sample. The majority of the participants demonstrated
average levels of performance on linguistic measures for WASI
similarities age-adjusted T scores and D-KEFS age-adjusted scaled scores for letter fluency, category
fluency, and category switching
(Table 4). Future TBI studies of executive functioning are recom-
mended to include assessments of aphasia, such as the Boston
Diagnostic Aphasia Examination (Strauss et al., 2006), to better
characterize language deficits that may impair performance (Frey,
2016). Moreover, inferences to other populations are limited be-
cause of a lack of external validation of the proposed methods in
control and non-TBI populations.
The use of raw scores from neuropsychological assessments to
create z-scored cognitive components failed to take into account
the participants’ age and education that could impact the results for
reasons other than TBI-related executive functioning declines
(e.g., aging). To assess these influences, hierarchical multiple
regressions were performed by entering the age and years of
education as step one and adding the cognitive components in step
two. Age and years of education were nonsignificant predictors in
both steps, and step-two changes in R² were significant across all three models, suggesting that the
cognitive components explain HSCT performance over and above the effects of age and education.
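A minimal R sketch of this hierarchical check is shown below (not the authors' script); it reuses the
hypothetical `dat` from the regression sketch in the Results section and adds illustrative age and
education columns.

```r
dat$age <- rnorm(108, mean = 41, sd = 13)       # hypothetical values
dat$education <- rnorm(108, mean = 16, sd = 3)  # hypothetical values

step1 <- lm(performance ~ age + education, data = dat)
step2 <- lm(performance ~ age + education + va + ps + ic + wm + ts, data = dat)
anova(step1, step2)  # F test of the step-two change in R^2
```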
Despite the multifaceted nature of executive functioning,
subtests from only four neuropsychological assessments (WASI,
WAIS-III, D-KEFS, Daneman-Carpenter Listening Span) captured
the participants’ performance across the five cognitive compo-
nents. Additionally, the low internal consistency of switching scores
captured by the D-KEFS (see Strauss et al., 2006) may reduce the
stability of findings regarding TS ability. Lastly, semantic similarity
and lexical characteristics of sentence completions were occasionally not found within the LSA and
SUBTLEX-US databases. Missing
words increase the measurement error among sensible and uncon-
nected completion averages on the HSCT. However, this demon-
strates the breadth of the human lexicon and emphasizes the
importance of investigating semantically and lexically driven dif-
ferences in TBI populations, as these differences may elucidate
subtle changes in cognition in the chronic stage of TBI. For
example, we demonstrated in a concurrent study that measuring
the semantic and lexical properties of HSCT sentence completions
revealed differences between individuals with chronic mild-
moderate TBI and healthy controls, as well as cognitive training
differences across time, while standard HSCT measurements sug-
gested otherwise (Kmiecik, Chapman, & Krawczyk, 2015). Future
work in neuropsychological testing should include more specific
approaches in semantically rich assessments to more closely ex-
amine cognitive components underlying more complex executive
functions.
Conclusions
Overall, measuring the semantic and lexical properties of HSCT
sentence completions provided an additional method for charac-
terizing sensible and unconnected sentence completions. Semantic
and lexical properties of unconnected completions, as well as
overall HSCT performance, were predicted by various cognitive
components of executive functioning. These results provide sup-
port for both the HSCT as a measure of executive functioning and
our newly introduced methods for further characterizing partici-
pant responses. We believe that these novel analyses of HSCT data
may be able to enhance the sensitivity of this measure in cases in
which populations of interest are variable or possess mild cogni-
tive impairments. These measures may be valuable in early diag-
noses or in evaluating cases in which traditional measures do not
adequately capture the cognitive impairments present.
References
Alvarez, J. A., & Emory, E. (2006). Executive function and the frontal lobes: A meta-analytic review.
Neuropsychology Review, 16, 17–42. http://dx.doi.org/10.1007/s11065-006-9002-x
Arciniegas, D. B., Frey, K. L., Newman, J., & Wortzel, H. S. (2010). Evaluation and management of
posttraumatic cognitive impairments. Psychiatric Annals, 40, 540–552.
http://dx.doi.org/10.3928/00485713-20101022-05
Balota, D. A., & Chumbley, J. I. (1984). Are lexical decisions a good measure of lexical access? The
role of word frequency in the neglected decision stage. Journal of Experimental Psychology: Human
Perception and Performance, 10, 340–357. http://dx.doi.org/10.1037/0096-1523.10.3.340
Balota, D. A., & Chumbley, J. I. (1985). The locus of word-frequency effects in the pronunciation
task: Lexical access and/or production? Journal of Memory and Language, 24, 89–106.
http://dx.doi.org/10.1016/0749-596X(85)90017-8
Belleville, S., Rouleau, N., & Van der Linden, M. (2006). Use of the Hayling task to measure
inhibition of prepotent responses in normal aging and Alzheimer's disease. Brain and Cognition, 62,
113–119. http://dx.doi.org/10.1016/j.bandc.2006.04.006
Bielak, A. A., Mansueti, L., Strauss, E., & Dixon, R. A. (2006). Performance on the Hayling and
Brixton tests in older adults: Norms and correlates. Archives of Clinical Neuropsychology, 21,
141–149. http://dx.doi.org/10.1016/j.acn.2005.08.006
correlates. Archives of Clinical Neuropsychology, 21, 141–149. http://
dx.doi.org/10.1016/j.acn.2005.08.006
Bouquet, C. A., Bonnaud, V., & Gil, R. (2003). Investigation of supervi-
sory attentional system functions in patients with Parkinson’s disease
using the Hayling task. Journal of Clinical and Experimental Neuropsy-
chology, 25, 751–760. http://dx.doi.org/10.1076/jcen.25.6.751.16478
Brysbaert, M., & New, B. (2009). Moving beyond Kučera and Francis: A
critical evaluation of current word frequency norms and the introduction
of a new and improved word frequency measure for American English.
Behavior Research Methods, 41, 977–990.
http://dx.doi.org/10.3758/BRM.41.4.977
Burgess, P. W., Alderman, N., Evans, J., Emslie, H., & Wilson, B. A.
(1998). The ecological validity of tests of executive function. Journal of
the International Neuropsychological Society, 4, 547–558. http://dx.doi
.org/10.1017/S1355617798466037
Burgess, P. W., & Shallice, T. (1996). Response suppression, initiation and
strategy use following frontal lobe lesions. Neuropsychologia, 34, 263–
272. http://dx.doi.org/10.1016/0028-3932(95)00104-2
Burgess, P. W., & Shallice, T. (1997). The Hayling and Brixton tests.
London, England: Pearson.
Busch, R. M., McBride, A., Curtiss, G., & Vanderploeg, R. D. (2005). The
components of executive functioning in traumatic brain injury. Journal
of Clinical and Experimental Neuropsychology, 27, 1022–1032. http://
dx.doi.org/10.1080/13803390490919263
Chard, K. M., Schumm, J. A., McIlvain, S. M., Bailey, G. W., & Parkinson,
R. B. (2011). Exploring the efficacy of a residential treatment program
incorporating cognitive processing therapy-cognitive for veterans with
PTSD and traumatic brain injury. Journal of Traumatic Stress, 24,
347–351. http://dx.doi.org/10.1002/jts.20644
Cicerone, K., Levin, H., Malec, J., Stuss, D., & Whyte, J. (2006). Cognitive
rehabilitation interventions for executive function: Moving from bench
to bedside in patients with traumatic brain injury. Journal of Cognitive
Neuroscience, 18, 1212–1222. http://dx.doi.org/10.1162/jocn.2006.18.7
.1212
Coelho, C. A., Liles, B. Z., & Duffy, R. J. (1995). Impairments of discourse
abilities and executive functions in traumatically brain-injured adults.
Brain Injury, 9, 471– 477. http://dx.doi.org/10.3109/02699059
509008206
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple
regression/correlation analysis for the behavioral sciences (3rd ed.).
Mahwah, NJ: Lawrence Erlbaum Associates.
Collette, F., Van der Linden, M., Delfiore, G., Degueldre, C., Luxen, A., &
Salmon, E. (2001). The functional anatomy of inhibition processes
investigated with the Hayling task. NeuroImage, 14, 258 –267. http://dx
.doi.org/10.1006/nimg.2001.0846
Corrigan, J. D., & Bogner, J. (2007). Initial reliability and validity of the
Ohio State University TBI Identification Method. The Journal of Head
Trauma Rehabilitation, 22, 318 –329. http://dx.doi.org/10.1097/01.HTR
.0000300227.67748.77
Daneman, M., & Carpenter, P. A. (1980). Individual differences in working
memory and reading. Journal of Verbal Learning & Verbal Behavior,
19, 450 – 466. http://dx.doi.org/10.1016/S0022-5371(80)90312-6
de Frias, C. M., Dixon, R. A., & Strauss, E. (2006). Structure of four
executive functioning tests in healthy older adults. Neuropsychology, 20,
206 –214. http://dx.doi.org/10.1037/0894-4105.20.2.206
Delis, D. C., Kaplan, E., & Kramer, J. H. (2001). D-KEFS Executive
Function System: Examiner's manual. San Antonio, TX: Pearson Edu-
cation, Inc.
Dikmen, S. S., Machamer, J. E., Powell, J. M., & Temkin, N. R. (2003).
Outcome 3 to 5 years after moderate to severe traumatic brain injury.
Archives of Physical Medicine and Rehabilitation, 84, 1449 –1457.
http://dx.doi.org/10.1016/S0003-9993(03)00287-9
Dimoska-Di Marco, A., McDonald, S., Kelly, M., Tate, R., & Johnstone,
S. (2011). A meta-analysis of response inhibition and Stroop interfer-
ence control deficits in adults with traumatic brain injury (TBI). Journal
of Clinical and Experimental Neuropsychology, 33, 471– 485. http://dx
.doi.org/10.1080/13803395.2010.533158
Draper, K., & Ponsford, J. (2008). Cognitive functioning ten years follow-
ing traumatic brain injury and rehabilitation. Neuropsychology, 22, 618 –
625. http://dx.doi.org/10.1037/0894-4105.22.5.618
Fletcher, T. D. (2012). QuantPsyc: Quantitative psychology tools (Version
1.5 R package). Retrieved from http://cran.r-project.org/package=
QuantPsyc
Fonseca, R. P., Zimmermann, N., Cotrena, C., Cardoso, C., Kristensen,
C. H., & Grassi-Oliveira, R. (2012). Neuropsychological assessment of
executive functions in traumatic brain injury: Hot and cold components.
Psychology & Neuroscience, 5, 183–190. http://dx.doi.org/10.3922/j
.psns.2012.2.08
Frey, K. (2016, November). Aphasia in traumatic BI: Characterization,
novel considerations, and treatment. Paper presented at the American
Congress of Rehabilitation Medicine, Chicago, IL.
Grosjean, P., & Ibanez, F. (2014). pastecs: Package for analysis of space-
time ecological series (R package version 1.3–18). Retrieved from https://
CRAN.R-project.org/package=pastecs
Henry, J. D., & Crawford, J. R. (2004). A meta-analytic review of verbal
fluency performance in patients with traumatic brain injury. Neuropsy-
chology, 18, 621– 628. http://dx.doi.org/10.1037/0894-4105.18.4.621
Hewitt, J., Evans, J. J., & Dritschel, B. (2006). Theory driven rehabilitation
of executive functioning: Improving planning skills in people with
traumatic brain injury through the use of an autobiographical episodic
memory cueing procedure. Neuropsychologia, 44, 1468 –1474. http://dx
.doi.org/10.1016/j.neuropsychologia.2005.11.016
Jescheniak, J. D., & Levelt, W. J. M. (1994). Word frequency effects in
speech production: Retrieval of syntactic information and of phonolog-
ical form. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 20, 824 – 843. http://dx.doi.org/10.1037/0278-7393.20.4.824
Jurado, M. A., Mataro, M., Verger, K., Bartumeus, F., & Junque, C.
(2000). Phonemic and semantic fluencies in traumatic brain injury
patients with focal frontal lesions. Brain Injury, 14, 789 –795. http://dx
.doi.org/10.1080/026990500421903
Jurado, M. B., & Rosselli, M. (2007). The elusive nature of executive
functions: A review of our current understanding. Neuropsychology
Review, 17, 213–233. http://dx.doi.org/10.1007/s11065-007-9040-z
Kavé, G., Heled, E., Vakil, E., & Agranov, E. (2011). Which verbal
fluency measure is most useful in demonstrating executive deficits
after traumatic brain injury? Journal of Clinical and Experimental
Neuropsychology, 33, 358 –365. http://dx.doi.org/10.1080/13803395
.2010.518703
Keith, T. Z. (2015). Multiple regression and beyond: An introduction to
multiple regression and structural equation modeling (2nd ed.). New
York, NY: Routledge.
Kmiecik, M. J., Chapman, S., & Krawczyk, D., (2015). Executive func-
tioning in traumatic brain injury: A detailed investigation of the Hayling
Test. Archives of Physical Medicine and Rehabilitation, 96, e97– e98.
http://dx.doi.org/10.1016/j.apmr.2015.08.326
Krawczyk, D. C., Marquez de la Plata, C., Schauer, G. F., Vas, A. K.,
Keebler, M., Tuthill, S., . . . Chapman, S. B. (2013). Evaluating the
effectiveness of reasoning training in military and civilian chronic trau-
matic brain injury patients: Study protocol. Trials, 14, 29. http://dx.doi
.org/10.1186/1745-6215-14-29
Kroll, J. F., & Merves, J. S. (1986). Lexical access for concrete and abstract
words. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 12, 92–107. http://dx.doi.org/10.1037/0278-7393.12.1.92
Landauer, T. K., Foltz, P. W., & Laham, D. (1998). An introduction to
latent semantic analysis. Discourse Processes, 25, 259 –284. http://dx
.doi.org/10.1080/01638539809545028
Latzman, R. D., & Markon, K. E. (2010). The factor structure and age-
related factorial invariance of the Delis-Kaplan Executive Function
System (D-KEFS). Assessment, 17, 172–184. http://dx.doi.org/10.1177/
1073191109356254
Manchester, D., Priestley, N., & Jackson, H. (2004). The assessment of
executive functions: Coming out of the office. Brain Injury, 18, 1067–
1081. http://dx.doi.org/10.1080/02699050410001672387
Mathias, J. L., & Wheaton, P. (2007). Changes in attention and
information-processing speed following severe traumatic brain injury: A
meta-analytic review. Neuropsychology, 21, 212–223. http://dx.doi.org/
10.1037/0894-4105.21.2.212
Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A.,
& Wager, T. D. (2000). The unity and diversity of executive functions
and their contributions to complex “Frontal Lobe” tasks: A latent vari-
able analysis. Cognitive Psychology, 41, 49 –100. http://dx.doi.org/10
.1006/cogp.1999.0734
Nathaniel-James, D. A., Fletcher, P., & Frith, C. D. (1997). The functional
anatomy of verbal initiation and suppression using the Hayling Test.
Neuropsychologia, 35, 559 –566. http://dx.doi.org/10.1016/S0028-
3932(96)00104-2
Odhuba, R. A., van den Broek, M. D., & Johns, L. C. (2005). Ecological
validity of measures of executive functioning. British Journal of Clinical
Psychology, 44, 269 –278. http://dx.doi.org/10.1348/014466505X29431
Pugh, M. J., Finley, E. P., Wang, C. P., Copeland, L. A., Jaramillo, C. A.,
Swan, A. A., . . . the TRACC Research Team. (2016). A retrospective
cohort study of comorbidity trajectories associated with traumatic brain
injury in veterans of the Iraq and Afghanistan wars. Brain Injury, 30,
1481–1490. http://dx.doi.org/10.1080/02699052.2016.1219055
Raskin, S. A., & Rearick, E. (1996). Verbal fluency in individuals with
mild traumatic brain injury. Neuropsychology, 10, 416 – 422. http://dx
.doi.org/10.1037/0894-4105.10.3.416
R Core Team. (2015). R: A language and environment for statistical
computing. Vienna, Austria: R Foundation for Statistical Computing.
Retrieved from http://www.r-project.org/
Saunders, J. B., Aasland, O. G., Babor, T. F., de la Fuente, J. R., & Grant,
M. (1993). Development of the Alcohol Use Disorders Identification
Test (AUDIT): WHO Collaborative Project on Early Detection of Per-
sons with Harmful Alcohol Consumption—II. Addiction, 88, 791– 804.
http://dx.doi.org/10.1111/j.1360-0443.1993.tb02093.x
Schwanenflugel, P. J., Harnishfeger, K. K., & Stowe, R. W. (1988).
Context availability and lexical decisions for abstract and concrete
words. Journal of Memory and Language, 27, 499 –520. http://dx.doi
.org/10.1016/0749-596X(88)90022-8
Senathi-Raja, D., Ponsford, J., & Schönberger, M. (2010). Impact of age on
long-term cognitive function after traumatic brain injury. Neuropsychol-
ogy, 24, 336 –344. http://dx.doi.org/10.1037/a0018239
Spitz, G., Maller, J. J., O’Sullivan, R., & Ponsford, J. L. (2013). White
matter integrity following traumatic brain injury: The association with
severity of injury and cognitive functioning. Brain Topography, 26,
648 – 660. http://dx.doi.org/10.1007/s10548-013-0283-0
Spitz, G., Schönberger, M., & Ponsford, J. (2013). The relations among
cognitive impairment, coping style, and emotional adjustment following
traumatic brain injury. The Journal of Head Trauma Rehabilitation, 28,
116 –125. http://dx.doi.org/10.1097/HTR.0b013e3182452f4f
Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A compendium of
neuropsychological tests: Administration, norms, and commentary
(3rd ed.). New York, NY: Oxford University Press.
Strong, C. A., Tiesma, D., & Donders, J. (2011). Criterion validity of the
Delis-Kaplan Executive Function System (D-KEFS) fluency subtests
after traumatic brain injury. Journal of the International Neuropsycho-
logical Society, 17, 230 –237. http://dx.doi.org/10.1017/S135561
7710001451
Thiele, K., Quinting, J. M., & Stenneken, P. (2016). New ways to analyze
word generation performance in brain injury: A systematic review and
meta-analysis of additional performance measures. Journal of Clinical
and Experimental Neuropsychology, 38, 764 –781. http://dx.doi.org/10
.1080/13803395.2016.1163327
Wechsler, D. (1996). Wechsler Adult Intelligence Scale–Third edition
(WAIS-III). San Antonio, TX: Pearson Education, Inc.
Wechsler, D. (1999). Wechsler Abbreviated Scale of Intelligence (WASI).
San Antonio, TX: Pearson Education, Inc.
Whiteside, D. M., Kealey, T., Semla, M., Luu, H., Rice, L., Basso, M. R.,
& Roper, B. (2016). Verbal fluency: Language or executive function
measure? Applied Neuropsychology Adult, 23, 29 –34. http://dx.doi.org/
10.1080/23279095.2015.1004574
Willer, B., Ottenbacher, K. J., & Coad, M. L. (1994). The community
integration questionnaire. A comparative examination. American Jour-
nal of Physical Medicine & Rehabilitation, 73, 103–111. http://dx.doi
.org/10.1097/00002060-199404000-00006
Wilson, J. T., Pettigrew, L. E., & Teasdale, G. M. (1998). Structured
interviews for the Glasgow Outcome Scale and the extended Glasgow
Outcome Scale: Guidelines for their use. Journal of Neurotrauma, 15,
573–585. http://dx.doi.org/10.1089/neu.1998.15.573
Wood, R. L., & Liossi, C. (2006). The ecological validity of executive tests
in a severely brain injured sample. Archives of Clinical Neuropsychol-
ogy, 21, 429 – 437. http://dx.doi.org/10.1016/j.acn.2005.06.014
Wood, R. L., & Rutterford, N. A. (2006). Demographic and cognitive
predictors of long-term psychosocial outcome following traumatic brain
injury. Journal of the International Neuropsychological Society, 12,
350 –358. http://dx.doi.org/10.1017/S1355617706060498
Wood, R. L., & Williams, C. (2008). Inability to empathize following
traumatic brain injury. Journal of the International Neuropsychological
Society, 14, 289 –296. http://dx.doi.org/10.1017/S1355617708080326
Zimmermann, N., Pereira, N., Hermes-Pereira, A., Holz, M., Joanette, Y.,
& Fonseca, R. P. (2015). Executive functions profiles in traumatic brain
injury adults: Implications for rehabilitation studies. Brain Injury, 29,
1071–1081. http://dx.doi.org/10.3109/02699052.2015.1015613
Received December 13, 2016
Revision received May 11, 2017
Accepted May 22, 2017