Computer-Related Anxiety: Examining the Impact of
Technology-Specific Affect on the Performance of a Computerized
Neuropsychological Assessment Measure
Jeffrey N. Browndyke
Department of Psychiatry & Human Behavior, Miriam Hospital, Brown Medical School,
Providence, Rhode Island, USA
Ashlie L. Albert
Department of Psychiatry & Human Behavior, Louisiana State University, Baton Rouge, Louisiana, USA
Department of Psychology, Louisiana State University, Baton Rouge, Louisiana, USA
Department of Psychology, Saint Joseph’s University, Philadelphia, Pennsylvania, USA
Robert H. Paul and Ronald A. Cohen
Department of Psychiatry & Human Behavior, Miriam Hospital, Brown Medical School, Providence, Rhode Island, USA
Karen A. Tucker and W. Drew Gouvier
Department of Psychology, Louisiana State University, Baton Rouge, Louisiana, USA

Requests for reprints should be sent to Jeffrey N. Browndyke, Department of Psychiatry & Human Behavior, Miriam Hospital, Brown Medical School, Coro Building West, 3rd floor, 1 Hoppin Street, Providence, RI 02903, USA.
This study was conducted to examine the effect of impairment status and computer-specific anx-
iety on the performance of a computerized neuropsychological assessment measure. Computer-
related anxiety was measured using a standardized self-report measure tapping anxiety
specific to computers and technology. Outcome on this measure was compared with error
scores and response timing variables on a computerized version of the Category Test (CT) in
both normal individuals and individuals with neurological, psychiatric, or substance abuse
histories. Multivariate analysis results, controlling for psychomotor performance, revealed sig-
nificant main effects for group status and computer-related anxiety. CT performance was signif-
icantly related to the level of computer-related anxiety, in that high anxiety resulted in higher
CT error scores and longer response times, and the negative impact of computer-related anxiety
on computerized neuropsychological assessment performance was stronger in individuals with
impairment histories. Our results suggest that as computer-related anxiety increases, perfor-
mance on computer administered neuropsychological assessment measures tends to decrease.
Key words: computers, anxiety, computer-based task performance, clinical neuropsychology
As modern society delves further into the information revolution, it has become increasingly evident that aspects of daily life and work revolve around a computer monitor and a central processing unit. With the personalization of computers, the percentage of U.S. households with a computer has risen dramatically, from only 8.2% in 1984 to 52% in 2000 (Newberger, 2001). The field of neuropsychology has not been immune to this encroachment, and in many ways the change has been welcomed. From the early years of large-scale, institution-based mainframes to the advent
of personal computers, individuals associated with neu-
ropsychology were relatively quick to integrate comput-
ers into research and clinical practice (Beaumont, 1975;
Denner, 1977; Long & Wagner, 1986; Russell, Neuringer, & Goldstein, 1970). These efforts, in addition
to more recent work, have highlighted the factors impor-
tant to technology application in neuropsychology and
have laid the foundation for future development and
technological integration (Adams & Heaton, 1987;
American Psychological Association, 1986; American
Psychological Association Committee on Professional
Standards & Committee on Psychological Tests and As-
sessment, 1987; Kane & Kay, 1997). Yet although gains in the knowledge base have been made on both the research and clinical application fronts, widespread adoption of computer technology in the practice of clinical neuropsychology has been slower to arrive.
There are myriad reasons for this general reticence toward clinical computing, some based on personal bias and others on data. Most of these concerns are well founded and must be addressed if clinical neuropsychological practice is to accept the computerization of cognitive assessment.
One of the central concerns in computerization of
assessment procedures involves the effects of the com-
puter apparatus and environment on the examinee (Sal-
vendy, 1993; Shneiderman, 1998). Are there factors
inherent to human-computer interaction that neuropsy-
chologists must be mindful of if computerized neu-
ropsychological assessment is to be a reliable and valid
indicator of cognitive function? What personal vari-
ables and biases might an examinee bring to the com-
puterized assessment situation and how do these vari-
ables affect performance?
The belief that one can or cannot accomplish a task
involving a computer likely plays a vital role in deter-
mining the level of computer-specific negative affect
that a person may experience. In this regard, the notion
of self-efficacy becomes a consideration in the discus-
sion of computer-related anxiety. Bandura (1977,
1982) asserted that self-efficacy contributes heavily to
performance outcomes. He found that the easier a machine is to operate, the more capable a person perceives himself or herself to be of successfully completing the task. As such, self-efficacy is tied to the level of ap-
paratus difficulty that accompanies a task, which in
turn is related to an individual’s familiarity with the
task apparatus. Rozell and Gardner (1999) posited that
in the realm of computer-based assessment and tasks,
computer self-efficacy and computer-related anxiety
share an inverse relationship. To support the claim,
they demonstrated that higher computer self-efficacy
and associated low levels of computer-related anxiety
improve computer-based test performance. Conversely,
Johnson and Johnson (1981) found that high levels of
computer-related anxiety associated with low com-
puter self-efficacy result in lower test performances.
In an early study of computer-related negative affect,
Hedl, O’Neil, and Hansen (1973) found that students given computer-based tests had higher levels of state anxiety both before and after the administration of the test than students given a traditional paper-and-pencil test. Admittedly, the Hedl et al. study was conducted in
an era when computer nonfamiliarity issues would have
been pervasive; consequently, their results may not re-
flect effects seen in more computer literate samples. To
address this confound, Marcoulides (1988) demonstrated that high levels of computer-related anxiety in more computer-familiar individuals still determine the degree to which computers can be effectively utilized. Llabre et al. (1987) affirmed this relationship by showing that an increase in computer-related anxiety resulted in a decrease in test performance and task utilization. George, Lankford, and Wilson (1992) found significantly different
correlations between self-reported depression collected
through paper-and-pencil or computerized testing and
baseline computer-related anxiety. In their investigation,
correlations were stronger in the computer mode, and
George et al. speculated that those higher in computer-
related anxiety might have been paying more attention to
the computer and their manipulation of the apparatus
than to the test stimuli. However, some evidence contrary to performance-related decrements has been reported. Ward, Hooper, and Hannafin (1989) found that, although some students experienced high levels of computer-related anxiety, the increased anxiety did not affect task performance in their study sample. This study notwithstanding, the bulk of prior research involving negative
computer-related affect and performance has found that
individuals displaying high levels of anxiety about com-
puter-based tasks may not be performing to the best of
their abilities or may not be utilizing the computer in an
efficient and effective manner.
The ascendancy of computer-automated assessment
in neuropsychological practice and research makes the
relationship between negative computer-related affect
and task performance a particularly salient issue. If task performance is affected by affective factors specific to computer use, then measurement error unique to the computerized assessment situation may be introduced. This measurement error, in turn, limits the confidence that one can place in computerized task performance results as approximations of true cognitive ability. By investigating the effect of computer-related anxiety on the performance of a computerized version
of the Category Test (CT) in both normal individuals
and individuals with impairment histories, this study
hopes to shed light on the issue of negative computer-
related affect and its relation to neuropsychological assessment performance.
Methods and Materials
Participants

Participants were recruited from an undergraduate
participant pool at a major university. Forty-one individ-
uals were used in this investigation—21 with a history
of neurological, psychiatric, or substance abuse difficul-
ties and 20 healthy controls. Table 1 lists the problem
types and frequencies of the participants comprising the
impairment history group, as well as the frequency of
disorder types between computer-related anxiety levels
for impairment history participants.
No significant differences were detected between the
study groups for any of the demographic variables (see
Table 2). Significantly greater proportions of women
(p < .05) and right-handed (p < .001) participants were
noted within each of the study groups. However, the
differences in gender and handedness proportionality
did not extend to between-group comparisons.
Screening questionnaire and structured interview. A general screening questionnaire was used to
obtain information about participant demographics
(gender, age, education, etc.), as well as to detect a his-
tory of one of the following neurological or psycholog-
ical conditions: (a) head trauma greater than mild, un-
complicated severity or repeated, uncomplicated mild
severity head trauma; (b) seizure or seizure disorder;
(c) central nervous system disease (e.g., infection,
tumor, vascular, developmental, degenerative, toxic,
metabolic, and demyelinating); (d) stroke or transient
ischemic attack; (e) exposure to electroconvulsive
therapy or pharmacotherapy for psychiatric illness;
(f) psychiatric illness, including panic disorder, posttrau-
matic stress disorder, obsessive-compulsive disorder,
major depression, dysthymia, mania, and psychosis;
and (g) current excessive alcohol or drug use (e.g., al-
cohol, marijuana, cocaine, amphetamines, barbitu-
rates, and hallucinogens). The screening questionnaire
was administered in a brief interview format, which al-
lowed for the clarification and follow-up of impair-
ment group inclusion criteria.
[Table 1. Impairment History Group Composition and Distribution Within Computer-Related Anxiety Groups. Rows include Head Injury (≥ MTBI), Substance Abuse, and Total; cell frequencies are not recoverable from the source. Note. Impairment group inclusion was based on participant self-report on a structured interview tapping neurological, psychiatric, and substance abuse history. CARS = Computer Anxiety Rating Scale; MTBI = minor traumatic brain injury. Low and high computer-related anxiety groups were determined by a median split of CARS (Rosen, Sears, & Weil, 1992) total scores.]

[Table 2. Demographic Comparisons Between the Study Groups. Surviving fragments (ratios of 5:15 and 3:18, a right-to-left handedness column, and a value of .39) cannot be unambiguously assigned; remaining cell values are not recoverable from the source. Note. Analyses were conducted using independent sample t-test comparisons, unless noted otherwise. a n = 20. b n = 21. c Estimated Wechsler Adult Intelligence Scale–Revised Full Scale IQ derived from the Shipley Institute of Living Scale (Shipley, 1940; Zachary, Crumpton, & Spiegel, 1985). d Chi-square statistic.]

Computer Anxiety Rating Scale. Developed by Rosen, Sears, and Weil (1992), the Computer Anxiety Rating Scale (CARS) is composed of 20 ques-
tions ranging from general technology contact (e.g.,
“resetting a digital clock after the electricity has been
off”) to varying levels of computer-specific experience
(e.g., “learning to write computer programs”) and is in-
tended as a self-report measure of computer-related
anxiety symptoms and cognition. Responses are an-
chored, 5-point Likert ratings, ranging from 1 (not at
all), indicating a low level of subjective anxiety, to 5
(very much), indicating a high level of subjective anxi-
ety. Summed responses yield a total CARS score, with
scores ranging from a minimum of 20 to a maximum of
100, with higher scores reflecting greater degrees of
computer-related anxiety and negative attitudes toward
working with computers and computer-mediated tech-
nology. Normative ranges based on a large college-
educated population indicate that CARS total scores
within the 20 to 40 range reflect the absence or very
low levels of computer-related anxiety; scores within
the 41 to 49 range reflect a moderate level of computer-
related anxiety; and CARS total scores within the 50 to
100 range indicate high levels of computer-related anx-
iety (Rosen et al., 1992).
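Because the scoring rule is fully specified above, it can be expressed concretely. The following is a minimal sketch of the CARS total-score computation and the Rosen et al. (1992) normative cut ranges as just described; the function name and data layout are illustrative assumptions, not part of the published instrument.

```python
# Hypothetical helper illustrating CARS total scoring and the normative
# cut ranges described above (Rosen, Sears, & Weil, 1992). Names and
# data layout are illustrative, not from the original study materials.

def score_cars(responses: list[int]) -> tuple[int, str]:
    """Sum 20 Likert items (1-5) and classify the total score."""
    if len(responses) != 20:
        raise ValueError("CARS has exactly 20 items")
    if any(r < 1 or r > 5 for r in responses):
        raise ValueError("each item is rated on a 1-5 scale")
    total = sum(responses)  # possible range: 20-100
    if total <= 40:
        level = "absent/very low computer-related anxiety"
    elif total <= 49:
        level = "moderate computer-related anxiety"
    else:  # 50-100
        level = "high computer-related anxiety"
    return total, level

# Example: a respondent endorsing mostly low anxiety
total, level = score_cars([2, 1, 3, 2, 1, 2, 2, 1, 1, 2,
                           3, 2, 1, 1, 2, 2, 1, 3, 2, 1])
print(total, level)  # 35 -> absent/very low computer-related anxiety
```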
Shipley Institute of Living Scale. The Shipley Institute of Living Scale (SILS; Shipley, 1940) was administered to obtain an estimate of current intellectual
functioning. The SILS has been widely used in re-
search and clinical settings where administration time
may be limited, yet a gross estimation of intellectual
skills is necessary for participant selection. The SILS
is divided into two components: (a) a verbal synonym
knowledge subtest comprised of 40 multiple-choice
items (e.g., “jocose = humorous, paltry, fervid, or
plain”) and (b) 20 completion problems tapping logical
abstraction and sequencing abilities (e.g., “AB BC CD
D_”). Zachary, Crumpton, and Spiegel (1985) devel-
oped regression equations allowing for the conversion
of total SILS scores to estimated Wechsler Adult Intel-
ligence Scale–Revised (WAIS–R) Full Scale IQ (FSIQ)
scores; data from their conversion study showed esti-
mated WAIS–R FSIQ and SILS scores were highly
and significantly correlated (r = .87). Other re-
searchers have reported that SILS and WAIS–R FSIQ
scores share a more modest positive correlation (r = .73; Dalton, Pederson, & McEntyre, 1987).
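The published Zachary, Crumpton, and Spiegel (1985) regression equations (which also incorporate age) are not reproduced here; the sketch below only illustrates the general linear form of such a conversion, with placeholder coefficients that are loudly not the published values.

```python
# Illustrative form of a SILS-to-WAIS-R FSIQ conversion. INTERCEPT and
# SLOPE below are PLACEHOLDERS for illustration only; they are not the
# coefficients published by Zachary, Crumpton, and Spiegel (1985).

INTERCEPT = 60.0  # placeholder coefficient (hypothetical)
SLOPE = 0.9       # placeholder coefficient (hypothetical)

def estimate_fsiq(sils_total: int) -> float:
    """Map a SILS total score onto an estimated WAIS-R FSIQ scale."""
    return INTERCEPT + SLOPE * sils_total

print(estimate_fsiq(60))  # 114.0 under the placeholder coefficients
```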
Halstead-Reitan Finger Tapping and Grooved Pegboard Tests. Described as sound estimates of
psychomotor performance speed and manual dexterity
(Reitan & Wolfson, 1985), the Finger Tapping and
Grooved Pegboard Tests were employed as possible co-
variates to account for computer task differences inde-
pendent of baseline motor abilities. The Finger Tapping
Test is administered bilaterally, with examinees de-
pressing a key connected to a counter as quickly as pos-
sible for 10 sec. Finger Tapping Test scores are derived
from the average count for each hand performance over
the course of 5 successive trials within ?5 tapping
counts. Performance on the Grooved Pegboard Test re-
quires examinees to rotate and place irregularly shaped
metal pins into a board with a 5 × 5 array of holes as
quickly as possible. Pin placement is performed for
each hand separately, resulting in two time-to-completion scores. Only dominant hand (i.e., hand preference for use of a computer mouse) performance on these two measures was collected in this study.
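One way to operationalize the Finger Tapping consistency rule just described (the mean of 5 successive trials whose counts fall within ±5 taps) is sketched below, assuming trial counts arrive as a simple list; names are illustrative assumptions.

```python
# Minimal sketch of the Finger Tapping scoring rule described above:
# the score is the mean of 5 successive 10-sec trials whose counts all
# fall within a 5-tap band. Names and data layout are assumptions.

def finger_tapping_score(trials: list[int]) -> float | None:
    """Return the mean of the first run of 5 consecutive trials whose
    counts differ by no more than 5 taps, or None if no run qualifies."""
    for i in range(len(trials) - 4):
        window = trials[i:i + 5]
        if max(window) - min(window) <= 5:
            return sum(window) / 5
    return None  # in practice the examiner would administer more trials

print(finger_tapping_score([51, 54, 50, 53, 52]))  # 52.0
print(finger_tapping_score([40, 55, 48, 60, 45]))  # None
```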
Remote Neuropsychological Assessment–Category Test. The Remote Neuropsychological Assessment–
Category Test (RNA–CT) is a computerized, multimedia
version of the CT developed by Jeffrey Browndyke to
utilize item response feedback (i.e., bell and buzzer) sim-
ilar to the Halstead–Reitan Category Test (HRCT; Reitan
& Wolfson, 1985), combined with novel visual cues (e.g., green and red lights) that provide an additional mode of response feedback for the examinee
(Browndyke, 2001). The instructions for the RNA–CT
are similar to those used by the HRCT and Booklet
Category Test (BCT; DeFilippis & McCampbell, 1991)
with alterations necessitated by the computerized admin-
istration format. Additionally, the RNA–CT differs from
the HRCT and BCT in method of instruction presenta-
tion. Rather than instructions for task completion being
read to the test participant only by the examiner,
RNA–CT instructions are presented in text form on the
computer monitor and simultaneously in auditory form
via computer speakers. Examinees are also able to re-
peat task instructions as many times as is necessary for
task comprehension. The subtest composition, stimuli,
and categorization principles are virtually identical to
those initially developed and revised by Halstead (1947; Halstead & White, 1950) and subsequently reproduced in print by Simmel and Counts (1957). The scoring of the RNA–CT follows the same conventions as the HRCT and BCT (i.e., total number of errors out of 208 stimulus
items). In addition to a total error score, the RNA–CT
provides a method of determining the number of errors
per subtest, as well as response times per item and total
response times for error and correct responses measured
in milliseconds. Although considerably different from
prior CT versions in its mode of presentation and re-
sponse feedback, performance on the RNA–CT was
demonstrated to be equivalent to BCT performance in a
college-based population (Browndyke, 2001).
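The three RNA–CT dependent variables used later in this study (total errors out of 208 items, total time for error responses, and total time for correct responses, in milliseconds) can be derived from per-item logs roughly as sketched below. The record layout is an assumption for illustration, not the RNA–CT's actual data format.

```python
# Sketch of how the RNA-CT dependent variables described above could
# be aggregated from per-item logs. The record layout is assumed.

from dataclasses import dataclass

@dataclass
class ItemResponse:
    subtest: int    # RNA-CT subtest number
    correct: bool   # response feedback (bell/green vs. buzzer/red)
    rt_ms: float    # response time in milliseconds

def summarize(responses: list[ItemResponse]) -> dict:
    assert len(responses) <= 208, "the CT has 208 stimulus items"
    total_errors = sum(not r.correct for r in responses)
    error_time = sum(r.rt_ms for r in responses if not r.correct)
    correct_time = sum(r.rt_ms for r in responses if r.correct)
    errors_per_subtest: dict[int, int] = {}
    for r in responses:
        if not r.correct:
            errors_per_subtest[r.subtest] = errors_per_subtest.get(r.subtest, 0) + 1
    return {"total_errors": total_errors,
            "total_error_time_ms": error_time,
            "total_correct_time_ms": correct_time,
            "errors_per_subtest": errors_per_subtest}
```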
Administration of the RNA–CT was performed on a
Dell Pentium 166 MHz personal desktop computer
(model type, OptiPlex GM?5100), equipped with
16 MB of RAM, 32-bit file system and virtual memory,
100 MB of hard disk storage space, and a Microsoft
Windows 95 operating system (Ver. 4.00.950).
RNA–CT stimuli were displayed on a 15″ viewable
size monitor using an S3 SVGA graphics card, and the
graphical resolution for the computer was set to a 256-
color palette. Participant responses were input via a
standard two-button computer mouse. Auditory presen-
tation of the RNA–CT task instructions and response
feedback was channeled through two external com-
puter speakers, which were connected to a Creative
Labs PCI128 sound card installed in the computer.
Sound levels on the speakers were varied according to
the listening preference of the participant, but in all
cases the sound level was loud enough to be indepen-
dently perceived by the experiment administrator.
Procedure

Materials were presented in the following order for each participant:

• Neurological/psychiatric history screening questionnaire and interview
• Dominant hand performance on the Grooved Pegboard and Finger Tapping Tests (Reitan & Wolfson, 1985)
• SILS (Shipley, 1940)
• CARS (Rosen et al., 1992)
• RNA–CT (Browndyke, 2001)
Each participant was familiarized with the computer
apparatus before starting the RNA–CT task. Hand
preference was determined prior to completing the
Grooved Pegboard and Finger Tapping Tests, and the
computer mouse was transferred to either the right or
left side of the participant to facilitate use of the domi-
nant hand for task responding. Once each participant
was comfortable with the apparatus, the experimenter
accessed the RNA–CT and started the task. During
task completion, the experimenter was situated behind
and out-of-view of the participant, allowing the
RNA–CT program to act as the primary administrator
of the task. If at any time during completion of the
RNA–CT the participant required additional assistance
from the experimenter, brief task clarification was al-
lowed. In all cases, though, the experimenter only reit-
erated the instructions presented for the task unless the
nature of the participant’s question was mechanical or procedural.
Participants were assigned to groups (normal or im-
pairment history) based on responses on the impair-
ment history screening questionnaire and interview. A
separate independent variable (IV) measuring the level
of computer-related anxiety as measured by CARS
total score was computed based on the median CARS
total score for each group (normal group CARS me-
dian = 27; impairment history group CARS median = 34). Thus, each of the IVs had two levels and resulted
in four distinct groups: normal, low computer-related
anxiety; impairment history, low computer-related anx-
iety; normal, high computer-related anxiety; and im-
pairment history, high computer-related anxiety. Total
error count, total time for error responses, and total
time for correct responses on the RNA–CT served as
the dependent variables (DVs) in this investigation.
The latter two DVs, though relatively novel to the CT
assessment paradigm, have been employed in prior
studies utilizing computerized versions of the CT
(Beaumont, 1975; Browndyke, 2001; Choca & Morris,
1992; Rattan, Dean, & Fischer, 1986). It would have
been preferable to use average CT error and correct re-
sponse times, but the variable difficulty of items on the
CT made these measurements impossible. In an effort
to isolate differences in CT response times independent
of psychomotor speed and manual dexterity, Finger
Tapping and Grooved Pegboard Test performances
were selected as possible covariates in subsequent
analyses. Once the appropriateness of the covariates was determined, analyses were conducted using multivariate procedures (a 2 × 2 multivariate analy-
sis of covariance [MANCOVA]) on the three RNA–CT
outcome DVs. Post hoc analyses were conducted using
Bonferroni multiple comparison corrections.
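The grouping and omnibus analysis just described can be sketched as follows, assuming a flat data file with one row per participant. Column names and the file are hypothetical, and statsmodels' MANOVA does not reproduce the SPSS hierarchical (Type I) adjustment used in the original analyses; this is an approximation of the design, not the authors' actual code.

```python
# Sketch of the design described above: a within-group median split of
# CARS total scores, then a 2 x 2 multivariate analysis of the three
# RNA-CT outcomes with dominant hand Grooved Pegboard t score entered
# as a covariate. Column names and the data file are hypothetical.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("rna_ct_study.csv")  # hypothetical flat file

# Median split of CARS total scores computed within each study group;
# scores at or below the group median are classified as "low" here.
group_median = df.groupby("group")["cars_total"].transform("median")
df["anxiety"] = (df["cars_total"] > group_median).map({True: "high",
                                                       False: "low"})

mv = MANOVA.from_formula(
    "ct_total_errors + ct_error_time + ct_correct_time"
    " ~ pegboard_t + C(group) * C(anxiety)",
    data=df,
)
print(mv.mv_test())  # Wilks' lambda (among others) for each effect
```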
Results

A preliminary comparison of the anticipated psychomotor performance covariates (i.e., dominant hand Finger Tapping and Grooved Pegboard Test t scores) with the study IVs (group status and computer-related anxiety) was carried out using a 2 × 2 between-subject MANCOVA. Results of the analysis using
Wilks’ Lambda criterion revealed a significant effect of the combined DVs for group status, Λ = .79, F(2, 36) = 4.79, p < .05, but not for level of computer-related
anxiety or the interaction between group status and
computer-related anxiety. Univariate analysis of vari-
ance of the DVs in relation to group status revealed a
significant effect for Grooved Pegboard Test perfor-
mance, F(1, 40) = 5.53, p < .05, but not for Finger
Tapping Test performance. These two psychomotor
performance measure scores were subsequently corre-
lated with the primary study DVs (e.g., CT total errors,
CT total time for errors, and CT total time for correct
responses). Neither variable correlated significantly
with any of the primary DVs. However, Grooved Peg-
board performance did approach significance with the
CT total time for errors DV (p = .07). Because of its
general lack of association with any of the primary DVs
and an absence of significant between-group differ-
ences, the Finger Tapping Test variable was dropped as
a possible covariate from subsequent analyses.
A 2 × 2 between-subject MANCOVA was per-
formed on the three CT variables: total error, total time
for error responses, and total time for correct re-
sponses. Adjustment was made for one covariate—
dominant hand Grooved Pegboard Test performance.
Independent variables were group status (normal and
impairment) and level of computer-related anxiety
(low and high). The means and standard deviations of
the DVs based on group status and level of computer-
related anxiety are listed in Table 3, and graphical rep-
resentations of CT total errors and error time as a func-
tion of the IVs are noted in Figures 1 and 2.
[Table 3. Mean Scores and Standard Deviations for Category Test Variables as a Function of Group Status and Level of Computer-Related Anxiety. Columns: Total Error, Total Error Time, and Total Correct Time (M and SD each); cell values are not recoverable from the source. Note. Computer-related anxiety groups were defined by a median split of the total score on the Computer Anxiety Rating Scale (Rosen et al., 1992). Time scores are reported in seconds. a n = 20. b n = 21.]

[Figure 1. Relationship between group status and computer-related anxiety level on Category Test error scores.]

[Figure 2. Relationship between group status and computer-related anxiety level on Category Test total time for error responses.]

SPSS (Windows, Ver. 10.01) was used for the analyses, with hierarchical adjustment for nonorthogonality. Order of entry of the independent variables was
group status, then computer-related anxiety level. A
total of 41 participants were included in the analyses: 20 individuals in the normal group and 21 individuals in the impairment history group. There were no univariate or multivariate within-cell outliers at p <
.001. The assumptions of normality, linearity, and ho-
mogeneity of variance-covariance matrices were ade-
quately met. Correlational analyses between the total
CT error and total time for error responses DVs were at
the upper limit of what is thought to be acceptable (see
Table 4). However, a log-determinant of the pooled
within-cell correlation matrix was sufficiently different
from zero, suggesting that multicollinearity was not a
problem for the current analyses (Tabachnick & Fidell,
2001). The covariate was judged adequately reliable
for covariance analysis.
With the use of Wilks’ Lambda criterion, the com-
bined DVs (with the inclusion of the Grooved Peg-
board Test performance covariate), were significantly
affected by both group status, Λ = .61, F(3, 34) = 7.17, p < .01, and level of computer-related anxiety, Λ = .71, F(3, 34) = 4.58, p < .01, but not by their interaction (p = .27; see Table 5). Given the sensitivity of
the CT to brain dysfunction, it was not surprising that
the results would reflect a strong association between
group status (normal vs. impairment history) and the
combined DVs, partial η² = .39. A modest association
was noted between level of computer-related anxiety
and the DVs, partial η² = .29.
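For an effect with a single hypothesis degree of freedom, as in this 2 × 2 design, multivariate partial eta squared can be recovered directly from Wilks' Lambda, and the reported effect sizes are consistent with this identity: η_p² = 1 − Λ^(1/s), and with s = 1, η_p² = 1 − Λ, so 1 − .61 = .39 for group status and 1 − .71 = .29 for computer-related anxiety.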
Follow-up analyses of covariance (ANCOVA), con-
trolling for Grooved Pegboard performance, were con-
ducted for group status and computer-related anxiety
level on each of the DVs. Using the Bonferroni
method, each ANCOVA was tested at the .008 level.
The ANCOVA for group status on the CT total errors
variable was significant, F(1, 36) = 22.17, p < .001, η² = .38. Differences in the CT total errors DV were also significant for the level of computer-related anxiety, F(1, 36) = 12.62, p = .001, η² = .26. The CT total
time for errors was not significant for group status
(p > .05). Trends toward significance were noted for level of computer-related anxiety on CT total time for correct responses (p = .06), as well as for the interaction between group status and level of computer-related anxiety on this DV (p = .07).
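A sketch of one such follow-up ANCOVA appears below. The .008 criterion reported above is presumably .05 spread across six follow-up tests (three DVs crossed with two effects); column names follow the hypothetical layout used in the earlier sketch, and this is an illustration rather than the authors' actual SPSS syntax.

```python
# Sketch of a follow-up ANCOVA on one dependent variable, tested at a
# Bonferroni-adjusted alpha of ~.008 as described above. Column names
# and the data file are hypothetical assumptions.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("rna_ct_study.csv")  # hypothetical data file
# Reconstruct the within-group median split used earlier.
group_median = df.groupby("group")["cars_total"].transform("median")
df["anxiety"] = (df["cars_total"] > group_median).map({True: "high",
                                                       False: "low"})

ALPHA = 0.05 / 6  # ~.008 across the family of follow-up tests

model = smf.ols(
    "ct_total_errors ~ pegboard_t + C(group) * C(anxiety)", data=df
).fit()
anova = sm.stats.anova_lm(model, typ=1)  # sequential (Type I) entry
print(anova)
print(anova["PR(>F)"] < ALPHA)  # which terms survive the correction
```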
Discussion

Significant differences between the impairment history and healthy control groups were noted for CT total error but not for the CT timing variables. The former finding was not entirely unexpected given the well-documented sensitivity of the CT to brain dysfunction (Choca, Laatsch, Wetzel, & Agresti, 1997). A similar pattern was noted for the level of computer-related anxiety, where higher CARS scores were associated with more total CT errors, suggesting that computer-related anxiety is inversely related to computerized neuropsychological assessment performance.
One of our aims was to examine the relationship be-
tween computer-related anxiety and neuropsychologi-
cal test performance in a relatively computer savvy
population; that is, college students with at least a
working knowledge of computers and their use. By
holding generational effects and coincident fears associated with a general lack of computer familiarity to a minimum, it was hypothesized that the effect of affective variables on computerized assessment performance could be studied in relative isolation. Our results
demonstrate that even in a sample thought to be rela-
tively unaffected by computer-use issues typically
plaguing older individuals (e.g., complete lack of com-
puter familiarity, distrust of technology, etc.), self-
reported computer-related anxiety of even a mild-to-
moderate severity may have a mediating effect on test performance.
It is important to note the limitations inherent in our
investigation, especially the small sample size and lack
of a direct measurement of computer familiarity and
computer-related self-efficacy and their relationship
with CARS scores. However, the demographics of our
sample, pulled almost completely from the Nintendo
generation (e.g., born after 1980, high educational
level, Internet users, etc.), would suggest that these
factors, particularly computer familiarity, were likely
to be high in comparison to the population as a whole
(Calhoun, Staley, Hughes, & McLean, 1989; Rubey,
1999). Additionally, while helpful to the investigation
of a possible synergistic relationship between cogni-
tive impairment and negative computer-related affect
on computerized test outcome, our use of participants with a history of neurological, psychiatric, or substance abuse issues for impairment group inclusion has somewhat limited generalizability. These individuals, some of whom reported significant medical histories (e.g., seizures, moderate head injury, and treatment for psychiatric disorder), were still functional enough to perform at a collegiate level and are not likely to be representative of most clinic or patient samples. As a result, while suggesting a trend toward significance, the lack of an interaction between impairment status and computer-related anxiety may have been a byproduct of our use of this group. It is quite possible, if our observed trend reflects the combination of brain injury or impairment and computer-related anxiety, that significance may have been obtained had a more acutely and objectively impaired patient sample been employed for comparison. This interaction, if it does exist, carries profound implications for test development, application, and interpretation and must be clarified if confidence is to be placed in impairment determinations based on computerized evaluation procedures. However, before explicitly stating that computer-related anxiety and brain dysfunction interact to magnify decrements in assessment performance, it will be necessary to study the effect of negative computer-related affect on other computerized neuropsychological measures and in various patient population samples.

Issues of computer familiarity, level of computer-related self-efficacy, affective variables specific to technology, idiosyncratic human-computer interaction, and computer apparatus ergonomics must all be controlled and accounted for if neuropsychology as a whole is to accept and adopt computerized neuropsychological assessment as clinically sound and reliable. This investigation demonstrates the potential impact that one of these factors may have on testing performance and, as such, illuminates only one piece of the puzzle necessary to our complete understanding of the computerized neuropsychological assessment process.

[Table 4. Correlation Coefficients for Relations Among Dependent Variables. Variables: CT Total Errors, CT Total Error Time, and CT Total Correct Time; coefficient values are not recoverable from the source. Note. CT = Category Test. *p < .01.]

[Table 5. Multivariate and Univariate Analyses of Covariance for Category Test Measures. Effects: Group Status (G), Anxiety Level (C), and G × C on Total Error, Total Error Time, and Total Correct Time; test statistics are not recoverable from the source. Note. Multivariate F ratios were generated from Wilks' Lambda statistic. The covariate is the t score for dominant hand (i.e., hand used to control the computer mouse during the task) performance on the Halstead Grooved Pegboard Test; normative t-score conversions were derived from Heaton, Grant, and Matthews (1991). Multivariate df = 3, 34; univariate df = 1, 40. *p < .05. **p < .01. ***p < .001.]

References
Adams, K. M., & Heaton, R. K. (1987). Computerized neuropsy-
chological assessment: Issues and applications. In J. Butcher
(Ed.), Computerized psychological assessment (pp. 355–365).
New York: Basic Books.
American Psychological Association. (1986). Guidelines for computer-based tests and interpretations. Washington, DC: Author.
American Psychological Association Committee on Professional
Standards & Committee on Psychological Tests and Assess-
ment. (1987). Division 40: Task force report in computer-
assisted neuropsychological evaluation. The Clinical Neuropsy-
chologist, 2, 161–184.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of be-
havior change. Psychological Review, 84, 191–215.
Bandura, A. (1982). Self-efficacy mechanism in human agency.
American Psychologist, 37, 122–147.
Beaumont, J. G. (1975). The validity of the category test administered
by on-line computer. Journal of Clinical Psychology, 31, 458–462.
Browndyke, J. N. (2001). The Remote Neuropsychological
Assessment–Category Test: Development and validation of a
computerized, Internet-based neuropsychological assessment
measure (Doctoral dissertation, Louisiana State University,
2001). Dissertation Abstracts International, 62, 2951B.
Calhoun, M., Staley, D., Hughes, L., & McLean, M. (1989). The re-
lationship of age, level of formal education, and duration of em-
ployment toward attitudes concerning the use of computers in
the workplace. Journal of Medical Systems, 13, 1–9.
Choca, J. P., Laatsch, L., Wetzel, L., & Agresti, A. (1997). The Hal-
stead Category Test: A fifty year perspective. Neuropsychology
Review, 7, 61–75.
Choca, J., & Morris, J. (1992). Administering the category test
by computer: Equivalence of results. The Clinical Neuropsychologist, 6, 9–15.
Dalton, J. E., Pederson, S. L., & McEntyre, W. L. (1987). A compar-
ison of the Shipley vs. WAIS–R subtests in predicting WAIS–R
full scale IQ. Journal of Clinical Psychology, 43, 278–279.
DeFilippis, N. A., & McCampbell, E. (1991). The Booklet Category Test manual: Research and clinical form. Odessa, FL: Psychological Assessment Resources.
Denner, S. (1977). Automated psychological testing: A review. British Journal of Social and Clinical Psychology, 16, 175–179.
George, C. E., Lankford, J. S., & Wilson, S. E. (1992). The effects
of computerized versus paper-and-pencil administration on
measures of negative affect. Computers in Human Behavior, 8,
Halstead, W. C., & White, J. B. (1950). Manual for the Halstead
Battery of Neurophysiological Tests (Mimeographed manual).
Chicago: University of Chicago Press.
Heaton, R. K., Grant, I., & Matthews, C. G. (1991). Comprehen-
sive norms for an expanded Halstead-Reitan battery: Demo-
graphic corrections, research findings, and clinical applications.
Odessa, FL: Psychological Assessment Resources.
Hedl, J., O’Neil, F., & Hansen, D. (1973). Affective reactions to-
ward computer-based intelligence testing. Journal of Consult-
ing and Clinical Psychology, 40, 217–222.
Johnson, J., & Johnson, K. (1981). Psychological considerations
related to the development of computerized testing stations. Be-
havior Research Methods and Instrumentation, 13, 421–424.
Johnson, D. F., & White, C. B. (1980). Effects of training on com-
puterized test performance in the elderly. Journal of Applied
Psychology, 65, 357–358.
Kane, R. L., & Kay, G. G. (1997). Computer applications in neu-
ropsychological assessment. In G. Goldstein & T. Incagnoli
(Eds.), Contemporary approaches to neuropsychological as-
sessment (pp. 359–392). New York: Plenum.
Llabre, M., Clements, N., Fitzhugh, T., Lanellotta, G., Mazzagatti,
R., & Quinones, N. (1987). The effect of computer-administered
testing on test anxiety and performance. Journal of Educational
Computing Research, 3, 429–433.
Long, C. J., & Wagner, M. (1986). Computer applications in neu-
ropsychology. In D. Wedding (Ed.), Neuropsychology hand-
book (pp. 548–569). New York: Springer-Verlag.
Marcoulides, G. (1988). The relationship between computer anxi-
ety and computer achievement. Journal of Educational Com-
puting Research, 4, 151–158.
Rattan, G., Dean, R. S., & Fischer, W. E. (1986). Response time as
a dependent measure on the category test of the Halstead-Reitan
Neuropsychological Test Battery. Archives of Clinical Neu-
ropsychology, 1, 175–182.
Reitan, R. M., & Wolfson, D. (1985). The Halstead-Reitan Neu-
ropsychological Battery: Theory and clinical interpretation.
Tucson, AZ: Neuropsychology Press.
Rosen, L. D., Sears, D. C., & Weil, M. M. (1992). Measuring
technophobia. A manual for the administration and scoring of
three instruments: Computer Anxiety Rating Scale (Form C),
General Attitudes Toward Computers Scale (Form C) and Com-
puter Thoughts Survey (Form C). Dominguez Hills: California
State University, Computerphobia Reduction Program.
Rozell, E. J., & Gardner, W. L. (1999). Computer-related success
and failure:A longitudinal field study of the factors influencing
computer-related performance. Computers in Human Behavior,
Rubey, T. C. (1999). Profile of computer owners in the 1990s.
Monthly Labor Review, 122, 41–42.
Russell, E. W., Neuringer, C., & Goldstein, G. (1970). Assessment
of brain damage: A neuropsychological key approach. New York: Wiley.
Salvendy, G. (1993). Handbook of human factors and ergonomics.
New York: Wiley.
Shipley, W. (1940). A self-administering scale for measuring intel-
lectual impairment and deterioration. Journal of Psychology, 9, 371–377.
Shneiderman, B. (1998). Designing the user interface: Strategies
for effective human-computer interaction (3rd ed.). New York: Addison-Wesley.
Simmel, M. L., & Counts, S. (1957). Some stable response deter-
minants of perception, thinking, and learning:A study based on
the analysis of a single test. Genetic Psychology Monographs:
Child Behavior, Animal Behavior, and Comparative Psychol-
ogy, 56, 3–157.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate sta-
tistics (4th ed.). Boston: Allyn & Bacon.
Ward, T., Hooper, S., & Hannafin, K. (1989). The effect of comput-
erized tests on the performance and attitudes of college stu-
dents. Journal of Educational Computing Research, 5,
Zachary, R. A., Crumpton, E., & Spiegel, D. E. (1985). Estimating
WAIS–R IQ from the Shipley Institute of Living Scale. Journal
of Clinical Psychology, 41, 532–540.
Original submission September 13, 2002
Accepted September 20, 2002