THE WORD READING TEST OF EFFORT IN ADULT
LEARNING DISABILITY: A SIMULATION STUDY
David C. Osmon, Elizabeth Plambeck, Liesa Klein,
and Quintino Mano
University of Wisconsin—Milwaukee, Milwaukee, WI, USA
The Word Reading Test (WRT) was designed to detect effort problems specific to a learn-
ing disability sample. The WRT and the Word Memory Test (WMT) were administered to
two simulator groups and a normal control group. The WRT showed excellent receiver operating
characteristics (e.g., 90% sensitivity and 100% positive predictive power) and outper-
formed the WMT in detecting both reading and mental speed simulators. This finding
and a double dissociation between reading and speed simulators on WRT errors and reac-
tion time suggested specific effort effects while poor effort of simulators on the WMT
suggested general effort effects. Results are supportive of the WRT as a potential effort
indicator in learning disability.
It is well established that effort is an important variable when trying to inter-
pret neuropsychological results in any case where disincentives to fully motivated
performance are present (Larrabee, 2003). In fact, some research demonstrates that
effort accounts for greater variance in performance than even neurological status
(Green, 2003). However, most neuropsychological effort tests are designed to
detect poor effort in memory performance. While such tests have been shown to
work well in traumatic brain injury (TBI; Binder, 1993; Nies & Sweet, 1994; Slick,
Hopp, Strauss, & Spellacy, 1996), their performance in other neuropsychological
disorders where memory abilities are less relevant is not clear (e.g., learning disability).
Some researchers have attributed the success of forced choice memory tasks in
TBI to their appeal to layperson folklore about the nature of deficits in neurological
disorder (Gouvier, Prestholdt, & Warner, 1988). However, it is not clear that such
tests work well in populations who have disorders where memory ability is not gen-
erally part of the layperson’s conception of the disorder’s effects (e.g., learning dis-
ability). If, in fact, poor effort results from an attempt to simulate the effects of a
disorder, then any individual’s idiosyncratic conception of those effects will play a
role in determining performance. It is possible that memory effort tests work well
because memory deficits are widely believed to be a part of any neurological
disorder. However, there are probably individuals who are exceptions to this rule
and do not hold this general conception of neurological disorders and memory
deficits. Such individuals would likely be false negatives on memory effort tests.
Additionally, a disorder such as learning disability may not conform to the
general folklore about neurological disorders and memory deficits.
Address correspondence to: David C. Osmon, Ph.D., ABPP, Department of Psychology, University
of Wisconsin—Milwaukee, 2441 E. Hartford Ave., Milwaukee, WI 53211. E-mail: firstname.lastname@example.org
Accepted for publication: October 11, 2004.
The Clinical Neuropsychologist, 20: 315–324, 2006
Copyright © Taylor and Francis Group, LLC
ISSN: 1385-4046 print / 1744-4144 online
In contrast to the folklore conception, some might argue that memory mea-
sures are sensitive in general to effort independent of an individual’s conception of
the disorder’s effects (Green, personal communication). In this view, effort is a
dimensional construct that exerts its effect through some general process, such as
the generally effortful nature of memory tasks. Thus, the effect does not arise
because the person is necessarily attempting to simulate the neuropsychological
effects of a disorder, but because the motivation to perform is less than complete.
Such a conception is suggested by a positive correlation between performance on
effort tests and neuropsychological tests (Green, 2003).
These two conceptions of effort might be termed the ‘‘domain-specific’’ and
‘‘general-global’’ hypotheses, similar to Lanyon’s (1997) ‘‘global signs of lying’’
and ‘‘accuracy of knowledge’’ conception of malingering. The domain-specific
hypothesis holds that individuals’ difficulty on effort tests arises from poor effort
on specific tests that are face valid for the types of cognitive deficits attributed to
the disorder in question by laypersons. Larrabee (2003) has shown, for example, that
malingering can occur in a wide array of cognitive domains as represented by tests
of visual perception (Visual Form Discrimination), motor functioning (Finger
Tapping), memory/attention (Reliable Digit Span), problem-solving (Wisconsin
Card Sorting), and symptom exaggeration (MMPI Lees-Haley Fake Bad Scale). Furthermore,
poor effort may be detected in some but not all of these domains in any given
individual, and most researchers advocate using a wide array of effort measures
spaced throughout the evaluation because poor effort is likely to appear on only
some instruments. Conversely, the generality
hypothesis attributes effort problems to broader issues and would predict effortful
tasks of all kinds to be sensitive to effort issues. Thus, a generally effortful task, such
as typical memory-based effort tests, would be sensitive to effort even in disorders not
generally associated with memory problems by the lay public as a whole.
The present study sought to validate a test of effort (WRT) designed specifi-
cally according to likely layperson conceptions of learning disability. A word is pre-
sented on a computer screen for a brief duration, then two words are immediately
presented on a subsequent screen without delay and without backward mask, such
that the task would not likely tax word reading skills even in poor readers. The
choice is between the actual target word and a foil that contains a similar but incor-
rect choice. Thus, foils consist of choices with characteristics that might likely be
attributed to learning problems, such as mirror letters (develop vs. bevelop), homo-
phones (too vs. two), and additions/deletions of letters (e.g., through vs. thorough).
Additionally, task instructions indicate that speedy but accurate performance is
important and both reaction time and error scores are included.
The availability of both error and reaction time scores allowed a test of
whether specific simulation was discernable. That is, simulation specific to reading
errors or mental speed was tested. If reading-specific simulation is possible, then a
simulator group with instructions to feign reading deficits would be expected to show
greater error scores but comparable reaction times to a comparison group. Con-
versely, if specific mental speed simulation can occur, then a simulator group with
instructions to feign mental speed deficits would be expected to show greater reac-
tion times but fewer errors than the comparison group.
Finally, the Word Memory Test (Green, 2003) was administered for two pur-
poses. First, as a known malingering measure that equals or outperforms many
existing forced choice effort tests (Gervais, Rohling, Green, & Ford, 2004; Green,
Berendt, Mandel, & Allen, 2000; Tan, Slick, Strauss, & Hultsch, 2002), it serves as
a standard against which to compare the learning disability measure developed for
this study. Second, the viability of standard effort measures in learning disability
simulation can be tested. If simulators perform poorly on the Word Memory Test,
then some evidence of effort as a dimensional construct related generally to motiv-
ation is provided.
The following hypotheses were tested:
1. The validity of the WRT as a measure of learning disability effort was tested by
determining its ability to distinguish a simulated learning disability group from a
normal control group.
2. The validity of the WRT was further tested by comparing its ability to distinguish
simulator and control groups with that of the WMT.
3. The general effort versus specific effort hypotheses were tested by comparing the
reading and mental speed simulator groups to the control group on both effort
tests. Failure on both measures would be evidence of a general effort effect, while
failure on only the WRT would support a specific effort effect.
4. A further test of the generality versus specificity hypotheses compared a reading
simulator to a speed simulator group. If the speed simulator group was selectively
impaired on reaction time and the reading simulator group was selectively
impaired on the error score, then strong support for a specificity hypothesis
would be assumed.
METHOD

Participants
Eighty-four college students (53 were female), recruited for course credit and
treated in accordance with human subject review board dictates, were divided
equally and randomly into three groups of participants. The first two groups con-
sisted of simulators given instructions to portray someone with reading difficulties
or someone with slowed thinking associated with learning disability. The third group
consisted of non-simulator control participants. Neither age nor education differed
between groups according to one-way ANOVAs, with means and standard devia-
tions given in Table 1. Also in Table 1, groups did not differ in WAIS-R estimated
IQ as derived from the Shipley-Hartford.
Participants were excluded on the basis of reported history of neurological
difficulty, learning disability, or attention deficit disorder. Reported handedness
for writing was 89% right-handed, and all participants were Caucasian.
Procedure
Participants were first interviewed individually to collect initial
demographic data and then were given the Shipley-Hartford Test. Next, participants
were randomly assigned to three groups (normal effort, reading simulator, and men-
tal speed simulator) according to instructions they received in a sealed envelope.
Instructions were sealed in an envelope so that the experimenter was blind to group
assignment of the participants, and participants were told not to reveal their
instructions to the experimenter. If participants had a question, a graduate student not
participating in the data collection was on hand. Participants for the non-simulator
group were told that they were in the ‘‘normal control’’ group and that they should
complete the tests to the best of their ability. Reading simulators were given the following instructions:
You should pretend that you are being evaluated for reading problems because
you are concerned about your performance in college and want to receive accom-
modations in the classroom.
Imagine that your performance on the tests today determines whether you
will get various accommodations, such as access to taped books, a tutor, extra
time for exams, being able to take tests alone in a separate room, among others.
Because you think these accommodations will make college easier for you
and that you will get better grades and possibly even receive money from the
Department of Vocational Rehabilitation, you are eager to convince us that
you have the kind of reading problems that occur in dyslexia. Therefore, you
should try to perform on these tests the way you think someone with dyslexia might perform.
I cannot tell you how to fake your performance; you have to decide how to
do that in a way that will get you accommodations.
Mental speed simulators were given the following instructions:
You should pretend that you are being evaluated for slowed thinking associated
with learning disability because you have noticed being the last to finish exams
and having to take longer to learn than your college classmates.
Imagine that your performance on the tests today determines whether you
will get various accommodations in the classroom, such as access to taped books,
a tutor, extra time for exams, being able to take tests alone in a separate room,
among others.
Table 1. Characteristics of the sample for the non-simulator, reading simulator, and speed simulator groups (means and standard deviations; table body not recovered).
Note. WAIS-R est. = estimate from Shipley-Hartford, SD = standard deviation.
Because you think these accommodations will make college easier for you
and that you will get better grades and possibly even receive money from the
Department of Vocational Rehabilitation, you are eager to convince us that
you have these kinds of learning problems. Therefore, you should try to perform
on these tests the way you think someone with learning disability might perform.
I cannot tell you how to fake your performance; you have to decide how to do
that in a way that will get you accommodations.
At this point, participants were given the WMT first and then the WRT. Participants
were asked questions about their experience and then were debriefed at the
end of the study.
Materials
The two effort tests included an existing effort test, the WMT, and the WRT
developed specifically for this study using an experimental environment (DirectRT:
www.empirisoft.com). The WRT was a computerized forced-choice task that pre-
sented participants with a word for three seconds followed by an interstimulus delay
of one second, whereupon two word choices were presented. Participants were asked
to pick which word was the one seen on the previous screen and pressed the ‘‘1’’ key
on the number pad of the computer keyboard to select the left-sided choice or the ‘‘2’’
key for the right-sided choice. Both errors and reaction time were recorded automati-
cally. Stimuli included a total of 40 words, half of which were high-frequency words
(frequency counts greater than 550; Francis & Kucera, 1982) and half of which were
low-frequency words (frequency counts less than 25). The two word choices on the
second screen of each trial consisted of either the correct word or a foil having one
of several characteristics consistent with layperson notions of dyslexia. The foil types
included a word that began with a mirror image letter that did not spell a real word
(e.g., the two choices might be: dall and ball, with ball being the correct choice), an
orthographically illegal letter combination in a non-real word (e.g., the two choices
might be: quicb and quick, with quick being the correct choice), a misspelled word
(cliver and silver), a homophone (witch and which), or an orthographically similar
substituted letter (breeze and breege).
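The foil structure can be sketched as a small lookup (a Python illustration, not the actual DirectRT implementation; where the text does not name the correct choice, the target word below is an assumption):

```python
# Sketch of WRT-style trials based on the foil types described above.
# Each entry maps a foil category to (target, foil). The mirror-letter and
# illegal-combination targets are stated in the text; the homophone target
# in particular is assumed (both members are real words).
TRIALS = {
    "mirror_letter": ("ball", "dall"),          # text: ball is correct
    "illegal_combination": ("quick", "quicb"),  # text: quick is correct
    "misspelling": ("silver", "cliver"),        # assumed: silver is the target
    "homophone": ("which", "witch"),            # assumed target
    "substituted_letter": ("breeze", "breege"), # assumed: breeze is the target
}

def score_response(foil_type, chosen):
    """Return 1 for a correct choice, 0 for an error on this trial."""
    target, _foil = TRIALS[foil_type]
    return 1 if chosen == target else 0
```

Scoring then reduces to counting errors over the 40 trials while keeping the per-trial reaction times, as the test records automatically.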
Because a past critique of analogue research has called for incentives (e.g.,
Rogers, Harrell, & Liff, 1993), a lottery was used. Participants were told that they
had a chance to win $100 based upon their effort (non-simulator group) or their
ability to simulate learning disability (simulator groups). An award of $100 was
given to one participant in each of the three groups based upon random drawing.

RESULTS

Word Reading Test
According to one-way ANOVA, the groups differed on the WRT correct score
(F[2, 80] = 32.39, p < .0001). Fisher's post hoc tests demonstrated that all groups
differed from each other (p < .01) with non-simulators performing best, speed simu-
lators performing intermediately, and reading simulators performing worst. Similar
results were found for WRT reaction times (F[2, 80] = 6.68, p < .005), with the
speed simulators performing worse than both the non-simulators and reading
simulators (p < .01) and no difference (p > .05) between the non-simulators and
reading simulators according to Fisher’s post hoc tests. Table 2 shows the means,
standard deviations, and group differences for both number correct and reaction
time on the WRT. Effect sizes (Cohen's d using pooled variance of the non-simulator
and appropriate simulator groups) were 2.40 and 2.44 for the reading and speed
simulators on the WRT correct score, and 1.50 and 0.74 on WRT reaction time.
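Cohen's d with pooled variance follows directly from the group summary statistics; a minimal sketch (the example values in the comment are hypothetical, not the study's group statistics):

```python
import math

# Cohen's d with pooled variance, as used for the effect sizes above.
def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# With equal SDs, d is simply the mean difference in SD units:
# cohens_d(10.0, 2.0, 25, 8.0, 2.0, 25) -> 1.0
```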
No non-simulator made more than three errors, while 28/31 reading simulators
and 20/27 speed simulators made four or more errors. While it is premature to
use the test clinically, comparative analyses between the WRT and the WMT benefit
from analyzing each test's operating characteristics. Thus, using a cut-off of four
or more errors, the WRT score yielded 90.32% sensitivity, 100% specificity,
a positive predictive power of 100%, and a negative predictive power of 89.29%
for reading simulators. For speed simulators, the same criterion
gave 74.07% sensitivity, 100% specificity, a positive predictive power
of 100%, and a negative predictive power of 78.13%.
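These operating characteristics follow from the classification counts reported above (28/31 reading simulators and 20/27 speed simulators flagged by the four-error cut-off), taking the non-simulator group as the 25 listed in Table 4, none flagged — an assumption consistent with the reported predictive powers. A sketch of the arithmetic:

```python
# Operating characteristics from the classification counts reported above.
def operating_characteristics(tp, fn, tn, fp):
    return {
        "sensitivity": 100.0 * tp / (tp + fn),
        "specificity": 100.0 * tn / (tn + fp),
        "ppv": 100.0 * tp / (tp + fp),  # positive predictive power
        "npv": 100.0 * tn / (tn + fn),  # negative predictive power
    }

# Reading simulators: 28/31 flagged; 0/25 non-simulators flagged.
reading = operating_characteristics(tp=28, fn=3, tn=25, fp=0)
# Speed simulators: 20/27 flagged.
speed = operating_characteristics(tp=20, fn=7, tn=25, fp=0)
# reading -> ~90.32% sensitivity, 100% specificity, 100% PPV, ~89.29% NPV
# speed   -> ~74.07% sensitivity, 100% specificity, 100% PPV, ~78.13% NPV
```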
Item-total correlations revealed that all items contributed adequately to the
test construct, with correlations between .575 and .793. Cronbach's alpha for the WRT
error score was .973. No difference in WRT error score was present between low-
and high-frequency words (t = 1.14, p > .05).
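Cronbach's alpha is a short computation over the per-item scores; a minimal stdlib sketch (the data in the comment are illustrative, not the WRT item scores):

```python
# Cronbach's alpha: internal consistency of a set of item scores.
# items is a list of per-item score lists, one inner list per item,
# all of the same length (one entry per participant).
def cronbach_alpha(items):
    k = len(items)
    n = len(items[0])
    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[p] for item in items) for p in range(n)]
    return (k / (k - 1)) * (1 - sum(pvar(it) for it in items) / pvar(totals))

# Two identical binary items are perfectly consistent:
# cronbach_alpha([[0, 1, 0, 1], [0, 1, 0, 1]]) -> 1.0
```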
Word Memory Test
According to MANOVA, the groups differed on all WMT scores
(Wilks's lambda F[10, 52] = 5.77, p < .0001), with Fisher post hoc tests demonstrating
differences (p < .0001) between non-simulators and simulators but no differences on
any of the six scores (IR, DR, CNS, MC, PA, FR) between the reading simulator and
speed simulator groups. Because WMT scores are not normally distributed,
Kruskal-Wallis nonparametric statistics are strictly more appropriate; however,
all values were highly significant and did not change the interpretation of the results,
so parametric statistics are reported for ease of interpretation. Table 3 shows the means, standard
deviations, and F-values (all significance levels are p < .001) of percent correct on the
six scores of the WMT. Since the WMT has several scores and subjects in this sample
tended to fail some but not other scores, sensitivities and specificities were generated
on the basis of whether a subject failed any score of the test. Using this procedure, the
WMT tended to have good specificity (96.00%) and positive predictive power
(97.44%) but poor sensitivity (65.52%) and negative predictive power (54.55%).
Table 2. Group means and standard deviations for the errors and reaction times on the Word Reading Test (table body not recovered).
Notes. RT = reaction time in milliseconds. a Group differs from non-simulator group. b Group differs from reading simulator group. c Group differs from speed simulator group.
The WMT manual (Green, 2003) notes that scores below 90% on IR and DR
are suspicious for poor effort. Using this criterion, no change in classification occurs
for the non-simulator or reading simulator groups, but the speed simulator group
classification rate improves by 7% (63 to 70%). Only one non-simulator is classified
as having poor effort using the 90% criterion. Further, less-than-chance scores
occurred in 10/31 reading simulators on the IR score and 17/31 on the DR score.
A ‘‘malingering’’ group was composed of any simulator scoring at or below
82.5% on any of the three main WMT scores (IR, DR, or CNS). Effect sizes
(Cohen’s d using pooled variance) were computed using non-simulator group per-
formance as the control comparison for all WMT scores and the WRT Correct score
and WRT Reaction Time. Table 4 shows the means, standard deviations, and effect
sizes, with the largest effects attributable to the WRT scores; all effect sizes,
except that for WRT reaction time in the speed simulators (owing to its large
standard deviation), were large.
Table 3. Group means, standard deviations, and F-values for the errors on the Word Memory Test scores, by group (non-simulators, reading simulators, speed simulators; table body not recovered).
Notes. WMT = Word Memory Test, SD = standard deviation. a Difference between non-simulators and reading simulators. b Difference between non-simulators and speed simulators. All F-values are associated with p < .0001.
Table 4. Group means, standard deviations, and effect sizes for the simulator groups, including only those simulators defined by WMT criteria as having poor effort, compared to the entire non-simulator group; non-simulators N = 25, reading simulators N = 19, speed simulators N = 19 (table body not recovered).
Notes. WMT = Word Memory Test, SD = standard deviation, ES = Cohen's d with pooled variance.
DISCUSSION
The present results both supported the validity of the WRT and provided evidence
that specific populations are best detected by effort tests designed to address
layperson notions of the deficits associated with that population's disorder.
The WRT distinguished both a reading and a mental speed simulator group from a
normal control group, with little overlap between the score distributions.
The WRT achieved a 100% positive predictive power and over 89% negative
predictive power in distinguishing reading simulators from normal controls. Further-
more, only 10 of 84 subjects were incorrectly classified, with most of those subjects
coming from the speed simulator group when using the error criterion. Additionally,
the WRT outperformed the WMT effort measure, even using the liberal criterion of
failure on any of the WMT scores as diagnostic of effort problems
(although the liberal 90% criterion on the WMT does identify another
7% of the speed simulators without classifying any further non-simulators as
having poor effort, it identifies no further reading simulators). Thus, the
WRT appears to be a potentially clinically useful measure to detect effort in adult
learning disability populations. Further research in an actual clinical population is
warranted before using the instrument in a clinical setting.
A second goal of the study was to test the generality hypothesis which says that
memory tests are generally sensitive to effort issues regardless of the population.
Current results showed that learning disability simulators were better detected by
a specific effort measure such as the WRT and had a very large effect size for the
WRT correct score compared to the large but lesser effect sizes for all WMT scores.
Also, those subjects given instructions to simulate specific speed and reading diffi-
culties performed differentially on reaction time and error scores of a specific effort
measure. Both of these results suggest that effort may be the result of specific
layperson constructs about the cognitive processes involved in a disorder.
Despite these results favoring the specificity hypothesis, both reading and
speed simulators also performed poorly on a general effort measure, with a large
proportion of these samples failing at least one measure on the WMT. Additionally,
the WMT had large effect sizes associated with both the reading and speed simulator
groups, and almost a third of the reading simulators scored less than 50% on the IR
score alone, suggesting that a general effort effect was present. Furthermore, the
WMT performed better in some respects with the speed simulator group than the
more specific WRT measure. For example, effect sizes associated with the WMT
generally outstripped effect sizes for the WRT reaction time scores (but not the
WRT correct score) in the speed simulator group. Since simulators were told that
speed is a part of the problem in learning disability, a memory result on the
WMT outperforming a reaction time measure on the WRT suggests that effort is
also partly a general phenomenon (assuming that an interaction between speed
and memory performance is not present). Given such a general effect, simulators
may conceive of memory tests as related to several different kinds of disorders
(e.g., brain injury and learning disability). Alternatively, effortful tests like the
WMT (and probably other memory effort measures like the TOMM and Victoria
Symptom Validity Test) may naturally tap mentally effortful processes that are less
than fully engaged when the incentive to perform is low.
322 DAVID C. OSMON ET AL.
A strong specificity position would argue that a learning disability simulator
group would fail a learning disability effort test but pass a memory effort test. Thus,
the high failure rate of the current simulator groups on the WMT argues against a
strong specificity hypothesis. A strong generality hypothesis would argue that the
WMT should be highly effective in discriminating simulators from non-simulators.
However, the WMT scores had low sensitivity and negative predictive power because
of normal performance of several simulators. As a result, there was large overlap in
WMT error scores between simulator and non-simulator groups, a finding that is not
consistent with a strong generality hypothesis. Additionally, the WRT performed
better than the WMT, with little overlap between scores of the simulator and non-
simulator groups, another finding that is not consistent with a strong generality
hypothesis. Such findings are consistent with a study by Green, Lees-Haley, and Allen
(2002) in which the WMT performed at 100% accuracy in identifying simulators
asked to fake memory impairment. It would appear that if simulators are asked to
fake memory impairment, the WMT performs excellently, and if asked to fake a
different impairment, the WMT performs less well, although it may still do an
adequate job of identifying individuals with poor effort.
Therefore, the present results support both specificity and generality hypo-
theses in effort. Both the better performance of the WRT compared to the WMT
and a double dissociation between the reading and speed simulator groups on the
error and reaction time scores of the WRT argue for a specific effort effect. Con-
versely, the failure of many simulators on the WMT argues for a general effort effect.
While initial results are supportive of the validity of the WRT, limitations of the
current study need to be considered in the use of this instrument. First, studies using
actual learning disabled individuals need to be completed before clinical use of the
instrument can be considered. While receiver operating characteristics of the WRT
are favorable, participants in this study were clearly working toward a malingering
goal, with monetary incentive to succeed. Actual learning disabled clients may have
a less clear goal to malinger (due to guilt about cheating or only an implicit cognitive
schema to fake performance), with less tangible rewards for achieving the goal of
poor effort on testing. With a less clear goal it might be anticipated that actual poor
effort in learning disabled clientele would manifest as less extreme scores on the WRT
and possibly on effort tests in general. Another limitation of the present findings
involves their applicability to a general adult learning disability population. Given
specific effort effects, the present findings might apply more to reading disorder
and less to math and non-verbal learning disability. Finally, order effects between
the WRT and WMT need to be explored in future studies.
Future directions include further evaluation of the WRT’s construct validity.
The WRT should be compared to other effort tests to validate its accuracy beyond
its current comparison with the WMT. Additionally, the WRT should be evaluated
for its relationship with cognitive test performance. Given recent findings (Green,
2003), effort should be viewed as a dimensional construct in which poorer perfor-
mance on effort tests is associated with greater cognitive test performance deficit.
Finding this effect for the WRT would be further confirmation of its status as an
effort test and would provide useful information about the ability of the WRT to
detect poor effort as distinct from malingering. The wide distribution of poor effort
scores in the current sample suggests that levels of effort are represented on the test,
WORD READING TEST323
although a healthy correlation between WRT scores and cognitive performance
would strengthen that assumption. Additionally, effort tests developed in one popu-
lation (e.g., traumatic brain injury) have been found to detect effort in other popula-
tions, consistent with the current findings supporting some aspect of the generality
hypothesis in effort (Gervais et al., 2001). Thus, the WRT should be evaluated for
its ability to detect effort issues in populations beyond learning disability, further
testing the generality and specificity hypotheses. Along these lines, the obverse study
from the present one can be done in which simulators are asked to fake memory
rather than learning disability performance. It would be anticipated that the
WMT would outperform the WRT since neither test was designed to detect effort
in the others’ domain, further supporting the domain-specific hypothesis.
REFERENCES
Binder, L. (1993). Assessment of malingering after mild head trauma with the Portland Digit
Recognition Test. Journal of Clinical and Experimental Neuropsychology, 15, 170–182.
Francis, W. N., & Kucera, H. (1982). Frequency analysis of English usage: Lexicon and gram-
mar. Boston: Houghton Mifflin.
Gervais, R., Rohling, M., Green, P., & Ford, W. (2004). A comparison of WMT, CARB, and
TOMM failure rates in non-head injury disability claimants. Archives of Clinical Neuro-
psychology, 19, 475–487.
Gervais, R., Russell, A. S., Green, P., Allen, L. M., Ferrari, R., & Pieschl, S. (2001). Effort
testing in patients with fibromyalgia and disability incentives. Journal of Rheumatology,
Gouvier, W. D., Prestholdt, P., & Warner, M. (1988). A survey of common misconceptions
about head injury and recovery. Archives of Clinical Neuropsychology, 3, 331–343.
Green, P. (2003). Green's Word Memory Test for Windows: User's manual. Edmonton: Green's Publishing.
Green, P., Berendt, J., Mandel, A., & Allen, L. M. (2000). Relative sensitivity of the Word
Memory Test and Test of Memory Malingering in 144 disability claimants. Archives of
Clinical Neuropsychology, 15, 841.
Green, P., Lees-Haley, P. R., & Allen, L. M., III (2002). The Word Memory Test and the validity
of neuropsychological test scores. Journal of Forensic Neuropsychology, 2, 97–124.
Lanyon, R. (1997). Detecting deception: Current models and directions. Clinical Psychology:
Science and Practice, 4, 377–387.
Larrabee, G. J. (2003). Detection of malingering using atypical performance patterns on stan-
dard neuropsychological tests. Clinical Neuropsychologist, 17, 410–425.
Nies, K. J. & Sweet, J. J. (1994). Neuropsychological assessment and malingering: A critical
review of past and present strategies. Archives of Clinical Neuropsychology, 9, 501–552.
Rogers, R., Harrell, E., & Liff, C. (1993). Feigning neuropsychological impairment: A critical
review of methodological and clinical considerations. Clinical Psychology Review, 13,
Slick, D., Hopp, G., Strauss, E., & Spellacy, F. (1996). Victoria Symptom Validity Test: Efficiency
for detecting feigned memory impairment and relationship to neuropsychological tests and
MMPI-2 validity scales. Journal of Clinical and Experimental Neuropsychology, 18, 911–922.
Tan, J. E., Slick, D. J., Strauss, E., & Hultsch, D. F. (2002). How’d they do it? Malingering
strategies on symptom validity tests. Clinical Neuropsychologist, 16, 495–505.