The Word Reading Test of Effort in Adult Learning Disability: A Simulation Study
ABSTRACT: The Word Reading Test (WRT) was designed to detect effort problems specific to a learning disability sample. The WRT and the Word Memory Test (WMT) were administered to two simulator groups and normal control groups. The WRT showed excellent receiver operating characteristics (e.g., 90% sensitivity and 100% positive predictive power) and outperformed the WMT in detecting both reading and mental-speed simulators. This finding, together with a double dissociation between reading and speed simulators on WRT errors and reaction time, suggested specific effort effects, whereas the simulators' poor effort on the WMT suggested general effort effects. The results support the WRT as a potential effort indicator in learning disability.
- Available from: Emily M Elliott
ABSTRACT: Although recent findings indicate that a portion of college students presenting for psychoeducational evaluations fail validity measures, methods for determining the validity of cognitive test results in these evaluations remain understudied. Data are therefore needed to evaluate the utility of validity indices in this population, to provide base rates for students meeting research criteria for malingering, and to report the relationship between test performance and the level of external incentive. The authors used archival data from (i) a university psychological clinic (n = 986) and (ii) a university control sample (n = 182). Empirically supported embedded validity indices were used to retrospectively identify suspected malingering patients. Group performance, according to invalidity and level of incentive seeking, was evaluated through a series of multivariate mean comparisons. The current study supports classifying patients according to the level of incentive seeking when evaluating neurocognitive performance and feigned or exaggerated deficits. Archives of Clinical Neuropsychology 11/2011; 27(1):45-57.
ABSTRACT: To update primary health care providers on the guidelines and standards for documentation of attention deficit hyperactivity disorder (ADHD) at the postsecondary level, we synthesized information from consultations with experts at postsecondary disability offices and from relevant research in this area (specifically, the PsycLIT, PsycINFO, and MEDLINE databases were searched for systematic reviews and meta-analyses from January 1990 to June 2009). Most evidence included was level III. Symptoms of ADHD can occur for many reasons, and primary health care providers need to be cautious when making this diagnosis in young adults. Diagnosis alone is not sufficient to guarantee academic accommodations. Documentation of a disability presented to postsecondary-level service providers must address all aspects of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition, criteria for diagnosis of ADHD, and must also clearly demonstrate how recommended academic accommodations were objectively determined. Students with ADHD require comprehensive documentation of their disabilities to obtain accommodations at the postsecondary level. Implementing the guidelines proposed here would improve access to appropriate services and supports for young adults with ADHD, reduce the risk of misdiagnosis of other psychological causes, and minimize the opportunity for students to obtain stimulant medications for illicit use. Canadian Family Physician (Médecin de famille canadien) 08/2010; 56(8):761-5.
ABSTRACT: The current investigation identified characteristics that discriminate authentic dyslexia from its simulation using measures common to postsecondary learning disability evaluations. Analyses revealed accurate simulation on most achievement measures but inaccurate feigning on neurolinguistic processing measures, speed on timed tasks, and error quantity. The largest group separations were on rapid naming, speeded orthographic, and reading fluency tasks. Simulators accurately feigned dyslexia profiles under cut-score and discrepancy diagnostic models but not under the more complex aspects of the clinical judgment model. Regarding simulation detection, a multivariate rule exhibited the greatest classification accuracy, followed by univariate indices developed from rapid naming tasks. These findings suggest that aspects of a comprehensive evaluation may aid in the detection of simulated dyslexia. The Clinical Neuropsychologist 02/2011; 25(2):302-22.