
Laszlo A Erdodi's Lab


Featured research (9)

There are growing concerns that increasing the number of performance validity tests (PVTs) may inflate the false positive rate. Indeed, the number of available embedded PVTs has increased exponentially over the last few decades. However, the standard for declaring a neurocognitive profile invalid (≥ 2 PVT failures) has not been adjusted to reflect this change. Data were collected from 100 clinically referred patients with traumatic brain injury. Two distinct aggregation methods were used to combine multiple (5, 7, 9, 11 and 13) embedded PVTs into a single-number summary of performance validity, using two established free-standing PVTs as criteria. Multivariate cutoffs had to be adjusted to contain false positives: ≥ 2 failures out of nine or more dichotomized (Pass/Fail) PVTs had unexpectedly low multivariate specificity (.76-.79). However, ≥ 4 failures resulted in high specificity (.90-.96), even out of 13 embedded PVTs. Multivariate models of embedded PVTs correctly classified between 92% and 96% of the sample at ≥ .90 specificity. Alternative aggregation methods produced similar results. Findings support the notion of the elasticity of multivariate cutoffs: as the number of PVTs interpreted increases, more stringent cutoffs are required to deem the profile invalid, at least until a certain level of evidence for non-credible responding accumulates (cutoff elasticity). A desirable byproduct of increasing the number of PVTs was improved sensitivity (.85-1.00). There is no such thing as too many PVTs, only insufficiently conservative multivariate cutoffs.
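To make the aggregation logic concrete, the Python sketch below simulates the general approach with made-up data; the variable names, failure rates, and thresholds are illustrative assumptions, not the study's actual scoring code. It sums dichotomized (Pass/Fail) embedded PVT outcomes and shows how a fixed ≥ 2-failure rule loses specificity as more PVTs are interpreted, whereas raising the cutoff restores it (cutoff elasticity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated example: 100 patients, 13 embedded PVTs, each dichotomized Pass (0) / Fail (1).
# 'criterion_invalid' marks patients classified invalid by free-standing criterion PVTs.
n_patients, n_pvts = 100, 13
criterion_invalid = rng.random(n_patients) < 0.25
fail_prob = np.where(criterion_invalid, 0.55, 0.08)          # assumed failure rates
failures = rng.random((n_patients, n_pvts)) < fail_prob[:, None]

def classify(failure_matrix, n_interpreted, cutoff):
    """Flag a profile invalid if failures among the first `n_interpreted`
    embedded PVTs meet or exceed `cutoff`."""
    counts = failure_matrix[:, :n_interpreted].sum(axis=1)
    return counts >= cutoff

def specificity_sensitivity(flagged, criterion):
    spec = np.mean(~flagged[~criterion])   # valid profiles correctly passed
    sens = np.mean(flagged[criterion])     # invalid profiles correctly flagged
    return spec, sens

# Cutoff elasticity: the same >= 2-failure rule becomes too liberal as the
# number of interpreted PVTs grows, so the cutoff must rise with it.
for n, cut in [(5, 2), (9, 2), (13, 4)]:
    spec, sens = specificity_sensitivity(classify(failures, n, cut), criterion_invalid)
    print(f"{n:>2} PVTs, cutoff >= {cut}: specificity {spec:.2f}, sensitivity {sens:.2f}")
```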
This study was designed to empirically evaluate the classification accuracy of various definitions of invalid performance on two forced-choice recognition performance validity tests (PVTs; FCR CVLT-II and Test of Memory Malingering [TOMM-2]). The proportion of at- and below-chance-level responding, as defined by binomial theory, and of making any errors was computed across two mixed clinical samples from the United States and Canada (N = 470) and two sets of criterion PVTs. There was virtually no overlap between the binomial and empirical distributions. Over 95% of patients who passed all PVTs obtained a perfect score. At-chance-level responding was limited to patients who failed ≥ 2 PVTs (91% of them failed 3 PVTs). No one scored below chance level on the FCR CVLT-II or TOMM-2. All 40 patients with dementia scored above chance. Although at or below chance level performance provides very strong evidence of non-credible responding, scores above chance level have no negative predictive value. Even at-chance-level scores on PVTs provide compelling evidence of a non-credible presentation. A single error on the FCR CVLT-II or TOMM-2 is highly specific (0.95) to psychometrically defined invalid performance. Defining non-credible responding as below-chance-level performance alone is therefore an unnecessarily stringent criterion.
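The binomial definitions referenced above can be illustrated with a few lines of Python. The sketch below uses scipy's binomial distribution to locate the score range consistent with pure guessing on a two-alternative forced-choice test; the item counts (16 for the FCR CVLT-II, 50 for TOMM Trial 2), the 95% coverage level, and the function name are stated here for illustration only.

```python
from scipy.stats import binom

def chance_bounds(n_items, p_guess=0.5, coverage=0.95):
    """Score range consistent with chance responding on a two-alternative
    forced-choice test. 'At chance' is taken as the central `coverage`
    binomial interval around n_items * p_guess; scores below the lower
    bound are 'below chance'."""
    lo, hi = binom.interval(coverage, n_items, p_guess)
    return int(lo), int(hi)

# Assumed item counts for illustration: 16 two-choice items (FCR CVLT-II),
# 50 two-choice items (TOMM Trial 2).
for name, n in [("FCR CVLT-II", 16), ("TOMM-2", 50)]:
    lo, hi = chance_bounds(n)
    print(f"{name}: chance-level scores roughly {lo}-{hi} out of {n}; "
          f"scores below {lo} are below chance")
    # Probability of a perfect score arising from guessing alone:
    print(f"  P(perfect score by guessing) = {binom.pmf(n, n, 0.5):.2e}")
```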
Objective This study was designed to evaluate the potential of the recognition trials of the Logical Memory (LM), Visual Reproduction (VR), and Verbal Paired Associates (VPA) subtests of the Wechsler Memory Scale–Fourth Edition (WMS-IV) to serve as embedded performance validity tests (PVTs). Method The classification accuracy of the three WMS-IV subtests was computed against three different criterion PVTs in a sample of 103 adults with traumatic brain injury (TBI). Results The optimal cutoffs (LM ≤ 20, VR ≤ 3, VPA ≤ 36) produced good combinations of sensitivity (.33–.87) and specificity (.92–.98). An age-corrected scaled score of ≤5 on either of the free recall trials of the VPA was specific (.91–.92) and relatively sensitive (.48–.57) to psychometrically defined invalid performance. A VR I ≤ 5 or VR II ≤ 4 had comparable specificity, but lower sensitivity (.25–.42). There was no difference in failure rate as a function of TBI severity. Conclusions In addition to LM, the VR and VPA subtests can also function as embedded PVTs. Failing validity cutoffs on these subtests signals an increased risk of non-credible presentation and is robust to genuine neurocognitive impairment. However, they should not be used in isolation to determine the validity of an overall neurocognitive profile.
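The sensitivity and specificity figures quoted for these cutoffs follow from a simple contingency computation. The Python sketch below shows that computation for a generic "score ≤ cutoff" rule evaluated against a criterion classification; the example scores and the LM ≤ 20 cutoff are hypothetical placeholders, not data from the study.

```python
import numpy as np

def cutoff_accuracy(scores, cutoff, criterion_invalid):
    """Classification accuracy of a 'score <= cutoff' validity rule.

    scores            : scores on the candidate embedded PVT
    cutoff            : scores at or below this value are flagged invalid
    criterion_invalid : True where criterion PVTs define the profile invalid
    """
    scores = np.asarray(scores)
    criterion_invalid = np.asarray(criterion_invalid, dtype=bool)
    flagged = scores <= cutoff
    sensitivity = flagged[criterion_invalid].mean()      # invalid profiles caught
    specificity = (~flagged[~criterion_invalid]).mean()  # valid profiles spared
    return sensitivity, specificity

# Hypothetical LM recognition scores and criterion classifications.
lm_recognition = [25, 27, 19, 30, 18, 24, 16, 29, 21, 28]
criterion      = [0,  0,  1,  0,  1,  0,  1,  0,  1,  0]
sens, spec = cutoff_accuracy(lm_recognition, cutoff=20, criterion_invalid=criterion)
print(f"LM <= 20: sensitivity {sens:.2f}, specificity {spec:.2f}")
```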
Objective The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which have produced diverging conclusions. Method The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). Results Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 across the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. Conclusions A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid-before-impaired" paradox.
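The multivariate decision rule described in the Results can be written out directly. The Python sketch below encodes it as stated in the abstract (≥3 subtests at ACSS ≤6, or ≥2 subtests at ACSS ≤5); the function name, input format, and example profiles are illustrative assumptions rather than material from the paper.

```python
def stroop_multivariate_flag(acss_scores):
    """Flag a D-KEFS Stroop profile as invalid under the multivariate rule
    described in the abstract: >= 3 of the four subtests at ACSS <= 6,
    or >= 2 subtests at ACSS <= 5.

    acss_scores : iterable of four age-corrected scaled scores (one per subtest).
    """
    scores = list(acss_scores)
    failures_liberal = sum(s <= 6 for s in scores)        # ACSS <= 6 cutoff
    failures_conservative = sum(s <= 5 for s in scores)   # ACSS <= 5 cutoff
    return failures_liberal >= 3 or failures_conservative >= 2

# Hypothetical example profiles:
print(stroop_multivariate_flag([6, 6, 7, 5]))   # True: three subtests at ACSS <= 6
print(stroop_multivariate_flag([8, 9, 6, 10]))  # False: a single isolated failure
```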

Lab head

Laszlo A Erdodi
Department
  • Department of Psychology

Members (8)

Laura Cutler
  • Yorkville University
Jaspreet Rai
  • Precision Neuropsychological Assessments Inc.
Sami Ali
  • University of Windsor
Kelly An
  • University of Windsor
Christina Sirianni
  • University of Windsor
Kaitlyn Abeare
  • University of Windsor
Maame Adwoa Brantuo
  • University of Windsor
Natalie May
  • University of Windsor
Kristian Seke
  • Not confirmed yet