Laszlo A Erdodi's Lab

Featured research (5)

Objective: The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions.
Method: The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49).
Results: Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 across the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample.
Conclusions: A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid before impaired" paradox.
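The multivariate cutoff described above can be sketched as a simple counting rule. This is a minimal illustration, not the authors' implementation; the function name and example score profiles are hypothetical.

```python
def stroop_multivariate_flag(acss_scores):
    """Flag a profile as potentially non-credible using the multivariate rule:
    >=3 of the four D-KEFS Stroop subtests at ACSS <= 6, or
    >=2 subtests at ACSS <= 5.

    acss_scores: list of four age-corrected scaled scores (one per subtest).
    """
    failures_at_6 = sum(1 for s in acss_scores if s <= 6)
    failures_at_5 = sum(1 for s in acss_scores if s <= 5)
    return failures_at_6 >= 3 or failures_at_5 >= 2

# Illustrative profiles (hypothetical data):
print(stroop_multivariate_flag([6, 6, 6, 9]))    # three failures at <=6 -> True
print(stroop_multivariate_flag([5, 5, 10, 10]))  # two failures at <=5 -> True
print(stroop_multivariate_flag([7, 8, 6, 10]))   # one failure only -> False
```

The two-threshold design reflects the trade-off the abstract reports: the more lenient cutoff (≤6) requires more failures before flagging, while the stricter cutoff (≤5) flags on fewer.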
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFT FCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFT FCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
This study was designed to cross-validate the V-5, a quick psychiatric screener, across administration formats and levels of examinee acculturation. The V-5 was administered twice (once at the beginning and once at the end of the testing session) to three samples (N = 277) with varying levels of symptom severity and English language proficiency and varying administration formats, alongside traditional self-reported symptom inventories as criterion measures. The highest test-retest reliability was observed on the Depression (.84) and Pain (.85) scales. The V-5 was sensitive to variability in symptom severity. Classification accuracy was driven by the base rate of the target construct and was invariant across administration format (in-person or online) and level of English proficiency. The V-5 demonstrated promise as a cross-culturally robust screening instrument that is sensitive to change over time, lends itself to online administration, and is suitable for examinees with limited English proficiency.

Lab head

Laszlo A Erdodi
Department
  • Department of Psychology

Members (8)

Laura Cutler
  • University of Windsor
Jaspreet Rai
  • Precision Neuropsychological Assessments Inc.
Kelly An
  • University of Windsor
Christina Sirianni
  • University of Windsor
Sami Ali
  • University of Windsor
Kaitlyn Abeare
  • University of Windsor
Maame Adwoa Brantuo
  • University of Windsor
Natalie May
  • University of Windsor
Kristian Seke
  • Not confirmed yet