
Psychometric Markers of Genuine and Feigned Neurodevelopmental Disorders in the Context of Applying for Academic Accommodations



The article reviews systemic and context-specific challenges of psychoeducational assessment using two case studies: a 19-year-old woman with feigned attention-deficit/hyperactivity disorder and a 50-year-old man with genuine dyslexia. These cases demonstrate that a thorough evaluation of performance validity is an essential component of determining eligibility for academic accommodations in both clinical and higher education settings. At the same time, discounting failure on certain performance validity tests may be necessary to protect against false positive errors. In addition, empirically based test selection and interpretation has the potential to enhance clinical confidence during differential diagnosis. Examining the internal consistency of a given neurocognitive profile provides valuable clinical information both for determining the credibility of the overall presentation and for applying established diagnostic criteria. Although clinical research has yet to identify definitive markers of non-credible neurocognitive profiles, a multivariate approach to performance validity assessment that combines empirically validated indicators and sound clinical judgment can improve detection rates while simultaneously protecting against false positive errors.
Jessica L. Hurtubise, Antonette Scavone, Sanya Sagar, & Laszlo A. Erdodi
Received: 15 May 2017 / Accepted: 16 May 2017 / Published online: 6 June 2017
© Springer Science+Business Media New York 2017
Keywords: ADHD · Dyslexia · Feigning · Disability evaluation · Performance validity assessment
Non-credible presentation in young adults applying for special accommodations in higher education settings is a growing concern (Harrison & Edwards, 2010). Having a documented disability has a number of tangible benefits, including extended test-taking time, flexible deadlines, and access to psychostimulant medication (Barrett, Darredeau, Bordy, & Pihl, 2005; Harrison & Edwards, 2010). While allowing extra time to complete a task may be a legitimate accommodation in the skill development phase (Stretch & Osborne, 2005), it alters the underlying construct measured by speeded testing (Bridgeman, Cline, & Hessinger, 2004). As such, additional test-taking time changes the meaning of the test scores and provides an unfair advantage to individuals without a genuine disability, violating the ethical principle of equal opportunity in a highly competitive environment.
When considering special accommodations within an educational setting, neurodevelopmental disorders are of particular interest. Neurodevelopmental disorders, including attention-deficit/hyperactivity disorder (ADHD) and specific learning disabilities (LDs), have been associated with lower graduation rates in high school (Kent et al., 2011) and university (Weyandt & DuPaul, 2008). This interference of neurodevelopmental disorders with success in educational settings suggests that the base rates for these disorders should gradually decline with higher levels of education. However, the base rate of certain neurodevelopmental disorders is notably higher among individuals preparing for high-stakes examinations compared to the general population (Julian et al., 2004). This provides indirect evidence that a proportion of the diagnoses that serve as the basis for academic accommodations during high-stakes exams is likely grounded in non-credible presentation.
Given the implications of diagnostic errors in determining disability status in academic settings, it is important to distinguish between genuine and feigned disorders. In fact, both classification errors are costly. False positives (giving a diagnosis to someone who is feigning a condition) undermine

* Laszlo A. Erdodi, University of Windsor, 168 Chrysler Hall South, 401 Sunset Ave., Windsor, ON N9B 3P4, Canada
Psychol. Inj. and Law (2017) 10:121–137
DOI 10.1007/s12207-017-9287-5
... PVTs are routinely used to detect malingering and low-effort test-taking in other neuropsychological contexts (Bush, Heilbronner, & Ruff, 2014; Chafetz et al., 2015; Heilbronner et al., 2009); furthermore, PVTs are superior to other assessment methods, such as clinical judgment, at detecting noncredible responding (Erdodi & Roth, 2017). Given students' motivation to access accommodations, ease at simulating disabilities, and base rate for noncredible responding, experts argue that PVTs should also be administered to adults seeking accommodations for learning disabilities (Hurtubise, Scavone, Sagar, & Erdodi, 2017; Lovett, 2014; Lovett, Nelson, & Lindstrom, 2015; Sullivan et al., 2007). In practice, however, PVTs are rarely required by college disability offices or high-stakes testing agencies (Lindstrom & Lindstrom, 2017) and are infrequently administered to postsecondary students seeking accommodations (Nelson, Whipple, Lindstrom, & Foels, 2014). ...
... Diagnostic errors in disability determination and accommodation decision-making have serious consequences for postsecondary students (Hurtubise et al., 2017). Most attention has been given to the dangers of false negatives, that is, denying accommodations and services to students who experience substantial limitations in major life activities requiring academic skills. ...
... Altogether, these data suggest that a sizable minority of students complete disability evaluations with suboptimal effort, increasing the likelihood of misclassification. These data also support the recommendation of routinely administering PVTs to adults seeking academic accommodations (Hurtubise et al., 2017). ...
Although performance validity tests (PVTs) are routinely used in neuropsychological assessment to detect malingering or low-effort test-taking, they are seldom administered to college students seeking academic accommodations and other benefits for reading disabilities. Previous research indicates that between 9.5% and 31% of students seeking learning disability evaluations at university-based clinics provide noncredible test scores indicative of symptom exaggeration or low effort. We developed a brief reading-specific PVT designed for college students participating in reading disability testing: the College Assessment of Reading Effort (CARE). We administered the CARE and standardized reading tests to three groups of students: honest controls, students with documented reading disabilities, and students coached to simulate reading disabilities. Simulators displayed normative deficits on standardized reading measures, similar to the scores earned by students with actual reading disabilities and lower than the scores earned by honest controls. In contrast, CARE scores differentiated simulators from honest examinees with and without disabilities. ROC curve analysis showed that CARE composite scores could be used diagnostically to detect low effort with sensitivity, specificity, and predictive power ≥ 0.90. The CARE offers a time- and cost-effective way to assess performance validity during reading disability testing for postsecondary students.
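The accuracy statistics reported in this abstract (sensitivity, specificity, and positive/negative predictive power) all derive from a simple 2×2 confusion table of PVT decisions against criterion group membership. A minimal sketch of that arithmetic, with a function name and example counts chosen for illustration (they are not data from the CARE study):

```python
def classification_metrics(tp, fp, tn, fn):
    """Diagnostic accuracy statistics from confusion-table counts.

    tp: low-effort examinees correctly flagged by the PVT
    fp: credible examinees incorrectly flagged
    tn: credible examinees correctly cleared
    fn: low-effort examinees incorrectly cleared
    """
    return {
        "sensitivity": tp / (tp + fn),  # proportion of low effort detected
        "specificity": tn / (tn + fp),  # proportion of credible examinees cleared
        "ppv": tp / (tp + fp),          # positive predictive power
        "npv": tn / (tn + fn),          # negative predictive power
    }

# Hypothetical counts, for illustration only
m = classification_metrics(tp=18, fp=1, tn=19, fn=2)
```

Note that predictive power, unlike sensitivity and specificity, shifts with the base rate of noncredible responding in the sample, which is why the 9.5–31% base-rate estimates quoted above matter for interpreting a flagged score.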
... These subtests were described as cognitively undemanding and even children with intellectual disabilities and other sources of severe cognitive impairment perform them adequately (Carone, 2014; Green & Flaro, 2016). Correspondingly, studies suggest that valid conclusions can be drawn from the WMT as long as the examinee has intact single-word reading ability (Hurtubise et al., 2017; for a remarkable study in which non-French-speaking children and adolescents were tested in the French version of the WMT, see Richman et al., 2006). Overall, extensive literature exists which suggests the WMT's utility in the context of a variety of neuropsychiatric disorders (see, for example, Erdodi et al., 2019; Stevens & Licha, 2019). ...
... It is therefore reassuring that education level, as well as other socio-demographic variables, did not significantly correlate with the validity indicators of the WMT-ARB. This, however, is not surprising as it was found that valid conclusions can be drawn from the WMT as long as the examinee has intact single-word reading ability (Hurtubise et al., 2017). It is also reassuring that the WMT classification accuracy (i.e., utility estimates) was like that of earlier studies, including those performed on native Hebrew speakers (as noted earlier). ...
Background The feigning of cognitive impairment is common in neuropsychological assessments, especially in a medicolegal setting. The Word Memory Test (WMT) is a forced-choice recognition memory performance validity test (PVT) which is widely used to detect noncredible performance. Though translated to several languages, this was not done for one of the most common languages, Arabic. The aim of the current study was to evaluate the convergent validity of the Arabic adaptation of the WMT (WMT-ARB) among Israeli Arabic speakers. Methods We adapted the WMT to Arabic using the back-translation method and in accordance with relevant guidelines. We then randomly assigned healthy Arabic-speaking adults (N = 63) to either a simulation or honest control condition. The participants then performed neuropsychological tests which included the WMT-ARB and the Test of Memory Malingering (TOMM), a well-validated nonverbal PVT. Results The WMT-ARB had high split-half reliability and its measures were significantly correlated with those of the TOMM (p < .001). High concordance was found in classification of participants using the WMT-ARB and TOMM (specificity = 94.29% and sensitivity = 100% using the conventional TOMM trial 2 cutoff as gold standard). As expected, simulators' accuracy on the WMT-ARB was significantly lower than that of honest controls. None of the demographic variables significantly correlated with WMT-ARB measures. Conclusion The WMT-ARB shows initial evidence of reliability and validity, emphasizing its potential use in the large population of Arabic speakers and universality in detecting noncredible performance. The findings, however, are preliminary and mandate validation in clinical settings.
... Conceptually, the first criterion of malingering ("medicolegal context of presentation"; APA, 2013) is analogous to declaring someone "likely guilty" in a criminal case based on the mere fact that the individual has a defense attorney. Second, "external incentive to appear impaired" is arguably a false dichotomy: it is either definitely present or unknown, as it is virtually impossible to prove a negative (Erdodi et al., 2018d; Hurtubise et al., 2017). Reasons for deliberate underperformance and symptom exaggeration are not always readily apparent at the time of the assessment; therefore, individuals may be incorrectly cleared from the suspicion of malingering. ...
... However, studies that examined symptom report by patients with mTBI who passed PVTs separately from those who failed them found that elevations (especially on scales measuring anxiety, depression, and somatic complaints) were significantly more common in the latter group (Boone & Lu, 1999; Lange et al., 2012; Lange et al., 2010a, b; Suhr et al., 1997). The correlation between PVT failure and symptom exaggeration was also observed in psychoeducational assessment (Hurtubise et al., 2017; Suhr et al., 2008) and mixed clinical patients. Tsanadis et al. (2008) provided even more compelling evidence for the questionable validity of symptom elevations by demonstrating that patients with mTBI reported higher levels of post-concussive symptoms than those with more severe TBI. ...
This study was designed to examine the relative contribution of symptom (SVT) and performance validity tests (PVTs) to the evaluation of the credibility of neuropsychological profiles in mild traumatic brain injury (mTBI). An archival sample of 326 patients with mTBI was divided into four psychometrically defined criterion groups: pass both SVT and PVT; pass one, but fail the other; and fail both. Scores on performance-based tests of neurocognitive ability and self-reported symptom inventories were compared across the groups. As expected, PVT failure was associated with lower scores on ability tests (ηp² = .042–.184; d = 0.56–1.00; medium to large effects), and SVT failure was associated with higher levels of symptom report (ηp² = .039–.312; d = 0.32–1.58; small to very large effects). However, SVT failure also had a marginal deleterious effect on performance-based measures (ηp² = .017–.023; d = 0.23–0.46; small to medium effects) and elevations on self-report inventories were observed in the context of PVT failure (ηp² = .026; d = 0.23–0.57; small to medium effects). SVT failure was associated with not only inflated symptom reports but also distorted configural patterns of psychopathology. Patients with clinically elevated somatic and depressive symptoms were twice as likely to fail PVTs. Consistent with previous research, SVTs and PVTs provide overlapping, but non-redundant information about the credibility of neuropsychological profiles associated with mTBI. Therefore, they should be used in combination to afford a comprehensive evaluation of cognitive and emotional functioning. The heuristic value of validity tests has both clinical and forensic relevance.
... Therefore, assessors should rely on multivariate cutoffs: failing ≥3 ACSSs at ≤6, failing ≥2 ACSSs at ≤5, a sum of ACSSs ≤23 across all four subtests, or a DSI ≥5 was specific to non-credible responding in this study. At the same time, performance on select D-KEFS Stroop subtests may be suppressed by genuine deficits in certain populations, such as dyslexia (Word Reading; Gabay et al., 2020; Hurtubise et al., 2017) or non-native speakers of English (Color Naming; Brantuo et al., 2022). To protect against false positive errors, assessors may choose to suspend the PVT function of the D-KEFS Stroop in patients with such clinical characteristics. ...
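The multivariate rule described in this excerpt is a disjunction of four conditions over the four D-KEFS Stroop age-corrected scaled scores (ACSSs) and the D-KEFS Stroop Index (DSI). A sketch of that decision rule as stated above, assuming the ACSSs and DSI have already been computed (the function name is illustrative, and the DSI computation itself is not shown):

```python
def dkefs_stroop_noncredible(acss, dsi):
    """Flag a profile as non-credible under the multivariate cutoffs
    quoted above: >=3 ACSSs at <=6, or >=2 ACSSs at <=5, or a sum of
    the four ACSSs <=23, or a DSI >=5.

    acss: the four D-KEFS Stroop subtest age-corrected scaled scores
    dsi:  precomputed D-KEFS Stroop Index
    """
    assert len(acss) == 4, "expects all four subtest scores"
    fails_at_6 = sum(1 for s in acss if s <= 6)
    fails_at_5 = sum(1 for s in acss if s <= 5)
    return (fails_at_6 >= 3
            or fails_at_5 >= 2
            or sum(acss) <= 23
            or dsi >= 5)

# Illustrative profile: two subtests at or below 5 trips the second condition
flagged = dkefs_stroop_noncredible([4, 5, 8, 9], dsi=2)  # → True
```

Consistent with the caveat above, a flag from such a rule would still be suspended for examinees whose genuine deficits (e.g., dyslexia on Word Reading) plausibly suppress the relevant subtest scores.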
Objective The study was designed to expand on the results of previous investigations on the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. Method The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). Results Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39–.79) and specificity (.85–1.00), correctly classifying 74.6–90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. Conclusions A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid before impaired" paradox.
... However, numerous researchers have expressed concerns regarding the provision of extended time accommodations. Quite apart from the recent reports showing how easily students can feign slow reading speed in order to benefit from extra time accommodations (e.g., Belkin et al., 2019; Hurtubise et al., 2017), and that criteria used by clinicians to diagnose invisible disorders often include identifying individuals with no normative impairments (Goegan & Harrison, 2017), concerns have been raised that extended time accommodations may be given to students too readily, without fully considering the effects of the additional time on the validity of obtained test scores (Jansen et al., 2019; Lovett, 2020; Sokal & Vermette, 2017). In addition, as there is no set criterion for determining when extended time is warranted or how much should be provided, disability service providers and psychologists are often left to use their best judgement (Lovett, 2011). ...
Although extended time for tests and examinations is the most commonly requested and provided accommodation in post-secondary institutions, best practice guidelines from existing research are rarely translated into practice. Thus, a review of the literature was undertaken to examine support for granting additional assessment time to persons in specific disability categories. Based on this review, no more than 25% additional time is supported for students with learning disabilities, and even then, only when their documented area of functional impairment overlaps with assessment task requirements. No research support exists for the provision of extra time for students with attention deficit/hyperactivity disorder (AD/HD) or mental health diagnoses. Research is silent on the appropriateness of additional assessment time for individuals with autism spectrum disorder and thus individuals need to be considered on a case-by-case basis. In very exceptional situations, more than 25% additional time may be warranted, but this would need to be well considered using an established decision-making model.
... FCRM-PVTs provide the strongest type of evidence since they have been found to be relatively immune to cultural factors and are unaffected by proxy variables (e.g., language proficiency and education level). For example, it has been demonstrated that valid conclusions can be drawn from the WMT as long as the examinee has intact single-word reading ability (Hurtubise, Scavone, Sagar, & Erdodi, 2017). Correspondingly, cutoffs could be made more stringent for the WRMT-Words when those with English as a Second Language (ESL) status were assessed (Salazar et al., 2007). ...
The effects of cross-cultural factors on cognitive testing have attracted increasing attention in recent years. Studies have indicated that these factors influence examinees’ performance in standard cognitive tests, raising concerns that they may bias decisions regarding whether malingering is at play. The current chapter briefly reviews the effects of cultural factors on performance in cognitive tests as well as the current understanding regarding their sources. Next, effects of cross-cultural factors on the determination of malingering are reviewed, focusing on stand-alone and embedded validity indicators. Though findings suggest that stand-alone validity indicators are relatively immune to these effects, embedded validity indicators necessitate a more cautious approach. However, at present conclusions can be stated only tentatively, as empirical data is still limited. Taking these caveats into consideration, recommendations are suggested to improve the validity of malingering assessment in a cross-cultural context, including a proposal for an interpretative scheme that can be utilized in such assessments. Hopefully, as empirical data accumulates, these recommendations will provide a basis for future refinements.
... ADHD typically manifests cognitively as impaired attention and executive dysfunction, whereas with LD, academic skills and executive functions are often affected. Examinees with these possible conditions whose intent is to receive accommodations in educational settings (e.g. for standardized testing, such as SAT, GMAT, LSAT) or prescriptions for stimulant medications may exaggerate or feign cognitive impairment and psychological dysfunction to attain such goals (Harp et al., 2011; Harrison et al., 2021; Hurtubise et al., 2017), underscoring the importance of including SVTs and PVTs in these evaluations. Moreover, both LD and ADHD commonly have emotional concomitants, such as emotional dysregulation, anxiety, and depression (Wehmeier et al., 2010; Sobanski et al., 2010; Sjöwall et al., 2013), necessitating SVT assessment. ...
Objective: Citation and download data pertaining to the 2009 AACN consensus statement on validity assessment indicated that the topic maintained high interest in subsequent years, during which key terminology evolved and relevant empirical research proliferated. With a general goal of providing current guidance to the clinical neuropsychology community regarding this important topic, the specific update goals were to: identify current key definitions of terms relevant to validity assessment; learn what experts believe should be reaffirmed from the original consensus paper, as well as new consensus points; and incorporate the latest recommendations regarding the use of validity testing, as well as current application of the term 'malingering.' Methods: In the spring of 2019, four of the original 2009 work group chairs and additional experts for each work group were impaneled. A total of 20 individuals shared ideas and writing drafts until reaching consensus on January 21, 2021. Results: Consensus was reached regarding affirmation of prior salient points that continue to garner clinical and scientific support, as well as creation of new points. The resulting consensus statement addresses definitions and differential diagnosis, performance and symptom validity assessment, and research design and statistical issues. Conclusions/Importance: In order to provide bases for diagnoses and interpretations, the current consensus is that all clinical and forensic evaluations must proactively address the degree to which results of neuropsychological and psychological testing are valid. There is a strong and continually-growing evidence-based literature on which practitioners can confidently base their judgments regarding the selection and interpretation of validity measures.
... Essentially, our results suggest that failing the proposed validity cutoffs is specific to non-credible presentation on both other self-report measures and performance-based measures of cognitive ability. As such, the BRIEF-A-SR as an SVT can provide a short-term, partial solution to the controversy over how to best identify non-credible ADHD: through SVTs or PVTs (Hurtubise et al., 2017; Jasinski et al., 2011; Lee Booksh et al., 2010; Musso et al., 2016; Sagar et al., 2017; Sollman et al., 2010). ...
This study was designed to investigate the potential of extreme scores on the Behavioral Rating Inventory of Executive Function-Adult Self-Report Version (BRIEF-A-SR) to serve as validity indicators. The BRIEF-A-SR was administered to 73 university students and 50 clinically referred adults. In the student sample, symptom validity was operationalized as the outcome on the Inventory of Problems (IOP-29). In the patient sample, performance validity was operationalized as the outcome on a combination of free-standing and embedded indicators. The BRIEF-A-SR had better classification accuracy in the student sample (.13–.56 sensitivity at .88–.95 specificity) compared with the patient sample (.22–.44 sensitivity at .85–.97 specificity). Combining individual cutoffs into a multivariate model improved specificity (.93) and stabilized sensitivity (.33) in the clinical sample. Failing the newly introduced cutoffs (T ≥ 65/T ≥ 80 in the student sample and T ≥ 80/T ≥ 90 in the clinical sample) was associated with failure on performance validity tests and elevations on other symptom inventories. Results provide preliminary support for an alternative method for establishing the credibility of symptom reports both within the BRIEF-A-SR and other inventories. Pending replication by future research, the newly proposed cutoffs could provide a much needed psychometric safeguard against over-diagnosing neuropsychiatric disorders due to undetected symptom exaggeration.
... Finally, although technical manuals often state a minimum reading level for a given test, in clinical practice, objective measures of reading skills are rarely administered to verify these requirements. Nevertheless, evidence suggests that certain neuropsychiatric conditions, such as dyslexia, can make written text inaccessible to otherwise high-functioning individuals (Eicher & Gruen, 2013; Hurtubise, Scavone, Sagar, & Erdodi, 2017). ...
... Although children with severe dyslexia are a population at high risk for failing a PVT that requires word reading, only 10% of such a sample failed the WMT (Larochette & Harrison, 2012). This is consistent with subsequent reports that severe dyslexia does not prevent credible examinees from passing PVTs based on the forced-choice recognition paradigm that appear to require intact single-word reading ability during the encoding trial (Hurtubise, Scavone, Sagar, & Erdodi, 2017). Following the recommendation to use the oral rather than the computerized WMT if the person's reading level is less than grade 3 may further decrease failure rate. ...
This study was designed to replicate previous reports of elevated false-positive rates (FPR) on the Word Memory Test (WMT) in patients with mild traumatic brain injury (TBI) and to evaluate previous claims that genuine memory deficits and non-credible responding are conflated on the WMT. Data from a consecutive case sequence of 170 patients with mild TBI referred for neuropsychological assessment were collected. Failure rate on the WMT was compared to that on other performance validity tests (PVTs). The clinical characteristics and neuropsychological profiles of patients who passed and those who failed the WMT and other PVTs were compared. Base rate of failure was the highest on the WMT (44.7%), but comparable to that on other established PVTs (39.4–41.8%). The vast majority of patients (94.7%) who failed the WMT had independent evidence of invalid performance, refuting previous estimates of 20–30% FPR. Failing the WMT was associated with globally lower scores on tests measuring various cognitive domains. The neurocognitive profile of individuals with invalid performance was remarkably consistent across various PVTs. Previously reported FPR of the WMT were not replicated. Failing the WMT typically occurred in the context of failing other PVTs too. Results suggest a common factor behind non-credible responding that is invariant of the psychometric definition of invalid performance. Failure on the WMT should not be discounted based on rational arguments unsubstantiated by objective data. Inferring elevated FPR from high failure rate alone is a fundamental epistemological error.
KIAA0319 is a transmembrane protein associated with dyslexia with a presumed role in neuronal migration. Here we show that KIAA0319 expression is not restricted to the brain but also occurs in sensory and spinal cord neurons, increasing from early postnatal stages to adulthood and being downregulated by injury. This suggested that KIAA0319 participates in functions unrelated to neuronal migration. Supporting this hypothesis, overexpression of KIAA0319 repressed axon growth in hippocampal and dorsal root ganglia neurons; the intracellular domain of KIAA0319 was sufficient to elicit this effect. A similar inhibitory effect was observed in vivo as axon regeneration was impaired after transduction of sensory neurons with KIAA0319. Conversely, the deletion of Kiaa0319 in neurons increased neurite outgrowth in vitro and improved axon regeneration in vivo. At the mechanistic level, KIAA0319 engaged the JAK2-SH2B1 pathway to activate Smad2, which played a central role in KIAA0319-mediated repression of axon growth. In summary, we establish KIAA0319 as a novel player in axon growth and regeneration with the ability to repress the intrinsic growth potential of axons. This study describes a novel regulatory mechanism operating during peripheral nervous system and central nervous system axon growth, and offers novel targets for the development of effective therapies to promote axon regeneration.
Purpose: This longitudinal study examines measures of temporal auditory processing in pre-reading children with a family risk of dyslexia. Specifically, it attempts to ascertain whether pre-reading auditory processing, speech perception, and phonological awareness (PA) reliably predict later literacy achievement. Additionally, this study retrospectively examines the presence of pre-reading auditory processing, speech perception, and PA impairments in children later found to be literacy impaired. Method: Forty-four pre-reading children with and without a family risk of dyslexia were assessed at three time points (kindergarten, first, and second grade). Auditory processing measures of rise time (RT) discrimination and frequency modulation (FM) along with speech perception, PA, and various literacy tasks were assessed. Results: Kindergarten RT uniquely contributed to growth in literacy in grades one and two, even after controlling for letter knowledge and PA. Highly significant concurrent and predictive correlations were observed with kindergarten RT significantly predicting first grade PA. Retrospective analysis demonstrated atypical performance in RT and PA at all three time points in children who later developed literacy impairments. Conclusions: Although significant, kindergarten auditory processing contributions to later literacy growth lack the power to be considered as a single-cause predictor; thus results support temporal processing deficits' contribution within a multiple deficit model of dyslexia.
Developmental dyslexia (DD) is a complex neurodevelopmental deficit characterized by impaired reading acquisition, in spite of adequate neurological and sensorial conditions, educational opportunities and normal intelligence. Despite the successful characterization of DD-susceptibility genes, we are far from understanding the molecular etiological pathways underlying the development of reading (dis)ability. By focusing mainly on clinical phenotypes, the molecular genetics approach has yielded mixed results. More optimally reduced measures of functioning, that is, intermediate phenotypes (IPs), represent a target for researching disease-associated genetic variants and for elucidating the underlying mechanisms. Imaging data provide a viable IP for complex neurobehavioral disorders and have been extensively used to investigate both morphological, structural and functional brain abnormalities in DD. Performing joint genetic and neuroimaging studies in humans is an emerging strategy to link DD-candidate genes to the brain structure and function. A limited number of studies has already pursued the imaging–genetics integration in DD. However, the results are still not sufficient to unravel the complexity of the reading circuit due to heterogeneous study design and data processing. Here, we propose an interdisciplinary, multilevel, imaging–genetic approach to disentangle the pathways from genes to behavior. As the presence of putative functional genetic variants has been provided and as genetic associations with specific cognitive/sensorial mechanisms have been reported, new hypothesis-driven imaging–genetic studies must gain momentum. This approach would lead to the optimization of diagnostic criteria and to the early identification of 'biologically at-risk' children, supporting the definition of adequate and well-timed prevention strategies and the implementation of novel, specific remediation approach.
Reading is a highly complex process in which integrative neurocognitive functions are required. Visual-spatial abilities play a pivotal role because of the multi-faceted visual sensory processing involved in reading. Several studies show that children with developmental dyslexia (DD) fail to develop effective visual strategies and that some reading difficulties are linked to visual-spatial deficits. However, the relationship between visual-spatial skills and reading abilities is still a controversial issue. Crucially, the role that age plays has not been investigated in depth in this population, and it is still not clear if visual-spatial abilities differ across educational stages in DD. The aim of the present study was to investigate visual-spatial abilities in children with DD and in age-matched normal readers (NR) according to different educational stages: in children attending primary school and in children and adolescents attending secondary school. Moreover, in order to verify whether visual-spatial measures could predict reading performance, a regression analysis has been performed in younger and older children. The results showed that younger children with DD performed significantly worse than NR in a mental rotation task, a more-local visual-spatial task, a more-global visual-perceptual task and a visual-motor integration task. However, older children with DD showed deficits in the more-global visual-perceptual task, in a mental rotation task and in a visual attention task. In younger children, the regression analysis documented that reading abilities are predicted by the visual-motor integration task, while in older children only the more-global visual-perceptual task predicted reading performances. Present findings showed that visual-spatial deficits in children with DD were age-dependent and that visual-spatial abilities engaged in reading varied across different educational stages. 
To better understand their potential role in affecting reading, a comprehensive description and a multi-componential evaluation of visual-spatial abilities are needed in children with DD.
Elevations on certain Conners' CPT-II scales are known to be associated with invalid responding. However, scales and cutoffs vary across studies. In addition, the methodology behind developing performance validity tests (PVTs) has been challenged for mistaking true impairment for noncredible presentation. Using ability-based tests as a PVT makes clinicians especially vulnerable to this criticism. The present study examined the ability of the CPT-II to dissociate effort from impairment in 47 adults clinically referred for neuropsychological assessment. CPT-II scales previously identified as PVTs (Omissions, Commissions, Hit Reaction Time SE, Variability, and Perseverations) produced classification accuracies hovering around .50 sensitivity at .90 specificity. The subsample that failed these PVTs performed within the normal range on other tests of working memory, processing speed, visual attention, and executive function. Results suggest that the selected CPT-II-based PVTs are sensitive to invalid responding, and are associated with depression and anxiety, but are unrelated to cognitive functioning.
Introduction: The Recognition Memory Test (RMT) and Word Choice Test (WCT) are structurally similar, but psychometrically different. Previous research demonstrated that adding a time-to-completion cutoff improved the classification accuracy of the RMT. However, the contribution of WCT time-cutoffs to improve the detection of invalid responding has not been investigated. The present study was designed to evaluate the classification accuracy of time-to-completion on the WCT compared to the accuracy score and the RMT. Method: Both tests were administered to 202 adults (mean age = 45.3 years, SD = 16.8; 54.5% female) clinically referred for neuropsychological assessment in counterbalanced order as part of a larger battery of cognitive tests. Results: Participants obtained lower and more variable scores on the RMT (M = 44.1, SD = 7.6) than on the WCT (M = 46.9, SD = 5.7). Similarly, they took longer to complete the recognition trial on the RMT (M = 157.2 s, SD = 71.8) than the WCT (M = 137.2 s, SD = 75.7). The optimal cutoff on the RMT (≤43) produced .60 sensitivity at .87 specificity. The optimal cutoff on the WCT (≤47) produced .57 sensitivity at .87 specificity. Time-cutoffs produced comparable classification accuracies for both RMT (≥192 s; .48 sensitivity at .88 specificity) and WCT (≥171 s; .49 sensitivity at .91 specificity). They also identified an additional 6-10% of the invalid profiles missed by accuracy score cutoffs, while maintaining good specificity (.93-.95). Functional equivalence was reached at accuracy scores ≤43 (RMT) and ≤47 (WCT) or time-to-completion ≥192 s (RMT) and ≥171 s (WCT). Conclusions: Time-to-completion cutoffs are valuable additions to both tests. They can function as independent validity indicators or enhance the sensitivity of accuracy scores without requiring additional measures or extending standard administration time.
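The two-indicator logic the abstract describes (a profile is flagged when either the accuracy score falls at or below its cutoff or time-to-completion meets its threshold) can be sketched in a few lines of Python. The cutoffs below are the WCT values reported in the abstract; the function name and the example profile are hypothetical illustrations, not the study's actual scoring code.

```python
# Sketch of combining accuracy-score and time-to-completion cutoffs on the
# Word Choice Test (WCT). Cutoffs come from the abstract (accuracy <= 47,
# time >= 171 s); the function and sample data are illustrative assumptions,
# not the authors' scoring procedure.

WCT_ACCURACY_CUTOFF = 47   # raw recognition scores at or below this fail
WCT_TIME_CUTOFF_S = 171    # completion times (seconds) at or above this fail

def wct_validity_flags(accuracy: int, time_s: float) -> dict:
    """Return which WCT validity indicators a given profile fails."""
    accuracy_fail = accuracy <= WCT_ACCURACY_CUTOFF
    time_fail = time_s >= WCT_TIME_CUTOFF_S
    return {
        "accuracy_fail": accuracy_fail,
        "time_fail": time_fail,
        # The time cutoff can catch invalid profiles that the accuracy
        # cutoff misses (the additional 6-10% noted in the abstract).
        "any_fail": accuracy_fail or time_fail,
    }

# Example: intact accuracy but abnormally slow completion still gets flagged
profile = wct_validity_flags(accuracy=49, time_s=205.0)
print(profile)  # {'accuracy_fail': False, 'time_fail': True, 'any_fail': True}
```

The disjunctive ("or") rule is what buys the extra sensitivity; the abstract's specificity figures (.93-.95) indicate the combined rule still rarely flags credible profiles.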
Objectives: The Forced Choice Recognition (FCR) trial of the California Verbal Learning Test, 2nd edition, was designed as an embedded performance validity test (PVT). To our knowledge, this is the first systematic review of classification accuracy against reference PVTs. Methods: Results from peer-reviewed studies with FCR data published since 2002 encompassing a variety of clinical, research, and forensic samples were summarized, including 37 studies with FCR failure rates (N=7575) and 17 with concordance rates with established PVTs (N=4432). Results: All healthy controls scored >14 on FCR. On average, 16.9% of the entire sample scored ≤14, while 25.9% failed reference PVTs. Presence or absence of external incentives to appear impaired (as identified by researchers) resulted in different failure rates (13.6% vs. 3.5%), as did failing or passing reference PVTs (49.0% vs. 6.4%). FCR ≤14 produced an overall classification accuracy of 72%, demonstrating higher specificity (.93) than sensitivity (.50) to invalid performance. Failure rates increased with the severity of cognitive impairment. Conclusions: In the absence of serious neurocognitive disorder, FCR ≤14 is highly specific, but only moderately sensitive to invalid responding. Passing FCR does not rule out a non-credible presentation, but failing FCR rules it in with high accuracy. The heterogeneity in sample characteristics and reference PVTs, as well as the quality of the criterion measure across studies, is a major limitation of this review and the basic methodology of PVT research in general. (JINS, 2016, 22, 851-858).
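The accuracy statistics reported for FCR ≤14 (.50 sensitivity, .93 specificity, 72% overall classification accuracy) follow from the standard 2 × 2 confusion-matrix definitions. A minimal sketch of that arithmetic, using hypothetical counts chosen only to illustrate how the three figures relate:

```python
# Illustration of how sensitivity, specificity, and overall classification
# accuracy relate in a 2x2 validity-test confusion matrix. The counts below
# are hypothetical; only the formulas reflect standard psychometric usage.

def classification_stats(tp: int, fn: int, tn: int, fp: int) -> dict:
    """tp/fn: invalid profiles caught/missed; tn/fp: valid profiles passed/flagged."""
    sensitivity = tp / (tp + fn)               # proportion of invalid profiles detected
    specificity = tn / (tn + fp)               # proportion of valid profiles correctly passed
    overall = (tp + tn) / (tp + fn + tn + fp)  # hits across the whole sample
    return {"sensitivity": sensitivity, "specificity": specificity, "overall": overall}

# Hypothetical balanced sample: 100 invalid and 100 valid profiles
stats = classification_stats(tp=50, fn=50, tn=93, fp=7)
print(stats)  # sensitivity 0.50, specificity 0.93, overall 0.715
```

Note that overall accuracy depends on the base rate of invalid responding in the sample, which is why a highly specific but moderately sensitive cutoff like FCR ≤14 can still rule a non-credible presentation in, yet cannot rule it out.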
Visuo-constructive and perceptual abilities have been poorly investigated in children with learning disabilities. The present study focused on local or global visuospatial processing in children with nonverbal learning disability (NLD) and dyslexia compared with typically-developing (TD) controls. Participants were presented with a modified block design task (BDT), in both a typical visuo-constructive version that involves reconstructing figures from blocks, and a perceptual version in which respondents must rapidly match unfragmented figures with a corresponding fragmented target figure. The figures used in the tasks were devised by manipulating two variables: the perceptual cohesiveness and the task uncertainty, stimulating global or local processes. Our results confirmed that children with NLD had more problems with the visuo-constructive version of the task, whereas those with dyslexia showed only a slight difficulty with the visuo-constructive version, but had greater difficulty with the perceptual version, especially in terms of response times. These findings are interpreted in relation to the slower visual processing speed of children with dyslexia, and to the visuo-constructive problems and difficulty in flexibly using global vs. local processes of children with NLD. The clinical and educational implications of these findings are discussed.