Matthew Holcomb’s research while affiliated with Louisiana State University Health Sciences Center New Orleans and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping.

Publications (31)


Logical memory, visual reproduction, and verbal paired associates are effective embedded validity indicators in patients with traumatic brain injury
  • Article
  • Full-text available

March 2023 · 466 Reads · 19 Citations
Objective: This study was designed to evaluate the potential of the recognition trials of the Logical Memory (LM), Visual Reproduction (VR), and Verbal Paired Associates (VPA) subtests of the Wechsler Memory Scale–Fourth Edition (WMS-IV) to serve as embedded performance validity tests (PVTs). Method: The classification accuracy of the three WMS-IV subtests was computed against three different criterion PVTs in a sample of 103 adults with traumatic brain injury (TBI). Results: The optimal cutoffs (LM ≤ 20, VR ≤ 3, VPA ≤ 36) produced good combinations of sensitivity (.33–.87) and specificity (.92–.98). An age-corrected scaled score of ≤5 on either of the free recall trials of the VPA was specific (.91–.92) and relatively sensitive (.48–.57) to psychometrically defined invalid performance. A VR I ≤ 5 or VR II ≤ 4 had comparable specificity but lower sensitivity (.25–.42). There was no difference in failure rate as a function of TBI severity. Conclusions: In addition to LM, the VR and VPA subtests can also function as embedded PVTs. Failing validity cutoffs on these subtests signals an increased risk of non-credible presentation and is robust to genuine neurocognitive impairment. However, they should not be used in isolation to determine the validity of an overall neurocognitive profile.


M is For Performance Validity: The IOP-M Provides a Cost-Effective Measure of the Credibility of Memory Deficits during Neuropsychological Evaluations

January 2023 · 464 Reads · 20 Citations

Journal of Forensic Psychology Research and Practice

This study was designed to evaluate the classification accuracy of the Memory module of the Inventory of Problems (IOP-M) in a sample of real-world patients. Archival data were collected from a mixed clinical sample of 90 adults clinically referred for neuropsychological testing. The classification accuracy of the IOP-M was computed against psychometrically defined invalid performance. An IOP-M score ≤30 produced a good combination of sensitivity (.46–.75) and specificity (.86–.95). Lowering the cutoff to ≤29 improved specificity (.94–1.00) at the expense of sensitivity (.29–.63). The IOP-M correctly classified between 73% and 91% of the sample. Given its low cost and ease of administration and scoring, in combination with its robust classification accuracy, the IOP-M has the potential to expand the existing toolkit for evaluating performance validity during neuropsychological assessments.


Take Their Word for It: The Inventory of Problems Provides Valuable Information on Both Symptom and Performance Validity

August 2022 · 345 Reads · 28 Citations

This study was designed to compare the validity of the Inventory of Problems (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived based on established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared to that of the Negative Impression Management scale of the Personality Assessment Inventory (NIM-PAI), and the criterion-related validity of the IOP-M was compared to that of Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly (z = 2.50, p = .01) with criterion PVTs than the NIM-PAI (r_IOP-29 = .34; r_NIM-PAI = .06), generating similar overall correct classification values (OCC_IOP-29: 79–81%; OCC_NIM-PAI: 71–79%). Similarly, the IOP-M correlated significantly more strongly (z = 2.26, p = .02) with criterion PVTs than the TOMM-1 (r_IOP-M = .79; r_TOMM-1 = .59), generating similar overall correct classification values (OCC_IOP-M: 89–91%; OCC_TOMM-1: 84–86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs, and that domain specificity should be considered when calibrating instruments.


Critical Item (CR) Analysis Expands the Classification Accuracy of Performance Validity Tests Based on the Forced Choice Paradigm—Replicating Previously Introduced CR Cutoffs Within the Word Choice Test

July 2022 · 342 Reads · 31 Citations

Neuropsychology

Objective: This study was designed to replicate previous research on critical item analysis within the Word Choice Test (WCT). Method: Archival data were collected from a mixed clinical sample of 119 consecutively referred adults (M_age = 51.7, M_education = 14.7). The classification accuracy of the WCT was calculated against psychometrically defined criterion groups. Results: Critical item analysis identified an additional 2%–5% of the sample that passed traditional cutoffs as noncredible. Passing critical items after failing traditional cutoffs was associated with weaker independent evidence of invalid performance, alerting the assessor to an elevated risk of false positives. Failing critical items in addition to failing select traditional cutoffs increased overall specificity. Non-White patients were 2.5 to 3.5 times more likely to fail traditional WCT cutoffs, but select critical item cutoffs reduced this to 1.5–2 times. Conclusions: Results confirmed the clinical utility of critical item analysis. Although the improvement in sensitivity was modest, critical items were effective at containing false positive errors in general, and especially in racially diverse patients. Critical item analysis appears to be a cost-effective and equitable method to improve an instrument's classification accuracy.


BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges

May 2022 · 1,142 Reads · 22 Citations

Cognitive and Behavioral Neurology

Background: Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. Objective: To replicate validity cutoffs for the Boston Naming Test–Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. Method: We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. Results: Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20–0.38 sensitivity at 0.92–0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above-average performance, whereas ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. Conclusion: Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure of both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners. Keywords: Boston Naming Test, performance validity, normative data
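The z-score transformation mentioned in this abstract converts a raw score into standard-deviation units relative to a normative sample, which can then be mapped onto descriptive classification ranges. The normative scores and the range labels below are illustrative assumptions, not the published BNT-15 norms.

```python
# Hedged sketch of z-score-based clinical classification ranges.
# The normative sample and cutoff labels are hypothetical placeholders.
from statistics import mean, stdev

normative_scores = [15, 14, 15, 13, 15, 14, 15, 15, 12, 14]  # hypothetical norms
m, sd = mean(normative_scores), stdev(normative_scores)

def z_score(raw):
    # z = (raw - normative mean) / normative SD
    return (raw - m) / sd

def classify(raw):
    # Illustrative descriptor bands; real instruments publish their own.
    z = z_score(raw)
    if z >= 0:
        return "average or above"
    elif z >= -1:
        return "low average"
    elif z >= -2:
        return "below average"
    else:
        return "clinically significant deficit"
```

Under these toy norms, a perfect score of 15 falls in the top band while a score of 11 lands several standard deviations below the mean, mirroring the kind of interpretive gradient the abstract describes.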


One-Minute SVT? The V-5 Is A Stronger Predictor Of Symptom Exaggeration Than Self-Reported Trauma History

March 2022 · 284 Reads · 9 Citations

Journal of Forensic Psychology Research and Practice

This study examined the potential of the Five-Variable Psychiatric Screener (V-5) to serve as an embedded symptom validity test (SVT). In Study 1, 43 undergraduate students were randomly assigned to a control or an experimental malingering condition. In Study 2, 150 undergraduate students were recruited to examine the cognitive and emotional sequelae of self-reported trauma history. The classification accuracy of the V-5 was computed against the Inventory of Problems (IOP-29), a free-standing SVT. In Study 1, the V-5 was a poor predictor of experimental malingering status but produced a high overall correct classification rate against the IOP-29. In Study 2, the V-5 was a stronger predictor of the IOP-29 than self-reported trauma history. Results provide preliminary support for the utility of the V-5 as an embedded SVT. Given growing awareness of the need to determine the credibility of subjective symptom reports using objective empirical methods, combined with systemic pressure to abbreviate assessments, research on SVTs within rapid assessment instruments can provide practical psychometric solutions to this dilemma.


They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability

August 2021 · 717 Reads · 39 Citations

Australian Journal of Forensic Sciences

This study was designed to determine the clinical utility of embedded performance validity indicators (EVIs) in adults with intellectual disability (ID) during neuropsychological assessment. Based on previous research, unacceptably high (>16%) base rates of failure (BRFail) were predicted on EVIs based on the method of threshold, but not on EVIs based on alternative detection methods. A comprehensive battery of neuropsychological tests was administered to 23 adults with ID (M_age = 37.7 years, M_FSIQ = 64.9). BRFail was computed at two levels of cutoffs for 32 EVIs. Patients produced very high BRFail on 22 EVIs (18.2%–100%), indicating unacceptable levels of false positive errors. However, on the remaining ten EVIs, BRFail was <16%; moreover, six of these EVIs had a zero BRFail, indicating perfect specificity. Consistent with previous research, individuals with ID failed the majority of EVIs at high BRFail. However, on select EVIs based on recognition memory and unusual patterns of performance, they produced BRFail similar to those of cognitively higher functioning patients, suggesting that the high BRFail reported in the literature may reflect instrumentation artefacts. The implications of these findings for clinical and forensic assessment are discussed.
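The base rate of failure (BRFail) at the center of this abstract is simple arithmetic: the proportion of a sample scoring at or below an EVI cutoff. A minimal sketch, using hypothetical EVI scores rather than the study's data:

```python
# Hedged sketch: base rate of failure (BRFail) for an EVI cutoff.
# Scores and cutoff are hypothetical, for illustration only.

def base_rate_of_failure(scores, cutoff):
    """Fraction of the sample scoring at or below the failure cutoff."""
    return sum(s <= cutoff for s in scores) / len(scores)

evi_scores = [4, 6, 7, 3, 8, 5, 9, 2]  # hypothetical raw scores
br = base_rate_of_failure(evi_scores, cutoff=4)  # 3 of 8 fail -> 0.375
```

In a sample assumed to be responding credibly, BRFail is an estimate of the false positive rate, which is why the study flags EVIs with BRFail above 16% as unacceptable and treats a zero BRFail as perfect specificity.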


The emotion word fluency test as an embedded performance validity indicator – Alone and in a multivariate validity composite

August 2021 · 322 Reads · 30 Citations

Applied Neuropsychology Child

Objective: This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and to develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. Method: The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. Results: A demographically adjusted T-score of ≤31 on the FAS was specific (.88–.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90–.93) among students at .27–.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24–.45) and specificity (.87–.93). An EWFT raw score ≤5 was highly specific (.94–.97) but insensitive (.10–.18) to invalid performance. Failing multiple cutoffs improved specificity (.90–1.00) at variable sensitivity (.19–.45). Conclusions: Results help resolve the inconsistency in previous reports and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.



Boston Naming Test: Lose the Noose

April 2021 · 172 Reads · 7 Citations

Archives of Clinical Neuropsychology

Objective: Administering the noose item of the Boston Naming Test (BNT) has been questioned given the cultural, historical, and emotional salience of the noose in American culture. In response, some have modified the BNT by skipping/removing this item and awarding the point as if the examinee had responded correctly. It is unknown, however, whether modifying standardized administration and scoring in this manner affects clinical interpretation. In the present study, we examined the prevalence of noose item failure, whether demographic and clinical characteristics differed between those who responded correctly versus those who failed the item, and whether giving a point to those who failed affected clinical interpretation. Method: Participants included a mixed clinical sample of 762 adults, ages 18–88 years, seen for neuropsychological evaluation at one of five sites within the USA. Results: Those who failed the item (13.78%) were more likely to be female, non-White, and to have primary diagnoses of major neurocognitive disorder, epilepsy, or neurodevelopmental disorder. Noose item failure was associated with lower BNT total score, fewer years of education, and lower intellectual functioning, expressive vocabulary, and single-word reading. Giving a point to those who failed the item resulted in a descriptor category change for 17.1%, primarily among patients with poor overall BNT performance. Conclusions: Only a small percentage of patients fail the noose item, but adding a point for these patients has an impact on score interpretation. Factors associated with poorer overall performance on the BNT, rather than specific difficulty with the noose item, likely account for the findings.


Citations (20)


... In addition, considering that different authors have proposed varying IOP-M thresholds for determining performance invalidity (for example, Erdodi et al., 2023 suggested a slightly more liberal cutoff than Giromini et al., 2020a, b), our study also aimed to provide additional information regarding potentially optimal cut scores for the IOP-M. Moreover, given that severe or domain-specific cognitive impairment could increase the likelihood of failures on PVTs (Cutler et al., 2024; Erdodi, 2023; Glassmire et al., 2019; Messa et al., 2022; Tyson et al., 2023), we also sought to analyze the diagnostic accuracy of the IOP-M at different levels of cognitive impairment to generate a reasonable estimate of the false positive rate within the IOP-M. ...

Reference:

An Inventory of Problems (IOP) Study of Symptom and Performance Validity in a Sample of Driver’s License Renewal or Reinstatement Applicants
Logical memory, visual reproduction, and verbal paired associates are effective embedded validity indicators in patients with traumatic brain injury

... Thus, one of the aims of our study was to determine the rate of valid performance on the IOP-M among the aforementioned groups in this ecological setting. In addition, considering that different authors have proposed varying IOP-M thresholds for determining performance invalidity (for example, Erdodi et al., 2023 suggested a slightly more liberal cutoff than Giromini et al., 2020a, b), our study also aimed to provide additional information regarding potentially optimal cut scores for the IOP-M. Moreover, given that severe or domain-specific cognitive impairment could increase the likelihood of failures on PVTs (Cutler et al., 2024; Erdodi, 2023; Glassmire et al., 2019; Messa et al., 2022; Tyson et al., 2023), we also sought to analyze the diagnostic accuracy of the IOP-M at different levels of cognitive impairment to generate a reasonable estimate of the false positive rate within the IOP-M. ...

M is For Performance Validity: The IOP-M Provides a Cost-Effective Measure of the Credibility of Memory Deficits during Neuropsychological Evaluations

Journal of Forensic Psychology Research and Practice


They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability

Australian Journal of Forensic Sciences

... Prior classification systems have often included an "undetermined" category delimited by the authors. For example, Holcomb et al. (2023) rated patients using the WMT and WCT. If patients scored positive on both, they were considered a potential feigner, but if they scored positive on one and not the other, they were considered indeterminate. ...

Take Their Word for It: The Inventory of Problems Provides Valuable Information on Both Symptom and Performance Validity

... However, at Time 2, FR was a non-significant predictor of performance validity status. While the same cutoff (≤11) had perfect specificity, it had a trivial sensitivity (.11), correctly classifying only 68% of the sample. ...

Critical Item (CR) Analysis Expands the Classification Accuracy of Performance Validity Tests Based on the Forced Choice Paradigm—Replicating Previously Introduced CR Cutoffs Within the Word Choice Test

Neuropsychology

... A similar three-way classification system based on the 10-item version of the BNT as a proxy of English proficiency was previously used as a criterion grouping method by coworkers (2017a, 2017b). In the most recent normative study (Abeare et al., 2022), 89% of NSE students scored ≥13 on the BNT-15. Ali et al. (2022c) found that a BNT-15 score of ≤9 had perfect specificity to LEP status. ...

BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges

Cognitive and Behavioral Neurology

... Overall, passing or failing validity testing has been found to be associated with varying accuracy of symptom reporting in adults (e.g. Cutler et al., 2022; Keesler et al., 2017) and children (Kirkwood et al., 2014). Symptom reports by collaterals may also be inaccurate; informants report worse functioning in patients who fail PVTs when compared to patients who pass validity indicators (Webber et al., 2022). ...

One-Minute SVT? The V-5 Is A Stronger Predictor Of Symptom Exaggeration Than Self-Reported Trauma History

Journal of Forensic Psychology Research and Practice


The emotion word fluency test as an embedded performance validity indicator – Alone and in a multivariate validity composite

Applied Neuropsychology Child

... Notably, the black-and-white hand-drawn images (which themselves were initially created in the 1940s) are likely no longer familiar to contemporary young people and thus inappropriate for use (Baron, 2018). Similarly, some of the stimuli represent outdated or culturally inappropriate objects (Eloi et al., 2021), further emphasizing why the BNT may not be appropriate for use with pediatric populations. Finally, research in adult samples has demonstrated that cognitive processing speed, which may be slower in children and older adults, may mediate performance on traditional confrontation naming paradigms (Hamberger et al., 2018; Soble et al., 2016). ...

Corrigendum to: Boston Naming Test: Lose the Noose
  • Citing Article
  • April 2021

Archives of Clinical Neuropsychology

... Construct equivalence is also impacted by context and other variables irrelevant to what is being measured. For example, the Boston Naming Test features a noose item, which some clinicians refuse to administer given the cultural, historical, and emotional salience (Byrd et al., 2021;Eloi et al., 2021). The noose is highly significant in Black American culture and African American English dialect. ...

Boston Naming Test: Lose the Noose
  • Citing Article
  • April 2021

Archives of Clinical Neuropsychology