Screening for cognitive deficits after stroke: a comparison of three screening tools.

Department of Geriatric Medicine, Ullevaal University Hospital, Oslo, Norway.
Clinical Rehabilitation (Impact Factor: 2.18). 01/2009; 22(12):1095-104. DOI: 10.1177/0269215508094711
Source: PubMed

ABSTRACT Objective: To assess the concurrent validity of three screening tests for focal cognitive impairments after stroke.
Design: Comparison of results from the screening tests with those from a more comprehensive neuropsychological battery.
Setting: Stroke rehabilitation wards of a general hospital and a rehabilitation hospital.
Subjects: Forty-nine stroke patients (aged 25-91 years, 35% women).
Screening tests were the Cognistat, the Screening Instrument for Neuropsychological Impairments in Stroke (SINS) and the Clock Drawing Test. Health professionals, blinded to the results of the reference method, performed the screening. The reference method was a neuropsychological assessment based on the Norwegian Basic Neuropsychological Assessment, classifying the patients as 'impaired' or 'not impaired' within the following cognitive domains: language, visuospatial function, attention and neglect, apraxia, speed in the unaffected arm, and memory.
The best sensitivity (95% confidence interval) was achieved for language problems by Cognistat naming (80%, 44-98); for visuospatial dysfunction, attention deficits and reduced speed by SINS visuocognitive (82%, 60-95; 72%, 39-94; and 78%, 56-93, respectively); and for memory problems by Cognistat memory (69%, 52-87). The data were insufficient to assess any subtest for apraxia. Sensitivity in detecting deficits in any domain was 82% (71-94) for the Cognistat composite score, 71% (57-85) for the SINS composite score, and 63% (49-78) for the most sensitive score of the Clock Drawing Test.
The Cognistat and the SINS may be used as screening instruments for cognitive deficits after stroke, but cannot replace a neuropsychological assessment. The Clock Drawing Test added little to the detection of cognitive deficits.
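Sensitivity figures like those above are simple proportions (true positives over all truly impaired patients) reported with a binomial confidence interval. As an illustrative sketch only (the paper does not state which CI method it used; the Wilson score interval and the counts below are assumptions), the calculation looks like:

```python
import math

def sensitivity_ci(true_pos: int, false_neg: int, z: float = 1.96):
    """Sensitivity = TP / (TP + FN), with a Wilson score 95% CI.

    The Wilson interval is an illustrative choice; the paper does not
    specify its CI method.
    """
    n = true_pos + false_neg          # number of truly impaired patients
    p = true_pos / n                  # point estimate of sensitivity
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical counts: 8 of 10 impaired patients flagged by the screen
sens, lo, hi = sensitivity_ci(8, 2)
print(f"sensitivity {sens:.0%}, 95% CI {lo:.0%}-{hi:.0%}")
```

The wide intervals in the abstract (e.g. 80%, 44-98) reflect the small per-domain samples in a 49-patient study.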


Available from: Anne-Kristine Schanke, Apr 22, 2014
    ABSTRACT: Objective: To systematically review the psychometric properties and clinical utility of cognitive screening tools post-stroke. Data sources: EMBASE, CINAHL, MEDLINE, PsycINFO. Study selection: Studies testing the accuracy of screening tools for cognitive impairment after stroke. Data extraction: Data regarding the participants, selection criteria, criterion/reference measure, cut-off score, sensitivity, specificity and positive and negative predictive values for the selected tools were extracted. Tools with sensitivity ≥ 80% and specificity ≥ 60% were selected. Clinical utility was assessed using a previously validated tool and those scoring <6 were excluded. Data synthesis: Twenty-one papers regarding 12 screening tools were selected. Only the Montreal Cognitive Assessment (MoCA) and Mini Mental State Examination (MMSE) met all psychometric and clinical utility criteria for any level of cognitive impairment. However, the MMSE is most accurate as a screen for dementia (cut-off score 23/24) and should only be used for this purpose. In addition, the following can be used to detect:
    • Any impairment: Addenbrooke's Cognitive Examination-Revised (ACE-R), Barrow Neurological Institute Screen for Higher Cerebral Functions (BNIS) and Cognistat.
    • Multiple-domain impairments: ACE-R, Telephone-MoCA or modified Telephone Interview for Cognitive Status (TICS).
    • Dementia: TICS; Cambridge Cognitive Examination; Rotterdam-Cambridge Cognitive Examination; Informant Questionnaire for Cognitive Decline in the Elderly (IQCODE) and short-IQCODE. The IQCODE and short-IQCODE are useful when the patient is unable to respond and an informant's view is required.
    Conclusion: The MoCA is the most valid and clinically feasible screening tool to identify stroke survivors with a wide range of cognitive impairments who warrant further assessment.
    Journal of Rehabilitation Medicine 01/2015; 47(3). DOI:10.2340/16501977-1930 · 1.90 Impact Factor
    ABSTRACT: Background and Purpose: Guidelines recommend screening stroke survivors for cognitive impairments. We sought to collate published data on the test accuracy of cognitive screening tools. Methods: The index test was any direct cognitive screening assessment compared against a reference-standard diagnosis of (undifferentiated) multidomain cognitive impairment/dementia. We used a sensitive search statement to search multiple cross-disciplinary databases from inception to January 2014. Titles, abstracts, and articles were screened by independent researchers. We described risk of bias using the Quality Assessment of Diagnostic Accuracy Studies tool and reporting quality using the Standards for Reporting of Diagnostic Accuracy guidance. Where data allowed, we pooled test accuracy using bivariate methods. Results: From 19 182 titles, we reviewed 241 articles, 35 of which were suitable for inclusion. There was substantial heterogeneity: 25 differing screening tests, differing stroke settings (acute stroke, n=11 articles), and differing reference standards (neuropsychological battery, n=21 articles). One article was graded low risk of bias; common issues were case-control methodology (n=7 articles) and missing data (n=22). We pooled data for 4 tests at various screen-positive thresholds: Addenbrooke's Cognitive Examination-Revised (<88/100): sensitivity 0.96, specificity 0.70 (2 studies); Mini Mental State Examination (<27/30): sensitivity 0.71, specificity 0.85 (12 studies); Montreal Cognitive Assessment (MoCA) (<26/30): sensitivity 0.95, specificity 0.45 (4 studies); MoCA (<22/30): sensitivity 0.84, specificity 0.78 (6 studies); Rotterdam-CAMCOG (<33/49): sensitivity 0.57, specificity 0.92 (2 studies). Conclusions: Commonly used cognitive screening tools have similar accuracy for the detection of dementia/multidomain impairment, with no clearly superior test and no evidence that screening tools with longer administration times perform better. The MoCA at its usual threshold offers a short assessment time with high sensitivity, but at the cost of specificity; adapted cut-offs improved specificity without sacrificing sensitivity. Our results must be interpreted in the context of modest study numbers, heterogeneity and potential bias.
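The screen-positive thresholds pooled above (e.g. MoCA < 26/30) turn a continuous test score into a binary classification, from which sensitivity and specificity follow directly. A minimal sketch, using hypothetical scores and reference-standard labels (all values below are invented for illustration):

```python
def screen_accuracy(scores, impaired, cutoff):
    """Classify score < cutoff as screen-positive (e.g. MoCA < 26/30)
    and tally accuracy against the reference-standard diagnosis.

    scores   -- one test score per patient
    impaired -- reference-standard label per patient (True = impaired)
    cutoff   -- screen-positive threshold (scores below it are positive)
    """
    tp = fn = tn = fp = 0
    for score, ref in zip(scores, impaired):
        positive = score < cutoff
        if ref:
            tp += positive      # impaired and flagged
            fn += not positive  # impaired but missed
        else:
            fp += positive      # unimpaired but flagged
            tn += not positive  # unimpaired and cleared
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical MoCA scores paired with reference-standard labels
scores   = [24, 25, 21, 29, 25, 30, 18, 26]
impaired = [True, False, True, False, True, False, True, False]
print(screen_accuracy(scores, impaired, 26))  # -> (1.0, 0.75)
```

Lowering the cut-off (e.g. MoCA < 22/30 in the pooled data) trades sensitivity for specificity, which is exactly the pattern the review reports.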
    Stroke 09/2014; [Epub ahead of print]. DOI:10.1161/STROKEAHA.114.005842 · 6.02 Impact Factor
    ABSTRACT: We report a lesion–symptom mapping analysis of visual speech production deficits in a large group (n = 280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a ‘shared’ component that loaded across all the visual speech production tasks and a ‘unique’ component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions were linked to both poor object naming and impairment in general visual–speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with models proposing that object naming relies on a left-lateralised language-dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and more general difficulties in language processing.
    NeuroImage: Clinical 01/2015; 214. DOI:10.1016/j.nicl.2015.01.015