... The results of blood culture for identification of pathogens were used as the gold standard in this study. The diagnostic performance of the PCR-MCA assay was determined and compared to the results of blood culture, using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), agreement rate, and kappa value, according to the Yerushalmy model (Niu et al., 2015). The sensitivity was calculated as the number of episodes in which both blood culture and PCR-MCA were positive divided by the sum of these episodes and the number of episodes in which blood culture was positive but PCR-MCA was negative. ...
Background: This study aimed to evaluate real-time polymerase chain reaction coupled with multiplex probe melting curve analysis (PCR-MCA) for pathogen detection in patients with suspected bloodstream infections (BSIs). Methods: A PCR-MCA assay was developed for simultaneous identification of 28 of the most common pathogens and two resistance genes within a few hours. The diagnostic performance of the PCR-MCA assay was determined and compared to the results of blood culture. Results: A total of 2,844 consecutive new episodes of suspected BSIs in 2,763 patients were included in this study. Blood culture identified pathogens in 269 episodes. For all the pathogens tested, the PCR-MCA assay exhibited a sensitivity of 88.8% (239/269), specificity of 100% (2,575/2,575), and agreement of 98.9% (2,814/2,844). For the pathogens on the PCR-MCA panel, the PCR-MCA results had a sensitivity of 99.2% (239/241), specificity of 100% (2,575/2,575), and agreement of 99.9% (2,814/2,816) compared with the results of blood culture. For the seven samples in which blood culture identified multiple pathogens simultaneously, the PCR-MCA assay confirmed the blood culture results, with 100% agreement for each. Conclusion: The PCR-MCA assay detected 88.8% of the pathogens in clinical practice, showing excellent diagnostic performance relative to blood culture for pathogen detection in patients with suspected BSIs, and would contribute to rapid diagnosis and appropriate antibiotic administration.
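The metrics in this abstract all reduce to a standard 2×2 table. A minimal Python sketch, using the counts reported above (239 true positives, 30 false negatives, 0 false positives, 2,575 true negatives); the kappa formula is the usual Cohen's kappa, not necessarily the exact computation used in the study:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy metrics plus Cohen's kappa."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn)
    agreement = (tp + tn) / n
    # Chance agreement from the marginal totals, as in Cohen's kappa.
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (agreement - p_e) / (1 - p_e)
    return sens, spec, ppv, npv, agreement, kappa

# Counts from the abstract: 239 culture-confirmed episodes detected,
# 30 missed (269 - 239), no false positives, 2,575 true negatives.
sens, spec, ppv, npv, agr, kappa = diagnostic_metrics(tp=239, fp=0, fn=30, tn=2575)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, agreement {agr:.1%}")
# → sensitivity 88.8%, specificity 100.0%, agreement 98.9%
```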
Rapid point-of-care (POC) syphilis tests based on simultaneous detection of treponemal and nontreponemal antibodies (dual POC tests) offer the opportunity to increase coverage of syphilis screening and treatment. This study aimed to conduct a multisite performance evaluation of a dual POC syphilis test in China.
Participants were recruited from patients at sexually transmitted infection clinics and high-risk groups in outreach settings at 6 sites in China. Three specimen types (whole blood [WB], fingerprick blood [FB], and blood plasma [BP]) were used to evaluate the sensitivity and specificity of the Dual Path Platform (DPP) Syphilis Screen and Confirm test, comparing its treponemal and nontreponemal lines against the Treponema pallidum particle agglutination (TPPA) assay and the toluidine red unheated serum test (TRUST) as reference standards.
A total of 3134 specimens (WB 1323, FB 488, and BP 1323) from 1323 individuals were collected. The sensitivities as compared with TPPA were 96.7% for WB, 96.4% for FB, and 94.6% for BP, and the specificities were 99.3%, 99.1%, and 99.6%, respectively. The sensitivities as compared with TRUST were 87.2% for WB, 85.8% for FB, and 88.4% for BP, and the specificities were 94.4%, 96.1%, and 95.0%, respectively. For specimens with a TRUST titer of 1:4 or higher, the sensitivities were 100.0% for WB, 97.8% for FB, and 99.6% for BP.
The DPP test showed good sensitivity and specificity in detecting treponemal and nontreponemal antibodies in all 3 specimen types. This assay may be considered as an alternative for the diagnosis of syphilis, particularly in resource-limited areas.
Background and Purpose—Guidelines recommend screening stroke-survivors for cognitive impairments. We sought to collate published data on test accuracy of cognitive screening tools.
Methods—The index test was any direct cognitive screening assessment compared against a reference standard diagnosis of (undifferentiated) multidomain cognitive impairment/dementia. We used a sensitive search statement to search multiple, cross-disciplinary databases from inception to January 2014. Titles, abstracts, and articles were screened by independent researchers. We described risk of bias using the Quality Assessment of Diagnostic Accuracy Studies tool and reporting quality using Standards for Reporting of Diagnostic Accuracy guidance. Where data allowed, we pooled test accuracy using bivariate methods.
Results—From 19 182 titles, we reviewed 241 articles, 35 suitable for inclusion. There was substantial heterogeneity: 25 differing screening tests; differing stroke settings (acute stroke, n=11 articles); and differing reference standards (neuropsychological battery, n=21 articles). One article was graded low risk of bias; common issues were case–control methodology (n=7 articles) and missing data (n=22). We pooled data for 4 tests at various screen-positive thresholds: Addenbrooke’s Cognitive Examination-Revised (<88/100): sensitivity 0.96, specificity 0.70 (2 studies); Mini-Mental State Examination (<27/30): sensitivity 0.71, specificity 0.85 (12 studies); Montreal Cognitive Assessment (MoCA) (<26/30): sensitivity 0.95, specificity 0.45 (4 studies); MoCA (<22/30): sensitivity 0.84, specificity 0.78 (6 studies); Rotterdam-CAMCOG (<33/49): sensitivity 0.57, specificity 0.92 (2 studies).
Conclusions—Commonly used cognitive screening tools have similar accuracy for detection of dementia/multidomain impairment, with no clearly superior test and no evidence that screening tools with longer administration times perform better. MoCA at the usual threshold offers a short assessment time with high sensitivity, but at the cost of specificity; adapted cutoffs have improved specificity without sacrificing sensitivity. Our results must be interpreted in the context of modest study numbers, heterogeneity, and potential bias.
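One crude way to compare the pooled estimates above on a single scale is Youden's J (sensitivity + specificity − 1). The sketch below uses only the values reported in this abstract and is illustrative, not part of the original bivariate analysis:

```python
# Pooled (sensitivity, specificity) pairs reported in the abstract.
pooled = {
    "ACE-R <88":    (0.96, 0.70),
    "MMSE <27":     (0.71, 0.85),
    "MoCA <26":     (0.95, 0.45),
    "MoCA <22":     (0.84, 0.78),
    "R-CAMCOG <33": (0.57, 0.92),
}

# Youden's J = sensitivity + specificity - 1 summarises each trade-off
# as a single number (1 = perfect, 0 = uninformative).
for test, (sens, spec) in sorted(pooled.items(),
                                 key=lambda kv: -(kv[1][0] + kv[1][1])):
    print(f"{test:13s} J = {sens + spec - 1:.2f}")
```

Note that the adapted MoCA cutoff (<22) scores higher on this summary than the usual one (<26), consistent with the conclusion that adapted cutoffs improve the overall trade-off.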
Algorithms for the diagnosis of syphilis continue to be a source of great controversy, and numerous test interpretations have perplexed many clinicians.
We conducted a cross-sectional study of 24 124 subjects to analyze 3 syphilis testing algorithms: traditional algorithm, reverse algorithm, and the European Centre for Disease Prevention and Control (ECDC) algorithm. Every serum sample was simultaneously evaluated using the rapid plasma reagin, Treponema pallidum particle agglutination, and chemiluminescence immunoassay tests. With the results of clinical diagnoses of syphilis as a gold standard, we evaluated the diagnostic accuracy of the 3 syphilis testing algorithms. The κ coefficient was used to compare the concordance between the reverse algorithm and the ECDC algorithm.
Overall, 2749 patients in our cohort were diagnosed with syphilis. The traditional algorithm had the highest negative likelihood ratio (0.24), a missed diagnosis rate of 24.2%, and only 75.81% sensitivity. However, both the reverse and ECDC algorithms had higher diagnostic efficacy than the traditional algorithm. Their sensitivity, specificity, and accuracy were 99.38%-99.85%, 99.98%-100.00%, and 99.93%-99.96%, respectively. Moreover, the overall percentage of agreement and κ value between the reverse and the ECDC algorithms were 99.9% and 0.996, respectively.
Our research supported use of the ECDC algorithm, in which syphilis screening begins with a treponemal immunoassay followed by a second, different treponemal assay as a confirmatory test in high-prevalence populations. In addition, our results indicated that a nontreponemal assay is unnecessary for syphilis diagnosis but can be recommended for assessing serological activity and the effect of syphilis treatment.
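The negative likelihood ratio reported for the traditional algorithm follows directly from sensitivity and specificity (LR− = (1 − sensitivity)/specificity). A small sketch; the traditional algorithm's specificity is not stated in the abstract, so ~100% is assumed here purely for illustration:

```python
def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec
    return lr_pos, lr_neg

# Traditional algorithm: 75.81% sensitivity; its specificity is not reported
# in the abstract, so ~100% is assumed here for illustration.
_, lr_neg = likelihood_ratios(sens=0.7581, spec=1.0)
print(f"negative likelihood ratio ≈ {lr_neg:.2f}")  # → 0.24
```

With the same formula, the reverse/ECDC algorithms (sensitivity ≥99.38%) give an LR− two orders of magnitude smaller, which is what "higher diagnostic efficacy" means in ruling out disease.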
This is the first of a series of five articles.
Development and introduction of new diagnostic techniques have greatly accelerated over the past decades. The evaluation of diagnostic techniques, however, is less advanced than that of treatments. Unlike with drugs, there are generally no formal requirements for adoption of diagnostic tests in routine care. In spite of important contributions, 1 2 the methodology of diagnostic research is poorly defined compared with study designs on treatment effectiveness, or on aetiology, so it is not surprising that methodological flaws are common in diagnostic studies.3–5 Furthermore, research funds rarely cover diagnostic research starting from symptoms or tests.
Since quality of the diagnostic process largely determines quality of care, overcoming deficiencies in standards, methodology, and funding deserves high priority. This article summarises objectives of diagnostic testing and research, methodological challenges, and options for design of studies.
#### Summary points
Development of diagnostic techniques has greatly accelerated but the methodology of diagnostic research lags far behind that for evaluating treatments
Objectives of diagnostic investigations include detection or exclusion of disease; contributing to management; assessment of prognosis; monitoring clinical course; and measurement of general health or fitness
Methodological challenges include the “gold standard” problem; spectrum and selection biases; “soft” measures (subjective phenomena); observer variability and bias; complex relations; clinical impact; sample size; and rapid progress of knowledge
Diagnostic investigations collect information to clarify patients' health status, using personal characteristics, symptoms, signs, history, physical examination, laboratory tests, and additional facilities. Objectives include the following.
To estimate HIV prevalence, annual HIV incidence density, and factors associated with HIV infection among young MSM in the United States.
The 2008 National HIV Behavioral Surveillance System (NHBS), a cross-sectional survey conducted in 21 US cities.
NHBS respondents included in the analysis were MSM aged 18-24 with a valid HIV test who reported at least one male sex partner in the past year. We calculated HIV prevalence and estimated annual incidence density (number of HIV infections/total number of person-years at risk). Generalized estimating equations were used to determine factors associated with testing positive for HIV.
Of 1889 young MSM, 198 (10%) had a positive HIV test; of these, 136 (69%) did not report previously testing HIV positive when interviewed. Estimated annual HIV incidence density was 2.9%; incidence was highest for blacks. Among young MSM who did not report being HIV infected, factors associated with testing HIV positive included black race; less than high school education; using both alcohol and drugs before or during last sex; having an HIV test more than 12 months ago; and reporting a visit to a medical provider in the past year.
HIV prevalence and estimated incidence density for young MSM were high. Individual risk behaviors did not fully explain HIV risk, emphasizing the need to address sociodemographic and structural-level factors in public health interventions targeted toward young MSM.
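Prevalence and incidence density are simple ratios. The sketch below computes prevalence from the counts in the abstract; the person-year figures are purely hypothetical, since the abstract does not report them:

```python
def prevalence(cases, tested):
    """Fraction of tested individuals who are positive."""
    return cases / tested

def incidence_density(new_infections, person_years):
    """Annual incidence density: new infections per person-year at risk."""
    return new_infections / person_years

# Prevalence uses counts from the abstract: 198 positives among 1,889 MSM.
print(f"prevalence = {prevalence(198, 1889):.0%}")  # → 10%
# Person-years are not reported in the abstract; the figures below are
# purely hypothetical, to show the shape of the calculation.
print(f"incidence density = {incidence_density(29, 1000):.1%} per year")
```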
To compare the diagnostic sensitivity and specificity of seven Cryptosporidium diagnostic assays used in the UK, results from 259 stool samples from patients with acute gastrointestinal symptoms were compared against a nominated gold standard (real-time PCR and oocyst detection). Of the 152 'true positives', 80 were Cryptosporidium hominis, 68 Cryptosporidium parvum, two Cryptosporidium felis, one Cryptosporidium ubiquitum and one Cryptosporidium meleagridis. The Cryptosporidium spp. diagnostic sensitivities of three Cryptosporidium and Giardia combination enzyme immunoassays (EIA) coupled with confirmation of positive reactions were 91.4-93.4%, whilst the sensitivity of auramine phenol microscopy was 92.1% and that of immunofluorescence microscopy (IFM) was 97.4%, all with overlapping 95% confidence intervals. However, IFM was significantly more sensitive (P = 0.01, paired test of proportions). The sensitivity of modified Ziehl-Neelsen microscopy was 75.4%, significantly lower than those for the other tests investigated, including an immunochromatographic lateral flow assay (ICLF) (84.9%) (P = 0.0016). Specificities were 100% when the ICLF and EIA test algorithms included confirmation of positive reactions; however, four positive EIA reactions were not confirmed for either parasite. There was no significant difference in the detection of C. parvum and C. hominis by each assay, but the detection of other Cryptosporidium spp. requires further investigation, as the numbers of samples were small. EIAs may be considered for diagnostic testing, subject to local validation, and diagnostic algorithms must include confirmation of positive reactions.
Technical and methodological factors might affect the reported accuracies of diagnostic tests. To assess their influence on the accuracy of exercise thallium scintigraphy, the medical literature (1977 to 1986) was non-selectively searched and meta-analysis was applied to the 56 publications thus retrieved. These were analyzed for year of publication, sex and mean age of patients, percentage of patients with angina pectoris, percentage of patients with prior myocardial infarction, percentage of patients taking beta-blocking medications, and for angiographic referral (workup) bias, blinding of tests, and technical factors. The percentage of patients with myocardial infarction had the highest correlation with sensitivity (0.45, p = 0.0007). Only the inclusion of subjects with prior infarction and the percentage of men in the study group were independently and significantly (p < 0.05) related to test sensitivity. Both the presence of workup bias and publication year adversely affected specificity (p < 0.05). Of these two factors, publication year had the strongest association by stepwise linear regression. This analysis suggests that the reported sensitivity of thallium scintigraphy is higher and the specificity lower than that expected in clinical practice because of the presence of workup bias and the inappropriate inclusion of post-infarct patients.
To study current diagnostic test evaluation, 129 recent articles were assessed against several well-known methodological criteria. Only 68% employed a well-defined "gold standard." Test interpretation was clearly described in only 68% and was stated to be "blind" in only 40%. Approximately 20% used the terms sensitivity and specificity incorrectly. Predictive values were considered in only 31% and the influence of disease prevalence and study setting was considered in only 19%. Overall, 74% failed to demonstrate more than four of seven important characteristics and there was an increased proportion of high specificities reported in this group. Articles assessing new tests reported high sensitivities and specificities significantly more often than articles assessing existing tests. These results indicate a clear need for greater attention to accepted methodological standards on the part of researchers, reviewers, and editors.
The development of genomics-based technologies is demonstrating that many common diseases are heterogeneous collections of molecularly distinct entities. Molecularly targeted therapeutics are often effective only for some subsets of patients with a conventionally defined disease. We consider the design of phase III randomized clinical trials for the evaluation of a molecularly targeted treatment when there is an assay predictive of which patients will be more responsive to the experimental treatment than to the control regimen. We compare the conventional randomized clinical trial design to a design based on randomizing only patients predicted to preferentially benefit from the new treatment. Trial designs are compared based on the required number of randomized patients and the expected number of patients screened for randomization eligibility. Relative efficiency depends upon the distribution of treatment effect across patient subsets, the prevalence of the subset of patients who respond preferentially to the experimental treatment, and assay performance.
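The trade-off described in this abstract can be illustrated with a standard normal-approximation sample-size formula for a two-arm comparison of means. The effect size `delta` and the prevalence `p` of assay-positive patients below are hypothetical, and the sketch assumes, for illustration, no benefit at all in assay-negative patients (so the untargeted trial sees a diluted effect of p·delta):

```python
from math import ceil
from statistics import NormalDist

def per_arm_n(delta, sigma=1.0, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-arm comparison of means (normal approx.)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * (z * sigma / delta) ** 2)

# Hypothetical figures: effect of 0.5 SD in the assay-positive subset,
# which makes up a fraction p = 0.25 of all patients; no effect elsewhere.
delta, p = 0.5, 0.25

targeted = per_arm_n(delta)        # randomize assay-positive patients only
untargeted = per_arm_n(p * delta)  # effect diluted across all patients
screened = ceil(targeted / p)      # patients screened per arm to fill the targeted trial

print(f"targeted: {targeted}/arm (screening {screened}), untargeted: {untargeted}/arm")
```

The targeted design randomizes roughly p² times as many patients as the conventional design under this extreme assumption, at the cost of screening 1/p patients per enrollee, which is exactly the efficiency comparison the abstract describes.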