ABSTRACT: Objective To validate all diagnostic prediction models for ruling out pulmonary embolism that are easily applicable in primary care. Design Systematic review followed by independent external validation study to assess transportability of retrieved models to primary care medicine. Setting 300 general practices in the Netherlands. Participants Individual patient dataset of 598 patients with suspected acute pulmonary embolism in primary care. Main outcome measures Discriminative ability of all models retrieved by systematic literature search, assessed by calculation and comparison of C statistics. After stratification into groups with high and low probability of pulmonary embolism according to pre-specified model cut-offs combined with qualitative D-dimer test, sensitivity, specificity, efficiency (overall proportion of patients with low probability of pulmonary embolism), and failure rate (proportion of pulmonary embolism cases in the group of patients with low probability) were calculated for all models. Results Ten published prediction models for the diagnosis of pulmonary embolism were found. Five of these models could be validated in the primary care dataset: the original Wells, modified Wells, simplified Wells, revised Geneva, and simplified revised Geneva models. Discriminative ability was comparable for all models (range of C statistic 0.75-0.80). Sensitivity ranged from 88% (simplified revised Geneva) to 96% (simplified Wells) and specificity from 48% (revised Geneva) to 53% (simplified revised Geneva). Efficiency of all models was between 43% and 48%. Differences were observed between failure rates, especially between the simplified Wells and the simplified revised Geneva models (failure rates 1.2% (95% confidence interval 0.2% to 3.3%) and 3.1% (1.4% to 5.9%), respectively; absolute difference −1.98% (−3.33% to −0.74%)).
Irrespective of the diagnostic prediction model used, three patients were incorrectly classified as having low probability of pulmonary embolism; pulmonary embolism was diagnosed only after referral to secondary care. Conclusions Five diagnostic pulmonary embolism prediction models that are easily applicable in primary care were validated in this setting. Whereas efficiency was comparable for all rules, the Wells rules gave the best performance in terms of lower failure rates.
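The performance measures defined above all follow from a single 2×2 classification table. A minimal sketch of how they relate (the counts below are illustrative only, not taken from the study):

```python
def rule_out_metrics(tp, fp, fn, tn):
    """Diagnostic metrics for a rule-out strategy (score cut-off plus D-dimer).

    tp/fn: PE present, classified as high/low probability
    fp/tn: PE absent,  classified as high/low probability
    """
    total = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)     # PE cases flagged as high probability
    specificity = tn / (tn + fp)     # non-PE cases correctly ruled out
    efficiency = (fn + tn) / total   # overall proportion classified low probability
    failure_rate = fn / (fn + tn)    # missed PE among those ruled out
    return sensitivity, specificity, efficiency, failure_rate

# Illustrative counts only: a cohort of 598 with 3 missed PE cases
sens, spec, eff, fail = rule_out_metrics(tp=70, fp=278, fn=3, tn=247)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, "
      f"efficiency {eff:.1%}, failure rate {fail:.1%}")
```

Note that efficiency and failure rate share the same denominator (patients classified as low probability), which is why a model can be more efficient yet have a higher failure rate.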
ABSTRACT: Prediction models are developed to aid health care providers in estimating the probability that a specific outcome or disease is present (diagnostic prediction models) or will occur in the future (prognostic prediction models), to inform their decision making. Prognostic models here also include models to predict treatment outcomes or responses, often referred to as predictive models in the cancer literature. Clinical prediction models have become abundant. Pathology measurements or results are frequently included as predictors in such prediction models, certainly in the cancer domain. Only when full information on all aspects of a prediction modeling study is clearly reported can the risk of bias and potential usefulness of the prediction model be adequately assessed. Many reviews have illustrated that the quality of reports on the development, validation, and/or adjustment (updating) of prediction models is very poor. Hence, the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) initiative has developed a comprehensive and user-friendly checklist for the reporting of studies on both diagnostic and prognostic prediction models. The TRIPOD Statement intends to improve the transparency and completeness of reporting of studies that report solely on development, on both development and validation, or solely on validation (with or without updating) of diagnostic or prognostic, including predictive, models.
ABSTRACT: Measuring the incidence of healthcare-associated infections (HAI) is of increasing importance in current healthcare delivery systems. Administrative data algorithms, including (combinations of) diagnosis codes, are commonly used to determine the occurrence of HAI, either to support within-hospital surveillance programmes or as free-standing quality indicators. We conducted a systematic review evaluating the diagnostic accuracy of administrative data for the detection of HAI.
Systematic search of Medline, Embase, CINAHL and Cochrane for relevant studies (1995-2013). Methodological quality assessment was performed using QUADAS-2 criteria; diagnostic accuracy estimates were stratified by HAI type and key study characteristics.
57 studies were included, the majority aiming to detect surgical site or bloodstream infections. Study designs were very diverse regarding the specification of their administrative data algorithm (code selections, follow-up) and definitions of HAI presence. One-third of studies had important methodological limitations including differential or incomplete HAI ascertainment or lack of blinding of assessors. Observed sensitivity and positive predictive values of administrative data algorithms for HAI detection were very heterogeneous and generally modest at best, both for within-hospital algorithms and for formal quality indicators; accuracy was particularly poor for the identification of device-associated HAI such as central line associated bloodstream infections. The large heterogeneity in study designs across the included studies precluded formal calculation of summary diagnostic accuracy estimates in most instances.
Administrative data had limited and highly variable accuracy for the detection of HAI, and their judicious use for internal surveillance efforts and external quality assessment is recommended. If hospitals and policymakers choose to rely on administrative data for HAI surveillance, continued improvements to existing algorithms and their robust validation are imperative.
BMJ Open 08/2015; 5(8). DOI:10.1136/bmjopen-2015-008424 · 2.27 Impact Factor
ABSTRACT: Healthcare provision is increasingly focused on the prediction of patients’ individual risk for developing a particular health outcome in planning further tests and treatments. There has been a steady increase in the development and publication of prognostic models for various maternal and fetal outcomes in obstetrics. We undertook a systematic review to give an overview of the current status of available prognostic models in obstetrics in the context of their potential advantages and the process of developing and validating models. Important aspects to consider when assessing a prognostic model are discussed and recommendations on how to proceed on this within the obstetric domain are given.
American journal of obstetrics and gynecology 06/2015; DOI:10.1016/j.ajog.2015.06.013 · 4.70 Impact Factor
ABSTRACT: BACKGROUND Valid comparison between hospitals for benchmarking or pay-for-performance incentives requires accurate correction for underlying disease severity (case-mix). However, existing models are either very simplistic or require extensive manual data collection. OBJECTIVE To develop a disease severity prediction model based solely on data routinely available in electronic health records for risk-adjustment in mechanically ventilated patients. DESIGN Retrospective cohort study. PARTICIPANTS Mechanically ventilated patients from a single tertiary medical center (2006-2012). METHODS Predictors were extracted from electronic data repositories (demographic characteristics, laboratory tests, medications, microbiology results, procedure codes, and comorbidities) and assessed for feasibility and generalizability of data collection. Models for in-hospital mortality of increasing complexity were built using logistic regression. Estimated disease severity from these models was linked to rates of ventilator-associated events. RESULTS A total of 20,028 patients were initiated on mechanical ventilation, of whom 3,027 died in hospital. For models of incremental complexity, area under the receiver operating characteristic curve ranged from 0.83 to 0.88. A simple model including demographic characteristics, type of intensive care unit, time to intubation, blood culture sampling, 8 common laboratory tests, and surgical status achieved an area under the receiver operating characteristic curve of 0.87 (95% CI, 0.86-0.88) with adequate calibration. The estimated disease severity was associated with occurrence of ventilator-associated events. CONCLUSIONS Accurate estimation of disease severity in ventilated patients using electronic, routine care data was feasible using simple models. These estimates may be useful for risk-adjustment in ventilated patients. Additional research is necessary to validate and refine these models.
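The area under the receiver operating characteristic curve reported in these studies (the C statistic) has a simple rank interpretation: the probability that a randomly chosen patient who died received a higher predicted risk than a randomly chosen survivor. A minimal sketch using toy scores (not study data):

```python
def c_statistic(scores, labels):
    """C statistic (AUROC) by pairwise comparison: the fraction of
    (event, non-event) pairs in which the event has the higher
    predicted score; ties count as half a win.
    O(n_pos * n_neg), so only suitable for small examples."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two in-hospital deaths (label 1) and two survivors (label 0);
# one event is ranked below one non-event, so 3 of 4 pairs are concordant.
auc = c_statistic([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])
print(auc)  # 0.75
```

A value of 0.5 corresponds to a model no better than chance, and 1.0 to perfect discrimination, which is why the reported ranges (0.75-0.80, 0.83-0.88) indicate good discriminative ability.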
Infection Control and Hospital Epidemiology 04/2015; 36(07):1-9. DOI:10.1017/ice.2015.75 · 4.18 Impact Factor
ABSTRACT: Background General practitioners (GPs) can safely exclude pulmonary embolism (PE) using the Wells PE rule combined with D-dimer testing. Objective To compare the accuracy of a strategy using the Wells rule combined with either a qualitative point-of-care (POC) D-dimer test performed in primary care or a quantitative laboratory-based D-dimer test. Methods We used data from a prospective cohort study including 598 adults suspected of PE in primary care in the Netherlands. GPs scored the Wells rule and carried out a qualitative POC test. All patients were referred to hospital for reference testing. We obtained quantitative D-dimer test results as performed in hospital laboratories. The primary outcome was the prevalence of venous thromboembolism in low-risk patients. Results Prevalence of PE was 12.2%. POC D-dimer test results were available in 582 patients (97%). Quantitative test results were available in 401 patients (67%); we imputed results in the remaining 197 patients. With a negative strategy (Wells ≤4 points and a negative D-dimer test), the quantitative test missed 1 patient (0.4%) and the POC test missed 4 patients (1.5%) (p = 0.20). The POC test could exclude 23 more patients (4%) (p = 0.05). The sensitivity and specificity of the Wells rule were 94.5% and 51.0% combined with the POC test, and 98.6% and 47.2% combined with the quantitative test, respectively. Conclusions Combined with the Wells PE rule, both tests are safe for excluding PE. The quantitative test seemed to be safer than the POC test, although the difference was not statistically significant. The specificity of the POC test was higher, resulting in more patients in whom PE could be excluded.
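The Wells rule itself is a simple additive score; a sketch of the "PE unlikely" dichotomy used in this strategy, with the item weights and the ≤4 cut-off as commonly published for the Wells PE rule (assumed here, not quoted from this article):

```python
# Wells PE rule item weights as commonly published (an assumption for
# illustration, not taken verbatim from this article):
WELLS_WEIGHTS = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilisation_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "haemoptysis": 1.0,
    "malignancy": 1.0,
}

def pe_ruled_out(findings, d_dimer_negative):
    """PE 'unlikely' (Wells score <= 4) combined with a negative
    D-dimer test rules PE out; any other result warrants referral."""
    score = sum(WELLS_WEIGHTS[item] for item in findings)
    return score <= 4 and d_dimer_negative

# Example: tachycardia plus haemoptysis (score 2.5) with a negative D-dimer
print(pe_ruled_out({"heart_rate_over_100", "haemoptysis"}, True))  # True
```

The two-step design means the D-dimer test is only decisive in the "unlikely" group; patients scoring above the cut-off are referred regardless of the D-dimer result.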
Journal of Thrombosis and Haemostasis 04/2015; 13(6). DOI:10.1111/jth.12951 · 5.72 Impact Factor
ABSTRACT: To assess whether screening for COPD in community-dwelling frail older people with complaints of shortness of breath or reduced exercise capacity could be worthwhile. To this end, we determined medication use, the number of hospital admissions, and mortality among patients in whom screening led to a first diagnosis of COPD.
A panel of experts established the diagnosis on the basis of all screening data, including spirometry. Follow-up data were collected via the participating general practitioners.
Screening was performed in 386 older general practice patients with shortness of breath or reduced exercise tolerance. In 84 patients (21.8%), screening led to a previously unestablished diagnosis of COPD. Of these 84, 15 (17.9%) had started inhalation medication or adjusted their existing medication within six months of the diagnosis, and 27 (32.1%) had been admitted to hospital within twelve months. In the group in whom screening did not demonstrate COPD, the latter percentage was significantly lower (22.9%). Mortality was comparable in both groups.
By screening frail older people, many new cases of COPD can be detected. However, the screening has few consequences for subsequent treatment. A possible explanation is that patients who do not present their complaints to the general practitioner themselves are probably less motivated for treatment anyway.
Huisarts en wetenschap 04/2015; 58(5):242-244. DOI:10.1007/s12445-015-0131-4
ABSTRACT: OBJECTIVE Manual surveillance of healthcare-associated infections is cumbersome and vulnerable to subjective interpretation. Automated systems are under development to improve efficiency and reliability of surveillance, for example by selecting high-risk patients requiring manual chart review. In this study, we aimed to validate a previously developed multivariable prediction modeling approach for detecting drain-related meningitis (DRM) in neurosurgical patients and to assess its merits compared to conventional methods of automated surveillance. METHODS Prospective cohort study in 3 hospitals assessing the accuracy and efficiency of 2 automated surveillance methods for detecting DRM, the multivariable prediction model and a classification algorithm, using manual chart review as the reference standard. All 3 methods of surveillance were performed independently. Patients receiving cerebrospinal fluid drains were included (2012-2013), excluding children, patients who died within 24 hours, and patients with pre-existing meningitis. Data required by the automated surveillance methods were extracted from routine care clinical data warehouses. RESULTS In total, DRM occurred in 37 of 366 external cerebrospinal fluid drainage episodes (12.3/1000 drain days at risk). The multivariable prediction model had good discriminatory power (area under the ROC curve 0.91-1.00 by hospital), had adequate overall calibration, and could identify high-risk patients requiring manual confirmation with 97.3% sensitivity and 52.2% positive predictive value, decreasing the workload for manual surveillance by 81%. The multivariable approach was more efficient than classification algorithms in 2 of 3 hospitals. CONCLUSIONS Automated surveillance of DRM using a multivariable prediction model in multiple hospitals considerably reduced the burden for manual chart review at near-perfect sensitivity.
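The reported 81% workload reduction follows arithmetically from the cohort size, sensitivity, and positive predictive value; a back-of-the-envelope check in code:

```python
# Figures from the abstract: 37 DRM cases among 366 drainage episodes,
# model sensitivity 97.3% and positive predictive value 52.2%.
episodes, cases = 366, 37
sensitivity, ppv = 0.973, 0.522

true_positives = round(sensitivity * cases)   # DRM cases the model flags
flagged = true_positives / ppv                # episodes sent to manual review
workload_reduction = 1 - flagged / episodes   # chart reviews avoided

print(f"flagged ~{flagged:.0f} of {episodes} episodes; "
      f"workload reduced by {workload_reduction:.0%}")
```

About 69 of 366 episodes need manual confirmation, reproducing the 81% reduction quoted above.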
Infection Control and Hospital Epidemiology 01/2015; 36(1):65-75. DOI:10.1017/ice.2014.5 · 4.18 Impact Factor
ABSTRACT: Background Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can the risk of bias and potential usefulness of prediction models be adequately assessed. Materials and methods The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. Results The resulting TRIPOD Statement is a checklist of 22 items deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. Conclusions
To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).
European Journal of Clinical Investigation 01/2015; 32(2). DOI:10.1111/eci.12376 · 2.73 Impact Factor
ABSTRACT: The TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) Statement includes a 22-item checklist, which aims to improve the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD Statement is explained in detail and accompanied by published examples of good reporting. The document also provides a valuable reference of issues to consider when designing, conducting, and analyzing prediction model studies. To aid the editorial process and help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, it is recommended that authors include a completed checklist in their submission. The TRIPOD checklist can also be downloaded from www.tripod-statement.org.
Annals of internal medicine 01/2015; 162(1):W1-W73. DOI:10.7326/M14-0698 · 17.81 Impact Factor
ABSTRACT: Treatment of previously untreated patients (PUPs) with severe haemophilia A is complicated by the formation of inhibitors. Identifying PUPs at high risk is important, as it allows treatment to be altered with the intention of reducing the occurrence of inhibitors. An unselected multicentre cohort of 825 PUPs with severe haemophilia A (FVIII < 0.01 IU mL−1) was used. Patients were followed until 50 exposure days (EDs) or inhibitor development. All predictors in the existing prediction model, plus three new potential predictors, were studied using multivariable logistic regression. Model performance was quantified [area under the curve (AUC), calibration plot] and internal validation (bootstrapping) was performed. A nomogram for clinical application was developed. Of the 825 patients, 225 (28%) developed inhibitors. The predictors family history of inhibitors, F8 gene mutation, and an interaction variable of dose and number of EDs of intensive treatment were independently associated with inhibitor development. Age and reason for first treatment were not associated with inhibitor development. The AUC was 0.69 (95% CI 0.65–0.72) and calibration was good. An improved prediction model for inhibitor development and a nomogram for clinical use were developed in a cohort of 825 PUPs with severe haemophilia A. Clinical applicability was improved by combining dose and duration of intensive treatment, allowing the effects of treatment decisions on inhibitor risk to be assessed and treatment to be potentially modified.