Karel G M Moons

University Medical Center Utrecht, Utrecht, Utrecht, Netherlands

Publications (361) · 1,924.02 Total Impact Points

  • Karel G M Moons, Ewoud Schuit
    ABSTRACT: Right ventricular pacing (RVP) is associated with an increased risk of heart failure (HF) events. However, the extent and shape of this association have hardly been assessed. We assessed whether the undesired effects of RVP are confirmed in an unselected population of first bradycardia pacemaker recipients. Furthermore, we studied the shape of the association between RVP and HF death and cardiac death. Cumulative percentage RVP (%RVP) was measured in 1395 patients. Using multivariable Cox regression analysis with %RVP as a time-dependent covariate, we evaluated the association between %RVP and HF death and cardiac death, both unadjusted and adjusted for confounders, including age, gender, pacemaker indication, cardiac disease, HF at baseline, diabetes, hypertension, atrioventricular synchrony, use of beta-blocking drugs, anti-arrhythmic medication, HF medication, and prior atrial fibrillation/flutter. Non-linear associations were evaluated with restricted cubic splines. During a mean follow-up of 5.8 (SD 1.1) years, 104 HF deaths and 144 cardiac deaths were observed. %RVP was significantly associated with HF death and cardiac death in both unadjusted (both p<0.001) and adjusted analyses (p=0.046 and p=0.009, respectively). Our results show a linear association between %RVP and HF death and cardiac death: a constant 8% increase in the risk of HF death per 10% increase in %RVP. A model incorporating various non-linear transformations of %RVP using restricted cubic splines showed no improved fit over the linear association. This long-term, prospective study observed a significant, yet linear, association between %RVP and the risk of HF death and/or cardiac death in unselected bradycardia pacing recipients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
    International journal of cardiology 03/2015; 185:95-100. DOI:10.1016/j.ijcard.2015.03.053 · 6.18 Impact Factor
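The linear association reported above implies simple multiplicative hazard arithmetic. A minimal sketch, assuming the constant 8% increase in HF-death risk per 10% increase in %RVP from the abstract (illustrative only, not the fitted model):

```python
# Hazard-ratio arithmetic implied by the abstract above: risk of HF death
# rises by a constant 8% per 10-percentage-point increase in cumulative %RVP.
def hazard_ratio(rvp_delta_pct, hr_per_10pct=1.08):
    """Hazard ratio for a given difference in cumulative %RVP."""
    return hr_per_10pct ** (rvp_delta_pct / 10.0)

# A patient paced 50% of the time vs. one paced 0% of the time:
print(round(hazard_ratio(50), 2))  # → 1.47
```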
    ABSTRACT: The study aims to investigate the influence of the amount of clustering (intraclass correlation [ICC]=0%, 5%, or 20%), the number of events per variable or candidate predictor (EPV=5, 10, 20, or 50), and backward variable selection on the performance of prediction models.
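Events per variable (EPV), the design factor varied in the simulation study above, is simply the ratio of outcome events to candidate predictors examined. A trivial sketch with illustrative values (not the study's data):

```python
# EPV: a rule-of-thumb measure of effective sample size for regression
# modeling; the study above varied it over 5, 10, 20, and 50.
def events_per_variable(n_events, n_candidate_predictors):
    """Number of outcome events per candidate predictor examined."""
    return n_events / n_candidate_predictors

# e.g. 100 events and 20 candidate predictors corresponds to the EPV=5 scenario
print(events_per_variable(100, 20))  # → 5.0
```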
    ABSTRACT: Individual participant data meta-analyses (IPD-MA) are increasingly used for developing and validating multivariable (diagnostic or prognostic) risk prediction models. Unfortunately, some predictors or even outcomes may not have been measured in each study and are thus systematically missing in some individual studies of the IPD-MA. As a consequence, it is no longer possible to evaluate between-study heterogeneity and to estimate study-specific predictor effects, or to include all individual studies, which severely hampers the development and validation of prediction models. Here, we describe a novel approach for imputing systematically missing data and adopt a generalized linear mixed model to allow for between-study heterogeneity. This approach can be viewed as an extension of Resche-Rigon's method (Stat Med 2013), relaxing their assumptions regarding variance components and allowing imputation of linear and nonlinear predictors. We illustrate our approach using a case study with IPD-MA of 13 studies to develop and validate a diagnostic prediction model for the presence of deep venous thrombosis. We compare the results after applying four methods for dealing with systematically missing predictors in one or more individual studies: complete case analysis where studies with systematically missing predictors are removed, traditional multiple imputation ignoring heterogeneity across studies, stratified multiple imputation accounting for heterogeneity in predictor prevalence, and multilevel multiple imputation (MLMI) fully accounting for between-study heterogeneity. We conclude that MLMI may substantially improve the estimation of between-study heterogeneity parameters and allow for imputation of systematically missing predictors in IPD-MA aimed at the development and validation of prediction models. Copyright © 2015 John Wiley & Sons, Ltd.
    Statistics in Medicine 02/2015; DOI:10.1002/sim.6451 · 2.04 Impact Factor
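The contrast between the four strategies compared above can be illustrated with a toy example (hypothetical data, not the authors' MLMI implementation): when a predictor is systematically missing in one study, complete-case analysis drops that study, pooled-mean imputation ignores between-study heterogeneity, and stratified imputation has no within-study data to draw on — the gap a multilevel model fills by borrowing strength across studies.

```python
# Toy illustration of a systematically missing predictor in an IPD-MA.
# Hypothetical values; the paper fits a generalized linear mixed model instead.
studies = {
    "A": [1.0, 1.2, 0.8, 1.0],   # predictor observed
    "B": [2.0, 2.4],             # predictor observed
    "C": [None, None, None],     # systematically missing in study C
}

# Complete-case analysis: study C is removed entirely.
complete_case = {k: v for k, v in studies.items() if None not in v}

# Traditional imputation: one pooled mean, ignoring between-study heterogeneity
# (dominated by the larger study A).
observed = [x for v in complete_case.values() for x in v]
pooled_mean = sum(observed) / len(observed)

# Stratified imputation fails for C (no observed values within that study);
# a multilevel model would instead shrink toward the distribution of study
# means, crudely approximated here by the mean of the observed study means.
study_means = [sum(v) / len(v) for v in complete_case.values()]
shrunken_mean = sum(study_means) / len(study_means)

print(round(pooled_mean, 2), round(shrunken_mean, 2))  # → 1.4 1.6
```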
    ABSTRACT: The TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) Statement includes a 22-item checklist, which aims to improve the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD Statement is explained in detail and accompanied by published examples of good reporting. The document also provides a valuable reference of issues to consider when designing, conducting, and analyzing prediction model studies. To aid the editorial process and help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, it is recommended that authors include a completed checklist in their submission. The TRIPOD checklist can also be downloaded from www.tripod-statement.org.
    Annals of internal medicine 01/2015; 162(1):W1-W73. DOI:10.7326/M14-0698 · 16.10 Impact Factor
    ABSTRACT: Background: Prediction models are developed to aid healthcare providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision-making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. Materials and methods: The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) initiative developed a set of recommendations for the reporting of studies developing, validating or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, healthcare professionals and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. Results: The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. Conclusions: To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).
    European Journal of Clinical Investigation 01/2015; 32(2). DOI:10.1111/eci.12376 · 3.37 Impact Factor
    ABSTRACT: OBJECTIVE Manual surveillance of healthcare-associated infections is cumbersome and vulnerable to subjective interpretation. Automated systems are under development to improve efficiency and reliability of surveillance, for example by selecting high-risk patients requiring manual chart review. In this study, we aimed to validate a previously developed multivariable prediction modeling approach for detecting drain-related meningitis (DRM) in neurosurgical patients and to assess its merits compared to conventional methods of automated surveillance. METHODS Prospective cohort study in 3 hospitals assessing the accuracy and efficiency of 2 automated surveillance methods for detecting DRM, the multivariable prediction model and a classification algorithm, using manual chart review as the reference standard. All 3 methods of surveillance were performed independently. Patients receiving cerebrospinal fluid drains were included (2012-2013), except children, and patients deceased within 24 hours or with pre-existing meningitis. Data required by automated surveillance methods were extracted from routine care clinical data warehouses. RESULTS In total, DRM occurred in 37 of 366 external cerebrospinal fluid drainage episodes (12.3/1000 drain days at risk). The multivariable prediction model had good discriminatory power (area under the ROC curve 0.91-1.00 by hospital), had adequate overall calibration, and could identify high-risk patients requiring manual confirmation with 97.3% sensitivity and 52.2% positive predictive value, decreasing the workload for manual surveillance by 81%. The multivariable approach was more efficient than classification algorithms in 2 of 3 hospitals. CONCLUSIONS Automated surveillance of DRM using a multivariable prediction model in multiple hospitals considerably reduced the burden for manual chart review at near-perfect sensitivity.
    Infection Control and Hospital Epidemiology 01/2015; 36(1):65-75. DOI:10.1017/ice.2014.5 · 4.02 Impact Factor
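The operating characteristics reported above can be reproduced from counts. The counts below are back-calculated from the published percentages (37 cases, one missed, roughly 69 flagged episodes), so treat them as an approximate reconstruction:

```python
# Approximate counts reconstructed from the DRM surveillance abstract:
# 37 DRM cases among 366 drainage episodes; the model flagged ~69 episodes
# for manual confirmation, 36 of which were true DRM cases (one missed).
episodes, cases = 366, 37
flagged, true_pos = 69, 36

sensitivity = true_pos / cases               # fraction of cases flagged
ppv = true_pos / flagged                     # fraction of flags that are cases
workload_reduction = 1 - flagged / episodes  # charts no longer needing review

print(f"{sensitivity:.1%} {ppv:.1%} {workload_reduction:.0%}")  # → 97.3% 52.2% 81%
```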
  • Chest 01/2015; 147(1):e22. DOI:10.1378/chest.14-2064 · 7.13 Impact Factor
    ABSTRACT: Clinicians have difficulty predicting need for hospitalization of children with acute asthma exacerbations. The objective of this study was to develop and internally validate a multivariable asthma prediction rule (APR) to inform hospitalization decision making in children aged 5-17 years with acute asthma exacerbations. Between April 2008 and February 2013 we enrolled a prospective cohort of patients aged 5-17 years with asthma who presented to our pediatric emergency department with acute exacerbations. Predictors for APR modeling included 15 demographic characteristics, asthma chronic control measures, and pulmonary examination findings in participants at the time of triage and before treatment. The primary outcome variable for APR modeling was need for hospitalization (length of stay >24 h for those admitted to hospital or relapse for those discharged). A secondary outcome was the hospitalization decision of the clinical team. We used penalized maximum likelihood multiple logistic regression modeling to examine the adjusted association of each predictor variable with the outcome. Backward step-down variable selection techniques were used to yield reduced-form models. Data from 928 of 933 participants were used for prediction rule modeling, with median [interquartile range] age 8.8 [6.9, 11.2] years, 61% male, and 59% African-American race. Both full (penalized) and reduced-form models for each outcome calibrated well, with bootstrap-corrected c-indices of 0.74 and 0.73 for need for hospitalization and 0.81 in each case for hospitalization decision. The APR predicts the need for hospitalization of children with acute asthma exacerbations using predictor variables available at the time of presentation to an emergency department. Copyright © 2014 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
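Penalized maximum likelihood, as used for the APR above, shrinks coefficient estimates to reduce overfitting when events are scarce relative to predictors. A minimal sketch using scikit-learn's ridge-penalized logistic regression on synthetic data (the study's actual workflow, including backward step-down selection, differs):

```python
# Sketch of penalized vs. (near-)unpenalized logistic regression.
# Synthetic data; illustrates shrinkage only, not the published APR model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # 5 synthetic predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

# C is the inverse penalty strength: smaller C = stronger shrinkage.
penalized = LogisticRegression(C=0.1, max_iter=1000).fit(X, y)
unpenalized = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)  # ~no penalty

# Shrinkage pulls the penalized coefficients toward zero.
print(bool(np.abs(penalized.coef_).sum() < np.abs(unpenalized.coef_).sum()))  # → True
```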
    ABSTRACT: Reduced exercise tolerance and dyspnea are common in older people, and heart failure (HF) and chronic obstructive pulmonary disease (COPD) are the main causes. We aimed to determine the prevalence of previously unrecognized HF, COPD, and other chronic diseases in frail older people using a near-home targeted screening strategy.
    The Journal of the American Board of Family Medicine 11/2014; 27(6):811-821. DOI:10.3122/jabfm.2014.06.140045 · 1.85 Impact Factor
    ABSTRACT: Untreated postoperative urinary retention can result in permanent lower urinary tract dysfunction and can be prevented by timely bladder catheterization. The authors hypothesized that the incidence of postoperative bladder catheterization can be decreased by using the patient's own maximum bladder capacity (MBC) instead of a fixed bladder volume of 500 ml as a threshold for catheterization.
    Anesthesiology 11/2014; DOI:10.1097/ALN.0000000000000507 · 6.17 Impact Factor
    ABSTRACT: To assess differences between three different decision-making approaches in the method of panel diagnosis as reference standard in diagnostic research.
    Journal of Clinical Epidemiology 11/2014; 68(4). DOI:10.1016/j.jclinepi.2014.09.020 · 5.48 Impact Factor
    ABSTRACT: Objectives: To determine whether the Wells clinical prediction rule for pulmonary embolism (PE), which produces a point score based on clinical features and the likelihood of diagnoses other than PE, combined with normal D-dimer testing can be used to exclude PE in older unhospitalized adults. Design: Prospective cohort study. Setting: Primary care and nursing homes. Participants: Older adults (≥60) clinically suspected of having a PE (N = 294, mean age 76, 44% residing in a nursing home). Measurements: The presence of PE was confirmed using a composite reference standard including computed tomography and 3-month follow-up. The proportion of individuals with an unlikely risk of PE was calculated according to the Wells rule (≤4 points) plus a normal qualitative point-of-care D-dimer test (efficiency), along with the presence of symptomatic PE during 3 months of follow-up within these patients (failure rate). Results: Pulmonary embolism occurred in 83 participants (28%). Eighty-five participants had an unlikely risk according to the Wells rule and a normal D-dimer test (efficiency 29%), five of whom experienced a nonfatal PE during 3 months of follow-up (failure rate = 5.9%, 95% confidence interval (CI) = 2.5–13%). According to a refitted diagnostic strategy for older adults, 69 had a low risk of PE (24%), two of whom had PE (failure rate = 2.9%, 95% CI = 0.8–10%). Conclusion: The use of the well-known and widely used Wells rule (original or refitted) does not guarantee safe exclusion of PE in older unhospitalized adults with suspected PE. This may lead to discussion among professionals as to whether the original or revised Wells rule is useful for elderly outpatients.
    Journal of the American Geriatrics Society 11/2014; 62(11). DOI:10.1111/jgs.13080 · 4.22 Impact Factor
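The Wells rule referenced above assigns points to seven clinical items and classes a score of 4 or less as "PE unlikely". A sketch using the commonly published point values (illustrative only; verify against the original rule before any use):

```python
# Wells clinical prediction rule for PE, with the widely published weights.
# Illustration of the scoring mechanics, not a clinical tool.
WELLS_ITEMS = {
    "clinical_signs_of_dvt": 3.0,
    "pe_more_likely_than_alternative": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_score(findings):
    """Sum the points for the findings present (a set of item names)."""
    return sum(pts for item, pts in WELLS_ITEMS.items() if item in findings)

def pe_unlikely(findings):
    """'PE unlikely' when the score is 4 or less (the threshold used above)."""
    return wells_score(findings) <= 4

print(wells_score({"heart_rate_over_100", "hemoptysis"}),
      pe_unlikely({"heart_rate_over_100", "hemoptysis"}))  # → 2.5 True
```

In the study above, "PE unlikely" by this score plus a normal point-of-care D-dimer still missed 5.9% of PEs, which is why the authors question the rule's safety in this population.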
    ABSTRACT: We aimed to validate the Oudega diagnostic decision rule-which was developed and validated among younger aged primary care patients-to rule-out deep vein thrombosis (DVT) in frail older outpatients.
    Family Practice 10/2014; 32(1). DOI:10.1093/fampra/cmu068 · 1.84 Impact Factor
    ABSTRACT: In a large cluster-randomized trial on the impact of a prediction model, presenting the calculated risk of postoperative nausea and vomiting (PONV) on-screen (assistive approach) increased the administration of risk-dependent PONV prophylaxis by anaesthetists. This change in therapeutic decision-making did not improve the patient outcome; that is, the incidence of PONV. The present study aimed to quantify the effects of adding a specific therapeutic recommendation to the predicted risk (directive approach) on PONV prophylaxis decision-making and the incidence of PONV.
    BJA British Journal of Anaesthesia 10/2014; 114(2). DOI:10.1093/bja/aeu321 · 4.35 Impact Factor
    ABSTRACT: Carl Moons and colleagues provide a checklist and background explanation for critically appraising and extracting data from systematic reviews of prognostic and diagnostic prediction modelling studies. Please see later in the article for the Editors' Summary.
    PLoS Medicine 10/2014; 11(10):e1001744. DOI:10.1371/journal.pmed.1001744 · 15.25 Impact Factor
    ABSTRACT: Objective: Various cardiovascular prediction models have been developed for patients with type 2 diabetes, but their predictive performance in new patients has mostly not been investigated. This study aims to quantify the predictive performance of all cardiovascular prediction models developed specifically for diabetes patients. Design and methods: Follow-up data of 453, 1174 and 584 type 2 diabetes patients without pre-existing cardiovascular disease (CVD) in the EPIC-NL, EPIC-Potsdam and Secondary Manifestations of ARTerial disease cohorts, respectively, were used to validate 10 prediction models estimating the risk of CVD or coronary heart disease (CHD). Discrimination was assessed by the c-statistic for time-to-event data. Calibration was assessed by calibration plots, the Hosmer-Lemeshow goodness-of-fit statistic and expected-to-observed ratios. Results: There was a large variation in performance of CVD and CHD scores between different cohorts. Discrimination was moderate for all 10 prediction models, with c-statistics ranging from 0.54 (95% CI 0.46 to 0.63) to 0.76 (95% CI 0.67 to 0.84). Calibration of the original models was poor. After simple recalibration to the disease incidence of the target populations, predicted and observed risks were close. Expected-to-observed ratios of the recalibrated models ranged from 1.06 (95% CI 0.81 to 1.40) to 1.55 (95% CI 0.95 to 2.54), mainly driven by an overestimation of risk in high-risk patients. Conclusions: All 10 evaluated models had a comparable and moderate discriminative ability. The recalibrated, but not the original, prediction models provided accurate risk estimates. These models can assist clinicians in identifying type 2 diabetes patients who are at low or high risk of developing CVD.
    Heart (British Cardiac Society) 09/2014; 101(3). DOI:10.1136/heartjnl-2014-306068 · 6.02 Impact Factor
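Two of the calibration metrics used above are easy to compute directly: the expected-to-observed (E/O) ratio, and a crude recalibration that rescales predictions to the target population's incidence. A sketch with made-up risks (not the study's data; the paper's recalibration to the target incidence is similar in spirit):

```python
# Expected/observed ratio and crude incidence recalibration.
# Hypothetical predicted risks and outcomes, not data from the study.
predicted = [0.1, 0.2, 0.3, 0.4, 0.5]
observed_events = [0, 1, 0, 0, 0]               # 1 event in 5 patients

expected = sum(predicted) / len(predicted)       # mean predicted risk: 0.3
observed = sum(observed_events) / len(observed_events)  # incidence: 0.2

# E/O > 1 means overestimation, the pattern the study reports at high risk.
eo_ratio = expected / observed

# Rescale so the mean predicted risk matches the observed incidence.
recalibrated = [p * observed / expected for p in predicted]

print(round(eo_ratio, 2), round(sum(recalibrated) / len(recalibrated), 2))  # → 1.5 0.2
```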
    ABSTRACT: Pulmonary embolism (PE) often presents with nonspecific symptoms and may be an easily missed diagnosis. When the differential diagnosis includes PE, an empirical list of frequently occurring alternative diagnoses could support the GP in diagnostic decision making.
    ABSTRACT: Aims/hypothesis: Type 1 diabetes is associated with a higher risk of major vascular complications and death. A reliable method that predicted these outcomes early in the disease process would help in risk classification. We therefore developed such a prognostic model and quantified its performance in independent cohorts. Methods: Data were analysed from 1,973 participants with type 1 diabetes followed for 7 years in the EURODIAB Prospective Complications Study. Strong prognostic factors for major outcomes were combined in a Weibull regression model. The performance of the model was tested in three different prospective cohorts: the Pittsburgh Epidemiology of Diabetes Complications study (EDC, n=554), the Finnish Diabetic Nephropathy study (FinnDiane, n=2,999) and the Coronary Artery Calcification in Type 1 Diabetes study (CACTI, n=580). Major outcomes included major CHD, stroke, end-stage renal failure, amputations, blindness and all-cause death. Results: A total of 95 EURODIAB patients with type 1 diabetes developed major outcomes during follow-up. Prognostic factors were age, HbA1c, WHR, albumin/creatinine ratio and HDL-cholesterol level. The discriminative ability of the model was adequate, with a concordance statistic (C-statistic) of 0.74. Discrimination was similar or even better in the independent cohorts, the C-statistics being: EDC, 0.79; FinnDiane, 0.82; and CACTI, 0.73. Conclusions/interpretation: Our prognostic model, which uses easily accessible clinical features, can discriminate between type 1 diabetes patients who have a good or a poor prognosis. Such a prognostic model may be helpful in clinical practice and for risk stratification in clinical trials.
    Diabetologia 09/2014; 57(11). DOI:10.1007/s00125-014-3358-x · 6.88 Impact Factor
    ABSTRACT: Objectives It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from “different but related” samples as compared with that of the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. Study Design and Setting We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. Results We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. Conclusion The proposed framework enhances the interpretation of findings at external validation of prediction models.
    Journal of Clinical Epidemiology 08/2014; DOI:10.1016/j.jclinepi.2014.06.018 · 5.48 Impact Factor
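The first step of the framework above — quantifying how related the development and validation samples are — amounts to comparing their case mix. A hypothetical sketch using the standardized mean difference of one predictor (a crude proxy; the paper's own approach to relatedness is more formal):

```python
# Case-mix comparison between development and validation samples via the
# standardized mean difference of a predictor. Hypothetical data.
import math

def standardized_difference(dev, val):
    """(difference in means) / (pooled SD); larger = less related case mix."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    pooled_sd = math.sqrt((var(dev) + var(val)) / 2)
    return (mean(val) - mean(dev)) / pooled_sd

development = [55, 60, 65, 70, 75]   # e.g. ages in the development sample
validation = [70, 75, 80, 85, 90]    # a markedly older validation sample

print(round(standardized_difference(development, validation), 2))  # → 1.9
```

A value near zero suggests the validation mainly assesses reproducibility; a large shift, as here, suggests it assesses transportability to a different case mix.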

Publication Stats

10k Citations
1,924.02 Total Impact Points


  • 2001–2015
    • University Medical Center Utrecht
      • Julius Center for Health Sciences and Primary Care
      • Department of Obstetrics
      • Department of Psychiatry
      • Department of Anesthesiology
      Utrecht, Utrecht, Netherlands
  • 2014
    • JBR
      Zeist, Utrecht, Netherlands
  • 2013
    • Gelderse Vallei Hospital
      Ede, Gelderland, Netherlands
  • 1998–2011
    • Universiteit Utrecht
      • Department of Methodology and Statistics
      • Department of Epidemiology
      Utrecht, Utrecht, Netherlands
  • 2009
    • Medical Research Council (UK)
      • MRC Clinical Trials Unit
      London, ENG, United Kingdom
    • Kitasato University
      Edo, Tōkyō, Japan
  • 2008
    • Wageningen University
      • Division of Human Nutrition
      Wageningen, Provincie Gelderland, Netherlands
  • 2002–2003
    • Erasmus MC
      Rotterdam, South Holland, Netherlands
  • 1997–1999
    • Erasmus Universiteit Rotterdam
      Rotterdam, South Holland, Netherlands