*IOBA (Eye Institute), University of Valladolid, Valladolid, Spain
Retina (Philadelphia, Pa.) (Impact Factor: 3.18). 04/2013; DOI: 10.1097/IAE.0b013e31828991ea
Source: PubMed

ABSTRACT
PURPOSE: To externally validate the accuracy of previously published formulas for predicting proliferative vitreoretinopathy development after retinal detachment surgery.
METHODS: Clinical variables from consecutive retinal detachment patients (n = 1,047) were collected from the Retina 1 Project, conducted in 17 Spanish and Portuguese centers. These data were used for external validation of four previously published formulas, F1 to F4. Receiver operating characteristic (ROC) curves were used to validate the quality of the formulas, and measures of discrimination, precision, and calibration were calculated for each. Concordance among the formulas was determined by the Cohen kappa index.
RESULTS: The areas under the ROC curves were as follows: F1, 0.5809; F2, 0.5398; F3, 0.5964; and F4, 0.4617. F1 had the highest accuracy, 74.21%. Almost 19% of proliferative vitreoretinopathy cases were correctly classified by F1, compared with 13%, 15%, and 10% for F2, F3, and F4, respectively. There was moderate concordance between F2 and F3 but little among the other formulas.
CONCLUSION: After external validation, none of the formulas was accurate enough for routine clinical use. To increase their usefulness, future formulas for predicting the risk of developing proliferative vitreoretinopathy should incorporate factors beyond the clinical ones considered here.
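The validation metrics named above (area under the ROC curve, accuracy, Cohen kappa) are standard; the following is a minimal pure-Python sketch of how each is computed, on toy data rather than the Retina 1 Project values:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) identity:
    the probability that a positive case outscores a negative one."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count positive-negative pairs where the positive wins (ties count 0.5)
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def cohen_kappa(a, b):
    """Chance-corrected agreement between two binary classifications."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    p_yes = (sum(a) / n) * (sum(b) / n)                # chance: both say 1
    p_no = (1 - sum(a) / n) * (1 - sum(b) / n)         # chance: both say 0
    p_e = p_yes + p_no                                 # expected agreement
    return (p_o - p_e) / (1 - p_e)

# Toy example: true outcomes, one formula's risk scores, and binary
# calls from two hypothetical formulas (illustrative values only)
y_true = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.4, 0.6, 0.2, 0.7, 0.3]
f_a = [1, 0, 1, 0, 1, 0]
f_b = [1, 0, 0, 0, 1, 1]
print(roc_auc(y_true, scores))
print(cohen_kappa(f_a, f_b))
```

A kappa near 0 means agreement no better than chance, which is why "little concordance" between formulas is itself an argument against their joint reliability.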

  • British Journal of Ophthalmology 09/1973; 57(8):525-30. DOI:10.1136/bjo.57.8.525 · 2.81 Impact Factor
    ABSTRACT: The utility of predictive models depends on their external validity, that is, their ability to maintain accuracy when applied to patients and settings different from those on which the models were developed. We report a simulation study that compared the external validity of standard logistic regression (LR1), logistic regression with piecewise-linear and quadratic terms (LR2), classification trees, and neural networks (NNETs). We developed predictive models on data simulated from a specified population and on data from perturbed forms of the population not representative of the original distribution. All models were tested on new data generated from the population. The performance of LR2 was superior to that of the other model types when the models were developed on data sampled from the population (mean receiver operating characteristic [ROC] areas 0.769, 0.741, 0.724, and 0.682, for LR2, LR1, NNETs, and trees, respectively) and when they were developed on nonrepresentative data (mean ROC areas 0.734, 0.713, 0.703, and 0.667). However, when the models developed using nonrepresentative data were compared with models developed from data sampled from the population, LR2 had the greatest loss in performance. Our results highlight the necessity of external validation to test the transportability of predictive models.
    Journal of Clinical Epidemiology 09/2003; 56(8):721-9. DOI:10.1016/S0895-4356(03)00120-3 · 5.48 Impact Factor
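The contrast between apparent performance (on the development data) and performance on new data that drives this simulation study can be illustrated with a toy sketch; the data-generating process and one-predictor model below are hypothetical stand-ins, not the authors' setup:

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=500):
    """Fit a one-predictor logistic regression by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def auc(labels, scores):
    """Rank-based area under the ROC curve."""
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def simulate(n, rng):
    """Draw (x, y) pairs from a known logistic process (true slope 1.5)."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        p = 1.0 / (1.0 + math.exp(-1.5 * x))
        xs.append(x)
        ys.append(1 if rng.random() < p else 0)
    return xs, ys

rng = random.Random(0)
x_dev, y_dev = simulate(300, rng)   # development sample
x_new, y_new = simulate(300, rng)   # fresh data from the same population
w, b = fit_logistic(x_dev, y_dev)
apparent = auc(y_dev, [w * x + b for x in x_dev])
external = auc(y_new, [w * x + b for x in x_new])
print(round(apparent, 3), round(external, 3))
```

Repeating this with a development sample drawn from a perturbed (nonrepresentative) distribution, as the study does, typically widens the gap between the two ROC areas.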
ABSTRACT: Prediction models tend to perform better on the data on which they were constructed than on new data. This difference in performance is an indication of the optimism in the apparent performance in the derivation set. For internal model validation, bootstrapping methods are recommended to provide bias-corrected estimates of model performance. Results are often accepted without sufficient regard to the importance of external validation. This report illustrates the limitations of internal validation for determining the generalizability of a diagnostic prediction model to future settings. A prediction model for the presence of serious bacterial infections in children with fever without source was derived and validated internally using bootstrap resampling techniques. Subsequently, the model was validated externally. In the derivation set (n=376), nine predictors were identified. The apparent area under the receiver operating characteristic curve (95% confidence interval) of the model was 0.83 (0.78-0.87), and 0.76 (0.67-0.85) after bootstrap correction. In the validation set (n=179) the performance was 0.57 (0.47-0.67). For relatively small data sets, internal validation of prediction models by bootstrap techniques may not be sufficient to indicate the model's performance in future patients. External validation is essential before implementing prediction models in clinical practice.
    Journal of Clinical Epidemiology 10/2003; 56(9):826-32. · 5.48 Impact Factor
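The bootstrap bias correction this abstract refers to (refit the model on each bootstrap sample, then compare its apparent performance with its performance on the original data) can be sketched as follows; the one-threshold "model" is a deliberately simple stand-in, not the paper's nine-predictor model:

```python
import random

def accuracy(threshold, data):
    """Proportion correctly classified when predicting 1 iff x > threshold."""
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

def fit(data):
    """Toy 'model fitting': choose the threshold maximizing training accuracy.
    Like any fitted model, it is optimistic on its own training data."""
    return max((x for x, _ in data), key=lambda t: accuracy(t, data))

def bootstrap_optimism(data, n_boot, rng):
    """Estimate optimism: mean over bootstrap samples of
    (apparent accuracy on the bootstrap sample) - (accuracy on original data)."""
    optimism = 0.0
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in data]
        m = fit(boot)
        optimism += accuracy(m, boot) - accuracy(m, data)
    return optimism / n_boot

# Illustrative data: 50 controls around x=0, 50 cases around x=1
rng = random.Random(1)
data = [(rng.gauss(y, 1.0), y) for y in [0] * 50 + [1] * 50]
model = fit(data)
apparent = accuracy(model, data)
corrected = apparent - bootstrap_optimism(data, 50, rng)
print(round(apparent, 3), round(corrected, 3))
```

As the abstract's numbers (0.83 apparent vs. 0.76 bootstrap-corrected vs. 0.57 external) show, the correction shrinks the optimism within the derivation setting but cannot anticipate a shift to a genuinely different patient population.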