Predicting kidney transplant survival using tree-based modeling.
ABSTRACT: Predicting the outcome of kidney transplantation is clinically important and computationally challenging. The goal of this project was to develop models predicting the probability of kidney allograft survival at 1, 3, 5, 7, and 10 years. Kidney transplant data from the United States Renal Data System (January 1, 1990, to December 31, 1999, with follow-up through December 31, 2000) were used (n = 92,844). Independent variables included recipient demographic and anthropometric data, end-stage renal disease course, comorbidity information, donor data, and transplant procedure variables. Tree-based models predicting the probability of allograft survival were generated using roughly two-thirds of the data (training set), with the remaining one-third set aside for model validation (testing set). The predicted probabilities of graft survival in the independent testing dataset correlated well with observed survival (r = 0.94, 0.98, 0.99, 0.93, and 0.98) and yielded relatively high areas under the receiver operating characteristic curve (0.63, 0.64, 0.71, 0.82, and 0.90) for 1-, 3-, 5-, 7-, and 10-year survival prediction, respectively. The models predicting the probability of 1-, 3-, 5-, 7-, and 10-year allograft survival were validated on the independent dataset and demonstrated performance that may warrant implementation in a clinical decision support system.
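The abstract does not name the specific tree algorithm or software used, so as a rough illustration of the described workflow (a two-thirds/one-third train/test split, tree-based probability estimates, ROC AUC on the held-out set), here is a minimal sketch in Python with scikit-learn on synthetic data; the choice of a random forest and all variable names are assumptions, not details from the study.

```python
# Minimal sketch of the described validation workflow: a tree-based model
# trained on roughly two-thirds of the records and scored by ROC AUC on the
# held-out third. Data are synthetic; the real study used USRDS records, and
# its exact tree algorithm and covariates are not given in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 12))  # stand-ins for recipient/donor/procedure variables
# Hypothetical binary outcome: graft survival to a fixed horizon (e.g. 5 years).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

# Two-thirds training, one-third held out for validation, as in the study design.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predicted survival probabilities on the held-out set, summarized by AUC.
p_test = model.predict_proba(X_test)[:, 1]
print(f"test AUC: {roc_auc_score(y_test, p_test):.2f}")
```

In practice one such model would be fit per horizon, giving separate 1-, 3-, 5-, 7-, and 10-year survival probabilities.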
ABSTRACT: Patients nearing end-stage renal disease (ESRD) increasingly choose pre-emptive renal transplant (PRT) to avoid pre-transplant dialysis and to minimize the duration of ESRD. Compared with long-term dialysis, PRT has been shown to increase allograft survival. However, the merit of short-term dialysis is not well characterized, and it may be the better medical choice for some patients. The goal of the study was to characterize the relationship between the duration of dialysis and allograft and patient survival. We performed a retrospective nationwide cohort study of all kidney transplants (Tx) between January 1, 1990 and December 31, 1999, with a follow-up period through December 31, 2000. Participants were identified using the United States Renal Data System (USRDS), which tracks all ESRD cases in the nation, including patients on dialysis and with kidney Tx. Patients with a history of more than one kidney Tx were excluded. Allograft survival and recipient survival were the primary outcomes of this study. Duration of ESRD, both as a continuous variable and divided into categories (0-14 days, 15-60 days, 61-180 days, 181-365 days, 1-2 years, 2-3 years, 3-5 years, and >5 years), was the primary risk factor of interest. Models were adjusted for multiple donor and recipient factors, including demographics and co-morbidities, as well as for Tx procedure characteristics. A total of 81,130 patient records were used for analysis (age 44.1+/-14.3 years, 61% males, 24% black, 29% diabetic, pre-transplant ESRD duration 27.1+/-26.4 months, 26% living donors). ESRD duration, as a continuous variable, was associated with a modest increase in the risk of graft failure over time [hazard ratio (HR) 1.02 per year of ESRD duration, P<0.001]. When ESRD duration was studied as a categorical variable (0-14 days vs longer durations), the increased risk of allograft failure reached statistical significance only when the time on dialysis was ≥181 days. The duration of ESRD was a significant risk factor for recipient death (HR 1.04 per year, P<0.001); however, the mortality risk reached statistical significance only when the patient had been on dialysis for ≥1 year. This study of USRDS records suggests that a short (<6 months) dialysis course has no detrimental effect on graft and patient survival and should not be deferred if medically indicated.
Nephrology Dialysis Transplantation 2005; 20(1):167-75.
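The hazard ratios quoted above come from Cox proportional hazards models adjusted for donor, recipient, and procedure factors. As a hedged sketch of that kind of fit, here is a minimal Python example using the lifelines package on synthetic data; the column names and effect sizes are placeholders, not USRDS fields or study estimates.

```python
# Minimal sketch of a Cox proportional hazards fit with ESRD duration as a
# continuous covariate plus adjustment covariates. Synthetic data; column
# names are placeholders and do not correspond to actual USRDS fields.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "esrd_years":   rng.exponential(scale=2.25, size=n),  # pre-Tx ESRD duration
    "age":          rng.normal(44, 14, size=n),
    "diabetes":     rng.integers(0, 2, size=n),
    "living_donor": rng.integers(0, 2, size=n),
})
# Simulate graft survival times with a small positive effect of ESRD duration.
risk = np.exp(0.02 * df["esrd_years"] + 0.01 * (df["age"] - 44))
df["years_to_event"] = rng.exponential(scale=10.0, size=n) / risk
df["graft_failed"] = (rng.random(n) < 0.4).astype(int)  # crude censoring flag

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_event", event_col="graft_failed")
print(cph.hazard_ratios_)  # HR per unit of each covariate, e.g. per ESRD year
```

The categorical analysis in the study would replace `esrd_years` with indicator columns for the duration bins (15-60 days, 61-180 days, and so on), with 0-14 days as the reference category.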
ABSTRACT: Delayed graft function (DGF) is one of the most important complications of the post-transplant period, adversely affecting both immediate and long-term graft survival. In this study, an artificial neural network was used to predict the occurrence of DGF and was compared with a traditional logistic regression model. A total of 304 cadaveric renal transplants performed at the Jewish Hospital, Louisville were included in the study. Covariate analysis to predict the occurrence of DGF was performed with both an artificial neural network and traditional logistic regression. The incidence of DGF in this study was 38%. Logistic regression was more sensitive in predicting the absence of DGF (91 vs 70%), while the neural network was more sensitive in predicting its occurrence (56 vs 37%). Overall prediction accuracy was 64% for logistic regression and 63% for the neural network. Logistic regression was 36.5% sensitive and 90.7% specific; the neural network was 63.5% sensitive and 64.8% specific. The only covariate with P < 0.001 was transplantation of a white donor kidney into a black recipient. Cox proportional hazards regression was used to test for a negative effect of DGF on long-term graft survival. One-year graft survival was 92 +/- 2% in patients without DGF vs 81 +/- 3% in patients with DGF. Five-year graft survival was not affected by DGF in this study. Artificial neural networks may be used to predict DGF in cadaveric renal transplants; this method is more sensitive but less specific than logistic regression.
Nephrology Dialysis Transplantation 2003; 18(12):2655-9.
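For context on the sensitivity/specificity comparison, here is a minimal hedged sketch in Python with scikit-learn contrasting logistic regression with a small feed-forward network on synthetic data; the original study's covariates and network architecture are not described in the abstract, so everything below is illustrative.

```python
# Minimal sketch comparing logistic regression and a small neural network on
# a binary DGF-style outcome, reporting sensitivity and specificity for each.
# Synthetic data; the study's covariates and architecture are not specified.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
n = 304  # matches the study's cohort size
X = rng.normal(size=(n, 8))                                      # placeholder covariates
y = (X[:, 0] + rng.normal(scale=1.5, size=n) > 0.5).astype(int)  # DGF yes/no

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def sensitivity_specificity(model):
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    return tp / (tp + fn), tn / (tn + fp)

for name, clf in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("neural network", MLPClassifier(hidden_layer_sizes=(10,),
                                     max_iter=2000, random_state=0)),
]:
    sens, spec = sensitivity_specificity(clf)
    print(f"{name}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```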
ABSTRACT: Health status can be an important outcome in studies of patients with end-stage renal disease (ESRD). In such studies, adjustment for prognostic factors, such as comorbidity, often has to be made. None of the comorbidity indices commonly used in research on ESRD patients has been validated for studies of health status. This study evaluated three existing indices (Khan, Davies, and Charlson) and four indices specifically developed for use in studies of health status. In a large prospective multi-center study (NECOSAD-2), new ESRD patients were included (n = 1041). Comorbidity was assessed at the start of dialysis. Health status was assessed with the physical and mental component summary scores of the SF-36 (PCS and MCS), the symptoms dimension of the KDQOL-SF, and the Karnofsky Scale. Patient data were randomly allocated to a modeling set or a testing set. The new indices were developed in the modeling set. The three existing indices explained about the same percentage of variance in the PCS (7 to 8%), MCS (1 to 3%), symptoms (2 to 4%), and Karnofsky (10 to 12%). The new indices performed better than the existing indices in the modeling population (13% PCS, 10% MCS, 10% symptoms, 18% Karnofsky), but not in the testing population (8% PCS, 1% MCS, 3% symptoms, 8% Karnofsky). Individual comorbidities explained more variance in the PCS (10 to 15%), MCS (1 to 7%), symptoms (6 to 11%), and Karnofsky (11 to 18%) than the comorbidity indices. The Khan, Davies, and Charlson indices adjust to the same extent for the potential confounding effect of comorbidity in studies with health status as an outcome; separate comorbidity diagnoses adjust best.
Journal of the American Society of Nephrology 2003; 14(2):478-85.
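The percentages of explained variance quoted above are R² values from regressing each health-status score on a comorbidity index or on separate comorbidity indicators. A minimal Python sketch of that comparison on synthetic data (score and diagnosis names are placeholders) shows why separate diagnoses can explain more variance than a single summary index:

```python
# Minimal sketch: explained variance (R^2) in a health-status score (e.g.
# SF-36 PCS) from a single summary comorbidity index vs separate comorbidity
# indicators. Synthetic data; names and effect sizes are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 1_041  # matches the NECOSAD-2 sample size
diagnoses = rng.integers(0, 2, size=(n, 6)).astype(float)  # individual comorbidities
index = diagnoses.sum(axis=1, keepdims=True)               # crude summary index
true_weights = np.array([3.0, 1.0, 0.5, 2.0, 0.2, 1.5])    # unequal true effects
pcs = 50 - diagnoses @ true_weights + rng.normal(scale=8, size=n)

# A single index forces equal weights on all diagnoses; separate indicators
# let the regression recover the unequal effects, raising R^2.
r2_index = LinearRegression().fit(index, pcs).score(index, pcs)
r2_separate = LinearRegression().fit(diagnoses, pcs).score(diagnoses, pcs)
print(f"R^2, summary index:      {r2_index:.3f}")
print(f"R^2, separate diagnoses: {r2_separate:.3f}")
```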