A Hybrid Centers for Medicare and Medicaid Services Mortality Model in 3 Diagnoses

Veterans Health Administration Inpatient Evaluation Center, Office of Quality and Safety, Washington, DC, USA.
Medical Care (Impact Factor: 3.23). 06/2012; 50(6):520-6. DOI: 10.1097/MLR.0b013e318245a5f2
Source: PubMed


Reliance on administrative data sources and a cohort with a restricted age range (Medicare, 65 y and above) may limit conclusions drawn from the Centers for Medicare and Medicaid Services (CMS) public reporting of 30-day mortality rates in 3 diagnoses: acute myocardial infarction (AMI), congestive heart failure (CHF), and pneumonia (PNA).
We categorized patients with diagnostic codes for AMI, CHF, and PNA admitted to 138 Veterans Administration hospitals (2006-2009) into 2 groups (65 y and above, or ALL ages), then applied 3 different models that predicted 30-day mortality [CMS administrative (ADM), ADM + laboratory data (PLUS), and clinical (CLIN)] to each age/diagnosis group. The C statistic (CSTAT) and the Hosmer-Lemeshow goodness-of-fit test measured discrimination and calibration, respectively. The Pearson correlation coefficient (r) compared the hospitals' risk-standardized mortality rates (RSMRs) calculated with the different models. Hospitals were rated as significantly different (SD) when bootstrapped confidence intervals excluded the national RSMR.
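As a rough illustration of the two model-performance measures named above, the sketch below computes a C statistic and a Hosmer-Lemeshow chi-square for a fitted 30-day mortality model. It is a minimal sketch, not the study's code: the variable names, the decile grouping, and the use of scikit-learn/SciPy are assumptions.

```python
# Sketch: discrimination (C statistic) and calibration (Hosmer-Lemeshow)
# for a 30-day mortality model. Assumes `y` is 0/1 death within 30 days and
# `p` is the model's predicted probability; the 10-group (decile) split is a
# common convention, not a detail taken from the paper.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

def c_statistic(y, p):
    """C statistic = area under the ROC curve for a binary outcome."""
    return roc_auc_score(y, p)

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-square over risk deciles; a small p-value
    indicates poor calibration (observed vs. expected deaths differ)."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    order = np.argsort(p)                      # rank patients by predicted risk
    bins = np.array_split(order, groups)       # split into equal-size risk groups
    chi2 = 0.0
    for idx in bins:
        n = len(idx)
        obs, exp = y[idx].sum(), p[idx].sum()  # observed and expected deaths
        exp_alive = n - exp
        chi2 += (obs - exp) ** 2 / max(exp, 1e-9) \
              + ((n - obs) - exp_alive) ** 2 / max(exp_alive, 1e-9)
    p_value = stats.chi2.sf(chi2, df=groups - 2)
    return chi2, p_value
```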
The ≥65-year models included 57%-67% of all patients (78%-82% of deaths). The PLUS models improved discrimination and calibration across diagnoses and age groups (CSTAT for CHF, ≥65 y: 0.67 vs. 0.773 vs. 0.761 for ADM/PLUS/CLIN; Hosmer-Lemeshow test significant for 4/6 ADM vs. 2/6 PLUS models). Correlation of RSMRs was good between ADM and PLUS (r: AMI 0.859; CHF 0.821; PNA 0.750) and between the ≥65-year and ALL models (r>0.90). SD ratings changed in 1%-12% of hospitals (greatest change in PNA).
Performance measurement systems should include laboratory data, which improve model performance. Changes in SD ratings suggest caution in using a single metric to label hospital performance.
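The hospital-profiling step described in the methods (an RSMR with a bootstrapped confidence interval compared against the national rate) could look roughly like the sketch below. This is a simplified, assumed formulation: the study's RSMRs come from CMS-style hierarchical models, whereas here a plain observed/expected ratio and patient-level resampling stand in for illustration.

```python
# Sketch: risk-standardized mortality rate (RSMR) for one hospital with a
# bootstrap confidence interval, flagged as significantly different (SD)
# when the interval excludes the national rate. The simple O/E * national-rate
# form and the patient-level resampling are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def rsmr(y, p, national_rate):
    """Observed/expected deaths for this hospital, scaled to the national rate."""
    return y.sum() / p.sum() * national_rate

def sd_rating(y, p, national_rate, n_boot=1000, alpha=0.05):
    """Rate one hospital by whether its bootstrapped RSMR interval excludes
    the national RSMR. `y` = 0/1 deaths, `p` = model-expected probabilities."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))   # resample this hospital's patients
        boots.append(rsmr(y[idx], p[idx], national_rate))
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    if lo > national_rate:
        return "worse than average"
    if hi < national_rate:
        return "better than average"
    return "not significantly different"
```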

  • ABSTRACT: Studies about nurse staffing and patient outcomes often lack adequate risk adjustment because of limited access to patient information. The aim of this study was to examine the impact of patient-level risk adjustment on the associations of unit-level nurse staffing and 30-day inpatient mortality. This retrospective cross-sectional study included 284,097 patients discharged during 2007-2008 from 446 acute care nursing units at 128 Veterans Affairs medical centers. The association of nurse staffing with 30-day mortality was assessed using hierarchical logistic models under three levels of risk-adjustment conditions: using no patient information (low), using patient demographics and diagnoses (moderate), or using patient demographics and diagnoses plus physiological measures (high). Discrimination of the models improved as the level of risk adjustment increased: the c-statistics for models with low, moderate, and high risk adjustment were 0.64, 0.74, and 0.88 for non-ICU patients and 0.66, 0.76, and 0.88 for ICU patients. For non-ICU patients, higher RN skill mix was associated with lower 30-day mortality across all three levels of risk adjustment. For ICU patients, higher total nursing hours per patient day was strongly associated with higher mortality with moderate risk adjustment (p = .0002), but this counterintuitive association was not significant with low or high risk adjustment. Inadequate risk adjustment may lead to biased estimates about nurse staffing and patient outcomes. Combining physiological measures with commonly used administrative data is a promising risk-adjustment approach to reduce potential biases. (See the risk-adjustment sketch after this list.)
    Nursing Research 07/2013; 62(4):226-232. DOI: 10.1097/NNR.0b013e318295810c · 1.36 Impact Factor
  • ABSTRACT: Electronic health records databases are increasingly used for identifying cohort populations, covariates, or outcomes, but discerning such clinical 'phenotypes' accurately is an ongoing challenge. We developed a flexible method using overlapping (Venn diagram) queries. Here we describe this approach to find patients hospitalized with acute congestive heart failure (CHF), a sampling strategy for one-by-one 'gold standard' chart review, and calculation of positive predictive value (PPV) and sensitivity, with SEs, across different definitions. We used retrospective queries of hospitalizations (2002-2011) in the Indiana Network for Patient Care with any CHF ICD-9 diagnosis, a primary diagnosis, an echocardiogram performed, a B-natriuretic peptide (BNP) drawn, or BNP >500 pg/mL. We used a hybrid between proportional sampling by Venn zone and over-sampling of non-overlapping zones. The acute CHF (presence/absence) outcome was based on expert chart review using a priori criteria. Among 79,091 hospitalizations, we reviewed 908. A query for any ICD-9 code for CHF had a PPV of 42.8% (SE 1.5%) for acute CHF and a sensitivity of 94.3% (1.3%). A primary diagnosis of 428 and BNP >500 pg/mL had a PPV of 90.4% (SE 2.4%) and a sensitivity of 28.8% (1.1%). PPV was <10% when there was no echocardiogram, no BNP, and no primary diagnosis. 'False positive' hospitalizations were for other heart disease, lung disease, or other reasons. This novel method successfully allowed flexible application and validation of queries for patients hospitalized with acute CHF. (See the PPV/sensitivity sketch after this list.)
    Journal of the American Medical Informatics Association 10/2013; 21(2). DOI: 10.1136/amiajnl-2013-001942 · 3.50 Impact Factor
  • ABSTRACT: Objectives: People receiving healthcare from multiple payers (eg, Medicare and the Veterans Health Administration [VA]) have fragmented health records. How the use of more complete data affects hospital profiling has not been examined. Study Design: Retrospective cohort study. Methods: We examined 30-day mortality following acute myocardial infarction at 104 VA hospitals for veterans 66 years and older from 2006 through 2010 who were also Medicare beneficiaries. Using VA-only data versus combined VA/Medicare data, we calculated 2 risk-standardized mortality rates (RSMRs): 1 based on observed mortality (O/E) and the other, from CMS' Hospital Compare program, based on model-predicted mortality (P/E). We also categorized hospital outlier status based on RSMR relative to overall VA mortality: average, better than average, and worse than average. We tested whether hospitals whose patients received more of their care through Medicare would look relatively better when including those data in risk adjustment rather than using VA data alone. Results: Thirty-day mortality was 14.8%. Adding Medicare data caused both RSMR measures to increase significantly in about half the hospitals and decrease in the other half. The O/E RSMR increased in 53 hospitals by an average of 2.2% and decreased in 51 hospitals by an average of 2.6%. The P/E RSMR increased by an average of 1.2% in 56 hospitals and decreased in the others by an average of 1.3%. Outlier designation changed for 4 hospitals using the O/E measure, but for no hospitals using the P/E measure. Conclusions: VA hospitals vary in their patients' use of Medicare-covered care and in the completeness of health records based on VA data alone. Using combined VA/Medicare data provides modestly different hospital profiles compared with those using VA-only data. (A sketch of the O/E and P/E RSMR variants appears after this list.)
    The American Journal of Managed Care 02/2015; 21(2):129-38 · 2.26 Impact Factor
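For the nurse-staffing study above, the sketch below shows the kind of comparison it reports: model discrimination (c statistic) under increasing levels of patient risk adjustment. Column names are hypothetical, and ordinary logistic regression stands in for the hierarchical (unit-level) models used in that study.

```python
# Sketch: compare discrimination (c statistic) of 30-day mortality models as
# patient-level risk adjustment is added. Column names are hypothetical; plain
# logistic regression replaces the hierarchical models used in the study, and
# the c statistic is computed in-sample for brevity.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def c_stat_for(df, covariates):
    """Fit a logistic model on the given covariates and return its c statistic."""
    X = pd.get_dummies(df[covariates], drop_first=True)   # encode categorical columns
    y = df["died_30d"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

staffing = ["rn_skill_mix", "nursing_hours_per_patient_day"]
levels = {
    "low":      staffing,                                           # no patient information
    "moderate": staffing + ["age", "sex", "dx_group"],              # demographics + diagnoses
    "high":     staffing + ["age", "sex", "dx_group",
                            "creatinine", "sodium", "hematocrit"],  # + physiological measures
}
# Example usage (requires a patient-level DataFrame `patients_df`):
# for name, cols in levels.items():
#     print(name, round(c_stat_for(patients_df, cols), 2))
```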
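For the EHR phenotyping study, the sketch below computes a positive predictive value and a sensitivity, each with a standard error, from a gold-standard chart-review sample. The binomial SE formula is standard, but the Venn-zone sampling weights used in that study's hybrid design are omitted, so treat this as an unweighted approximation.

```python
# Sketch: PPV and sensitivity with binomial standard errors for one query
# definition, estimated from a chart-review sample. The study's Venn-zone
# weighting is omitted; counts passed in are assumed to come from review.
import math

def ppv_with_se(confirmed_cases, reviewed_query_hits):
    """PPV = confirmed acute-CHF admissions / reviewed admissions meeting the query."""
    ppv = confirmed_cases / reviewed_query_hits
    se = math.sqrt(ppv * (1 - ppv) / reviewed_query_hits)
    return ppv, se

def sensitivity_with_se(confirmed_cases_captured, total_confirmed_cases):
    """Sensitivity = confirmed cases captured by the query / all confirmed cases."""
    sens = confirmed_cases_captured / total_confirmed_cases
    se = math.sqrt(sens * (1 - sens) / total_confirmed_cases)
    return sens, se
```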
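For the VA/Medicare profiling study, the sketch below distinguishes the two RSMR variants it compares: an observed-to-expected (O/E) rate and a predicted-to-expected (P/E, Hospital Compare style) rate. The function names and the simple scaling to a national rate are assumptions for illustration.

```python
# Sketch: the two RSMR variants compared in the VA/Medicare profiling study.
# "Expected" deaths come from a risk model without any hospital effect;
# "predicted" deaths include the hospital-specific effect (CMS Hospital
# Compare style). The national-rate scaling is a simplifying assumption.
def rsmr_oe(observed_deaths, expected_deaths, national_rate):
    """Observed-to-expected risk-standardized mortality rate."""
    return observed_deaths / expected_deaths * national_rate

def rsmr_pe(predicted_deaths, expected_deaths, national_rate):
    """Predicted-to-expected risk-standardized mortality rate."""
    return predicted_deaths / expected_deaths * national_rate
```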