Measuring Quality for Public Reporting of Health Provider Quality: Making It Meaningful to Patients

Health Policy Research Institute, University of California-Irvine, 100 Theory, Suite 110, Irvine, CA 92697-5800, USA.
American Journal of Public Health (Impact Factor: 4.55). 12/2009; 100(2):264-9. DOI: 10.2105/AJPH.2008.153759
Source: PubMed

ABSTRACT: Public quality reports of hospitals, health plans, and physicians are being used to promote efficiency and quality in the health care system. Shrinkage estimators have been proposed as superior measures of quality to be used in these reports because they offer more conservative and stable quality ranking of providers than traditional, nonshrinkage estimators. Adopting the perspective of a patient faced with choosing a local provider on the basis of publicly provided information, we examine the advantages and disadvantages of shrinkage and nonshrinkage estimators and contrast the information made available by them. We demonstrate that 2 properties of shrinkage estimators make them less useful than nonshrinkage estimators for patients making choices in their area of residence.
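To make the contrast concrete, the following is a minimal sketch, not the method used in the article: it compares a nonshrinkage estimate (a hospital's own observed mortality rate) with a simple shrinkage estimate that pulls each hospital toward an overall mean, more strongly for low-volume hospitals. The hospital names, case counts, overall rate, and prior weight below are all invented for illustration.

```python
# Minimal sketch (not the article's method): contrast a nonshrinkage estimate
# (the hospital's own observed mortality rate) with a simple shrinkage estimate
# that pulls each hospital toward the overall mean, with more pull for smaller
# hospitals. All hospital names and counts are invented for illustration.

overall_rate = 0.10  # assumed overall mortality rate the estimates are shrunk toward

# (name, deaths, cases) -- hypothetical local hospitals a patient might compare
hospitals = [
    ("Small Community Hospital", 3, 20),
    ("Mid-size Hospital", 12, 150),
    ("Large Teaching Hospital", 95, 1000),
]

# Strength of shrinkage: roughly "how many cases the overall rate is worth".
prior_weight = 100

for name, deaths, cases in hospitals:
    observed = deaths / cases                            # nonshrinkage estimate
    lam = cases / (cases + prior_weight)                 # weight on the hospital's own data
    shrunk = lam * observed + (1 - lam) * overall_rate   # shrinkage estimate
    print(f"{name:>25}: observed {observed:.3f}  shrunk {shrunk:.3f}  (weight on own data {lam:.2f})")
```

In this toy example the small hospital's published estimate moves almost entirely toward the overall mean, which illustrates why the article argues shrinkage estimators can obscure information a patient comparing nearby providers would want.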

  • "In thinking about shrinkage, the notation of Morris (1983) and more recently Dimick et al. (2009) and Mukamel et al. (2010) is helpful. One can describe the extent of shrinkage by writing P = λO + (1 − λ)E, where for each hospital, we solve for λ after being given P, O, and E. This is a descriptive tool and not the mathematical formula used to create the shrinkage (see Gelman et al. 1997, chapter 14 for details)." (A worked sketch of this back-calculation, with invented numbers, appears after this list.)
    ABSTRACT: We ask whether Medicare's Hospital Compare random effects model correctly assesses acute myocardial infarction (AMI) hospital mortality rates when there is a volume-outcome relationship. Using Medicare claims on 208,157 AMI patients admitted to 3,629 acute care hospitals throughout the United States, we compared average adjusted mortality using logistic regression with average adjusted mortality based on the Hospital Compare random effects model. We then fit random effects models with the same patient variables as in Medicare's Hospital Compare mortality model but also included terms for hospital Medicare AMI volume, and another model that additionally included other hospital characteristics. Hospital Compare's average adjusted mortality significantly underestimates average observed death rates in small-volume hospitals. Placing hospital volume in the Hospital Compare model significantly improved predictions. The Hospital Compare random effects model underestimates the typically poorer performance of low-volume hospitals. Placing hospital volume, and possibly other important hospital characteristics, in the Hospital Compare model appears indicated when using a random effects model to predict outcomes. Care must be taken to ensure the proper method of reporting such models, especially if hospital characteristics are included in the random effects model.
    Health Services Research 10/2010; 45(5 Pt 1):1148-67. DOI:10.1111/j.1475-6773.2010.01130.x · 2.78 Impact Factor
  • ABSTRACT: The mortality outcome of mechanical ventilation, a key intervention in the critically ill, has been variously reported to be determined by intensive care patient volume. We determined the volume-(mortality)-outcome relationship of mechanically ventilated patients whose records were contributed to the Australian and New Zealand Intensive Care Society Adult Patient Database, in a retrospective cohort study of 208,810 index patient admissions from 136 Australian and New Zealand intensive care units in the same number of hospitals over 1995-2009. The patient-volume effect on hospital mortality, overall and at the level of patient (nonsurgical, elective surgical, and emergency surgical) and intensive care unit (rural/regional, metropolitan, tertiary, and private) descriptors, was determined by random-effects logistic regression adjusting for illness severity and demographic and geographical predictors. Annualized patient volume was modeled both as a categorical variable (deciles) and, with calendar year, as a continuous variable using fractional polynomials. The patients had a mean age of 59 yrs (SD, 19 yrs) and a mean Acute Physiology and Chronic Health Evaluation III score of 66 (32); 39.4% were female, and hospital mortality was 22.4%. Overall, and at both the patient and intensive care unit descriptor levels, no progressive decline in mortality was demonstrated across the annual patient volume range (12-932). Over the whole database, the mortality odds ratio for the last volume decile (801-932 patients) was 1.26 (95% confidence interval, 1.06-1.50; p = .009) compared with the first volume decile (12-101 patients). Calendar-year mortality decreases were evident (odds ratio, 0.96; 95% confidence interval, 0.94-0.98; p = .0001). Using fractional polynomials, modest curvilinear mortality increases (range, 5%-8%) across the volume range were noted over the whole database for nonsurgical patients and at the tertiary intensive care unit level. No inverse volume-(mortality)-outcome relationship was apparent for ventilated patients in the Australian and New Zealand Intensive Care Society database. Mechanisms for mortality increments with patient volume were not identified but warrant further study.
    Critical care medicine 11/2011; 40(3):800-12. DOI:10.1097/CCM.0b013e318236f2af · 6.31 Impact Factor
  • ABSTRACT: All hospitals in New York State (NYS) are required to report surgical site infections (SSIs) occurring after coronary artery bypass graft surgery. This report describes the risk adjustment method used by NYS for reporting hospital SSI rates, and additional methods used to explore remaining differences in infection rates. All patients undergoing coronary artery bypass graft surgery in NYS in 2008 were monitored for chest SSI following the National Healthcare Safety Network protocol. The NYS Cardiac Surgery Reporting System and a survey of hospital infection prevention practices provided additional risk information. Models were developed to standardize hospital-specific infection rates and to assess additional risk factors and practices. The National Healthcare Safety Network risk score, based on duration of surgery, American Society of Anesthesiologists score, and wound class, was not highly predictive of chest SSIs. The addition of diabetes, obesity, end-stage renal disease, sex, chronic obstructive pulmonary disease, and Medicaid payer to the model improved the discrimination between procedures that resulted in SSI and those that did not by 25%. Hospital-reported infection prevention practices were not significantly related to SSI rates. Additional risk factors collected from a secondary database improved the prediction of SSIs; however, unexplained variation in rates between hospitals remained. (A minimal observed-versus-expected sketch of this kind of standardization appears at the end of this section.)
    American journal of infection control 11/2011; 40(1):22-8. DOI:10.1016/j.ajic.2011.06.015 · 2.21 Impact Factor
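The descriptive notation quoted in the first entry above, P = λO + (1 − λ)E, can be back-solved for the shrinkage weight: λ = (P − E)/(O − E) whenever O ≠ E. The sketch below does this with invented numbers; it illustrates only the notation, not the Hospital Compare model itself.

```python
# Sketch of the descriptive shrinkage notation quoted above: P = λ·O + (1 − λ)·E.
# Given a hospital's published (shrunk) rate P, its observed rate O, and its
# expected rate E, the implied shrinkage weight is λ = (P − E) / (O − E).
# All numbers are invented for illustration.

def implied_lambda(P, O, E):
    """Back-solve the shrinkage weight λ from P = λ*O + (1 - λ)*E."""
    if O == E:
        return float("nan")  # the weight is undefined when observed equals expected
    return (P - E) / (O - E)

# (hospital, published P, observed O, expected E) -- hypothetical values;
# the low-volume hospital's published rate sits much closer to E than to O.
examples = [
    ("Low-volume hospital", 0.105, 0.150, 0.100),
    ("High-volume hospital", 0.138, 0.140, 0.100),
]

for name, P, O, E in examples:
    lam = implied_lambda(P, O, E)
    print(f"{name}: implied λ = {lam:.2f}  (0 = fully shrunk to E, 1 = no shrinkage)")
```

A small implied λ for a low-volume hospital means its published rate sits near the expected rate, which is the behavior the Health Services Research article flags when a genuine volume-outcome relationship exists.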
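For the surgical site infection entry above, the standardization it describes amounts to comparing a hospital's observed infections with the number expected from a logistic risk model. The sketch below shows that observed-versus-expected calculation with an assumed, purely illustrative set of risk factors and coefficients; it is not the NYS model or its fitted weights.

```python
import math

# Minimal sketch (not the NYS method): risk-adjusted standardization of SSI rates.
# A logistic risk model (coefficients assumed here for illustration) gives each
# procedure an expected infection probability; a hospital's standardized ratio is
# observed infections divided by the sum of its expected probabilities.

# Assumed, illustrative coefficients: intercept plus three hypothetical risk factors.
COEF = {"intercept": -2.5, "diabetes": 0.6, "obesity": 0.5, "long_surgery": 0.7}

def predicted_risk(patient):
    """Expected SSI probability from the assumed logistic model."""
    z = COEF["intercept"] + sum(
        COEF[k] * patient.get(k, 0) for k in ("diabetes", "obesity", "long_surgery")
    )
    return 1 / (1 + math.exp(-z))

# Hypothetical hospital: per-procedure risk factors and whether an SSI occurred.
procedures = [
    ({"diabetes": 1, "obesity": 1, "long_surgery": 0}, 0),
    ({"diabetes": 0, "obesity": 0, "long_surgery": 0}, 0),
    ({"diabetes": 1, "obesity": 0, "long_surgery": 1}, 0),
    ({"diabetes": 0, "obesity": 1, "long_surgery": 1}, 1),
]

observed = sum(ssi for _, ssi in procedures)
expected = sum(predicted_risk(p) for p, _ in procedures)
print(f"observed SSIs = {observed}, expected = {expected:.2f}, "
      f"standardized ratio = {observed / expected:.2f}")
```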