Choosing The Best Hospital: The Limitations Of Public Quality Reporting

Tufts University School of Medicine, Baystate Medical Center, Springfield, Massachusetts, USA.
Health Affairs (Impact Factor: 4.97). 11/2008; 27(6):1680-7. DOI: 10.1377/hlthaff.27.6.1680
Source: PubMed


The call for accountability in health care quality has fueled the development of consumer-oriented Web sites that provide hospital ratings. Taking the consumer perspective, we compared five Web sites to assess the level of agreement in their rankings of local hospitals for four diagnoses. The sites assessed different measures of structure, process, and outcomes and did not use consistent patient definitions or reporting periods. Consequently, they failed to agree on hospital rankings within any diagnosis, even when using the same metric (such as mortality). In their current state, rating services appear likely to confuse, rather than inform, consumers.

Available from: Evan M Benjamin, Oct 03, 2015
    • "… 2) Schaefer and Schwarz (2010); 3) Emmert and Meier (2013); 4) Emmert et al. (2013a); 5) Reimann and Strech (2010); 6) Rothberg et al. (2008)"
    ABSTRACT: The importance of consumer-driven quality reporting initiatives, such as provider rating websites, is rising in many industrialized countries, including the USA, the UK and Germany. This essay therefore examines online rating websites for healthcare providers as internet-based social networking platforms that facilitate peer-to-peer information exchange and subjective assessments of patient experience. Since research on these information tools is in its infancy, the essay uses an explorative approach to outline the five most common views on provider rating websites in the scholarly and public debate. Based on an in-depth review of the international evidence, the essay finds that provider rating websites are becoming a major performance indicator for healthcare managers and a useful tool for individual decision-making in provider choice. Beyond a thorough reflection on the most common public misconceptions, the value of this essay lies in its recommendations to all stakeholders for the future enhancement of provider rating websites.
    Challenges and Opportunities in Health Care Management, Edited by Sebastian Gurtner, Katja Soyez, 12/2014: chapter A Review of Scientific Evidence for Public Perspectives on Online Rating Websites of Healthcare Providers: pages 279-290; Springer.
    • "Accountability of caregivers and health authorities to the community is internationally considered of paramount importance [28]. In spite of known limitations, public reporting of comparative information about the quality of health care, often derived from administrative data, is frequently put forward as an important quality improvement tool, which attempts to stimulate caregivers to grade up the provision of services and to reassure patients by demonstrating accountability [29-31]. In this context, ensuring data quality is a continuous challenge especially if the same data are used for reimbursement and for measuring quality [31,32]. "
    ABSTRACT: In-hospital case-fatality rates in patients admitted for acute myocardial infarction (AMI-CFRs) are used internationally as a quality indicator. To encourage hospitals to assume responsibility, the Belgian Ministry of Health decided to stimulate quality-improvement initiatives by means of a limited set of routinely analyzed indicators, among them the AMI-CFR. In this study we aimed, by determining the existence of inter-hospital differences in AMI-CFR, (1) to evaluate to what extent Belgian discharge records allow the assessment of quality of care in the field of AMI, and (2) to identify starting points for quality improvement. Data were hospital discharge records from all Belgian short-term general hospitals in the period 2002-2005. The study population (N = 46,287) included patients aged 18 years and older who were hospitalized for AMI. Because no unique patient identifier was available, we attempted to track transferred patients. We assessed data quality by comparing the discharge records with data from two registers of acute coronary events and through transfer and sensitivity analyses. We compared AMI-CFRs across hospitals using multivariable logistic regression models; in the main model, hospital, Charlson's co-morbidity index, age, gender and shock constituted the covariates. We carried out two types of analyses: a first in which transferred-out cases were excluded, to avoid double counting of patients when computing rates, and a second in which all transferred cases were excluded, to allow the study of patients admitted to, treated in, and discharged from the same hospital. We identified problems regarding both the CFR's numerator and denominator. Sensitivity analyses revealed differential coding and/or case-management practices. In the model excluding transferred-out cases, the main determinants of AMI-CFR were cardiogenic shock (OR(adj) 23.0; 95% CI [20.9;25.2]) and five-year age group (OR(adj) 1.23; 95% CI [1.11;1.36]). Sizable inter-hospital and inter-hospital-type differences (OR(community vs tertiary hospitals) 1.36; 95% CI [1.34;1.39]; OR(intermediary vs tertiary hospitals) 1.36; 95% CI [1.34;1.39]), as well as nonconformities to treatment guidelines, were observed. Despite established data-quality shortcomings, the magnitude of the observed differences and the nonconformities constitute leads for quality improvement. However, to measure progress, ways to improve and routinely monitor data quality should be developed.
    BMC Health Services Research 12/2010; 10:334. DOI:10.1186/1472-6963-10-334 · 1.71 Impact Factor
    ABSTRACT: Research-oriented cancer hospitals in the United States treat and study patients with a range of diseases, and different institutions specialize in research on particular diseases. Measures of disease-specific research productivity, and comparisons with overall productivity, are currently lacking. Our aim was to report the disease-specific productivity of American cancer hospitals and to propose a summary measure. We conducted a retrospective observational survey of the 50 highest-ranked cancer hospitals in the 2013 US News and World Report rankings. We performed an automated search of PubMed and of a clinical trial registry for published reports and registrations of clinical trials (respectively) addressing specific cancers between 2008 and 2013. We calculated the summed impact factor of the publications and generated a summary measure of productivity based on the number of Phase II clinical trials registered and the impact factor of Phase II clinical trials published for each institution-disease pair, from which we generated rankings. We identified 6076 registered trials and 6516 published trials with a combined impact factor of 44280.4, involving 32 different diseases across the 50 institutions. Using the summary measure based on registered and published clinical trials, we ranked institutions within specific diseases. As expected, different institutions were highly ranked in disease-specific productivity for different diseases: 43 institutions appeared in the top 10 ranks for at least one disease (vs 10 in the overall list), and 6 different institutions were ranked number 1 in at least one disease (vs 1 in the overall list). Research productivity varies considerably among the sample, and overall cancer productivity conceals great variation between diseases. Disease-specific rankings identify sites of high academic productivity, which may be of interest to physicians, patients and researchers.
    PLoS ONE 03/2015; 10(3):e0121233. DOI:10.1371/journal.pone.0121233 · 3.23 Impact Factor