MarketWatch: Choosing The Best Hospital: The Limitations Of Public Quality Reporting

Tufts University School of Medicine, Baystate Medical Center, Springfield, Massachusetts, USA.
Health Affairs (Impact Factor: 4.64). 11/2008; 27(6):1680-7. DOI: 10.1377/hlthaff.27.6.1680
Source: PubMed

ABSTRACT: The call for accountability in health care quality has fueled the development of consumer-oriented Web sites that provide hospital ratings. Taking the consumer perspective, we compared five Web sites to assess the level of agreement in their rankings of local hospitals for four diagnoses. The sites assessed different measures of structure, process, and outcomes and did not use consistent patient definitions or reporting periods. Consequently, they failed to agree on hospital rankings within any diagnosis, even when using the same metric (such as mortality). In their current state, rating services appear likely to confuse, rather than inform, consumers.
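
The study's core question is how well different rating sites' hospital orderings agree. As a minimal sketch of how such agreement can be quantified (the article does not publish analysis code, so this is illustrative only), the Python snippet below computes pairwise Kendall's tau between hypothetical rankings of the same five hospitals on three made-up rating sites; all site names and ranks are assumptions, not the study's data.

    # Hedged sketch: quantifying agreement between hospital rankings
    # from several rating sites. Site names and ranks are hypothetical.
    from itertools import combinations
    from scipy.stats import kendalltau

    # Ranks of the same five hospitals on three rating sites,
    # for one diagnosis (1 = best).
    ranks = {
        "SiteA": [1, 2, 3, 4, 5],
        "SiteB": [3, 1, 5, 2, 4],
        "SiteC": [2, 5, 1, 4, 3],
    }

    # Pairwise Kendall's tau: +1 = identical orderings, 0 = no
    # association, -1 = fully reversed orderings.
    for a, b in combinations(ranks, 2):
        tau, p = kendalltau(ranks[a], ranks[b])
        print(f"{a} vs {b}: tau = {tau:.2f} (p = {p:.2f})")

Consistently low tau values across site pairs would be the quantitative signature of the disagreement the abstract reports.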

  • ABSTRACT: To determine whether the Value-Based Purchasing total performance scoring system correlates with hospital-acquired-condition quality indicators. This study uses two secondary data sources, the American Hospital Association (AHA) annual survey and the Centers for Medicare & Medicaid Services (CMS) Value-Based Purchasing and Hospital Acquired Conditions databases, merged into a single database using Stata 10. Zero-inflated negative binomial regression was used to examine the effect of the CMS total performance score on counts of hospital-acquired conditions (a hedged Python sketch of this model class follows the list below), with hospital structure variables, including size, ownership, teaching status, payer mix, case mix, and location, as controls. Total performance scores, which determine whether hospitals receive incentive money, do not correlate well with quality outcomes in the form of hospital-acquired conditions; value-based purchasing thus does not appear to track improved quality and patient safety as indicated by Hospital Acquired Condition (HAC) scores. This suggests that either the total performance score does not measure what it should, or the quality outcome measurements do not capture the quality that the total performance score is meant to reflect.
    Health Policy 10/2014; 118(3). DOI:10.1016/j.healthpol.2014.10.003 · 1.73 Impact Factor
  • BMJ Quality & Safety 02/2015; 24(2):95-9. DOI:10.1136/bmjqs-2015-003934 · 3.28 Impact Factor
  • ABSTRACT: Research-oriented cancer hospitals in the United States treat and study patients with a range of diseases, and different institutions specialize in research on particular diseases, yet measures of disease-specific research productivity, and comparisons with overall productivity, are currently lacking. Our objective was to report the disease-specific productivity of American cancer hospitals and to propose a summary measure. We conducted a retrospective observational survey of the 50 highest-ranked cancer hospitals in the 2013 US News and World Report rankings, performing an automated search of PubMed and ClinicalTrials.gov for published reports and registrations of clinical trials, respectively, addressing specific cancers between 2008 and 2013. We calculated the summed impact factor of the publications and generated a summary measure of productivity for each institution and disease pair, based on the number of Phase II clinical trials registered and the impact factor of Phase II clinical trials published, then ranked institutions on this measure (a toy version of this computation follows the list below). We identified 6076 registered trials and 6516 published trials with a combined impact factor of 44,280.4, involving 32 different diseases across the 50 institutions. As expected, different institutions were highly ranked in disease-specific productivity for different diseases: 43 institutions appeared in the top 10 ranks for at least 1 disease (vs. 10 in the overall list), and 6 different institutions were ranked number 1 in at least 1 disease (vs. 1 in the overall list). Research productivity varies considerably among the sample, and overall cancer productivity conceals great variation between diseases. Disease-specific rankings identify sites of high academic productivity, which may be of interest to physicians, patients, and researchers.
    PLoS ONE 03/2015; 10(3):e0121233. DOI:10.1371/journal.pone.0121233 · 3.53 Impact Factor
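
The Health Policy abstract above fits a zero-inflated negative binomial model in Stata 10. As a hedged sketch of the same model class (the study's actual code and variable names are unavailable), the Python snippet below fits statsmodels' ZeroInflatedNegativeBinomialP to simulated data; the variables tps, beds, and teaching are illustrative stand-ins for the CMS total performance score and hospital structure controls, not the study's fields.

    # Hedged sketch: zero-inflated negative binomial regression of
    # hospital-acquired-condition counts. All data below are simulated.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

    rng = np.random.default_rng(0)
    n = 500
    tps = rng.uniform(20, 80, n)        # stand-in: CMS total performance score
    beds = rng.integers(50, 900, n)     # stand-in: hospital size (bed count)
    teaching = rng.integers(0, 2, n)    # stand-in: teaching status indicator

    # Simulate an overdispersed count outcome with excess zeros.
    mu = np.exp(0.5 + 0.002 * beds - 0.001 * tps)
    counts = rng.negative_binomial(5, 5 / (5 + mu))
    counts[rng.random(n) < 0.3] = 0     # structural zeros

    X = sm.add_constant(np.column_stack([tps, beds, teaching]))
    model = ZeroInflatedNegativeBinomialP(counts, X,
                                          exog_infl=sm.add_constant(tps), p=2)
    result = model.fit(method="bfgs", maxiter=500, disp=False)
    print(result.summary())

A near-zero, statistically insignificant coefficient on the tps column would correspond to the study's finding that total performance scores do not track HAC counts.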
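The PLoS ONE abstract above combines registered Phase II trial counts with the summed impact factor of published Phase II trials into a per-institution, per-disease summary rank. The paper's exact weighting is not reproduced here; the pandas sketch below shows one plausible toy version, with all institutions, diseases, and figures invented.

    # Hedged sketch: toy disease-specific productivity ranking.
    # Institutions, diseases, and figures are invented.
    import pandas as pd

    records = pd.DataFrame({
        "institution": ["Hosp A", "Hosp A", "Hosp B", "Hosp B"],
        "disease": ["breast", "lung", "breast", "lung"],
        "registered_phase2": [12, 4, 7, 9],
        "summed_impact_factor": [85.2, 20.1, 44.0, 61.3],
    })

    # Toy summary measure: average of within-disease ranks on each
    # component (higher component value -> better, i.e., lower, rank).
    for col in ["registered_phase2", "summed_impact_factor"]:
        records[col + "_rank"] = records.groupby("disease")[col].rank(ascending=False)
    records["summary_rank"] = records[["registered_phase2_rank",
                                       "summed_impact_factor_rank"]].mean(axis=1)
    print(records.sort_values(["disease", "summary_rank"]))

Ranking within each disease, rather than overall, is what lets smaller specialized centers surface at the top for their focus areas, which is the paper's key observation.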