Article · Literature Review

To Disclose or Not? Opportunities and Risks of Publishing Medical Quality Comparisons [Offenlegen oder nicht? Chancen und Risiken der Veröffentlichung von medizinischen Qualitätsvergleichen]


Abstract

As in other countries, there is growing debate in Germany over whether the results of systematic quality measurements should be published as quality comparisons and thus made accessible to the general public. High expectations are attached to such open comparisons of quality data: patients and referring physicians could inform themselves about providers' quality of care and select high-quality providers. This would shift case volumes between hospitals and, in the long term, lead to competition on quality. An analysis of the available empirical studies, however, yields a very mixed and in part paradoxical picture of the effects of public quality comparisons. While the general public and patients increasingly demand access to quality data, published information has so far been poorly understood and even more rarely used. Providers, in turn, respond to documented quality data in complex ways. On the one hand, published quality comparisons appear to initiate new quality-improvement efforts; at the same time, sometimes massive selection effects and widening differences in quality of care between social groups must be feared and recognized as a risk to the system as a whole. Nevertheless, disclosure may be important in the current situation in order to build trust between hospitals and the public and to establish an atmosphere of openness and accountability.


... The effects of publishing quality data have been examined in several studies from the United States and Great Britain, and several systematic reviews and articles address these questions [3][4][5][6][7][8]. There is a discrepancy between the importance the general population attaches to published quality data and the consequences that consumers and patients actually draw from the data. ...
Article
Full-text available
Background: The release of quality data from acute care hospitals to the general public aims to inform the public, provide transparency, and foster quality-based competition among providers. Because of the expected mechanisms of action and the possible adverse consequences of public quality comparison, it is a controversial topic. The perspective of physicians and nurses is of particular importance in this context: they are mainly responsible for the collection of quality-control data and are directly confronted with the results of public comparison. The focus of this qualitative study was the views and opinions of Swiss physicians and nurses on these issues, and how the two professional groups appraised the opportunities and risks of releasing quality data in Switzerland. Methods: A qualitative approach was chosen to answer the research question. For data collection, four focus groups were conducted with physicians and nurses employed in Swiss acute care hospitals; qualitative content analysis was applied to the data. Results: Both occupational groups had a very critical and negative attitude towards the recent developments, and the perceived risks dominated their view. Their main concerns were the reduction of complexity, a one-sided focus on measurable quality variables, risk selection, the threat of data manipulation, and the abuse of published information by the media. An additional concern was that the impression is given that the complex construct of quality can be reduced to a few key figures, a false message that then influences society and politics. This critical attitude is associated with the value system and professional self-concept shared by physicians and nurses, which stands in contrast to the underlying principles of a market economy and the economic orientation of the health care business. Conclusions: The critical and negative attitude of Swiss physicians and nurses must be heeded and investigated with regard to its impact on work motivation and identification with the profession. At the same time, the two professional groups are obliged to reflect on their critical attitude and to take a proactive role in developing appropriate quality indicators for the publication of quality data in Switzerland.
Chapter
This chapter focuses on the contribution of quality comparisons and quality reports to quality assurance in health care. It first addresses objectives, steering options, and the methodological requirements for fair quality comparisons. Building on this, instruments and initiatives of mandatory and voluntary quality reporting are presented. In recent years, the use of routine data for reporting and comparison purposes has developed further. Finally, benchmarking is introduced as a management method that goes beyond the comparative presentation of quality and performance data to directly support local quality improvement.
Chapter
Chapter 14 focuses on the objectives and methodology of quality comparisons. We first consider what contribution quality comparisons can make to quality assurance in health care, which objectives and steering options are associated with them, and which methodological requirements fair quality comparisons must meet. Building on this, instruments and initiatives of mandatory and voluntary quality reporting are presented. Alongside the direct collection and measurement of quality data, the use of routine data for reporting and comparison purposes has become established in recent years. Complementing this, the chapter concludes with the concept of benchmarking as a management method. Benchmarking aims not only at the comparative presentation of quality and performance data but also seeks to trigger learning processes and to support local quality improvement.
Chapter
The growing importance of clinical risk management was triggered by the 1999 IOM report (USA), whose estimates of preventable deaths and patient harm signalled an urgent need for action. Initiatives and organizations to improve patient safety emerged worldwide, drawing on existing competence models while also developing new concepts, particularly for improving organizational processes and interpersonal competencies such as leadership and teamwork. The role of the patient has also changed markedly, towards an increasingly informed and active patient who wants to take part in decisions about his or her illness in the sense of shared decision making. This also has consequences for the design of patient-centred clinical risk management, in which the patient can play an active part.
Chapter
Hospital quality is now regarded as the decisive distinguishing feature in inpatient health care. Quality assurance (QA) is intended to contribute, via the publication of quality data, to more competition and thereby to greater efficiency and cost reduction in hospitals.1 The instruments used for this purpose include nationwide external quality assurance as well as hospitals' internal quality assurance measures, such as certification processes.
Article
Introduction: A relatively recent instrument of quality assurance in the German health system is quality reporting: the publication of quality and performance data with the aim of systematically improving the quality of health services. This can occur voluntarily, under legal obligation, or in an uncontrolled fashion. In contrast to general health reporting, quality reporting aims at steering effects within the health system, which are examined below.
Article
Introduction: Patient questionnaires are a frequently used instrument within the framework of quality management in inpatient and outpatient care. Often such questionnaires enable a comparison of care providers, with the consequence that one turns out visibly better or poorer than another. This makes it necessary to check whether differences found in the questionnaire results are not merely the product of different compositions of the surveyed populations. Although frequently demanded, such case-mix adjustments are not usually made. The present article describes the choice of adjustment variables and the statistical procedures for a relatively homogeneous sample of breast cancer patients, and discusses the utility and limitations of adjustment. Methods: On the basis of questionnaire data from 3,840 breast cancer patients of 52 breast cancer centres in North Rhine-Westphalia collected during 2010, we examined which patient characteristics can be used to adjust satisfaction ratings and to what extent the observed values for the centres differed from the expected results. Independent variables considered were age, educational level, native language, stage, grading, ASA classification, affected breast, type of operation, insurance status, partnership status, and the time between the operation and receipt of the completed questionnaire. Results: The variance explained by the independent variables is low. The expected values showed minimal differences, which can be attributed to the high homogeneity of the patient collectives and the centres. Conclusion: The use of adjustment remains limited in the study population; the variance explained by the adjustors is small. In our opinion, no clear recommendation for or against case-mix adjustment can be made in patient populations such as the one examined here. Thus even small gains in the correctness of reported patient-questionnaire results face unresolved methodological challenges. Also important, but as yet rarely discussed, is the substantive interpretation of the association of patient characteristics with better or poorer questionnaire ratings: adjusting for these characteristics would eliminate such findings and contribute nothing to improving health care.
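The case-mix adjustment discussed above amounts to indirect standardization: a centre's observed mean satisfaction is compared with the mean it would be expected to achieve given its own patient mix. A minimal sketch, with a hypothetical function name, patient groups, and numbers (none of them from the study):

```python
def expected_score(case_mix_counts, group_means):
    """Indirect standardization: the mean score a centre would be expected
    to achieve if each patient group scored at the overall group average,
    weighted by the centre's own patient mix."""
    total = sum(case_mix_counts.values())
    return sum(n * group_means[g] for g, n in case_mix_counts.items()) / total

# Hypothetical overall satisfaction means by age group (5-point scale)
group_means = {"<60": 4.2, "60+": 3.8}
# Hypothetical centre treating mostly older patients
centre_mix = {"<60": 30, "60+": 70}

expected = expected_score(centre_mix, group_means)  # ~3.92
observed = 4.0                                      # the centre's raw mean
adjusted_diff = observed - expected                 # ~+0.08 above expectation
```

When, as in the study above, the adjustors explain little variance, the expected values barely differ between centres and the adjusted differences closely track the crude ones.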
Article
Full-text available
In the last decade assessing the quality of healthcare has become increasingly important across the world. Switzerland lacks a detailed overview of how quality management is implemented and of its effects on medical procedures and patients' concerns. This study aimed to examine the systematics of quality management in Switzerland by assessing the providers and collected parameters of current quality initiatives. In summer 2011 we contacted all of the medical societies in Switzerland, the Federal Office of Public Health, the Swiss Medical Association (FMH) and the head of Swiss medical insurance providers, to obtain detailed information on current quality initiatives. All quality initiatives featuring standardised parameter assessment were included. Of the current 45 initiatives, 19 were powered by medical societies, five by hospitals, 11 by non-medical societies, two by the government, two by insurance companies or related institutions and six by unspecified institutions. In all, 24 medical registers, five seals of quality, five circles of quality, two self-assessment tools, seven superior entities, one checklist and one combined project existed. The cost of treatment was evaluated by four initiatives. A data report was released by 24 quality initiatives. The wide variety and the large number of 45 recorded quality initiatives provides a promising basis for effective healthcare quality management in Switzerland. However, an independent national supervisory authority should be appointed to provide an effective review of all quality initiatives and their transparency and coordination.
Article
Full-text available
Patients on certain waiting lists in the UK National Health Service (NHS) are now offered the choice of persevering with their home hospital or switching to another hospital where they will be treated on a guaranteed date. Such decisions require knowledge of performance. We used facilitated focus groups to investigate the views of patients and members of the public on publication of information about the performance of healthcare providers. Six groups with a total of 50 participants met in six different locations in England. Participants felt that independent monitoring of healthcare performance is necessary, but they were ambivalent about the value of performance indicators and hospital rankings. They tended to distrust government information and preferred the presentational style of ‘Dr Foster’, a commercial information provider, because it gave more detailed locally relevant information. Many participants felt the NHS did not offer much scope for choice of provider. If public access to performance information is to succeed in informing referral decisions and raising quality standards, the public and general practitioners will need education on how to interpret and use the data.
Article
Full-text available
OBJECTIVES: This research examined whether judgments about a hospital's risk-adjusted mortality performance are affected by the severity-adjustment method. METHODS: Data came from 100 acute care hospitals nationwide and 11880 adults admitted in 1991 for acute myocardial infarction. Ten severity measures were used in separate multivariable logistic models predicting in-hospital death. Observed-to-expected death rates and z scores were calculated with each severity measure for each hospital. RESULTS: Unadjusted mortality rates for the 100 hospitals ranged from 4.8% to 26.4%. For 32 hospitals, observed mortality rates differed significantly from expected rates for 1 or more, but not for all 10, severity measures. Agreement between pairs of severity measures on whether hospitals were flagged as statistical mortality outliers ranged from fair to good. Severity measures based on medical records frequently disagreed with measures based on discharge abstracts. CONCLUSIONS: Although the 10 severity measures agreed about relative hospital performance more often than would be expected by chance, assessments of individual hospital mortality rates varied by different severity-adjustment methods.
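The observed-to-expected death rates and z scores used above can be computed directly from a severity model's predicted death probabilities. A minimal sketch, with a hypothetical function name and illustrative patient-level numbers; the z score uses the usual sum-of-Bernoulli-variances approximation:

```python
import math

def oe_and_z(deaths, predicted_risks):
    """Observed-to-expected mortality ratio and z score for one hospital.

    deaths          -- 0/1 outcome per patient
    predicted_risks -- death probability per patient from a severity model
    """
    observed = sum(deaths)
    expected = sum(predicted_risks)
    # Variance of the expected death count: sum of Bernoulli variances p*(1-p)
    variance = sum(p * (1 - p) for p in predicted_risks)
    oe = observed / expected
    z = (observed - expected) / math.sqrt(variance)
    return oe, z

# Hypothetical hospital: 10 patients with model-predicted death risks
risks = [0.05, 0.10, 0.20, 0.05, 0.30, 0.15, 0.10, 0.05, 0.25, 0.10]
deaths = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0]

oe, z = oe_and_z(deaths, risks)  # O/E ~ 2.2, z ~ 1.6: more deaths than
                                 # expected, but not a clear statistical outlier
```

Because `expected` and `variance` depend entirely on the severity model's predicted risks, swapping in a different severity measure can move a hospital in or out of outlier status, which is exactly the disagreement the study reports.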
Article
Full-text available
Mortality rates are commonly used to judge hospital performance. In comparing death rates across hospitals, it is important to control for differences in patient severity. Various severity tools are now actively marketed in the United States. This study asked whether one would identify different hospitals as having higher- or lower-than-expected death rates using different severity measures. We applied 11 widely-used severity measures to the same database containing 9407 medically-treated stroke patients from 94 hospitals, with 916 (9.7%) in-hospital deaths. Unadjusted hospital mortality rates ranged from 0 to 24.4%. For 27 hospitals, observed mortality rates differed significantly from expected rates when judged by one or more, but not all 11, severity methods. The agreement between pairs of severity methods for identifying the worst 10% or best 50% of hospitals was fair to good. Efforts to evaluate hospital performance based on severity-adjusted, in-hospital death rates for stroke patients are likely to be sensitive to how severity is measured.
Article
Full-text available
This study explores consumers' comprehension of quality indicators appearing in health care report cards. Content analyses of focus group transcripts show differences in understanding individual quality indicators and among three populations: privately insured; Medicaid; and uninsured. Several rounds of coding and analysis assess: the degree of comprehension; what important ideas are not understood; and what exactly is not understood about the indicator (inter-rater reliability exceeded 94 percent). Thus, this study is an educational diagnosis of the comprehension of currently disseminated quality indicators. Fifteen focus groups (5 per insurance type) were conducted with a total of 104 participants. Findings show that consumers with differing access to and experiences with care have different levels of comprehension. Indicators are not well understood and are interpreted in unintended ways. Implications and strategies for communicating and disseminating quality information are discussed.
Article
Full-text available
To compare crude and adjusted in-hospital mortality rates after prostatectomy between hospitals using routinely collected hospital discharge data, and to illustrate the value and limitations of comparative mortality rates as a surrogate measure of quality of care. Mortality rates for non-teaching hospitals (n=21) were compared to a single notional group of teaching hospitals. Patient age, disease (comorbidity), length of stay, emergency admission, and hospital location were identified using ICD-9-CM-coded Victorian hospital morbidity data from public hospitals collected between 1987/88 and 1994/95. Comparisons between hospitals were based on crude and adjusted odds ratios (OR) and 95% confidence intervals (CI) derived using univariate and multivariate logistic regression. Model fit was evaluated using the area under the receiver operating characteristic curve (c statistic), Somers' D, Tau-a, and R². The overall difference in crude mortality rates between hospitals achieved borderline significance (chi²=31.31; d.f.=21; P=0.06); these differences were no longer significant after adjustment (chi²=25.68; P=0.21). On crude analysis, four hospitals were initially identified as 'low' outliers; after adjustment, none of these remained outside the 95% CI, whereas a new hospital emerged as a 'high' outlier (OR=4.56; P=0.05). The adjusted ORs relative to the reference group varied from 0.21 to 5.54 (ratio=26.38). The model provided a good fit to the data (c=0.89; Somers' D=0.78; Tau-a=0.013; R²=0.24). Regression adjustment of routinely collected prostatectomy data from the Victorian Inpatient Minimum Database reduced variance associated with age and correlates of illness severity. Reducing confounding in this way is a move towards exploring differences in quality of care between hospitals. Collection of such information over time, together with refinement of data collection, would provide indicators of change in quality of care that could be explored in more detail in the clinical setting.
Article
Full-text available
Public disclosure of information about the quality of health plans, hospitals, and doctors continues to be controversial. The US experience of the past decade suggests that sophisticated quality measures and reporting systems that disclose information on quality have improved the process and outcomes of care in limited ways in some settings, but these efforts have not led to the "consumer choice" market envisaged. Important reasons for this failure include limited salience of objective measures to consumers, the complexity of the task of interpretation, and insufficient use of quality results by organised purchasers and insurers to inform contracting and pricing decisions. Nevertheless, public disclosure may motivate quality managers and providers to undertake changes that improve the delivery of care. Efforts to measure and report information about quality should remain public, but may be most effective if they are targeted to the needs of institutional and individual providers of care.
Article
Full-text available
To examine the impact of the publication of clinical outcomes data on NHS Trusts in Scotland, to inform the development of similar schemes elsewhere. Case studies including semistructured interviews and a review of background statistics. Eight Scottish NHS acute trusts. 48 trust staff comprising chief executives, medical directors, stroke consultants, breast cancer consultants, nurse managers, and junior doctors. Staff views on the benefits and drawbacks of clinical outcome indicators provided by the clinical resource and audit group (CRAG), and perceptions of the impact of these data on clinical practice and continuous quality improvement. The CRAG indicators had a low profile in the trusts and were rarely cited as informing internal quality improvement or used externally to identify best practice; they were mainly used to support applications for further funding and service development. The weak impact was attributed to a lack of professional belief in the indicators, arising from perceived problems with data quality and the time lag between collection and presentation of data; limited dissemination; weak incentives to take action; a predilection for process rather than outcome indicators; and a belief that informal information is often more useful than quantitative data in assessing clinical performance. Those responsible for developing clinical indicator programmes should develop robust datasets, and should foster a working environment and incentives that ensure these data are used for continuous improvement.
Article
Full-text available
Health care "report cards" have attracted significant consumer interest, particularly publicly available Internet health care quality rating systems. However, the ability of these ratings to discriminate between hospitals is not known. To determine whether hospital ratings for acute myocardial infarction (AMI) mortality from a prominent Internet hospital rating system accurately discriminate between hospitals' performance based on process of care and outcomes. Data from the Cooperative Cardiovascular Project, a retrospective systematic medical record review of 141 914 Medicare fee-for-service beneficiaries 65 years or older hospitalized with AMI at 3363 US acute care hospitals during a 4- to 8-month period between January 1994 and February 1996 were compared with ratings obtained from HealthGrades.com (1-star: worse outcomes than predicted, 5-star: better outcomes than predicted) based on 1994-1997 Medicare data. Quality indicators of AMI care, including use of acute reperfusion therapy, aspirin, beta-blockers, angiotensin-converting enzyme inhibitors; 30-day mortality. Patients treated at higher-rated hospitals were significantly more likely to receive aspirin (admission: 75.4% 5-star vs 66.4% 1-star, P for trend =.001; discharge: 79.7% 5-star vs 68.0% 1-star, P =.001) and beta-blockers (admission: 54.8% 5-star vs 35.7% 1-star, P =.001; discharge: 63.3% 5-star vs 52.1% 1-star, P =.001), but not angiotensin-converting enzyme inhibitors (59.6% 5-star vs 57.4% 1-star, P =.40). Acute reperfusion therapy rates were highest for patients treated at 2-star hospitals (60.6%) and lowest for 5-star hospitals (53.6% 5-star, P =.008). Risk-standardized 30-day mortality rates were lower for patients treated at higher-rated than lower-rated hospitals (21.9% 1-star vs 15.9% 5-star, P =.001). 
However, there was marked heterogeneity within rating groups and substantial overlap of individual hospitals across rating strata for mortality and process of care; only 3.1% of comparisons between 1-star and 5-star hospitals had statistically lower risk-standardized 30-day mortality rates in 5-star hospitals. Similar findings were observed in comparisons of 30-day mortality rates between individual hospitals in all other rating groups and when comparisons were restricted to hospitals with a minimum of 30 cases during the study period. Hospital ratings published by a prominent Internet health care quality rating system identified groups of hospitals that, in the aggregate, differed in their quality of care and outcomes. However, the ratings poorly discriminated between any 2 individual hospitals' process of care or mortality rates during the study period. Limitations in discrimination may undermine the value of health care quality ratings for patients or payers and may lead to misperceptions of hospitals' performance.
Article
Full-text available
A key strategy for driving improvements in health care quality is providing comparative quality information to consumers. This strategy will not work, and could even be counterproductive, unless (1) consumers are convinced that quality problems are real and consequential and that quality can be improved; (2) purchasers and policymakers make sure that quality reporting is standardized and universal; (3) consumers are given quality information that is relevant and easy to understand and use; (4) the dissemination of quality information is improved; and (5) purchasers reward quality improvements and providers create the information and organizational infrastructure to achieve them.
Article
Full-text available
Many countries publicly report data on the quality of health care. Because surgical patients often have time to plan their care they are ideal candidates to use such data. We examined the adequacy of publicly reported data about surgical quality in California. We used data specific to California because this state is the most populous in the United States and more surgery is done here than in any other state. We defined surgical procedures as those invasive procedures listed by the National Center for Health Statistics.1
Article
Full-text available
Health care report cards publicly report information about physician, hospital, and health plan quality in an attempt to improve that quality. Reporting quality information publicly is presumed to motivate quality improvement through 2 main mechanisms. First, public quality information allows patients, referring physicians, and health care purchasers to preferentially select high-quality physicians. Second, public report cards may motivate physicians to compete on quality and, by providing feedback and by identifying areas for quality improvement initiatives, help physicians to do so. Despite these plausible mechanisms of quality improvement, the value of publicly reporting quality information is largely undemonstrated and public reporting may have unintended and negative consequences on health care. These unintended consequences include causing physicians to avoid sick patients in an attempt to improve their quality ranking, encouraging physicians to achieve "target rates" for health care interventions even when it may be inappropriate among some patients, and discounting patient preferences and clinical judgment. Public reporting of quality information promotes a spirit of openness that may be valuable for enhancing trust of the health professions, but its ability to improve health remains undemonstrated, and public reporting may inadvertently reduce, rather than improve, quality. Given these limitations, it may be necessary to reassess the role of public quality reporting in quality improvement.
Article
Full-text available
Public reporting and pay for performance are intended to accelerate improvements in hospital care, yet little is known about the benefits of these methods of providing incentives for improving care. We measured changes in adherence to 10 individual and 4 composite measures of quality over a period of 2 years at 613 hospitals that voluntarily reported information about the quality of care through a national public-reporting initiative, including 207 facilities that simultaneously participated in a pay-for-performance demonstration project funded by the Centers for Medicare and Medicaid Services; we then compared the pay-for-performance hospitals with the 406 hospitals with public reporting only (control hospitals). We used multivariable modeling to estimate the improvement attributable to financial incentives after adjusting for baseline performance and other hospital characteristics. As compared with the control group, pay-for-performance hospitals showed greater improvement in all composite measures of quality, including measures of care for heart failure, acute myocardial infarction, and pneumonia and a composite of 10 measures. Baseline performance was inversely associated with improvement; in pay-for-performance hospitals, the improvement in the composite of all 10 measures was 16.1% for hospitals in the lowest quintile of baseline performance and 1.9% for those in the highest quintile (P<0.001). After adjustments were made for differences in baseline performance and other hospital characteristics, pay for performance was associated with improvements ranging from 2.6 to 4.1% over the 2-year period. Hospitals engaged in both public reporting and pay for performance achieved modestly greater improvements in quality than did hospitals engaged only in public reporting. Additional research is required to determine whether different incentives would stimulate more improvement and whether the benefits of these programs outweigh their costs.
Article
Context.— Publicly released performance reports ("report cards") are expected to foster competition on the basis of quality. Proponents frequently cite the need to inform patient choice of physicians and hospitals as a central element of this strategy. Objective.— To examine the awareness and use of a statewide consumer guide that provides risk-adjusted, in-hospital mortality ratings of hospitals that provide cardiac surgery. Design.— Telephone survey conducted in 1996. Setting.— Pennsylvania, where since 1992 the Pennsylvania Consumer Guide to Coronary Artery Bypass Graft [CABG] Surgery has provided risk-adjusted mortality ratings of all cardiac surgeons and hospitals in the state. Participants.— A total of 474 (70%) of 673 eligible patients who had undergone CABG surgery during the previous year at 1 of 4 hospitals listed in the Consumer Guide as having average mortality rates between 1% and 5% were successfully contacted. Main Outcome Measures.— Patients' awareness of the Consumer Guide, their knowledge of its ratings, their degree of interest in the report, and barriers to its use. Results.— Ninety-three patients (20%) were aware of the Consumer Guide, but only 56 (12%) knew about it before surgery. Among these 56 patients, 18 reported knowing the hospital rating and 7 the surgeon rating; 11 said hospital and/or surgeon ratings had a moderate or major impact on their decision making, but only 4 were able to specify either or both correctly. When the Consumer Guide was described to all patients, 264 (56%) were "very" or "somewhat" interested in seeing a copy, and 273 (58%) reported that they probably or definitely would change surgeons if they learned that their surgeon had a higher than expected mortality rate in the previous year. A short time window for decision making and limited awareness of alternative hospitals within a reasonable distance of home were identified as important barriers to use. Conclusions.— Only 12% of patients surveyed reported awareness of a prominent report on cardiac surgery mortality before undergoing cardiac surgery. Fewer than 1% knew the correct rating of their surgeon or hospital and reported that it had a moderate or major impact on their selection of provider. Efforts to aid patient decision making with performance reports are unlikely to succeed without a tailored and intensive program for dissemination and patient education.
Article
Objectives. Severity-adjusted death rates for coronary artery bypass graft (CABG) surgery by provider are published throughout the country. Whether five severity measures rated severity differently for identical patients was examined in this study. Methods. Two severity measures rate patients using clinical data taken from the first two hospital days (MedisGroups, physiology scores); three use diagnoses and other information coded on standard, computerized hospital discharge abstracts (Disease Staging, Patient Management Categories, all patient refined diagnosis related groups). The database contained 7,764 coronary artery bypass graft patients from 38 hospitals with 3.2% in-hospital deaths. Logistic regression was performed to predict deaths from age, age squared, sex, and severity scores, and c statistics from these regressions were used to indicate model discrimination. Odds ratios of death predicted by different severity measures were compared. Results. Code-based measures had better c statistics than clinical measures: all patient refined diagnosis related groups, c = 0.83 (95% C.I. 0.81, 0.86) versus MedisGroups, c = 0.73 (95% C.I. 0.70, 0.76). Code-based measures predicted very different odds of dying than clinical measures for more than 30% of patients. Diagnosis codes indicating postoperative, life-threatening conditions may contribute to the superior predictive power of code-based measures. Conclusions. Clinical and code-based severity measures predicted different odds of dying for many coronary artery bypass graft patients. Although code-based measures had better statistical performance, this may reflect their reliance on diagnosis codes for life-threatening conditions occurring late in the hospitalization, possibly as complications of care. This compromises their utility for drawing inferences about quality of care based on severity-adjusted coronary artery bypass graft death rates.
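The c statistic used above to compare models has a direct probabilistic reading: it is the chance that the model assigns a higher predicted risk to a patient who died than to one who survived (equivalently, the area under the ROC curve). A minimal sketch of that computation, with invented predictions rather than the study's data:

```python
def c_statistic(pred, died):
    """Concordance (c statistic): fraction of (death, survivor) pairs
    in which the death received the higher predicted risk; ties count half."""
    cases = [p for p, d in zip(pred, died) if d]
    controls = [p for p, d in zip(pred, died) if not d]
    concordant = 0.0
    for pc in cases:
        for ps in controls:
            if pc > ps:
                concordant += 1.0
            elif pc == ps:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))

# Invented predicted in-hospital death probabilities and outcomes (1 = died)
pred = [0.9, 0.7, 0.4, 0.3, 0.2, 0.1]
died = [1, 1, 0, 1, 0, 0]
print(round(c_statistic(pred, died), 3))  # → 0.889
```

A c of 0.5 is no better than chance and 1.0 is perfect discrimination, which is why the gap between 0.83 for the code-based measure and 0.73 for MedisGroups is a meaningful difference in discrimination.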
Article
Objective: The BQS Bundesgeschäftsstelle Qualitätssicherung gGmbH analyzed the outcome reporting in all hospital quality reports available on the internet. The aim was to obtain information about the use of BQS quality indicators for public quality reporting and to derive conclusions for further optimization of the procedure. Methods: Section E3 of the quality reports for the 2004 reporting year from 1935 hospitals, available on the internet at www.g-qb.de between October 1, 2005 and November 17, 2005, was analyzed. We examined how many and which clinical areas and indicators were selected for reporting results, and to what extent the BQS quality assurance procedure was used as a data source. Results: 28.6% of German hospitals published information on the quality of indication, process, and outcome in their institution. On average, hospitals reported on 5.5 clinical areas using 30.3 indicators. 92.7% of the published results refer to BQS clinical areas, and 73.4% of all indicators used are BQS quality indicators. Conclusions: The selective choice of indicators according to non-transparent criteria, the use of indicators unsuitable for presenting results, the missing description of the underlying data and calculation methods, and the lack of information on the accuracy of the published data limit the comparability of the published results. In the future it must be ensured that a uniform description of the calculation basis for all indicators used in the quality reports is publicly accessible. A central register of quality indicators is required for this purpose.
A set of quality indicators suitable for public quality reporting should be defined for the relevant areas of care. Information on the completeness and accuracy of the published results should be evident in the quality reports. For the further development of hospital quality reports, it is also necessary to systematically analyze the requirements that all target groups, including the hospitals themselves, place on the reports.
Article
Context Information about the performance of hospitals, health professionals, and health care organizations has been made public in the United States for more than a decade. The expected gains of public disclosure have not been made clear, and both the benefits and potential risks have received minimal empirical investigation.Objective To summarize the empirical evidence concerning public disclosure of performance data, relate the results to the potential gains, and identify areas requiring further research.Data Sources A literature search was conducted on MEDLINE and EMBASE databases for articles published between January 1986 and October 1999 in peer-reviewed journals. Review of citations, public documents, and expert advice was conducted to identify studies not found in the electronic databases.Study Selection Descriptive, observational, or experimental evaluations of US reporting systems were selected for inclusion.Data Extraction Included studies were organized based on use of public data by consumers, purchasers, physicians, and hospitals; impact on quality of care outcomes; and costs.Data Synthesis Seven US reporting systems have been the subject of published empirical evaluations. Descriptive and observational methods predominate. Consumers and purchasers rarely search out the information and do not understand or trust it; it has a small, although increasing, impact on their decision making. Physicians are skeptical about such data and only a small proportion makes use of it. Hospitals appear to be most responsive to the data. In a limited number of studies, the publication of performance data has been associated with an improvement in health outcomes.Conclusions There are several potential gains from the public disclosure of performance data, but use of the information by provider organizations for quality improvement may be the most productive area for further research.
Article
Externally-reported assessments of hospital quality are in increasing demand, as consumers, purchasers, providers, and public policy makers express growing interest in public disclosure of performance information. This article presents an analysis of a groundbreaking program in Massachusetts to measure and disseminate comparative quality information about patients' hospital experiences. The article emphasizes the reporting structure that was developed to address the project's dual goals of improving quality of care delivered statewide while also advancing public accountability. Numerous trade-offs were encountered in developing reports that would satisfy a range of purchaser and provider constituencies. The final result was a reporting framework that emphasized preserving detail to ensure visibility for each participating hospital's strengths as well as its priority improvement areas. By avoiding oversimplification of the results, the measurement project helped to support a broad range of successful improvement activity statewide. Key words: hospital performance measurement, hospital quality improvement, patient surveys, public accountability, report cards
Article
Payers and policymakers are increasingly examining hospital mortality rates as indicators of hospital quality. To be meaningful, these death rates must be adjusted for patient severity. This research examined whether judgments about an individual hospital's risk-adjusted mortality are affected by the severity adjustment method. Data came from 105 acute care hospitals nationwide that use the MedisGroups severity measure. The study population was 18,016 adults hospitalized in 1991 for pneumonia. Multivariable logistic models to predict in-hospital death were computed separately for 14 severity methods, controlling for patient age, sex, and diagnosis-related group (DRG). For each hospital, observed-to-expected death rates and z scores were calculated for each severity method. The overall in-hospital death rate was 9.6%. Unadjusted mortality rates for the 105 hospitals ranged from 1.4% to 19.6%. After adjusting for age, sex, DRG, and severity, 73 facilities had observed mortality rates that did not differ significantly from expected rates according to all 14 severity methods; two had rates significantly higher than expected for all 14 severity methods. For 30 hospitals, observed mortality rates differed significantly from expected rates when judged by one or more but not all 14 severity methods. Kappa analysis showed fair to excellent agreement between severity methods. The 14 severity methods agreed about relative hospital performance more often than expected by chance, but perceptions of individual hospitals' mortality rates varied using different severity adjustment methods for almost one third of facilities. Judgments about individual hospital performance using different severity adjustment approaches may reach different conclusions.
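The observed-to-expected comparison and the z scores used in such analyses can be sketched as follows. The aggregation shown (expected deaths as the sum of patient-level predicted risks, with a binomial variance term) is a standard approach, and the numbers are hypothetical, not taken from the study:

```python
import math

def hospital_z(observed_deaths, expected_probs):
    """z score for observed vs. expected deaths at one hospital, scaling
    the gap by the binomial standard error sqrt(sum p*(1-p))."""
    expected = sum(expected_probs)
    var = sum(p * (1 - p) for p in expected_probs)
    return (observed_deaths - expected) / math.sqrt(var)

# Hypothetical hospital: 40 pneumonia patients with model-predicted risks
risks = [0.05] * 20 + [0.10] * 15 + [0.30] * 5
expected = sum(risks)          # 4.0 expected deaths
z = hospital_z(7, risks)       # 7 observed deaths
print(round(expected, 2), round(z, 2))
```

A large positive z flags more deaths than the severity model predicts; the study's point is that the same hospital can be flagged by one severity model and not another.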
Article
Patients and health insurers are increasingly interested in the quality of care provided by hospitals. Quality indicators are often used to evaluate the quality of inpatient treatment, but most of these evaluations require the collection of additional data. The patient safety indicators (PSI) introduced by the Agency for Healthcare Research and Quality (AHRQ) are thoroughly validated and rely exclusively on routine data. The original PSI definitions were transferable to the classifications of diagnoses, procedures, and DRGs used in Germany, and were applied to routine data on 2.3 million cases from more than 200 hospitals. A comparison of the results with the US references reveals high concordance between the rates and demonstrates that PSI can be applied to detect critical incidents in patient care. For PSI-based hospital benchmarking, further development of appropriate methods of risk adjustment is necessary.
Article
Rationale, aims and objectives: Many objective measures rating the quality of doctors, hospitals, and medical groups are publicly reported. Surgical patients may have more opportunity than other types of patients to use quality measures to guide their choice of provider. If surgical patients are able to choose higher-quality providers, overall surgical quality might increase. Objective: To determine what procedure-specific measures of surgical quality are available to consumers facing surgery in California and what new measures will be available by 2005. Methods: We searched for and surveyed organizations publicly reporting data on health care quality in California. We asked about current quality measures and new measures set for public release by 2005. Included measures had to be procedure-specific, with results separated by hospital. The main outcome measures were the number of quality measures, the conceptual aspect of quality measured, and the type of risk adjustment used. Results: Eighteen organizations publicly report any health care quality measures in California. These organizations report 333 measures, of which 32 (10%) are procedure-specific measures of surgical quality. There is at least one quality measure for 21 different procedures; these procedures account for 14% of all major operations. Three new measures will be released by 2005. Conclusions: Californians facing surgery have limited information regarding the quality of their care, and few new measures are planned. Eighty-six per cent of patients would find no quality measures related to planned procedures. Public release of performance data is unlikely to improve the quality of health care unless the number and comprehensiveness of measures increase dramatically.
Article
OBJECTIVE: To determine whether assessments of illness severity, defined as risk for in-hospital death, varied across four severity measures. DESIGN: Retrospective cohort study. SETTING: 100 hospitals using the MedisGroups severity measure. PATIENTS: 11 880 adults managed medically for acute myocardial infarction; 1574 in-hospital deaths (13.2%). MEASUREMENTS: For each patient, probability of death was predicted four times, each time by using patient age and sex and one of four common severity measures: 1) admission MedisGroups scores for probability of death scores; 2) scores based on values for 17 physiologic variables at time of admission; 3) Disease Staging's probability-of-mortality model; and 4) All Patient Refined Diagnosis Related Groups (APR-DRGs). Patients were ranked according to probability of death as predicted by each severity measure, and rankings were compared across measures. The presence or absence of each of six clinical findings considered to indicate poor prognosis in patients with myocardial infarction (congestive heart failure, pulmonary edema, coma, low systolic blood pressure, low left ventricular ejection fraction, and high blood urea nitrogen level) was determined for patients ranked differently by different severity measures. RESULTS: MedisGroups and the physiology score gave 94.7% of patients similar rankings. Disease Staging, MedisGroups, and the physiology score gave only 78% of patients similar rankings. MedisGroups and APR-DRGs gave 80% of patients similar rankings. Patients whose illnesses were more severe according to MedisGroups and the physiology score were more likely to have the six clinical findings than were patients whose illnesses were more severe according to Disease Staging and APR-DRGs. CONCLUSIONS: Some pairs of severity measures assigned very different severity levels to more than 20% of patients. Evaluations of patient outcomes need to be sensitive to the severity measures used for risk adjustment.
Article
DATA SOURCES/STUDY SETTING: Data on admissions to 80 hospitals nationwide in the 1992 MedisGroups Comparative Database. STUDY DESIGN: For each of 14 severity measures, LOS was regressed on patient age/sex, DRG, and severity score. Regressions were performed on trimmed and untrimmed data. R-squared was used to evaluate model performance. For each severity measure for each hospital, we calculated the expected LOS and the z-score, a measure of the deviation of observed from expected LOS. We ranked hospitals by z-scores. DATA EXTRACTION: All patients admitted for initial surgical repair of a hip fracture, defined by DRG, diagnosis, and procedure codes. PRINCIPAL FINDINGS: The 5,664 patients had a mean (s.d.) LOS of 11.9 (8.9) days. Cross-validated R-squared values from the multivariable regressions (trimmed data) ranged from 0.041 (Comorbidity Index) to 0.165 (APR-DRGs). Using untrimmed data, observed average LOS for hospitals ranged from 7.6 to 23.9 days. The 14 severity measures showed excellent agreement in ranking hospitals based on z-scores. No severity measure explained the differences between hospitals with the shortest and longest LOS. CONCLUSIONS: Hospitals differed widely in their mean LOS for hip fracture patients, and severity adjustment did little to explain these differences.
Article
OBJECTIVE: To see whether severity-adjusted predictions of likelihoods of in-hospital death for stroke patients differed among severity measures. METHODS: The study sample was 9,407 stroke patients from 94 hospitals, with 916 (9.7%) in-hospital deaths. Probability of death was calculated for each patient using logistic regression with age-sex and each of five severity measures as the independent variables: admission MedisGroups probability-of-death scores; scores based on 17 physiologic variables on admission; Disease Staging's probability-of-mortality model; the Severity Score of Patient Management Categories (PMCs); and the All Patient Refined Diagnosis Related Groups (APR-DRGs). For each patient, the odds of death predicted by the severity measures were compared. The frequencies of seven clinical indicators of poor prognosis in stroke were examined for patients with very different odds of death predicted by different severity measures. Odds ratios were considered very different when the odds of death predicted by one severity measure were less than 0.5 or greater than 2.0 times those predicted by a second measure. RESULTS: MedisGroups and the physiology scores predicted similar odds of death for 82.2% of the patients. MedisGroups and PMCs disagreed the most, with very different odds predicted for 61.6% of patients. Patients viewed as more severely ill by MedisGroups and the physiology score were more likely to have the clinical stroke findings than were patients seen as sicker by the other severity measures. This suggests that MedisGroups and the physiology score are more clinically credible. CONCLUSIONS: Some pairs of severity measures ranked over 60% of patients very differently by predicted probability of death. Studies of severity-adjusted stroke outcomes may produce different results depending on which severity measure is used for risk adjustment.
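The disagreement criterion used in this comparison (predicted odds under one measure less than 0.5 or greater than 2.0 times the odds under another) can be sketched with hypothetical per-patient predictions:

```python
def odds(p):
    """Convert a predicted death probability to odds."""
    return p / (1 - p)

def very_different(p1, p2):
    """Flag a patient whose odds under measure 1 are <0.5x or >2.0x
    the odds under measure 2 (the threshold used in the study)."""
    ratio = odds(p1) / odds(p2)
    return ratio < 0.5 or ratio > 2.0

# Hypothetical (measure 1, measure 2) death probabilities per patient
pairs = [(0.10, 0.08), (0.40, 0.15), (0.05, 0.20), (0.30, 0.28)]
flagged = sum(very_different(a, b) for a, b in pairs)
print(flagged)  # → 2 patients with very different predicted odds
```

Applied over a whole cohort, the fraction flagged is the kind of pairwise disagreement rate (82.2% similar, 61.6% very different) the study reports.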
Article
Health care report cards' public disclosure of patient health outcomes at the level of the individual physician, the hospital, or both may address important informational asymmetries in markets for health care, but they may also give doctors and hospitals incentives to decline to treat more difficult, severely ill patients. Whether report cards are good for patients and for society depends on whether their financial and health benefits outweigh their costs in terms of the quantity, quality, and appropriateness of medical treatment that they induce. Using national data on Medicare patients at risk for cardiac surgery, we find that cardiac surgery report cards in New York and Pennsylvania led both to selection behavior by providers and to improved matching of patients with hospitals. On net, this led to higher levels of resource use and to worse health outcomes, particularly for sicker patients. We conclude that, at least in the short run, these report cards decreased patient and social welfare.
Article
Reports on the comparative performance of physicians are becoming increasingly common. Little is known, however, about the credibility of these reports with target audiences or their influence on the delivery of medical services. Since 1992, Pennsylvania has published the Consumer Guide to Coronary Artery Bypass Graft Surgery, which lists annual risk-adjusted mortality rates for all hospitals and surgeons providing such surgery in the state. In 1995, we surveyed a randomly selected sample of 50 percent of Pennsylvania cardiologists and cardiac surgeons to find out whether they were aware of the guide and, if so, to determine their views on its usefulness, limitations, and influence on providers. Eighty-two percent of the cardiologists and all the cardiac surgeons were aware of the guide. Only 10 percent of these respondents reported that its mortality rates were "very important" in assessing the performance of a cardiothoracic surgeon. Less than 10 percent reported discussing the guide with more than 10 percent of their patients who were candidates for a coronary-artery bypass graft (CABG). Eighty-seven percent of the cardiologists reported that the guide had a minimal influence or none on their referral recommendations. For both groups, the most important limitations of the guide were the absence of indicators of quality other than mortality (cited by 78 percent), inadequate risk adjustment (79 percent), and the unreliability of data provided by hospitals and surgeons (53 percent). Fifty-nine percent of the cardiologists reported increased difficulty in finding surgeons willing to perform CABG surgery in severely ill patients who required it, and 63 percent of the cardiac surgeons reported that they were less willing to operate on such patients. The Consumer Guide to Coronary Artery Bypass Graft Surgery has limited credibility among cardiovascular specialists. 
It has little influence on referral recommendations and may introduce a barrier to care for severely ill patients. If publicly released performance reports are intended to guide the choice of providers without impeding access to medical care, a collaborative process involving physicians may enhance the credibility and usefulness of the reports.
Article
This DataWatch assesses Medicare beneficiaries' understanding of the differences between their managed care and fee-for-service Medicare options. A telephone survey was used to evaluate knowledge levels among 1,673 beneficiaries residing in five Medicare markets with high managed care penetration. Half of the sample were enrolled in health maintenance organizations (HMOs) and half in the traditional Medicare program. The findings show that 30 percent of beneficiaries know almost nothing about HMOs; only 11 percent have adequate knowledge to make an informed choice; and HMO enrollees have significantly lower knowledge levels of the differences between the two delivery systems. These findings have implications for educating beneficiaries about their expanded choices.
Article
The medical profession has, until recently, largely dictated standards of medical practice. If doctors completed their training and became licensed by the state, they were trusted by the general public to provide clinical care with minimal obligation to show that they were achieving acceptable levels of performance. Several factors have caused this situation to change. A societal trend towards greater openness in public affairs has been fuelled by the ready availability of information in many areas of life outside the health sector. A slow realisation of wide variation in practice standards [1, 2] and occasional dramatic public evidence of deficiencies in quality of care [3, 4] have led to demands by the public and government for greater openness from healthcare providers. The availability of computerised data and major advances in methods of measuring quality [5] have allowed meaningful performance indicators to be developed for public scrutiny. The result has been advocacy for the use of standardised public reports on quality of care as a mechanism for improving quality and reducing costs [6-8]. Publication of data about performance is not, however, new. In the 1860s Florence Nightingale highlighted the differences in mortality rates of patients in London hospitals [9], and in 1917 an American surgeon complained that fellow surgeons failed to publish their results because of fear that the public might not be impressed with the results [10]. In most developed countries there is now an increasing expectation that healthcare providers should collect and report information on quality of care, that purchasers should use the information to make decisions on behalf of their population, and that the general public has a right to access that information. Organisations in the US have been publishing performance data, in the form of "report cards" or "provider profiles", for over …
Article
The public release of health care quality data in more formalized consumer health report cards is intended to educate consumers, improve quality of care, and increase competition in the marketplace. The purpose of this review is to evaluate the evidence on the impact of consumer report cards on the behavior of consumers, providers, and purchasers. Studies were selected by conducting database searches in Medline and Healthstar to identify papers published since 1995 in peer-reviewed journals pertaining to consumer report cards on health care. The evidence indicates that consumer report cards do not make a difference in decision making, improvement of quality, or competition. The research to date suggests that perhaps we need to rethink the entire endeavor of consumer report cards. Consumers desire information that is provider specific and may be more likely to use information on rates of errors and adverse outcomes. Purchasers may be in a better position to understand and use information about health plan quality, to select high-quality plans to offer consumers, and to design premium contributions to steer consumers, through price, to the highest-quality plans.
Article
It is unclear whether publicly reporting hospitals' risk-adjusted mortality leads to improvements in outcomes. To examine mortality trends during a period (1991-1997) when the Cleveland Health Quality Choice program was operational. Time series. Medicare patients hospitalized with acute myocardial infarction (AMI; n = 10,439), congestive heart failure (CHF; n = 23,505), gastrointestinal hemorrhage (GIH; n = 11,088), chronic obstructive pulmonary disease (COPD; n = 8495), pneumonia (n = 23,719), or stroke (n = 14,293). Risk-adjusted in-hospital mortality, early postdischarge mortality (between discharge and 30 days after admission), and 30-day mortality. Risk-adjusted in-hospital mortality declined significantly for all conditions except stroke and GIH, with absolute declines ranging from -2.1% for COPD to -4.8% for pneumonia. However, the mortality rate in the early postdischarge period rose significantly for all conditions except COPD, with increases ranging from 1.4% for GIH to 3.8% for stroke. As a consequence, the 30-day mortality declined significantly only for CHF (absolute decline 1.4%, 95% CI, -2.5 to -0.1%) and COPD (absolute decline 1.6%, 95% CI, -2.8-0.0%). For stroke, risk-adjusted 30-day mortality actually increased by 4.3% (95% CI, 1.8-7.1%). During Cleveland's experiment with hospital report cards, deaths shifted from in hospital to the period immediately after discharge with little or no net reduction in 30-day mortality for most conditions. Hospital profiling remains an unproven strategy for improving outcomes of care for medical conditions. Using in-hospital mortality rates to monitor trends in outcomes for hospitalized patients may lead to spurious conclusions.
Article
This study evaluates the impact on quality improvement of reporting hospital performance publicly versus privately back to the hospital. Making performance information public appears to stimulate quality improvement activities in areas where performance is reported to be low. The findings from this Wisconsin-based study indicate that there is added value to making this information public.
Article
Report cards on hospital performance are common but have uncertain impact. The objective of this study was to determine whether hospitals recognized as performance outliers experience volume changes after publication of a report card. Secondary objectives were to test whether favorable outliers attract more patients with related conditions, or from outside their catchment areas; and whether disadvantaged groups are less responsive to report cards. We used a time-series analysis using linear and autoregressive models. We studied patients admitted to nonfederal hospitals designated as outliers in reports on coronary bypass surgery (CABG) mortality in New York, acute myocardial infarction (AMI) mortality in California, and postdiskectomy complications in California. We studied observed versus expected hospital volume for topic and related conditions and procedures, by month/quarter after a report card, with and without stratification by age, race/ethnicity, insurance, and catchment area. Potential confounders included statewide prevalence, prereport hospital volume and market share, and unrelated volume. In California, low-mortality and high-mortality outliers did not experience changes in AMI volume after adjusting for autocorrelation. Low-complication outliers for lumbar diskectomy experienced slightly increased volume in autoregressive models. No other cohorts demonstrated consistent trends. In New York, low-mortality outliers experienced significantly increased CABG volume in the first month after publication, whereas high-mortality outliers experienced decreased volume in the second month. The strongest effects were among white patients and those with HMO coverage in California, and among white or other patients and those with Medicare in New York. Volume effects were modest, transient, and largely limited to white Medicare patients in New York.
Article
Can a well-designed public performance report affect the public image of hospitals? Using a pre/post design and telephone interviews, we examined consumer views and consumers' reported use of a public hospital performance report. The findings show that the report did influence consumer views about the quality of individual hospitals in the community 2 to 4 months after its release.
Article
To the Editor: We believe that the Special Communication on publicly reporting quality information by Drs Werner and Asch¹ incorporates a misleading presentation of the literature.
Article
This study builds on earlier work by assessing the long-term impact of a public hospital performance report on both consumers and hospitals. In doing so, we shed light on the relative importance of alternative assumptions about what stimulates quality improvements. The findings indicate that making performance data public results in improvements in the clinical area reported upon. An earlier investigation indicated that hospitals included in the public report believed that the report would affect their public image. Indeed, consumer surveys suggest that inclusion did affect hospitals' reputations.
Article
The publication of health outcome data, rather than merely their measurement and collection, is being given increasing consideration. Publication reflects society's increasing emphasis on a general 'right to know', as well as being a means of informing consumer choice. In theory, publication may help to promote public trust, support patient choice, and stimulate action to improve the quality of care whilst controlling costs. Drawing on a literature review, this paper overviews the strategies employed in the UK and US to publish outcome data. The focus is on outcomes, and certain related process measures, that measure the performance of hospitals or surgeons. Presenting the limited evidence that exists, we review the potential beneficial and harmful effects of publishing hospital outcome data. We also consider the risks of making incorrect inferences based on these data and the potential for dysfunctional consequences. Recognizing that the public largely mistrusts currently published health outcome data, we offer some recommendations for the future direction of strategies for publication.
Article
Public reporting of provider performance is becoming increasingly commonplace. In this chapter, we first review studies of prior public reports (or report cards) that show real but small impact on provider attempts to improve quality, on consumers' impressions of providers, and even on consumer selection of providers. Among other factors, two potential explanations for the low level of impact are that, in most early reports, the large majority of providers have been labeled "average" and consumers may have had difficulty understanding the statistical assessments. In response, some current report card producers are using or considering a variety of methods to increase the number of distinctions among providers and the ease of comprehension of the labels used. Therefore, we also consider the advantages and disadvantages of several novel approaches to analyzing and reporting provider performance.
Article
To explore the impact of statewide public reporting of hospital patient satisfaction on hospital quality improvement (QI), using Rhode Island (RI) as a case example. Primary data collected through semi-structured interviews between September 2002 and January 2003. The design is a retrospective study of hospital executives at all 11 general and two specialty hospitals in RI. Respondents were asked about hospital QI activities at several points throughout the public reporting process, as well as about hospital structure and processes to accomplish QI. Qualitative analysis of the interview data proceeded through an iterative process to identify themes and categories in the data. Data from the standardized statewide patient satisfaction survey process were used by hospitals to identify and target new QI initiatives, evaluate performance, and monitor progress. While all hospitals fully participated in the public reporting process, they varied in the stage of development of their QI activities and adoption of the statewide standardized survey for ongoing monitoring of their QI programs. Most hospitals placed responsibility for QI within each department, with results reported to top management, who were perceived as giving strong support for QI. The external environment facilitated QI efforts. Public reporting of comparative data on patient views can enhance and reinforce QI efforts in hospitals. The participation of key stakeholders facilitated successful implementation of statewide public reporting. This experience in RI offers lessons for other states or regions as they move to public reporting of hospital quality data.
Article
In the wake of findings from the Bundaberg Hospital and Forster inquiries in Queensland, periodic public release of hospital performance reports has been recommended. A process for developing and releasing such reports is being established by Queensland Health, overseen by an independent expert panel. This recommendation presupposes that public reports based on routinely collected administrative data are accurate; that the public can access, correctly interpret and act upon report contents; that reports motivate hospital clinicians and managers to improve quality of care; and that there are no unintended adverse effects of public reporting. Available research suggests that primary data sources are often inaccurate and incomplete, that reports have low predictive value in detecting "outlier" hospitals, and that users experience difficulty in accessing and interpreting reports and tend to distrust their findings.
Article
Hospital-specific variation in outcome is generally considered to be an important source of information for clinical improvement. We have measured the magnitude of this variation. We determined the revision risk in 37,642 cemented primary total knee arthroplasties performed for osteoarthritis from 1993 through 2002 at 93 hospitals in Sweden. We used 2 essentially different methods to estimate the risk of revision: a fixed-effects model (Cox's proportional hazards model) and a random-effects model (shared gamma frailty model). The 2 models ranked hospitals differently. As expected, the fixed-effects model provided more dispersed estimates of hospital-specific revision rates. In contrast to the random-effects model, chance events can easily cause overly optimistic or pessimistic outcomes in the fixed-effects model. Although the revision risk varied significantly between hospitals, the overall revision risk was still low. Assessment of variation in outcome is an important instrument in the continuing effort to improve clinical care. However, regarding revision rates after knee arthroplasty, we do not believe that such analyses necessarily provide valid information on the current quality of care, and we question their value as an information source for individuals seeking healthcare.
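The ranking discrepancy between the two model families can be illustrated with a toy example. A random-effects estimate shrinks each hospital's rate toward the overall mean in proportion to its sample size, so small hospitals with extreme raw rates move the most. The sketch below uses a simple beta-binomial-style shrinkage as a stand-in for the paper's shared gamma frailty model; all hospital counts and the `prior_strength` tuning value are invented.

```python
# Why fixed- and random-effects models rank hospitals differently:
# raw (fixed-effects-style) revision rates are noisy for small hospitals,
# while shrunken (random-effects-style) estimates pull them toward the
# pooled mean. Data are invented for illustration.

hospitals = {
    "A": (2, 40),     # 2 revisions in 40 knees  -> raw 5.0%, tiny sample
    "B": (30, 2000),  # 30 in 2000               -> raw 1.5%
    "C": (0, 25),     # 0 in 25                  -> raw 0.0%, tiny sample
    "D": (50, 2500),  # 50 in 2500               -> raw 2.0%
}

total_rev = sum(r for r, n in hospitals.values())
total_n = sum(n for r, n in hospitals.values())
overall = total_rev / total_n    # pooled revision rate across all hospitals
prior_strength = 200             # pseudo-observations; an arbitrary tuning choice

def raw_rate(r: int, n: int) -> float:
    return r / n

def shrunken_rate(r: int, n: int) -> float:
    # Weighted average of the hospital's own rate and the pooled rate:
    # small hospitals are pulled strongly toward the mean, large ones barely move.
    return (r + prior_strength * overall) / (n + prior_strength)

for name, (r, n) in hospitals.items():
    print(name, f"raw={raw_rate(r, n):.3%}", f"shrunk={shrunken_rate(r, n):.3%}")
```

Under the raw ranking, the small hospitals A and C sit at the extremes (worst and best); after shrinkage both cluster near the pooled mean and the ordering of C and B flips. This is the same qualitative effect the Swedish registry study reports: chance events dominate fixed-effects estimates for low-volume hospitals.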
Article
Provider report cards feature prominently in ongoing efforts to improve the quality of patient care. A well-known example is the cardiac surgery report-card program started in New York, which publicly compares hospital and surgeon performance. Public report cards have been associated with decreases in cardiac surgery mortality, but there is substantial disagreement over the source(s) of the improvement. This article develops a conceptual framework to explain how report-card-related responses could result in lower mortality and reviews the evidence. Existing research shows that report cards have not greatly changed referral patterns. How much providers increased their quality of care and altered their selection of patients remains unresolved, and alternative explanations have not been well studied. Future research should expand the number of states and years covered and exploit the variation in institutional features to improve our understanding of the relationship between report cards and outcomes.
Article
Patients and health insurers are increasingly interested in the quality of care provided by hospitals. Quality indicators are often used to evaluate the quality of inpatient treatment, but most of these evaluations require the collection of additional data. The patient safety indicators (PSI) introduced by the Agency for Healthcare Research and Quality (AHRQ) are rigorously validated and rely exclusively on routine data. The original PSI definitions were transferable to the classifications of diagnoses, procedures, and DRGs used in Germany and were applied to routine data on 2.3 million cases from more than 200 hospitals. Comparison of the results with the US references reveals high concordance between the rates and demonstrates that PSI can be applied to detect critical incidents in patient care. For PSI-based hospital benchmarking, further development of appropriate methods of risk adjustment is necessary.
Article
Pay for performance has been promoted as a tool for improving quality of care. In 2003, the Centers for Medicare & Medicaid Services (CMS) launched the largest pay-for-performance pilot project to date in the United States, including indicators for acute myocardial infarction. To determine if pay for performance was associated with either improved processes of care and outcomes or unintended consequences for acute myocardial infarction at hospitals participating in the CMS pilot project. An observational, patient-level analysis of 105,383 patients with acute non-ST-segment elevation myocardial infarction enrolled in the Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the American College of Cardiology/American Heart Association (ACC/AHA) Guidelines (CRUSADE) national quality-improvement initiative. Patients were treated between July 1, 2003, and June 30, 2006, at 54 hospitals in the CMS program and 446 control hospitals. The differences in the use of ACC/AHA class I guideline recommended therapies and in-hospital mortality between pay for performance and control hospitals. Among treatments subject to financial incentives, there was a slightly higher rate of improvement for 2 of 6 targeted therapies at pay-for-performance vs control hospitals (odds ratio [OR] comparing adherence scores from 2003 through 2006 at half-year intervals for aspirin at discharge, 1.31; 95% confidence interval [CI], 1.18-1.46 vs OR, 1.17; 95% CI, 1.12-1.21; P = .04) and for smoking cessation counseling (OR, 1.50; 95% CI, 1.29-1.73 vs OR, 1.28; 95% CI, 1.22-1.35; P = .05). There was no significant difference in a composite measure of the 6 CMS rewarded therapies between the 2 hospital groups (change in odds per half-year period of receiving CMS therapies: OR, 1.23; 95% CI, 1.15-1.30 vs OR, 1.17; 95% CI, 1.14-1.20; P = .16). 
For composite measures of acute myocardial infarction treatments not subject to incentives, rates of improvement were not significantly different (OR, 1.09; 95% CI, 1.05-1.14 vs OR, 1.08; 95% CI, 1.06-1.09; P = .49). Overall, there was no evidence that improvements in in-hospital mortality were incrementally greater at pay-for-performance sites (change in odds of in-hospital death per half-year period, 0.91; 95% CI, 0.84-0.99 vs 0.97; 95% CI, 0.94-0.99; P = .21). Among hospitals participating in a voluntary quality-improvement initiative, the pay-for-performance program was not associated with a significant incremental improvement in quality of care or outcomes for acute myocardial infarction. Conversely, we did not find evidence that pay for performance had an adverse association with improvement in processes of care that were not subject to financial incentives. Additional studies of pay for performance are needed to determine its optimal role in quality-improvement initiatives.