To explore the potential for international comparison of patient safety as part of the Health Care Quality Indicators project of the Organization for Economic Co-operation and Development (OECD) by evaluating patient safety indicators originally published by the US Agency for Healthcare Research and Quality (AHRQ).
A retrospective cross-sectional study.
Acute care hospitals in the USA, UK, Sweden, Spain, Germany, Canada and Australia in 2004 and 2005/2006.
Routine hospitalization-related administrative data from seven countries were analyzed. Using algorithms adapted to the diagnosis and procedure coding systems in place in each country, authorities in each of the participating countries reported summaries of the distribution of hospital-level and overall (national) rates for each AHRQ Patient Safety Indicator to the OECD project secretariat.
Each country's vector of national indicator rates was highly correlated (0.821-0.966) with the vector of US patient safety indicator rates published by AHRQ (and re-estimated as part of this study). However, there was substantial systematic variation in rates across countries.
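The correlations reported here are simple Pearson correlations between two vectors of national indicator rates. A minimal sketch of that computation, using made-up illustrative rates rather than the study's actual data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length vectors of rates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical national PSI rates per 1,000 discharges (illustrative only,
# not the values reported by any participating country)
us_rates    = [0.52, 1.10, 0.09, 2.30, 0.75, 0.41, 1.85]
other_rates = [0.48, 0.95, 0.12, 2.10, 0.60, 0.39, 1.60]
print(round(pearson_r(us_rates, other_rates), 3))
```

A high r here indicates only that countries rank the indicators similarly; it is consistent with the systematic level differences the study also observed.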
This pilot study reveals that AHRQ Patient Safety Indicators can be applied to international hospital data. However, the analyses suggest that certain indicators (e.g. 'birth trauma', 'complications of anesthesia') may be too unreliable for international comparisons. Data quality varies across countries; undercoding may be a systematic problem in some countries. Efforts at international harmonization of hospital discharge data sets as well as improved accuracy of documentation should facilitate future comparative analyses of routine databases.
"For example, in the UK, PSIs are calculated using the AHRQ algorithm. The Health Care Quality Indicators project conducted by the Organization for Economic Co-operation and Development developed an international benchmark system that included 12 PSIs among 59 candidate indicators. In the present study, we demonstrated that PSIs can be calculated from DPC/PDPS data, which are easy to obtain and may be useful for international PSI comparisons."
ABSTRACT: Since the late 1990s, patient safety has been an important policy issue in developed countries. To evaluate the effectiveness of patient safety activities, it is necessary to quantitatively assess the incidence of adverse events by type of failure mode using tangible data. The purpose of this study is to calculate patient safety indicators (PSIs) using the Japanese Diagnosis Procedure Combination/per-diem payment system (DPC/PDPS) reimbursement data and to elucidate the relationship between perioperative PSIs and hospital surgical volume.
DPC/PDPS data of the Medi-Target project managed by the All Japan Hospital Association were used. An observational study was conducted where PSIs were calculated using an algorithm proposed by the US Agency for Healthcare Research and Quality. We analyzed data of 1,383,872 patients from 188 hospitals who were discharged from January 2008 to December 2010.
Among 20 provider-level PSIs, four (three perioperative PSIs and decubitus ulcer) and the mortality rates of postoperative patients were related to surgical volume. Low-volume hospitals (below the 33rd percentile of monthly surgical volume) had higher mortality rates (5.7%; 95% confidence interval (CI), 3.9% to 7.4%) than mid- (2.9%; 95% CI, 2.6% to 3.3%) or high-volume hospitals (2.7%; 95% CI, 2.5% to 2.9%). Low-volume hospitals had more deaths among surgical inpatients with serious treatable complications (38.5%; 95% CI, 33.7% to 43.2%) than high-volume hospitals (21.4%; 95% CI, 19.0% to 23.9%). Low-volume hospitals also performed a lower proportion of difficult surgeries (54.9%; 95% CI, 50.1% to 59.8%) than high-volume hospitals (63.4%; 95% CI, 62.3% to 64.6%). In low-volume hospitals, limited experience may have led to insufficient care for postoperative complications.
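The volume comparisons above rest on proportions with 95% confidence intervals. A minimal sketch of the standard normal-approximation (Wald) interval for a proportion, using illustrative counts rather than the study's data (the paper does not state which interval method was used):

```python
import math

def proportion_ci(events, n, z=1.96):
    """Point estimate and Wald 95% CI for a proportion (normal approximation)."""
    p = events / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# Illustrative counts only: 57 deaths among 1,000 surgical discharges
p, lo, hi = proportion_ci(57, 1000)
print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

Non-overlapping intervals of this kind are what support the contrast drawn between low- and high-volume hospitals.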
We demonstrated that PSIs can be calculated using DPC/PDPS data and that perioperative PSIs were related to hospital surgical volume. Further investigation into risk factors for poor PSI performance and effective support for these hospitals is needed.
BMC Research Notes 02/2014; 7(1):117. DOI:10.1186/1756-0500-7-117
"It is difficult to extend our specific results to other countries because coding rules are potentially different. For instance, the number of coded diagnoses varies between countries and could constitute a bias for international comparisons. However, comparisons could be possible between countries using a selection of co-morbidities like Charlson co-morbidities."
ABSTRACT: Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges.
Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3,499 randomly selected patients who were discharged in 1999, 2001 and 2003 from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and Kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities.
For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivity was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven.
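For each co-morbidity, these validation metrics come from a 2x2 comparison of administrative coding against chart review. A minimal sketch of sensitivity, positive predictive value and Cohen's Kappa from such a table, with made-up counts (not the study's data):

```python
def validation_metrics(tp, fp, fn, tn):
    """Sensitivity, PPV and Cohen's Kappa for one co-morbidity,
    treating chart review as the reference standard."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)          # coded cases among true cases
    ppv = tp / (tp + fp)                  # true cases among coded cases
    observed = (tp + tn) / n              # raw agreement
    # Chance agreement expected from the marginal prevalences
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, ppv, kappa

# Illustrative counts: chart review finds 100 cases in 1,000 discharges,
# administrative data codes 50 of which 40 are true positives
sens, ppv, kappa = validation_metrics(tp=40, fp=10, fn=60, tn=890)
print(sens, ppv, kappa)  # 0.4, 0.8, 0.5
```

This illustrates why sensitivity can be low (undercoding misses true cases) even when agreement looks moderate, the pattern the abstract describes.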
Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.
BMC Health Services Research 08/2011; 11(1):194. DOI:10.1186/1472-6963-11-194