To improve patient safety, healthcare facilities are focusing on reducing patient harm. Automated harm-detection methods using information technology show promise for measuring harm efficiently. However, few systematic reviews have assessed their effectiveness.
To perform a systematic literature review to identify, describe and evaluate the effectiveness of automated inpatient harm-detection methods.
Data sources included MEDLINE and CINAHL databases indexed through August 2008, extended by bibliographic review and search of citing articles. The authors included articles reporting effectiveness of automated inpatient harm-detection methods, as compared with other detection methods. Two independent reviewers used a standardised abstraction sheet to extract data about automated and comparison harm-detection methods, patient samples and events identified. Differences were resolved by discussion.
Of 176 articles, 43 met the inclusion criteria: 39 describing field-defined methods, two using natural language processing and two using both. Twenty-one studies used automated methods to detect adverse drug events, 10 detected general adverse events, eight detected nosocomial infections, and four detected other specific adverse events. Compared with gold-standard chart review, the sensitivity and specificity of automated harm-detection methods ranged from 0.10 to 0.94 and from 0.23 to 0.98, respectively. Studies used heterogeneous methods that were often flawed.
Automated methods of harm detection are feasible and some can potentially detect patient harm efficiently. However, effectiveness varied widely, and most studies had methodological weaknesses. More work is needed to develop and assess these tools before they can yield accurate estimates of harm that can be reliably interpreted and compared.
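The sensitivity and specificity ranges reported above come from comparing each automated detector against gold-standard chart review, which reduces to a confusion matrix. A minimal sketch of that calculation, using invented counts rather than figures from the review:

```python
# Sketch of sensitivity/specificity against gold-standard chart review.
# The counts below are hypothetical, not data from any study in the review.

def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positive rate: harms the tool caught
    specificity = tn / (tn + fp)   # true negative rate: harm-free records correctly cleared
    return sensitivity, specificity

# Hypothetical comparison: 90 true harms and 910 harm-free records
sens, spec = sensitivity_specificity(tp=72, fp=40, fn=18, tn=870)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# → sensitivity=0.80, specificity=0.96
```

Because harms are rare, a detector can score high specificity while still missing many true events, which is one reason the reported sensitivities span such a wide range.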
"There have been reports from English-speaking countries about the application of NLP for detecting adverse events in EMRs. However, there have been few such reports from non-English-speaking countries, including Japan."
ABSTRACT: Background
Incident reporting is the most common method for detecting adverse events in a hospital. However, under-reporting or non-reporting and delay in submission of reports are problems that prevent early detection of serious adverse events. The aim of this study was to determine whether it is possible to promptly detect serious injuries after inpatient falls by using a natural language processing method and to determine which data source is the most suitable for this purpose.
We tried to detect adverse events from narrative text data of electronic medical records by using a natural language processing method. We made syntactic category decision rules to detect inpatient falls from text data in electronic medical records. We compared how often the true fall events were recorded in various sources of data including progress notes, discharge summaries, image order entries and incident reports. We applied the rules to these data sources and compared F-measures to detect falls between these data sources with reference to the results of a manual chart review. The lag time between event occurrence and data submission and the degree of injury were compared.
We made 170 syntactic rules to detect inpatient falls by using a natural language processing method. Information on true fall events was most frequently recorded in progress notes (100%), incident reports (65.0%) and image order entries (12.5%). However, the F-measure for detecting falls with these rules was poor for progress notes (0.12) and discharge summaries (0.24) compared with incident reports (1.00) and image order entries (0.91). Because these results suggested that incident reports and image order entries were possible data sources for prompt detection of serious falls, we focused on comparing falls found through these two sources. Injuries from falls found through image order entries were significantly more severe than those detected through incident reports (p<0.001), and the lag time between a fall and submission of data to the hospital information system was significantly shorter for image order entries than for incident reports (p<0.001).
By using natural language processing of text data from image order entries, we could detect injurious falls sooner than by using incident reports. Concomitant use of this method might mitigate the shortcomings of an incident reporting system, such as under-reporting, non-reporting and delayed submission of incident data.
BMC Health Services Research 12/2012; 12(1):448. DOI:10.1186/1472-6963-12-448
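The fall-detection study above applied 170 Japanese syntactic rules to clinical text and scored them by F-measure against manual chart review. As a rough illustration of that pipeline, here is a toy sketch with a single English keyword pattern standing in for the real rule set; the notes, the `FALL_PATTERN` regex and the gold labels are all invented:

```python
import re

# Toy stand-in for rule-based fall detection: the study used 170 Japanese
# syntactic rules; this single English regex is a hypothetical example
# meant only to show the detection-plus-F-measure evaluation logic.
FALL_PATTERN = re.compile(r"\b(fell|fall|slipped|found on the floor)\b", re.IGNORECASE)

def detect(texts):
    """Flag each text that matches the fall pattern."""
    return [bool(FALL_PATTERN.search(t)) for t in texts]

def f_measure(predicted, gold):
    """Harmonic mean of precision and recall against gold labels."""
    tp = sum(p and g for p, g in zip(predicted, gold))
    fp = sum(p and not g for p, g in zip(predicted, gold))
    fn = sum(g and not p for p, g in zip(predicted, gold))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

notes = [
    "Patient found on the floor beside the bed at 03:00.",
    "Routine vitals stable, no complaints overnight.",
    "Pt slipped while walking to the bathroom, hip X-ray ordered.",
    "Family meeting held to discuss discharge planning.",
]
gold = [True, False, True, False]  # invented manual-chart-review labels
print(f_measure(detect(notes), gold))
# → 1.0
```

The study's contrast between progress notes (F=0.12) and image order entries (F=0.91) reflects exactly this metric: free-text progress notes produce many spurious matches, while terse order entries do not.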
ABSTRACT: Healthcare professionals, industry and policy makers have identified Health Information Exchange (HIE) as a solution to improve patient safety and overall quality of care. The potential benefits of HIE for healthcare have fostered its implementation and adoption in the United States. However, there is a dearth of publications that demonstrate HIE effectiveness. The purpose of this review was to identify and describe evidence of HIE impact on healthcare outcomes.
A database search was conducted. The inclusion criteria included original investigations in English that focused on a HIE outcome evaluation. Two independent investigators reviewed the articles. A qualitative coding approach was used to analyze the data.
Out of 207 abstracts retrieved, five articles met the inclusion criteria. Of these, three were randomized controlled trials, one was a retrospective review of data, and one was a prospective study. We found that the benefits of HIE on healthcare outcomes are still sparsely evaluated, and that among the measures used to evaluate HIE, healthcare utilization is the most widely used.
Outcomes evaluation is required to give healthcare providers and policy-makers evidence to incorporate into decision-making. This review showed a dearth of HIE outcomes data in the published peer-reviewed literature, so more research in this area is needed. Future HIE evaluations at different levels of interoperability should incorporate a framework that allows a detailed examination of the HIE outcomes that are likely to positively affect care.
[Show abstract][Hide abstract] ABSTRACT: OBJECTIVES: To explore the feasibility of using statistical text classification to automatically detect extreme-risk events in clinical incident reports. METHODS: Statistical text classifiers based on Naïve Bayes and Support Vector Machine (SVM) algorithms were trained and tested on clinical incident reports to automatically detect extreme-risk events, defined by incidents that satisfy the criteria of Severity Assessment Code (SAC) level 1. For this purpose, incident reports submitted to the Advanced Incident Management System by public hospitals from one Australian region were used. The classifiers were evaluated on two datasets: (1) a set of reports with diverse incident types (n=120); (2) a set of reports associated with patient misidentification (n=166). Results were assessed using accuracy, precision, recall, F-measure, and area under the curve (AUC) of receiver operating characteristic curves. RESULTS: The classifiers performed well on both datasets. In the multi-type dataset, SVM with a linear kernel performed best, identifying 85.8% of SAC level 1 incidents (precision=0.88, recall=0.83, F-measure=0.86, AUC=0.92). In the patient misidentification dataset, 96.4% of SAC level 1 incidents were detected when SVM with linear, polynomial or radial-basis function kernel was used (precision=0.99, recall=0.94, F-measure=0.96, AUC=0.98). Naïve Bayes showed reasonable performance, detecting 80.8% of SAC level 1 incidents in the multi-type dataset and 89.8% of SAC level 1 patient misidentification incidents. Overall, higher prediction accuracy was attained on the specialized dataset, compared with the multi-type dataset. CONCLUSION: Text classification techniques can be applied effectively to automate the detection of extreme-risk events in clinical incident reports.
Journal of the American Medical Informatics Association 01/2012; 19(e1):e110-e118. DOI:10.1136/amiajnl-2011-000562
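The incident-report study above trained Naïve Bayes and SVM text classifiers to flag SAC level 1 (extreme-risk) reports. A minimal sketch of the simpler of those two algorithms, a multinomial Naïve Bayes classifier with add-one smoothing, is shown below; the training snippets, labels and query are invented, whereas the study used real incident reports from Australian public hospitals:

```python
import math
from collections import Counter, defaultdict

# Hedged sketch: multinomial Naive Bayes text classification, the simpler
# of the two algorithms in the study. All reports and labels below are
# invented examples, not data from the Advanced Incident Management System.

def train(docs):
    """docs: list of (text, label). Returns (word_counts, label_counts, vocab)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        label_counts[label] += 1
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def predict(text, model):
    """Pick the label maximizing log prior + smoothed log likelihood."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)       # log prior
        denom = sum(word_counts[label].values()) + len(vocab)    # Laplace denom
        for token in text.lower().split():
            score += math.log((word_counts[label][token] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

reports = [
    ("wrong patient wristband checked before transfusion", "extreme"),
    ("patient identification error led to wrong-site procedure", "extreme"),
    ("minor delay in routine medication round", "routine"),
    ("routine equipment check completed without issue", "routine"),
]
model = train(reports)
print(predict("wrong patient identification before procedure", model))
# → extreme
```

The study's finding that the specialized patient-misidentification dataset was easier to classify than the multi-type dataset fits this model's behaviour: a narrower incident type yields more discriminative per-class word distributions.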