Automated detection of infectious disease outbreaks in hospitals: a retrospective cohort study.

Division of Infectious Diseases and Health Policy Research Institute, University of California Irvine School of Medicine, Irvine, California, United States of America.
PLoS Medicine (Impact Factor: 15.25). 01/2010; 7(2):e1000238. DOI: 10.1371/journal.pmed.1000238
Source: PubMed

ABSTRACT: Detection of outbreaks of hospital-acquired infections is often based on simple rules, such as the occurrence of three new cases of a single pathogen in two weeks on the same ward. These rules typically focus on only a few pathogens, and they do not account for the pathogens' underlying prevalence, the normal random variation in rates, and clusters that may occur beyond a single ward, such as those associated with specialty services. Ideally, outbreak detection programs should evaluate many pathogens, using a wide array of data sources.
We applied a space-time permutation scan statistic to microbiology data from patients admitted to a 750-bed academic medical center in 2002-2006, using WHONET-SaTScan laboratory information software from the World Health Organization (WHO) Collaborating Centre for Surveillance of Antimicrobial Resistance. We evaluated patients' first isolates of each potentially pathogenic species. To restrict the analysis to hospital-associated infections, only pathogens first isolated more than 2 days after admission were included. Clusters were sought daily across the entire hospital, as well as within hospital wards, within specialty services, and among isolates with similar antimicrobial susceptibility profiles. We assessed clusters whose likelihood of occurring by chance was less than once per year. For methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE), WHONET-SaTScan-generated clusters were compared to those previously identified by the Infection Control program, which used a rule-based criterion of three occurrences in two weeks on the same ward. Two hospital epidemiologists independently classified the importance of each cluster.

From 2002 to 2006, WHONET-SaTScan found 59 clusters involving 2-27 patients (median 4). Clusters were identified by antimicrobial resistance profile (41%), ward (29%), service (13%), and hospital-wide assessment (17%). WHONET-SaTScan rapidly detected the two previously known Gram-negative pathogen clusters. Compared with the rule-based thresholds, WHONET-SaTScan judged only one of 73 previously designated MRSA clusters and none of 87 VRE clusters statistically unlikely to have occurred by chance. WHONET-SaTScan identified six MRSA and four VRE clusters that were previously unknown. The epidemiologists considered more than 95% of the 59 detected clusters to merit consideration, with 27% warranting active investigation or intervention.
Automated statistical software identified hospital clusters that had escaped routine detection. It also classified many previously identified clusters as events likely to occur because of normal random fluctuations. This automated method has the potential to provide valuable real-time guidance both by identifying otherwise unrecognized outbreaks and by preventing the unnecessary implementation of resource-intensive infection control measures that interfere with regular patient care. Please see later in the article for the Editors' Summary.
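The space-time permutation scan statistic used above can be illustrated with a minimal sketch. This is not the WHONET-SaTScan implementation, and the data representation is assumed: each first isolate is reduced to a (ward, day) pair, every ward-by-time-window combination is scored with a Poisson generalized likelihood ratio against a margin-based expected count, and significance is assessed by Monte Carlo permutation of onset days across cases.

```python
import math
import random
from collections import Counter

def _llr(c, e, n):
    """Poisson generalized likelihood ratio for observed count c vs expected e."""
    if c <= e:
        return 0.0
    stat = c * math.log(c / e)
    if n - c > 0:
        stat += (n - c) * math.log((n - c) / (n - e))
    return stat

def max_scan_stat(cases, max_days=14):
    """Find the (ward, time window) with the largest likelihood-ratio statistic.

    cases: list of (ward, day) pairs, one per patient's first isolate.
    Under the permutation null, the expected count for a window is the
    ward margin times the time-window margin, divided by the total count.
    """
    n = len(cases)
    ward_tot = Counter(w for w, _ in cases)
    day_tot = Counter(d for _, d in cases)
    days = sorted(day_tot)
    best_stat, best_window = 0.0, None
    for ward in ward_tot:
        for i, start in enumerate(days):
            for end in days[i:]:
                if end - start >= max_days:
                    break  # cap the temporal window length
                c = sum(1 for w, d in cases if w == ward and start <= d <= end)
                e = ward_tot[ward] * sum(day_tot[d] for d in days[i:] if d <= end) / n
                stat = _llr(c, e, n)
                if stat > best_stat:
                    best_stat, best_window = stat, (ward, start, end, c)
    return best_stat, best_window

def scan_p_value(cases, n_perm=199, max_days=14, seed=0):
    """Monte Carlo p-value: shuffle onset days across cases, re-scan, and
    count permutations whose maximum statistic reaches the observed one."""
    rng = random.Random(seed)
    observed, window = max_scan_stat(cases, max_days)
    wards = [w for w, _ in cases]
    days = [d for _, d in cases]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(days)
        if max_scan_stat(list(zip(wards, days)), max_days)[0] >= observed:
            hits += 1
    return window, (hits + 1) / (n_perm + 1)
```

Because the expected counts come only from the row and column margins of the ward-by-day table, the method needs no denominator data (census or patient-days), which is what makes it practical to run daily across many pathogens.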

  • ABSTRACT: As monitoring requirements for healthcare-acquired infection increase, an efficient and accurate method for surveillance has been sought. The authors evaluated the accuracy of electronic surveillance in multiple intensive care unit settings. Data from 500 intensive care unit patients were reviewed to determine the presence of central line-associated bloodstream infection (CLABSI) and catheter-associated urinary tract infection (CAUTI). An electronic surveillance report was obtained to determine whether patients had a blood-line nosocomial infection marker or a urine nosocomial infection marker. Manual review was based on Centers for Disease Control and Prevention criteria. An infection preventionist then reviewed all discrepant cases and made a final determination, which served as the reference standard. Sensitivity, specificity, false-positive rate, and false-negative rate were then calculated for electronic surveillance. In the burn population, the sensitivity of electronic surveillance for CAUTI was 66.66%, specificity 96.5%, false-positive rate 3.44%, and false-negative rate 33%; for CLABSI, the sensitivity was 100%, specificity 95%, false-positive rate 4.96%, and false-negative rate 0%. In the nonburn population, the sensitivity for CAUTI was 50%, specificity 97.9%, false-positive rate 2%, and false-negative rate 30%; for CLABSI, the sensitivity was 60%, specificity 98.8%, false-positive rate 1%, and false-negative rate 60%. Burn centers may experience a higher false-positive rate for electronic surveillance of CLABSI and CAUTI than other critical care units.
    Journal of burn care & research: official publication of the American Burn Association 10/2013; · 1.54 Impact Factor
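The four accuracy measures reported in that study follow from a 2x2 comparison of the electronic flags against the reviewer's reference standard. A small sketch (variable names are illustrative, not from the study), assuming the false-positive rate is taken as 100 minus specificity and the false-negative rate as 100 minus sensitivity:

```python
def surveillance_accuracy(electronic, reference):
    """Compare electronic surveillance flags against the reference standard.

    electronic, reference: parallel lists of booleans, one entry per patient
    (True = infection flagged / present). Both classes must be represented
    in `reference`. Returns the four rates as percentages.
    """
    pairs = list(zip(electronic, reference))
    tp = sum(1 for e, r in pairs if e and r)        # flagged and truly infected
    fp = sum(1 for e, r in pairs if e and not r)    # flagged but not infected
    fn = sum(1 for e, r in pairs if not e and r)    # missed infection
    tn = sum(1 for e, r in pairs if not e and not r)
    sens = 100 * tp / (tp + fn)
    spec = 100 * tn / (tn + fp)
    return {"sensitivity": sens,
            "specificity": spec,
            "false_positive_rate": 100 - spec,
            "false_negative_rate": 100 - sens}
```

With this convention, the study's burn-population CAUTI figures are internally consistent (sensitivity 66.66% pairs with false-negative rate 33%, specificity 96.5% with false-positive rate 3.44%, up to rounding).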
  • ABSTRACT: Clostridium difficile surveillance allows outbreaks of cases clustered in time and space to be identified and further transmission prevented. Traditionally, manual detection of groups of cases diagnosed in the same ward or hospital, often followed by retrospective reference laboratory genotyping, has been used to identify outbreaks. However, integrated healthcare databases offer the prospect of automated real-time outbreak detection based on statistically robust methods, and accounting for contacts between cases, including those distant to the ward of diagnosis. Complementary to this, rapid benchtop whole genome sequencing, and other highly discriminatory genotyping, has the potential to distinguish which cases are part of an outbreak with high precision and in clinically relevant timescales. These new technologies are likely to shape future surveillance.
    Expert Review of Anti-infective Therapy 11/2013; 11(11):1193-205. · 3.22 Impact Factor
  • ABSTRACT: In hospitals, Clostridium difficile infection (CDI) surveillance relies on unvalidated guidelines or threshold criteria to identify outbreaks. This can result in false-positive and false-negative cluster alarms. The application of statistical methods to identify and understand CDI clusters may be a useful alternative or complement to standard surveillance techniques. The objectives of this study were to investigate the utility of the temporal scan statistic for detecting CDI clusters and to determine whether there are significant differences in the rate of CDI cases by month, season, and year in a community hospital.
    BMC Infectious Diseases 05/2014; 14(1):254. · 3.03 Impact Factor
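The temporal scan statistic evaluated in that study can be illustrated with a purely time-based sketch. The monthly counts and window cap here are hypothetical, and real implementations such as SaTScan also compute Monte Carlo p-values; this sketch only scores every window of consecutive months with a Poisson likelihood ratio against a uniform baseline rate.

```python
import math

def temporal_scan(counts, max_window=3):
    """Purely temporal Poisson scan over consecutive-month windows.

    counts: case counts per month. The expected count for a window is the
    series total multiplied by the window's share of the study period
    (a uniform-rate baseline). Returns the maximum likelihood-ratio
    statistic and the (start, end, cases) of the window achieving it.
    """
    n = sum(counts)
    best_stat, best_window = 0.0, None
    for i in range(len(counts)):
        for j in range(i, min(i + max_window, len(counts))):
            c = sum(counts[i:j + 1])
            e = n * (j - i + 1) / len(counts)  # uniform-rate expectation
            if c > e:  # only excess-risk windows are of interest
                stat = c * math.log(c / e)
                if n - c > 0:
                    stat += (n - c) * math.log((n - c) / (n - e))
                if stat > best_stat:
                    best_stat, best_window = stat, (i, j, c)
    return best_stat, best_window
```

In practice the top-scoring window would then be ranked against permuted or simulated series, exactly as in the space-time case, so that seasonal peaks are not mistaken for outbreaks.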
