Automated de-identification of free-text medical records

Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
BMC Medical Informatics and Decision Making. 07/2008; 8:32. DOI: 10.1186/1472-6947-8-32
Source: PubMed

ABSTRACT: Text-based patient medical records are a vital resource in medical research. To preserve patient confidentiality, however, the U.S. Health Insurance Portability and Accountability Act (HIPAA) requires that protected health information (PHI) be removed from medical records before they can be disseminated. Manual de-identification of large medical record databases is prohibitively expensive, time-consuming, and error-prone, necessitating methods for large-scale, automated de-identification.
We describe an automated Perl-based de-identification software package that is generally usable on most free-text medical records, such as nursing notes, discharge summaries, and X-ray reports. The software uses lexical look-up tables, regular expressions, and simple heuristics to locate both HIPAA PHI and an extended PHI set that includes doctors' names and the year components of dates. To develop the de-identification approach, we assembled a gold standard corpus of re-identified nursing notes, in which real PHI was replaced with realistic surrogate information. This corpus consists of 2,434 nursing notes containing 334,000 words and a total of 1,779 instances of PHI taken from 163 randomly selected patient records. The gold standard corpus was used to refine the algorithm and measure its sensitivity. To test the algorithm on data not used in its development, we constructed a second test corpus of 1,836 nursing notes containing 296,400 words, on which the algorithm's false negative rate was evaluated.
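The published package is written in Perl; the following Python sketch illustrates the general "dictionary look-up plus regular expressions" approach described above. The mini-dictionary, the patterns, and the note text are invented for illustration and are far smaller than the lexical tables the actual software uses.

```python
import re

# Hypothetical mini-dictionary; the real package ships much larger
# lexical look-up tables for names, locations, and hospitals.
KNOWN_FIRST_NAMES = {"john", "mary", "robert"}

# Regular expressions for structured PHI (dates, phone numbers).
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_phi(text):
    """Return (start, end, category) spans flagged as possible PHI."""
    spans = []
    # Pattern matching for structured PHI.
    for category, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), category))
    # Dictionary look-up: flag tokens matching a known first name.
    for m in re.finditer(r"[A-Za-z]+", text):
        if m.group().lower() in KNOWN_FIRST_NAMES:
            spans.append((m.start(), m.end(), "NAME"))
    return sorted(spans)

note = "Pt Mary seen 03/14/2007, call 617-555-1212 with results."
print(find_phi(note))
```

In the real system, simple heuristics (e.g., context words such as "Dr." or "MRN") would then filter these candidate spans to reduce false positives.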
Performance evaluation of the de-identification software on the development corpus yielded an overall recall of 0.967, a precision of 0.749, and a fallout of approximately 0.002. On the test corpus, 90 false negatives were found, or 27 per 100,000 words, with an estimated recall of 0.943. Only one full date and one age over 89 were missed, and no patient names were missed in either corpus.
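The three reported metrics relate as follows. The true/false positive counts below are rough back-calculations from the figures given in the abstract (1,779 PHI instances, ~334,000 words), not the paper's raw data, and are included only to show how recall, precision, and fallout are computed.

```python
# Approximate counts inferred from the development-corpus figures.
phi_instances = 1779       # annotated PHI instances in the corpus
total_words = 334000       # total words in the corpus

tp = 1720                  # assumed true positives (PHI correctly flagged)
fn = phi_instances - tp    # missed PHI instances
fp = 577                   # assumed false positives (non-PHI flagged)

recall = tp / (tp + fn)                        # a.k.a. sensitivity
precision = tp / (tp + fp)
fallout = fp / (total_words - phi_instances)   # false positive rate

print(round(recall, 3), round(precision, 3), round(fallout, 3))
```

For de-identification, recall is the critical metric: a false positive merely removes a harmless word, while a false negative leaks PHI.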
We have developed a pattern-matching de-identification system based on dictionary look-ups, regular expressions, and heuristics. Evaluation on two different sets of nursing notes collected from a U.S. hospital suggests that, in terms of recall, the software outperforms a single human de-identifier (0.81) and performs at least as well as a consensus of two human de-identifiers (0.94). The system is currently tuned to de-identify PHI in nursing notes and discharge summaries, but it is sufficiently general to be customized for text files of any format. Although the algorithm's accuracy is high, it is probably insufficient to permit public dissemination of medical data. The open-source de-identification software and the gold standard re-identified corpus of medical records have therefore been made available to researchers via the PhysioNet website to encourage improvements in the algorithm.



Available from: Li-wei H Lehman, Sep 14, 2014
  • Source
    ABSTRACT: The current study aims to fill the gap in available healthcare de-identification resources by creating a new sharable dataset with realistic Protected Health Information (PHI) without reducing the value of the data for de-identification research. By releasing the annotated gold standard corpus under a Data Use Agreement, we would like to encourage other computational linguists to experiment with our data and develop new machine learning models for de-identification. This paper describes: (1) the modifications required by the Institutional Review Board before sharing the de-identification gold standard corpus; (2) our efforts to keep the PHI as realistic as possible; and (3) the tests showing the effectiveness of these efforts in preserving the value of the modified data set for machine learning model development.
    Journal of Biomedical Informatics 08/2014; 50. DOI: 10.1016/j.jbi.2014.01.014
  • Source
    ABSTRACT: With the adoption of information technologies, large volumes of patient-related documents are compiled by healthcare organisations. Quite often, these data need to be released to third parties for research or business purposes. The inherent sensitivity of patients' information has led to legislation protecting the privacy of individuals. To comply with this legislation, patient-related documents must be redacted or sanitized before release. This is usually done manually, which is costly and time-consuming, or by means of ad hoc solutions that protect only structured types of sensitive information (e.g., social security numbers), or that simply remove sensitive terms, which hampers the utility of the output. In this paper, we propose an automatic sanitization method for textual medical documents that protects sensitive terms, and terms semantically related to them, while retaining as much of the output's utility as possible. Unlike redaction schemes, which are based on term removal, our method improves the utility of the protected output by replacing sensitive terms with appropriate generalisations retrieved from medical and general-purpose knowledge bases. Experiments conducted on highly sensitive documents, and in accordance with current regulations on healthcare data privacy, show promising results in terms of the output's privacy and utility.
    Network Operations and Management Symposium (NOMS); 05/2014
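The generalisation-based sanitization idea above can be sketched briefly. The toy hierarchy below stands in for a real medical knowledge base (such as SNOMED CT); the terms, mappings, and example sentence are invented for illustration only.

```python
# Toy generalisation map standing in for a knowledge base lookup:
# each sensitive term maps to a more general, less revealing concept.
GENERALIZATIONS = {
    "HIV": "viral infection",
    "AIDS": "immune disorder",
    "chemotherapy": "medical treatment",
}

SENSITIVE = {"HIV", "AIDS"}  # terms flagged for protection

def sanitize(text):
    """Replace sensitive terms with their generalisation, leaving the
    rest of the text intact, so the output keeps more utility than
    a redaction scheme that simply deletes the terms."""
    words = []
    for token in text.split():
        stripped = token.strip(".,;")  # crude punctuation handling
        if stripped in SENSITIVE:
            words.append(GENERALIZATIONS[stripped])
        else:
            words.append(token)
    return " ".join(words)

print(sanitize("Patient diagnosed with HIV in 2009."))
```

The output still records that the patient had a viral infection in 2009, whereas term removal would leave only "Patient diagnosed with in 2009."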
  • Source
    ABSTRACT: We propose an innovative approach, and its implementation as an expert system, for the semi-automatic detection of candidate attributes for scrambling sensitive data. Our approach is based on semantic rules that determine which concepts have to be scrambled, and on a linguistic component that retrieves the attributes that semantically correspond to these concepts. Because attributes cannot be considered independently of each other, we also address the challenging problem of propagating the scrambling process through the entire database. One main contribution of our approach is to provide a semi-automatic process for the detection of sensitive data: the underlying knowledge is made available through production rules that operationalize the detection. A validation of our approach using four different databases is provided.
    Information Resources Management Journal 01/2014; 27(4):23-44.
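The rule-driven attribute detection described in the last abstract can be sketched as follows. The concept names, synonym table, and schema are invented, and simple lexical matching stands in for the paper's linguistic component; the sketch also omits the propagation step across related tables.

```python
# Hypothetical sensitive concepts, as would be declared by semantic rules.
SENSITIVE_CONCEPTS = {"person name", "social security number", "address"}

# Crude synonym table standing in for the linguistic component that maps
# database attribute names to concepts.
SYNONYMS = {
    "person name": {"name", "surname", "patient_name"},
    "social security number": {"ssn", "social_security"},
    "address": {"address", "street", "city"},
}

def candidate_attributes(schema):
    """Return attributes whose names lexically match a sensitive concept,
    as candidates for scrambling."""
    hits = {}
    for attr in schema:
        for concept in SENSITIVE_CONCEPTS:
            if attr.lower() in SYNONYMS[concept]:
                hits[attr] = concept
    return hits

print(candidate_attributes(["patient_name", "ssn", "dob", "ward"]))
```

In a full system the flagged attributes would then be scrambled, and the scrambling propagated through foreign-key relationships so that joins across tables cannot re-identify the records.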