Article

Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

Department of Medical Informatics and Clinical Epidemiology, School of Medicine, Oregon Health & Science University, 3181 S.W. Sam Jackson Park Road, Mail Code: BICC, Portland, OR, 97239-3098, USA.
Journal of the American Medical Informatics Association. 10/2007; 15(1):32-5. DOI: 10.1197/jamia.M2434
Source: PubMed

ABSTRACT: We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, with micro- and macro-averaged F1 as the primary performance metrics. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error-correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
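
The abstract names the components of a compact pipeline. The following is a minimal sketch of how such a pipeline can be assembled, not the authors' implementation: the smoking trigger terms, the sentence window used for hot-spot identification, the TF-IDF features, LinearSVC with class_weight="balanced" as a rough analogue of inverse class frequency weighting, the randomly generated ECOC code book, and the UNKNOWN default for zero-vector documents are all assumptions made for this sketch.

```python
# A minimal sketch of the pipeline described in the abstract (not the authors' code).
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Assumed trigger terms; the paper's actual hot-spot keyword list may differ.
SMOKING_TERMS = re.compile(r"smok|tobacc|cigar|nicotine", re.IGNORECASE)

def hot_spots(text, window=1):
    """Hot-spot identification: keep only sentences near a smoking-related keyword."""
    sents = re.split(r"(?<=[.!?])\s+", text)
    hits = [i for i, s in enumerate(sents) if SMOKING_TERMS.search(s)]
    keep = sorted({j for i in hits
                   for j in range(max(0, i - window), min(len(sents), i + window + 1))})
    return " ".join(sents[j] for j in keep)

class EcocSmokingClassifier:
    """Error-correcting output codes over binary SVMs, with zero-vector filtering."""

    def __init__(self, n_bits=15, random_state=0):
        self.n_bits = n_bits
        self.rng = np.random.RandomState(random_state)

    def fit(self, texts, labels):
        self.vectorizer = TfidfVectorizer(ngram_range=(1, 2))
        X = self.vectorizer.fit_transform(hot_spots(t) for t in texts)
        y = np.asarray(labels)
        self.classes_ = np.unique(y)
        # Random +/-1 code word per class; skip columns that give every class the same sign.
        columns = []
        while len(columns) < self.n_bits:
            col = self.rng.choice([-1, 1], size=len(self.classes_))
            if len(np.unique(col)) == 2:
                columns.append(col)
        self.code_book_ = np.column_stack(columns)
        class_row = {c: i for i, c in enumerate(self.classes_)}
        # One binary SVM per code-book column; class_weight="balanced" stands in for
        # inverse class frequency weighting.
        self.estimators_ = [
            LinearSVC(class_weight="balanced").fit(
                X, np.array([self.code_book_[class_row[c], bit] for c in y]))
            for bit in range(self.n_bits)
        ]
        return self

    def predict(self, texts):
        X = self.vectorizer.transform(hot_spots(t) for t in texts)
        bits = np.column_stack([est.predict(X) for est in self.estimators_])
        # Decode by choosing the class whose code word is nearest in Hamming distance.
        distances = np.array([[np.sum(b != word) for word in self.code_book_] for b in bits])
        preds = self.classes_[np.argmin(distances, axis=1)]
        # Zero-vector filtering: documents with no hot-spot features default to UNKNOWN.
        empty = np.asarray(X.sum(axis=1)).ravel() == 0
        return np.where(empty, "UNKNOWN", preds)
```

Since the abstract reports that hot-spot identification had by far the largest positive effect, the hot_spots step is where a reimplementation would most repay tuning.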

  • ABSTRACT: The ability to connect the dots in structured background knowledge and also across scientific literature has been demonstrated as a critical aspect of knowledge discovery. It is not unreasonable therefore to expect that connecting-the-dots across massive amounts of healthcare data may also lead to new insights that could impact diagnosis, treatment and overall patient care. Of critical importance is the observation that while structured Electronic Medical Records (EMR) are useful sources of health information, it is often the unstructured clinical texts such as progress notes and discharge summaries that contain rich, updated and granular information. Hence, by coupling structured EMR data with data from unstructured clinical texts, more holistic patient records, needed for connecting the dots, can be obtained. Unfortunately, free-text progress notes are fraught with a lack of proper grammatical structure, and contain liberal use of jargon and abbreviations, together with frequent misspellings. While these notes still serve their intended purpose for medical care, automatically extracting semantic information from them is a complex task. Overcoming this complexity could mean that evidence-based support for structured EMR data using unstructured clinical texts can be provided. In this work, therefore, we explore a pattern-based approach for extracting Smoker Semantic Types (SST) from unstructured clinical notes, in order to enable evidence-based resolution of SSTs asserted in structured EMRs using SSTs extracted from unstructured clinical notes. Our findings support the notion that information present in unstructured clinical text can be used to complement structured healthcare data. This is a crucial observation towards creating comprehensive longitudinal patient models for connecting-the-dots and providing better overall patient care.
    IEEE International Conference on Bioinformatics and Biomedicine: The First International Workshop on the Role of Semantic Web in Literature-Based Discovery (SWLBD2012), Philadelphia, Pennsylvania; 10/2012
  • ABSTRACT: Pathology reports are rich in narrative statements that encode a complex web of relations among medical concepts. These relations are routinely used by doctors to reason on diagnoses, but often require hand-crafted rules or supervised learning to extract into prespecified forms for computational disease modeling. We aim to automatically capture relations from narrative text without supervision. We design a novel framework that translates sentences into graph representations, automatically mines sentence subgraphs, reduces redundancy in mined subgraphs, and automatically generates subgraph features for subsequent classification tasks. To ensure meaningful interpretations over the sentence graphs, we use the Unified Medical Language System Metathesaurus to map token subsequences to concepts, and in turn sentence graph nodes. We test our system with multiple lymphoma classification tasks that together mimic the differential diagnosis by a pathologist. To this end, we prevent our classifiers from looking at explicit mentions or synonyms of lymphomas in the text. We compare our system with three baseline classifiers using standard n-grams, full MetaMap concepts, and filtered MetaMap concepts. Our system achieves high F-measures on multiple binary classifications of lymphoma (Burkitt lymphoma, 0.8; diffuse large B-cell lymphoma, 0.909; follicular lymphoma, 0.84; Hodgkin lymphoma, 0.912). Significance tests show that our system outperforms all three baselines. Moreover, feature analysis identifies subgraph features that contribute to improved performance; these features agree with the state-of-the-art knowledge about lymphoma classification. We also highlight how these unsupervised relation features may provide meaningful insights into lymphoma classification.
    Journal of the American Medical Informatics Association 01/2014; 21(5). DOI: 10.1136/amiajnl-2013-002443
  • ABSTRACT: The frequency and volume of newly-published scientific literature is quickly making manual maintenance of publicly-available databases of primary data unrealistic and costly. Although machine learning (ML) can be useful for developing automated approaches to identifying scientific publications containing relevant information for a database, developing such tools necessitates manually annotating an unrealistic number of documents. One approach to this problem, active learning (AL), builds classification models by iteratively identifying documents that provide the most information to a classifier. Although this approach has been shown to be effective for related problems, in the context of scientific database curation it falls short. We present Virk, an AL system that, while being trained, simultaneously learns a classification model and identifies documents having information of interest for a knowledge base. Our approach uses a support vector machine (SVM) classifier with input features derived from neuroscience-related publications from the primary literature. Using our approach, we were able to increase the size of the Neuron Registry, a knowledge base of neuron-related information, by 90% in 3 months. Using standard biocuration methods, it would have taken between 1 and 2 years to make the same number of contributions to the Neuron Registry. Here, we describe the system pipeline in detail, and evaluate its performance against other approaches to sampling in AL.
    Frontiers in Neuroinformatics 01/2013; 7:38. DOI:10.3389/fninf.2013.00038
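
The first related abstract above proposes pattern-based extraction of Smoker Semantic Types (SSTs) from clinical notes. The fragment below is only a toy illustration of that idea; the SST labels and regular expressions are invented for this sketch and are not the authors' patterns.

```python
# A hedged, toy sketch of pattern-based smoker-status extraction; patterns are illustrative.
import re

SST_PATTERNS = [
    ("NON-SMOKER",     re.compile(r"\b(denies|no history of|never)\b[^.]*\bsmok", re.I)),
    ("PAST-SMOKER",    re.compile(r"\b(quit|former|ex-)\s*smok|\bsmoked?\b[^.]*\byears? ago", re.I)),
    ("CURRENT-SMOKER", re.compile(r"\b(current(ly)?|continues to|active)\b[^.]*\bsmok", re.I)),
    ("SMOKER",         re.compile(r"\bsmok(er|es|ing)\b", re.I)),
]

def extract_sst(note):
    """Return the first smoker semantic type whose pattern matches the note."""
    for sst, pattern in SST_PATTERNS:
        if pattern.search(note):
            return sst
    return "UNKNOWN"

print(extract_sst("Patient quit smoking 10 years ago."))  # -> PAST-SMOKER
```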
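
The second related abstract builds graph representations of pathology-report sentences and mines subgraph features. The sketch below is a drastic simplification under stated assumptions: a hand-made concept lexicon stands in for the UMLS Metathesaurus, and pairwise co-occurrence edges within a sentence stand in for the mined subgraphs; it only shows how sentence graphs can yield relation-style features.

```python
# A toy sketch of concept-pair features from sentence graphs; the lexicon is invented.
import itertools
import re

CONCEPTS = {  # hypothetical stand-in for UMLS Metathesaurus concept mapping
    "cd20": "Antigen_CD20",
    "positive": "Positive_Finding",
    "large cells": "Large_Cell",
    "follicular": "Follicular_Pattern",
}

def sentence_graph_features(report):
    """Return edge features: pairs of concepts co-occurring within one sentence."""
    features = set()
    for sent in re.split(r"(?<=[.!?])\s+", report.lower()):
        nodes = [concept for phrase, concept in CONCEPTS.items() if phrase in sent]
        for a, b in itertools.combinations(sorted(set(nodes)), 2):
            features.add(f"{a}--{b}")
    return features

print(sentence_graph_features("The large cells are CD20 positive. A follicular pattern is seen."))
```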
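
The third related abstract describes Virk, an active-learning curation assistant built on an SVM. The loop below is a generic uncertainty-sampling sketch rather than the Virk pipeline: the TF-IDF features, batch size, number of rounds, and oracle callback are assumptions, and the seed set is assumed to contain both relevant and irrelevant documents.

```python
# A minimal uncertainty-sampling active-learning loop (not the Virk system itself).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def active_learning_loop(seed_texts, seed_labels, pool_texts, oracle, rounds=10, batch=5):
    """Repeatedly ask the curator (oracle) to label the pool documents closest to the
    SVM decision boundary, then retrain on the enlarged labeled set."""
    vectorizer = TfidfVectorizer()
    vectorizer.fit(list(seed_texts) + list(pool_texts))  # fixed vocabulary over all documents
    labeled, y, pool = list(seed_texts), list(seed_labels), list(pool_texts)
    classifier = LinearSVC()
    for _ in range(rounds):
        classifier.fit(vectorizer.transform(labeled), y)
        if not pool:
            break
        margins = np.abs(classifier.decision_function(vectorizer.transform(pool)))
        query = sorted(np.argsort(margins)[:batch], reverse=True)
        for i in query:                     # pop highest indices first to keep positions valid
            doc = pool.pop(int(i))          # least-confident document
            labeled.append(doc)
            y.append(oracle(doc))           # curator supplies the relevance label
    return classifier, vectorizer
```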
