Article

An approach to improve LOINC mapping through augmentation of local test names.

Division of Biomedical Informatics, University of California, San Diego, La Jolla, CA 92093-05, USA.
Journal of Biomedical Informatics (Impact Factor: 2.13). 12/2011; 45(4):651-7. DOI: 10.1016/j.jbi.2011.12.004
Source: PubMed

ABSTRACT: Mapping medical test names into a standardized vocabulary is a prerequisite to sharing test-related data between health care entities. One major barrier in this process is the inability to describe tests in sufficient detail to assign the appropriate name in Logical Observation Identifiers Names and Codes (LOINC®). Approaches to address mapping of test names with incomplete information have not been well described. We developed a process of "enhancing" local test names by incorporating information required for LOINC mapping into the test names themselves. Using the Regenstrief LOINC Mapping Assistant (RELMA), we found that 73/198 (37%) of "enhanced" test names were successfully mapped to LOINC, compared with 41/191 (21%) of the original names (p=0.001). Our approach led to a significantly higher proportion of test names successfully mapped to LOINC, but further efforts are required to achieve more satisfactory results.
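The enhancement step described in the abstract, folding LOINC-relevant attributes into the local test name before searching RELMA, can be illustrated with a minimal sketch. The function, field names, and augmentation rule below are illustrative assumptions, not the authors' published procedure.

```python
# Minimal sketch of the "enhanced name" idea: append available
# LOINC-relevant attributes (specimen, units, method) to a terse local
# test name so the search string carries more of the detail RELMA needs.
# All field names and the concatenation rule are assumptions for illustration.

def enhance_test_name(local_name, specimen=None, units=None, method=None):
    """Build an augmented test name from a local name plus optional attributes."""
    parts = [local_name.strip()]
    if specimen:
        parts.append(specimen.strip())   # e.g. "Serum", "Urine"
    if units:
        parts.append(units.strip())      # units often hint at the LOINC property
    if method:
        parts.append(method.strip())     # e.g. "EIA", "PCR"
    return " ".join(parts)

# Example: a terse local name becomes a more descriptive search string.
print(enhance_test_name("GLUC", specimen="Serum", units="mg/dL"))
# -> "GLUC Serum mg/dL"
```

In practice the enhanced string would be entered into RELMA in place of the original local name; the sketch only shows the string construction, not the mapping itself.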

  • ABSTRACT: The value of standards for the representation, integration, and exchange of data, information, and knowledge across the spectrum of biomedicine and health care has been widely recognized for years. Recent initiatives further underscore the importance of standards (e.g., for certification and meaningful use of electronic health records) and establishment of systematic approaches for achieving semantic interoperability. There is accordingly a need for detailed, experience-based discussions pertaining to the adoption and implementation of the breadth of standards in biomedicine and health care. The goal of this special issue has been to provide a forum for describing advanced research and development in translating standards into practice. Each paper in this issue provides a comprehensive description of methodologies employed and challenges encountered during the process of implementing a specific standard or set of standards in a practical setting. The twenty-two papers (including one methodological review) represent a broad array of experiences with standards and are categorized into the following sections: (1) Terminology Standards, (2) Document Standards, (3) Decision Support Standards, (4) Standards-Based Infrastructure, and (5) Standards Adoption Processes.
    Journal of Biomedical Informatics (Impact Factor: 2.13). 06/2012; 45(4):609-12.
  • ABSTRACT: Proliferation of health information technologies creates opportunities to improve clinical and public health, including high quality, safer care and lower costs. To maximize such potential benefits, health information technologies must readily and reliably exchange information with other systems. However, evidence from public health surveillance programs in two states suggests that operational clinical information systems often fail to use available standards, a barrier to semantic interoperability. Furthermore, analysis of existing policies incentivizing semantic interoperability suggests they have limited impact and are fragmented. In this essay, we discuss three approaches for increasing semantic interoperability to support national goals for using health information technologies. A clear, comprehensive strategy requiring collaborative efforts by clinical and public health stakeholders is suggested as a guide for the long road towards better population health data and outcomes.
    Journal of Biomedical Informatics (Impact Factor: 2.13). 03/2014.
  • ABSTRACT: OBJECTIVE: To determine whether the knowledge contained in a rich corpus of local terms mapped to LOINC (Logical Observation Identifiers Names and Codes) could be leveraged to help map local terms from other institutions. METHODS: We developed two models to test our hypothesis. The first, based on supervised machine learning, was created using Apache's OpenNLP Maxent; the second, based on information retrieval, was created using Apache's Lucene. The models were validated by a random subsampling method that was repeated 20 times and used 80/20 splits for training and testing, respectively. We also evaluated the performance of these models on all laboratory terms from three test institutions. RESULTS: For the 20 iterations used for validation of our 80/20 splits, Maxent and Lucene ranked the correct LOINC code first for between 70.5% and 71.4% and between 63.7% and 65.0% of local terms, respectively. For all laboratory terms from the three test institutions, Maxent ranked the correct LOINC code first for between 73.5% and 84.6% (mean 78.9%) of local terms, whereas Lucene's performance was between 66.5% and 76.6% (mean 71.9%). Using a cut-off score of 0.46, Maxent always ranked the correct LOINC code first for over 57% of local terms. CONCLUSIONS: This study showed that a rich corpus of local terms mapped to LOINC contains collective knowledge that can help map terms from other institutions. Using freely available software tools, we developed a data-driven automated approach that operates on term descriptions from existing mappings in the corpus. Accurate and efficient automated mapping methods can help to accelerate adoption of vocabulary standards and promote widespread health information exchange.
    Journal of the American Medical Informatics Association (Impact Factor: 3.57). 05/2013.
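The last related article above describes a data-driven ranking approach built on Apache's OpenNLP Maxent and Lucene. The sketch below illustrates the same idea with a substitute stack: scikit-learn's logistic regression (a maximum-entropy model) over character n-gram TF-IDF features. The miniature mapping corpus and the example query are invented for illustration and are not taken from the study.

```python
# Sketch of the ranking idea: learn from an existing corpus of
# local-term -> LOINC mappings, then rank candidate LOINC codes for an
# unmapped term from another institution. Substitute stack (assumption):
# scikit-learn logistic regression over char n-gram TF-IDF, standing in
# for OpenNLP Maxent / Lucene. The corpus below is a toy example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical existing mappings: local term description -> LOINC code
corpus = [
    ("glucose serum mg/dl", "2345-7"),
    ("glucose blood", "2339-0"),
    ("hemoglobin a1c whole blood", "4548-4"),
    ("sodium serum mmol/l", "2951-2"),
    ("potassium serum", "2823-3"),
]
texts, codes = zip(*corpus)

# Character n-grams tolerate abbreviations and spelling variants in local names.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vectorizer.fit_transform(texts)

model = LogisticRegression(max_iter=1000)
model.fit(X, codes)

# Rank LOINC candidates for a new local term from another institution.
query = vectorizer.transform(["gluc serum"])
probs = model.predict_proba(query)[0]
ranked = sorted(zip(model.classes_, probs), key=lambda p: p[1], reverse=True)
for code, score in ranked[:3]:
    print(code, round(score, 3))   # top-ranked code is the suggested mapping
```

A score threshold (analogous to the 0.46 cut-off reported in the abstract) could then be applied to decide which suggested mappings are accepted automatically and which are routed for manual review.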