Article · PDF available · Literature review

Abstract

Digital data are anticipated to transform medicine. However, most of today’s medical data lack interoperability: hidden in isolated databases, incompatible systems and proprietary software, the data are difficult to exchange, analyze, and interpret. This slows down medical progress, as technologies that rely on these data – artificial intelligence, big data or mobile applications – cannot be used to their full potential. In this article, we argue that interoperability is a prerequisite for the digital innovations envisioned for future medicine. We focus on four areas where interoperable data and IT systems are particularly important: (1) artificial intelligence and big data; (2) medical communication; (3) research; and (4) international cooperation. We discuss how interoperability can facilitate digital transformation in these areas to improve the health and well-being of patients worldwide.
The final publication is available at npj Digital Medicine through https://rdcu.be/bPON2
... The success of digital medicine is contingent upon interoperability, emphasising the need for standardised clinical terminologies to bridge communication gaps between healthcare systems [18]. Clinical terminology servers play a pivotal role in this landscape by providing a standardised framework for medical vocabulary, facilitating interoperability, and ensuring accurate data representation [15]. ...
... The evolution of HL7 FHIR through its releases, including the latest R5, contributes to the adaptability and resilience of the framework, aligning it with the evolving landscape of healthcare technology [11]. The adoption of HL7 FHIR has been instrumental in addressing the challenges regarding the dependence of digital medicine on interoperability, particularly within the realm of clinical terminology [15,11]. ...
... Terminology management (CRUD operations): effective management of terminology resources (CodeSystem, Concept, ValueSet, ConceptMap) is assessed by implementing CRUD (Create, Read, Update, Delete) operations. This criterion ensures that the terminology server allows seamless manipulation of medical terms, reflecting changes and updates in clinical knowledge [15,16]. ...
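The CRUD criterion described above can be illustrated with a minimal sketch. The in-memory store below is hypothetical and stands in for a terminology server; a real FHIR server would expose the same operations over its REST API (e.g., POST/GET/PUT/DELETE on /CodeSystem). All resource ids, URLs, and concept codes here are invented.

```python
# Sketch of CRUD operations on FHIR-style terminology resources,
# using a hypothetical in-memory store (not a real FHIR server).

class TerminologyStore:
    """Minimal store keyed by (resourceType, id)."""

    def __init__(self):
        self._resources = {}

    def create(self, resource):
        key = (resource["resourceType"], resource["id"])
        if key in self._resources:
            raise ValueError(f"{key} already exists")
        self._resources[key] = resource
        return resource

    def read(self, resource_type, resource_id):
        # Returns None when the resource does not exist.
        return self._resources.get((resource_type, resource_id))

    def update(self, resource):
        key = (resource["resourceType"], resource["id"])
        if key not in self._resources:
            raise KeyError(f"{key} not found")
        self._resources[key] = resource
        return resource

    def delete(self, resource_type, resource_id):
        self._resources.pop((resource_type, resource_id), None)


store = TerminologyStore()
store.create({
    "resourceType": "CodeSystem",
    "id": "example-cs",                                   # hypothetical id
    "url": "http://example.org/fhir/CodeSystem/example",  # hypothetical URL
    "concept": [{"code": "A01", "display": "Example concept"}],
})
cs = store.read("CodeSystem", "example-cs")
```

On a real terminology server, the update and delete paths would also need to respect versioning and references from ValueSet and ConceptMap resources, which this sketch omits.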
... Despite this promise, a critical bottleneck persists: the lack of interoperability across diverse data sources [5][6][7][8] . This issue affects various healthcare settings, from large biopharmaceuticals 9 and hospital systems to small clinical trials and research networks. ...
... In this context, standardized medical coding systems are essential for interoperability. Examples include the Anatomical Therapeutic Chemical (ATC) classification 11 and the Medical Dictionary for Regulatory Activities (MedDRA) 12 , which ensure consistent encoding and interpretation of data across studies, enabling diverse datasets to 'speak the same language' 6,7 . However, even with standard coding systems, clinical trials, often considered the gold standard of clinical data, are not immune to coding missingness, variability, and inconsistencies. To illustrate these challenges, we examined immunology trial data from TransCelerate 3 . ...
Preprint
Full-text available
The reuse of historical clinical trial data has significant potential to accelerate medical research and drug development. However, interoperability challenges, particularly with missing medical codes, hinder effective data integration across studies. While Large Language Models (LLMs) offer a promising solution for automated coding without labeled data, current approaches face challenges on complex coding tasks. We introduce ALIGN, a novel compositional LLM-based system for automated, zero-shot medical coding. ALIGN follows a three-step process: (1) diverse candidate code generation; (2) self-evaluation of codes; and (3) confidence scoring and uncertainty estimation enabling human deferral to ensure reliability. We evaluate ALIGN on harmonizing medication terms into Anatomical Therapeutic Chemical (ATC) and medical history terms into Medical Dictionary for Regulatory Activities (MedDRA) codes extracted from 22 immunology trials. ALIGN outperformed the LLM baselines, while also providing capabilities for trustworthy deployment. For MedDRA coding, ALIGN achieved high accuracy across all levels, matching RAG and excelling at the most specific levels (87-90% for HLGT). For ATC coding, ALIGN demonstrated superior performance, particularly at lower hierarchy levels (ATC Level 4), with 72-73% overall accuracy and 86-89% accuracy for common medications, outperforming baselines by 7-22%. ALIGN's uncertainty-based deferral improved accuracy by 17% to 90% accuracy with 30% deferral, notably enhancing performance on uncommon medications. ALIGN achieves this cost-efficiently at $0.0007 and $0.02 per code for GPT-4o-mini and GPT-4o, reducing barriers to clinical adoption. ALIGN advances automated medical coding for clinical trial data, contributing to enhanced data interoperability and reusability, positioning it as a promising tool to improve clinical research and accelerate drug development.
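The deferral logic in ALIGN's third step can be sketched in outline. This is not ALIGN's actual implementation: the threshold, the candidate lists, and the confidence values below are invented for illustration (the ATC codes shown are real codes, e.g. M01AE01 for ibuprofen, but the scores attached to them are made up).

```python
# Sketch of confidence-scored coding with human deferral, loosely
# following the idea of ALIGN's step (3). All scores are invented.

def assign_with_deferral(term, scored_candidates, threshold=0.8):
    """Pick the highest-confidence code, or defer to a human reviewer.

    scored_candidates: list of (code, confidence) pairs, e.g. produced
    by an LLM-based self-evaluation step (hypothetical here).
    """
    best_code, best_conf = max(scored_candidates, key=lambda c: c[1])
    if best_conf >= threshold:
        return {"term": term, "code": best_code, "deferred": False}
    # Low confidence: leave the code unassigned and flag for review.
    return {"term": term, "code": None, "deferred": True}


# A common medication with one dominant candidate is coded automatically:
auto = assign_with_deferral(
    "ibuprofen", [("M01AE01", 0.93), ("N02BE01", 0.04)])

# An ambiguous term with no clear winner is deferred to a human:
manual = assign_with_deferral(
    "rare compound X", [("L04AA99", 0.41), ("L01XX00", 0.38)])
```

Raising the threshold trades automation for reliability, which matches the reported pattern of accuracy improving as the deferral rate grows.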
... Structural interoperability ensures data exchanges have unaltered meaning at the data field level. Semantic interoperability enables systems to exchange information and interpret it meaningfully using defined domain models 23 . ...
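The distinction can be made concrete with a small sketch: semantic interoperability requires that a sender's local code be translated to a concept in a shared terminology, so the receiver interprets it with the same meaning. All code systems, URLs, and codes below are hypothetical.

```python
# Sketch of semantic interoperability via a shared concept map:
# the sender's local code is translated to a shared terminology that
# both parties understand. Everything here is hypothetical.

CONCEPT_MAP = {
    ("http://hospital-a.example/codes", "GLU-1"):
        ("http://shared-terminology.example", "glucose-measurement"),
}


def translate(system, code):
    """Return the shared (system, code) pair, or None if unmapped."""
    return CONCEPT_MAP.get((system, code))


# The receiver resolves the sender's local code to the shared concept:
shared = translate("http://hospital-a.example/codes", "GLU-1")
```

Structural interoperability alone would guarantee only that the field carrying "GLU-1" arrives intact; the concept map is what lets the receiving system know which clinical observation it denotes.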
Preprint
Full-text available
Introduction: Multimorbidity amplifies healthcare burdens due to the intricate requirements of patients and the pathophysiological complexities of multiple diseases. To address this, digital health technologies (DHTs) play a crucial role in effective healthcare delivery, requiring comprehensive evidence on their applications in managing multimorbidity. Therefore, this scoping review aims to identify various types of DHTs, explore their mechanisms, and emphasize the significance of utilizing DHTs within the context of multimorbidity. Methods: This scoping review follows the Preferred Reporting Items for Scoping Reviews guidelines. PubMed, Scopus, Web of Science, EMBASE, and Google Scholar were used to search articles. Data extraction focused on study characteristics, types of health technologies, mechanisms, outcomes, challenges, and facilitators. Results were presented using figures, tables, and text. Thematic analysis was employed to describe mechanisms, impacts, challenges, and strategies related to DHTs in managing multimorbidity. Results: Digital health technology encompasses smartphone apps, wearable devices, and platforms for remote healthcare (telehealth). These technologies work through care coordination, collaboration, communication, self-management, remote monitoring, health data management, and tele-referrals. Digital health technologies improved quality of care and life, cost efficiency, acceptability of care, and collaboration; streamlined healthcare delivery; reduced workload; and bridged knowledge gaps. Patients' and healthcare providers' resistance and limited skills, lack of support (technical, financial, and infrastructure), and ethical concerns (e.g., privacy) hindered DHT implementation. Organizational arrangements, technical support, care coordination strategies, enhanced acceptability, appropriate technology choices, attention to patient needs, and adherence to ethical principles facilitate DHT implementation.
Conclusions: Digital health technology holds significant promise in improving care for individuals with multimorbidity by enhancing coordination, self-management, and monitoring. Successful implementation requires addressing challenges such as patient resistance and infrastructure limitations through targeted strategies and investments. It is also essential to consider usability, privacy, and trustworthiness when adopting these tools.
Chapter
This chapter provides a comprehensive review of artificial intelligence (AI) applications in biomedicine, highlighting the transformative impact on various domains, from basic research to clinical practice. The author explores AI's role in medical imaging and diagnostics, drug discovery and development, genomics and precision medicine, and healthcare management and delivery. Key advancements, such as deep learning for image analysis, virtual screening for drug design, and AI-driven patient stratification, are discussed. The chapter also addresses challenges surrounding AI implementation, including data access, bias, scalability, transparency, privacy, and regulatory uncertainties. Potential solutions and policy options to address these challenges and enhance AI's benefits are proposed. The author emphasizes the importance of collaboration between AI experts and healthcare stakeholders, as well as the need for responsible AI development practices. Future directions highlight the potential for AI to transform healthcare and improve patient outcomes, alongside the continued need for responsible AI.
Article
Full-text available
Artificial intelligence (AI)-based methods have emerged as powerful tools to transform medical care. Although machine learning classifiers (MLCs) have already demonstrated strong performance in image-based diagnoses, analysis of diverse and massive electronic health record (EHR) data remains challenging. Here, we show that MLCs can query EHRs in a manner similar to the hypothetico-deductive reasoning used by physicians and unearth associations that previous statistical methods have not found. Our model applies an automated natural language processing system using deep learning techniques to extract clinically relevant information from EHRs. In total, 101.6 million data points from 1,362,559 pediatric patient visits presenting to a major referral center were analyzed to train and validate the framework. Our model demonstrates high diagnostic accuracy across multiple organ systems and is comparable to experienced pediatricians in diagnosing common childhood diseases. Our study provides a proof of concept for implementing an AI-based system as a means to aid physicians in tackling large amounts of data, augmenting diagnostic evaluations, and to provide clinical decision support in cases of diagnostic uncertainty or complexity. Although this impact may be most evident in areas where healthcare providers are in relative shortage, the benefits of such an AI system are likely to be universal.
Article
Full-text available
Precision medicine can utilize new techniques in order to more effectively translate research findings into clinical practice. In this article, we first explore the limitations of traditional study designs, which stem from (to name a few): massive cost for the assembly of large patient cohorts; non-representative patient data; and the astounding complexity of human biology. Second, we propose that harnessing electronic health records and mobile device biometrics coupled to longitudinal data may prove to be a solution to many of these problems by capturing a "real world" phenotype. We envision that future biomedical research utilizing more precise approaches to patient care will utilize continuous and longitudinal data sources.
Article
Full-text available
Artificial intelligence (AI) has recently surpassed human performance in several domains, and there is great hope that in healthcare, AI may allow for better prevention, detection, diagnosis, and treatment of disease. While many fear that AI will disrupt jobs and the physician–patient relationship, we believe that AI can eliminate many repetitive tasks to clear the way for human-to-human bonding and the application of emotional intelligence and judgment. We review several recent studies of AI applications in healthcare that provide a view of a future where healthcare delivery is a more unified, human experience.
Article
Audio interview with Dr. Isaac Kohane on machine learning in medicine (16:31). In this view of the future of medicine, patient–provider interactions are informed and supported by massive amounts of data from interactions with similar patients. These data are collected and curated to provide the latest evidence-based assessment and recommendations.
Article
Background: Semantic interoperability of eHealth services within and across countries has been the main topic in several research projects. It is a key consideration for the European Commission to overcome the complexity of making different health information systems work together. This paper describes a study within the EU-funded project ASSESS CT, which focuses on assessing the potential of SNOMED CT as core reference terminology for semantic interoperability at European level. Objective: This paper presents a quantitative analysis of the results obtained in ASSESS CT to determine the fitness of SNOMED CT for semantic interoperability. Methods: The quantitative analysis consists of concept coverage, term coverage and inter-annotator agreement analysis of the annotation experiments related to six European languages (English, Swedish, French, Dutch, German and Finnish) and three scenarios: (i) ADOPT, where only SNOMED CT was used by the annotators; (ii) ALTERNATIVE, where a fixed set of terminologies from UMLS, excluding SNOMED CT, was used; and (iii) ABSTAIN, where any terminologies available in the current national infrastructure of the annotators' country were used. For each language and each scenario, we configured the different terminology settings of the annotation experiments. Results: There was a positive correlation between the number of concepts in each terminology setting and their concept and term coverage values. Inter-annotator agreement is low, irrespective of the terminology setting. Conclusions: No significant differences were found between the analyses for the three scenarios, but availability of SNOMED CT for the assessed language is associated with increased concept coverage. Terminology setting size and concept and term coverage correlate positively up to a limit where more concepts do not significantly impact the coverage values.
The results did not confirm the hypothesized inverse correlation between concept coverage and inter-annotator agreement (IAA) arising from a smaller number of available choices. The overall low IAA results pose a challenge for interoperability and indicate the need for further research to assess whether consistent terminology implementation is possible across Europe, e.g., improving term coverage by adding localized versions of the selected terminologies, analysing causes of low inter-annotator agreement, and improving tooling and guidance for annotators. The much lower term coverage for the Swedish version of SNOMED CT compared to English, together with the similarly high concept coverage obtained with English and Swedish SNOMED CT, reflects SNOMED CT's relevance as a hub connecting user interface terminologies and serving a variety of user needs.
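The two quantities analysed in this study, concept coverage and inter-annotator agreement, can be sketched as follows. The annotation data below are invented, and Cohen's kappa is used here as a representative agreement statistic, not necessarily the exact one the study applied.

```python
# Sketch of concept coverage and inter-annotator agreement (Cohen's
# kappa) on invented annotation data. Codes "C1".."C3" are hypothetical.

def concept_coverage(annotated_spans):
    """Fraction of text spans for which a concept was found (not None)."""
    found = sum(1 for code in annotated_spans if code is not None)
    return found / len(annotated_spans)


def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' code assignments.

    Assumes the annotators do not agree perfectly by chance
    (i.e., expected agreement < 1), so the denominator is nonzero.
    """
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)


# One span had no matching concept, so coverage is 3/4:
coverage = concept_coverage(["C1", "C2", None, "C3"])

# Two annotators agree on 3 of 4 spans:
kappa = cohens_kappa(["C1", "C2", "C1", "C3"],
                     ["C1", "C2", "C2", "C3"])
```

Kappa corrects raw percent agreement for chance, which is why a terminology setting with fewer concepts (and thus higher chance agreement) does not automatically score better, the relationship the study set out to test.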
Article
Traditionally, psychiatry has offered clinical insights through keen behavioral observation and a deep study of emotion. With the subsequent biological revolution in psychiatry displacing psychoanalysis, some psychiatrists were concerned that the field shifted from “brainless” to “mindless.”¹ Over the past 4 decades, behavioral expertise, once the strength of psychiatry, has diminished in importance as psychiatric research focused on pharmacology, genomics, and neuroscience, and much of psychiatric practice has become a series of brief clinical interactions focused on medication management. In research settings, assigning a diagnosis from the Diagnostic and Statistical Manual of Mental Disorders has become a surrogate for behavioral observation. In practice, few clinicians measure emotion, cognition, or behavior with any standard, validated tools.
Article
Achieving an interoperable health care system remains a top US policy priority. Despite substantial efforts to encourage interoperability, the first set of national data in 2014 suggested that hospitals' engagement levels were low. With 2015 data now available, we examined the first national trends in engagement in four domains of interoperability: finding, sending, receiving, and integrating electronic patient information from outside providers. We found small gains, with 29.7 percent of hospitals engaging in all four domains in 2015 compared to 24.5 percent in 2014. The two domains with the most progress were sending (with an increase of 8.1 percentage points) and receiving (an increase of 8.4 percentage points) information, while there was no change in integrating systems. Hospitals' use for patient care of data from outside providers was low, with only 18.7 percent of hospitals reporting that they "often" used these data. Our results reveal that hospitals' progress toward interoperability is slow and that progress is focused on moving information between hospitals, not on ensuring usability of information in clinical decisions. © 2017 Project HOPE-The People-to-People Health Foundation, Inc.