Article · PDF available · Literature review

Why digital medicine depends on interoperability

Author affiliations:
  • Ingress Health / Cytel
  • Berlin Institute of Health at Charité (BIH)
  • Berlin Institute of Health (BIH)

Abstract

Digital data are anticipated to transform medicine. However, most of today’s medical data lack interoperability: hidden in isolated databases, incompatible systems and proprietary software, the data are difficult to exchange, analyze, and interpret. This slows down medical progress, as technologies that rely on these data – artificial intelligence, big data or mobile applications – cannot be used to their full potential. In this article, we argue that interoperability is a prerequisite for the digital innovations envisioned for future medicine. We focus on four areas where interoperable data and IT systems are particularly important: (1) artificial intelligence and big data; (2) medical communication; (3) research; and (4) international cooperation. We discuss how interoperability can facilitate digital transformation in these areas to improve the health and well-being of patients worldwide.
The final publication is available at npj Digital Medicine through https://rdcu.be/bPON2.
... These services are vital for interoperability, which is a fundamental necessity for the successful realization of health care information systems. 53,54 We can achieve interoperability by following established standards such as HL7 FHIR, which supports the Representational State Transfer (REST) architecture and SOA for seamless information exchange. However, it inherits the inflexibility and complexity associated with the RESTful approach. ...
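To make the RESTful style of FHIR exchange concrete, the sketch below builds a minimal FHIR R4 Patient resource as JSON and the typed URLs a client would use to read and search it. The base URL is a placeholder, not a real endpoint, and the resource is a toy example rather than a profiled production record.

```python
import json

# Minimal FHIR R4 Patient resource, serialized as JSON.
patient = {
    "resourceType": "Patient",
    "id": "example-1",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-04-12",
}

# A RESTful FHIR read is an HTTP GET on a typed URL of the form
# [base]/[resourceType]/[id]; searches use query parameters.
# The base URL below is a placeholder, not a real server.
base = "https://fhir.example.org/r4"
read_url = f"{base}/Patient/{patient['id']}"
search_url = f"{base}/Patient?birthdate=ge1980-01-01"

print(read_url)
print(json.dumps(patient, indent=2))
```

Because every conformant server exposes the same URL patterns and resource shapes, the same client code can, in principle, talk to any FHIR endpoint; this uniformity is what makes the standard attractive despite the REST-related constraints the excerpt mentions.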
... Health care information from various disciplines needs to be harmonized for analysis. 54 Ontologies provide vocabularies to integrate data from multiple health care information systems. In the INTROMAT project, we used SNOMED-CT and ICD-10 ontologies to add semantics to the data captured from sensor devices and mobile applications. ...
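The annotation step described above can be illustrated with a small sketch: raw sensor labels are mapped to standard terminology codes so that data from different systems share the same semantics. The SNOMED CT and ICD-10 codes for fever below are real, but the sensor label and the mapping itself are illustrative assumptions, not the INTROMAT implementation.

```python
# Hypothetical mapping from raw sensor/app observation labels to
# standard terminology codes (the mapping itself is illustrative).
CODE_MAP = {
    "elevated_body_temp": {
        "snomed_ct": "386661006",  # Fever (finding)
        "icd10": "R50.9",          # Fever, unspecified
    },
}

def annotate(observation: dict) -> dict:
    """Attach terminology codes so downstream systems share semantics."""
    codes = CODE_MAP.get(observation["label"], {})
    return {**observation, "codes": codes}

reading = {"label": "elevated_body_temp", "value": 38.6, "unit": "Cel"}
annotated = annotate(reading)
print(annotated["codes"])
```

Once annotated this way, an observation captured by a mobile app can be integrated with hospital records that use the same ontologies, which is the harmonization the excerpt describes.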
Article
Full-text available
This paper summarizes the information technology-related research findings after 5 years with the INTROducing Mental health through Adaptive Technology (INTROMAT) project. The aim was to improve mental healthcare by introducing new technologies for adaptive interventions through interdisciplinary research and development. We focus on the challenges related to internet-delivered psychological treatments, emphasising artificial intelligence, human-computer interaction, and software engineering. We present the main research findings, the developed artefacts, and lessons learned from the project before outlining directions for future research. The main findings from this project are encapsulated in a reference architecture that is used for establishing an infrastructure for adaptive internet-delivered psychological treatment systems in clinical contexts. The infrastructure is developed by introducing an interdisciplinary design and development process inspired by domain-driven design, user-centred design, and the person-based approach to intervention design. The process aligns the software development with the intervention design and illustrates their mutual dependencies. Finally, we present software artefacts produced within the project and discuss how they relate to the proposed reference architecture. Our results indicate that the proposed development process, the reference architecture, and the produced software can be practical means of designing adaptive mental healthcare treatments in correspondence with patients' needs and preferences. In summary, we have created the initial version of an information technology infrastructure to support the development and deployment of internet-delivered mental health interventions with inherent support for data sharing, data analysis, reusability of treatment content, and adaptation of interventions based on user needs and preferences.
... In patient care, data have historically been generated and stored in heterogeneous data models and formats, often resulting from domain-specific, non-interoperable, or isolated applications and databases (data silos) [1]. As recent big-data studies predict, the worldwide amount of healthcare data keeps growing at high speed, becoming extremely large, complex, and increasingly unstructured and heterogeneous; and so do the isolated data silos. ...
... In addition, simplifying tasks has generated opportunities for organizations in management and marketing by increasing customer trust and enhancing reputation and image. However, digital transformation must ensure the interoperability and integration of IT solutions to realize these benefits [70]. ...
Article
Full-text available
The exponential trend of digital technologies, doubled by the mobility restrictions imposed during the COVID-19 pandemic, caused a paradigm shift in traditional economic models. Digital transformation has become increasingly common in all types of organizations and affects all activities. Organizations have adopted digital technologies to increase efficiency and effectiveness in management, marketing, and accounting. This paper aims to assess the impact of digital transformation on project management, marketing, and decision-making processes in users’ perceptions. The study begins with theoretical research on the digitalization of management and accounting information systems and conducts an empirical investigation based on a questionnaire. First, the paper assesses users’ perceptions of implementing digital technologies. The answers of 442 professionals from project management, marketing, and decision making were processed using structural equation modeling. The results show that users’ acceptance of digitalization is higher in decision making due to the significant contribution of artificial intelligence in repetitive decision making. Project management and marketing also benefit from digitalization, yet non-repetitive activities remain mainly the responsibility of the human factor.
... Medical data, in general, is very noisy and requires human oversight before integration. Cardiovascular imaging data is slightly more structured than clinical records but still lacks interoperability to a great extent (118). Several initiatives already aim at increasing interoperability among healthcare providers [e.g., European Commission (119), Health Data Research UK (120)]. ...
Article
Full-text available
A growing number of artificial intelligence (AI)-based systems are being proposed and developed in cardiology, driven by the increasing need to deal with the vast amount of clinical and imaging data with the ultimate aim of advancing patient care, diagnosis and prognostication. However, there is a critical gap between the development and clinical deployment of AI tools. A key consideration for implementing AI tools into real-life clinical practice is their “trustworthiness” by end-users. Namely, we must ensure that AI systems can be trusted and adopted by all parties involved, including clinicians and patients. Here we provide a summary of the concepts involved in developing a “trustworthy AI system.” We describe the main risks of AI applications and potential mitigation techniques for the wider application of these promising techniques in the context of cardiovascular imaging. Finally, we show why trustworthy AI concepts are important governing forces of AI development.
... When paired with evidence-based workflows, such standards can reduce the need for ad hoc mapping, improve the reliability and repeatability of automated tasks, and enable reuse of automation approaches. 30,31 Understand relevant workflows: introducing automation introduces change. Whether it is one task or an entire health care workflow, automation will change the way that work is performed. ...
Article
Inefficient workflows affect many health care stakeholders including patients, caregivers, clinicians, and staff. Widespread health information technology adoption and modern computing provide opportunities for more efficient health care workflows through automation. The Office of the National Coordinator for Health Information Technology (ONC) led a multidisciplinary effort with stakeholders across health care and experts in industrial engineering, computer science, and finance to explore opportunities for automation in health care. The effort included semistructured key informant interviews, a review of relevant literature, and a workshop to understand automation lessons across nonhealth care industries that could be applied to health care. In this article, we describe considerations for advancing workflow automation in health care that were identified through these activities. We also discuss a set of six priorities and related strategies developed through the ONC-led effort and highlight the role the informatics and research communities have in advancing each priority and the strategies.
Chapter
The outbreak of the COVID-19 pandemic has caused unprecedented social, economic, and healthcare disruption worldwide, bringing practically everything to a halt. It has also dramatically accelerated the adoption of digital technology, resulting in a growing number of cybercrime incidents, including social engineering to access sensitive information. Furthermore, with in-person activity reduced, digital surveillance and security systems were proposed for tracing transmission. Nevertheless, the various automatic tracking implementations, together with limited smartphone usage among parts of the population, raise privacy and security concerns. This chapter therefore discusses digital security and privacy violations, their protection measures, and the digital surveillance systems developed since COVID-19. It also paves the way toward understanding future tools and approaches that could help tackle digital privacy and security challenges in managing such infectious diseases.
Thesis
Full-text available
As digitalization in medicine progresses and more data is captured electronically, the amount of data available increases rapidly. Despite the large amount of data available, the data for specific research questions is much smaller once filtered to match specific study criteria. This, combined with modern biomedicine requiring large numbers of patients, highlights the need to analyze data across institutions. The heterogeneity of modern medical institutions further complicates the analysis process within and across institutions and makes it difficult for researchers to extract meaningful information. The Medical Informatics Initiative (MII) was initiated by the German Ministry of Research and Education (BMBF) to tackle this problem and to harness the potential of digitalization in medicine. Data integration centers (DIC) form the heart of the MII. These organizational units established at each university hospital provide multiple services including harmonization of the data, technical infrastructure, and establishing of governance to support data sharing and use across university hospitals. The DIC implement the standardization and harmonization agreed on across the different MII task forces and committees, like the use of Fast Healthcare Interoperability Resources (FHIR) and specific medical terminologies. The DIC maintain software, which adheres to the agreed upon standards and harmonization rules and provide standardized research data repositories. Therefore, in contrast to a heterogenous hospital, software and data management can be created once and applied to all DIC, if the DIC have implemented the standardized application programming interfaces (APIs). This harmonization is the first step towards paving the way for cross-hospital data analysis. 
However, to make the data accessible to researchers, make it findable, allow researchers to select subsets of the large data pool for further analysis, allow them to build (federated) statistical or machine learning (ML) models as well as deploy them, additional software tools beyond the standard DIC tools specified above are necessary. Across multiple projects of the MII this thesis investigates how software tools can be designed, built, easily deployed, and integrated into DIC IT infrastructures to support local and federated analyses and statistical model development. It thus extends DIC infrastructures towards providing a cross-institutional research platform. It supports initial feasibility queries as well as data selection, extraction, analysis, and subsequent deployment of statistical models. All software components were conceptualized and implemented by the author of this thesis in collaboration with inter-disciplinary (partly across sites) teams and integrated with existing DIC infrastructures as well as evaluated with exemplary analyses. This thesis first investigated how ML models can be built on gene expression and other patient data to predict clinical outcomes. Different ML models were created, and their accuracy measured. While demonstrating how ML models could be applied to clinical data, the availability of data was found to be a limiting factor and the deployment or use of models remained elusive. This illustrated the need to make harmonized data available across institutions. To adhere to data protection laws, this thesis then investigated and extended DataSHIELD concepts and tools for federated privacy preserving analysis. A new Queue-Poll extension for DataSHIELD was designed and developed, distributed to multiple hospitals, and used for successive research projects to satisfy strict hospital firewall rules. 
To investigate training and deployment of ML and statistical models, a prototype of KETOS, a platform for clinical decision support and machine learning as a service, was conceptualized and implemented. It allows researchers to create ML models based on standardized and harmonized data, while still providing maximum freedom in the model-building process. It was conceptualized from the beginning with cross-hospital research in mind and achieved interoperability by relying on the data standardization of the DIC. In the framework of the MII and large research data repositories, a dataset for a specific analysis, model building, and later deployment usually needs to be extracted from the larger dataset, requiring methods for data selection and extraction. This was addressed in two further subprojects of this thesis. The first focused on integrating large-throughput genomics data with non-genomic patient data in FHIR format. The second created a method to select data from a large dataset (30 million FHIR resources) of FHIR-formatted patient data, based on inter-criteria relationships. Having established first methods for data extraction, model analysis, and deployment, a method was still required to make data findable across hospitals and thus close the research lifecycle. Therefore, it was investigated how cross-hospital feasibility queries can be created based on harmonized FHIR data, and a corresponding platform was conceptualized. Further, as part of the concept and implementation of the feasibility query platform, it was investigated how the necessary ontology for the user interface could be automatically generated using FHIR implementation guides, profiles, and a terminology server. Within the CODEX project the platform was implemented and distributed to 34 participating university hospitals. The platform performed well with large data volumes, searching through four million resources within 30 seconds.
While the two approaches to data extraction investigated as part of this thesis were applicable to some data extraction problems, a more holistic method for data extraction is still missing. To solve this, the thesis's work on cohort selection for the feasibility query could be combined with feature selection to create a cohort-specific data extraction process. This thesis successfully demonstrates how software can be built that leverages the data integration efforts of the MII to support large-scale cross-hospital data sharing and analysis projects. The innovative software developed here was successfully integrated with the DIC infrastructure and highlights the importance of the standardization and harmonization efforts of the DIC and MII. All developed software was created not merely for a single institution but with cross-institutional research, with the DIC at its core, in mind, making a valuable contribution to the MII and the future of medical research infrastructures.
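The feasibility queries described above answer questions like "how many patients match all criteria?" over harmonized FHIR data. The sketch below shows the idea in miniature on an in-memory list of FHIR Condition resources: each criterion yields a set of patient references, and set intersection gives the feasibility count. The ICD-10 codes are real (E11, type 2 diabetes; I10, essential hypertension), but the data and the query logic are toy illustrations, not the CODEX implementation.

```python
# Toy collection of FHIR Condition resources (in practice these would
# come from a standardized research data repository).
conditions = [
    {"resourceType": "Condition", "subject": {"reference": "Patient/1"},
     "code": {"coding": [{"system": "http://hl7.org/fhir/sid/icd-10", "code": "E11"}]}},
    {"resourceType": "Condition", "subject": {"reference": "Patient/2"},
     "code": {"coding": [{"system": "http://hl7.org/fhir/sid/icd-10", "code": "I10"}]}},
    {"resourceType": "Condition", "subject": {"reference": "Patient/1"},
     "code": {"coding": [{"system": "http://hl7.org/fhir/sid/icd-10", "code": "I10"}]}},
]

def patients_with(code: str) -> set:
    """Return the set of patient references that have a Condition with this code."""
    return {c["subject"]["reference"] for c in conditions
            if any(k["code"] == code for k in c["code"]["coding"])}

# Feasibility count for the conjunction of two criteria: E11 AND I10.
cohort = patients_with("E11") & patients_with("I10")
print(len(cohort))  # 1 (only Patient/1 matches both)
```

In a federated setting each hospital would evaluate such a query locally against its own FHIR store and return only the aggregate count, which is what keeps patient-level data inside the institution.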
Article
Full-text available
Abstract: People with rare diseases (RD) have particular potential to benefit from the digitalization of healthcare. The German National Action League for People with Rare Diseases (NAMSE) has advocated that RD be given concrete consideration in the digitalization of healthcare in Germany. The Medical Informatics Initiative (MII) of the Federal Ministry of Education and Research (BMBF) has taken up this topic: starting from the university hospitals, a digital infrastructure is currently being built for the privacy-compliant secondary use of standardized care and research data. Since 2020, the initiative has included the project CORD-MI (Collaboration on Rare Diseases), in which university hospitals and further partners across Germany have joined forces to improve patient care and research in the field of RD. This article examines how the MII takes the concerns of RD into account and what opportunities the resulting "new routine data" offer. An RD module was added to the MII core dataset, an information model based on the data standard FHIR (Fast Healthcare Interoperability Resources). Data collected in routine care and research can thus be exchanged between the participating institutions in the future and, in the field of RD, support diagnosis, choice of therapy, and research projects, for example. The CORD-MI project aims to use exemplary research questions to gain insights into the care situation of people with RD and, from these, to draw conclusions about further necessary steps in the area of digitalization.
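To illustrate why a FHIR-based core dataset helps rare-disease research, the sketch below shows a FHIR R4 Condition resource carrying both an ICD-10-GM coding and an Orphanet coding for the same diagnosis; rare diseases often need the more specific Orpha code because ICD-10 categories are too coarse. The system URIs and the exact coding layout are illustrative assumptions, not the normative MII core dataset profile, though the codes themselves (ICD-10 E84.0 and ORPHA 586, cystic fibrosis) are real.

```python
import json

# Sketch of a dual-coded rare-disease diagnosis in FHIR R4.
# System URIs are assumptions for illustration, not a validated profile.
condition = {
    "resourceType": "Condition",
    "subject": {"reference": "Patient/example"},
    "code": {"coding": [
        {"system": "http://fhir.de/CodeSystem/bfarm/icd-10-gm",
         "code": "E84.0"},                      # Cystic fibrosis with pulmonary manifestations
        {"system": "http://www.orpha.net",
         "code": "586"},                        # ORPHA:586, cystic fibrosis
    ]},
}

# Downstream consumers can pick whichever coding their use case needs.
orpha = [c["code"] for c in condition["code"]["coding"]
         if "orpha" in c["system"]]
print(orpha)
```

Because both codings travel in one resource, a registry query by Orpha code and a billing workflow using ICD-10-GM can be served from the same exchanged record, which is the secondary-use benefit the abstract describes.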
Chapter
Healthcare providers are using emerging technology to implement creative clinical services and support systems. The need for healthcare data sharing necessitates the integration of various departmental structures. However, the exchange of data inevitably imposes constraints on both security and integrity. It is estimated that hundreds of millions of personal medical records were compromised last year, and the number is growing exponentially. With the emergence of decentralized digital technologies such as blockchain, attention to privacy and secrecy has increased over the last few decades, and there is already substantial adoption in sectors such as banking, government, and healthcare. Healthcare providers have begun applying blockchain technology to allow safer and more decentralized storing, recording, and exchanging of patient information. This newly developed blockchain technology holds considerable benefits for patient privacy and data security. This study finds that much remains to be learned about the use of blockchain technology, particularly in electronic healthcare and medicine. The literature is reviewed to identify the benefits, risks, and issues both for people and for the technology itself. The study also explores the methods that have been established, comments on their shortcomings, and outlines several open problems and research avenues that remain to be investigated.
Article
Full-text available
Artificial intelligence (AI)-based methods have emerged as powerful tools to transform medical care. Although machine learning classifiers (MLCs) have already demonstrated strong performance in image-based diagnoses, analysis of diverse and massive electronic health record (EHR) data remains challenging. Here, we show that MLCs can query EHRs in a manner similar to the hypothetico-deductive reasoning used by physicians and unearth associations that previous statistical methods have not found. Our model applies an automated natural language processing system using deep learning techniques to extract clinically relevant information from EHRs. In total, 101.6 million data points from 1,362,559 pediatric patient visits presenting to a major referral center were analyzed to train and validate the framework. Our model demonstrates high diagnostic accuracy across multiple organ systems and is comparable to experienced pediatricians in diagnosing common childhood diseases. Our study provides a proof of concept for implementing an AI-based system as a means to aid physicians in tackling large amounts of data, augmenting diagnostic evaluations, and providing clinical decision support in cases of diagnostic uncertainty or complexity. Although this impact may be most evident in areas where healthcare providers are in relative shortage, the benefits of such an AI system are likely to be universal.
Article
Full-text available
Precision medicine can utilize new techniques to more effectively translate research findings into clinical practice. In this article, we first explore the limitations of traditional study designs, which stem from, among other factors, the massive cost of assembling large patient cohorts, non-representative patient data, and the astounding complexity of human biology. Second, we propose that harnessing electronic health records and mobile-device biometrics coupled with longitudinal data may prove to be a solution to many of these problems by capturing a "real-world" phenotype. We envision that future biomedical research, taking more precise approaches to patient care, will utilize continuous and longitudinal data sources.
Article
Full-text available
Artificial intelligence (AI) has recently surpassed human performance in several domains, and there is great hope that in healthcare, AI may allow for better prevention, detection, diagnosis, and treatment of disease. While many fear that AI will disrupt jobs and the physician–patient relationship, we believe that AI can eliminate many repetitive tasks to clear the way for human-to-human bonding and the application of emotional intelligence and judgment. We review several recent studies of AI applications in healthcare that provide a view of a future where healthcare delivery is a more unified, human experience.
Article
Audio interview with Dr. Isaac Kohane on machine learning in medicine (16:31). In this view of the future of medicine, patient–provider interactions are informed and supported by massive amounts of data from interactions with similar patients. These data are collected and curated to provide the latest evidence-based assessment and recommendations.
Article
Abstract Background Semantic interoperability of eHealth services within and across countries has been the main topic in several research projects. It is a key consideration for the European Commission to overcome the complexity of making different health information systems work together. This paper describes a study within the EU-funded project ASSESS CT, which focuses on assessing the potential of SNOMED CT as core reference terminology for semantic interoperability at European level. Objective This paper presents a quantitative analysis of the results obtained in ASSESS CT to determine the fitness of SNOMED CT for semantic interoperability. Methods The quantitative analysis consists of concept coverage, term coverage and inter-annotator agreement analysis of the annotation experiments related to six European languages (English, Swedish, French, Dutch, German and Finnish) and three scenarios: (i) ADOPT, where only SNOMED CT was used by the annotators; (ii) ALTERNATIVE, where a fixed set of terminologies from UMLS, excluding SNOMED CT, was used; and (iii) ABSTAIN, where any terminologies available in the current national infrastructure of the annotators’ country were used. For each language and each scenario, we configured the different terminology settings of the annotation experiments. Results There was a positive correlation between the number of concepts in each terminology setting and their concept and term coverage values. Inter-annotator agreement is low, irrespective of the terminology setting. Conclusions No significant differences were found between the analyses for the three scenarios, but availability of SNOMED CT for the assessed language is associated with increased concept coverage. Terminology setting size and concept and term coverage correlate positively up to a limit where more concepts do not significantly impact the coverage values. 
The results did not confirm the hypothesized inverse correlation between concept coverage and inter-annotator agreement (IAA) arising from a smaller number of available choices. The overall low IAA results pose a challenge for interoperability and indicate the need for further research to assess whether consistent terminology implementation is possible across Europe, e.g., by improving term coverage through localized versions of the selected terminologies, analysing the causes of low inter-annotator agreement, and improving tooling and guidance for annotators. The much lower term coverage of the Swedish version of SNOMED CT compared to the English version, together with the similarly high concept coverage obtained with both, reflects SNOMED CT's relevance as a hub that connects user interface terminologies and serves a variety of user needs.
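The two metrics at the heart of this study are easy to state precisely. The sketch below computes them on toy data: concept coverage as the fraction of corpus concepts representable in a terminology, and inter-annotator agreement as Cohen's kappa over two annotators' concept assignments. The corpus, terminology, and annotations are invented examples, not the ASSESS CT data.

```python
def coverage(corpus_concepts: set, terminology: set) -> float:
    """Fraction of corpus concepts that the terminology can represent."""
    return len(corpus_concepts & terminology) / len(corpus_concepts)

def cohens_kappa(a: list, b: list) -> float:
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    p_chance = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_observed - p_chance) / (1 - p_chance)

# Toy corpus and terminology (concept names stand in for concept IDs).
corpus = {"fever", "cough", "fatigue", "rash"}
terminology = {"fever", "cough", "headache"}
print(coverage(corpus, terminology))            # 0.5

# Two annotators assigning concept IDs to the same four text spans.
ann1 = ["C1", "C2", "C1", "C3"]
ann2 = ["C1", "C2", "C2", "C3"]
print(round(cohens_kappa(ann1, ann2), 2))       # 0.64
```

The chance-correction term is why kappa can stay low even when raw agreement looks respectable, which is one reason the study reports low IAA irrespective of terminology setting.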
Article
Traditionally, psychiatry has offered clinical insights through keen behavioral observation and a deep study of emotion. With the subsequent biological revolution in psychiatry displacing psychoanalysis, some psychiatrists were concerned that the field shifted from “brainless” to “mindless.”¹ Over the past 4 decades, behavioral expertise, once the strength of psychiatry, has diminished in importance as psychiatric research focused on pharmacology, genomics, and neuroscience, and much of psychiatric practice has become a series of brief clinical interactions focused on medication management. In research settings, assigning a diagnosis from the Diagnostic and Statistical Manual of Mental Disorders has become a surrogate for behavioral observation. In practice, few clinicians measure emotion, cognition, or behavior with any standard, validated tools.
Article
Achieving an interoperable health care system remains a top US policy priority. Despite substantial efforts to encourage interoperability, the first set of national data, from 2014, suggested that hospitals' engagement levels were low. With 2015 data now available, we examined the first national trends in engagement in four domains of interoperability: finding, sending, receiving, and integrating electronic patient information from outside providers. We found small gains, with 29.7 percent of hospitals engaging in all four domains in 2015 compared to 24.5 percent in 2014. The two domains with the most progress were sending (an increase of 8.1 percentage points) and receiving (an increase of 8.4 percentage points) information, while there was no change in integrating systems. Hospitals' use of data from outside providers for patient care was low, with only 18.7 percent of hospitals reporting that they "often" used these data. Our results reveal that hospitals' progress toward interoperability is slow and focused on moving information between hospitals, not on ensuring the usability of information in clinical decisions. © 2017 Project HOPE-The People-to-People Health Foundation, Inc.