Preprint

Leadership for AI Transformation in Health Care Organization: Scoping Review (Preprint)


Abstract

BACKGROUND Leaders of health care organizations are grappling with rising expenses and surging demand for health services. In response, they are increasingly embracing artificial intelligence (AI) technologies to improve patient care delivery, alleviate operational burdens, and improve the safety and quality of health care more efficiently.

OBJECTIVE In this paper, we map the current literature and synthesize insights on the role of leadership in driving AI transformation within health care organizations.

METHODS We conducted a comprehensive search across several databases, including MEDLINE (via Ovid), PsycINFO (via Ovid), CINAHL (via EBSCO), Business Source Premier (via EBSCO), and Canadian Business & Current Affairs (via ProQuest), covering articles published from 2015 to June 2023 that discuss AI transformation within the health care sector. We focused on empirical studies with a particular emphasis on leadership and used an inductive, thematic analysis approach to qualitatively map the evidence. The findings were reported in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines.

RESULTS A comprehensive review of 2813 unique abstracts led to the retrieval of 97 full-text articles, of which 22 were included for detailed assessment. Our literature mapping reveals that successful AI integration within health care organizations requires leadership engagement across technological, strategic, operational, and organizational domains. Leaders must combine technical expertise, adaptive strategies, and strong interpersonal skills to navigate a dynamic health care landscape shaped by complex regulatory, technological, and organizational factors.

CONCLUSIONS Leading AI transformation in health care requires a multidimensional approach, with leadership across technological, strategic, operational, and organizational domains. Organizations should implement a comprehensive leadership development strategy, including targeted training and cross-functional collaboration, to equip leaders with the skills needed for AI integration. Additionally, when upskilling or recruiting AI talent, priority should be given to individuals with a strong mix of technical expertise, adaptive capacity, and interpersonal acumen, enabling them to navigate the unique complexities of the health care environment.

Article
Importance Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification, and allocation of resources. Bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritized groups and other historically marginalized populations such as individuals with lower income. Objective To provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms to promote health and health care equity. Evidence Review The Agency for Healthcare Research and Quality and the National Institute for Minority Health and Health Disparities convened a diverse panel of experts to review evidence, hear from stakeholders, and receive community feedback. Findings The panel developed a conceptual framework to apply guiding principles across an algorithm’s life cycle, centering health and health care equity for patients and communities as the goal, within the wider context of structural racism and discrimination. Multiple stakeholders can mitigate and prevent bias at each phase of the algorithm life cycle, including problem formulation (phase 1); data selection, assessment, and management (phase 2); algorithm development, training, and validation (phase 3); deployment and integration of algorithms in intended settings (phase 4); and algorithm monitoring, maintenance, updating, or deimplementation (phase 5). Five principles should guide these efforts: (1) promote health and health care equity during all phases of the health care algorithm life cycle; (2) ensure health care algorithms and their use are transparent and explainable; (3) authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness; (4) explicitly identify health care algorithmic fairness issues and trade-offs; and (5) establish accountability for equity and fairness in outcomes from health care algorithms. 
Conclusions and Relevance Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithmic bias. Reforms should implement guiding principles that support promotion of health and health care equity in all phases of the algorithm life cycle as well as transparency and explainability, authentic community engagement and ethical partnerships, explicit identification of fairness issues and trade-offs, and accountability for equity and fairness.
Article
This comprehensive review delves into the burgeoning intersection of Artificial Intelligence (AI) and healthcare, presenting an extensive analysis of AI applications, innovations, and the associated challenges within the healthcare landscape. Examining the transformative potential of AI encompassing machine learning, deep learning, natural language processing, and computer vision, the paper surveys breakthroughs in diagnostics, predictive analytics, precision medicine, and operational enhancements within healthcare systems. Concurrently, it scrutinizes ethical considerations, algorithmic biases, interpretability, regulatory constraints, and integration complexities that impede the seamless adoption of AI in healthcare. Drawing insights from diverse sources, this review consolidates the current state of AI in healthcare, emphasizing the need for collaborative initiatives among healthcare practitioners, technologists, regulators, and ethicists to navigate challenges and unlock the holistic potential of AI for the betterment of healthcare.
Article
The rapid advancements in artificial intelligence (AI) have led to the development of sophisticated large language models (LLMs) such as GPT-4 and Bard. The potential implementation of LLMs in healthcare settings has already garnered considerable attention because of their diverse applications, which include facilitating clinical documentation, obtaining insurance pre-authorization, summarizing research papers, and working as chatbots that answer patients' questions about their specific data and concerns. While offering transformative potential, LLMs warrant a very cautious approach, since these models are trained differently from the AI-based medical technologies that are already regulated, especially within the critical context of caring for patients. The newest version, GPT-4, released in March 2023, raises both the potential of this technology to support multiple medical tasks and the risks of mishandling its results, which are of varying reliability, to a new level. Besides being an advanced LLM, it is able to read text in images and analyze the context of those images. Regulating GPT-4 and generative AI in medicine and healthcare without damaging their exciting and transformative potential is a timely and critical challenge to ensure safety, maintain ethical standards, and protect patient privacy. We argue that regulatory oversight should assure that medical professionals and patients can use LLMs without causing harm or compromising their data or privacy. This paper summarizes our practical recommendations for what we can expect from regulators to bring this vision to reality.
Article
Background: Artificial intelligence (AI) and digital health technological innovations from startup companies used in clinical practice can yield better health outcomes, reduce health care costs, and improve patients' experience. However, the integration, translation, and adoption of these technologies into clinical practice are plagued with many challenges and are lagging. Furthermore, explanations of the impediments to clinical translation are largely unknown and have not been systematically studied from the perspective of AI and digital health care startup founders and executives. Objective: The aim of this paper is to describe the barriers to integrating early-stage technologies in clinical practice and health care systems from the perspectives of digital health and health care AI founders and executives. Methods: A stakeholder focus group workshop was conducted with a sample of 10 early-stage digital health and health care AI founders and executives. Digital health, health care AI, digital health-focused venture capitalists, and physician executives were represented. Using an inductive thematic analysis approach, transcripts were organized, queried, and analyzed for thematic convergence. Results: We identified the following four categories of barriers in the integration of early-stage digital health innovations into clinical practice and health care systems: (1) lack of knowledge of health system technology procurement protocols and best practices, (2) demanding regulatory and validation requirements, (3) challenges within the health system technology procurement process, and (4) disadvantages of early-stage digital health companies compared to large technology conglomerates. Recommendations from the study participants were also synthesized to create a road map to mitigate the barriers to integrating early-stage or novel digital health technologies in clinical practice. 
Conclusions: Early-stage digital health and health care AI entrepreneurs identified numerous barriers to integrating digital health solutions into clinical practice. Mitigation initiatives should create opportunities for early-stage digital health technology companies and health care providers to interact, develop relationships, and use evidence-based research and best practices during health care technology procurement and evaluation processes.
Conference Paper
The application of Artificial Intelligence (AI) across a wide range of domains comes with both high expectations of its benefits and dire predictions of misuse. While AI systems have largely been driven by a technology-centered design approach, the potential societal consequences of AI have mobilized both HCI and AI researchers towards researching human-centered artificial intelligence (HCAI). However, there remains considerable ambiguity about what it means to frame, design and evaluate HCAI. This paper presents a critical review of the large corpus of peer-reviewed literature emerging on HCAI in order to characterize what the community is defining as HCAI. Our review contributes an overview and map of HCAI research based on work that explicitly mentions the terms 'human-centered artificial intelligence' or 'human-centered machine learning' or their variations, and suggests future challenges and research directions. The map reveals the breadth of research happening in HCAI, established clusters and the emerging areas of Interaction with AI and Ethical AI. The paper contributes a new definition of HCAI, and calls for greater collaboration between AI and HCI research, and new HCAI constructs.
Article
Background The National Health Service (NHS) aspires to be a world leader in Artificial Intelligence (AI) in healthcare; however, there are several barriers facing translation and implementation. A key enabler of AI within the NHS is the education and engagement of doctors, yet evidence suggests an overall lack of awareness of and engagement with AI. Research aim This qualitative study explores the experiences and views of doctor developers working with AI within the NHS, examining their role within medical AI discourse, their views on the wider implementation of AI, and how they consider the engagement of doctors with AI technologies may increase in the future. Methods This study involved eleven semi-structured, one-to-one interviews conducted with doctors working with AI in English healthcare. Data were subjected to thematic analysis. Results The findings demonstrate that there is an unstructured pathway for doctors to enter the field of AI. The doctors described the various challenges they had experienced during their careers, many arising from the differing demands of operating in a commercial and technological environment. Perceived awareness and engagement among frontline doctors was low, with two prominent barriers being the hype surrounding AI and a lack of protected time. The engagement of doctors is vital for both the development and adoption of AI. Conclusions AI offers big potential within the medical field but is still in its infancy. For the NHS to leverage the benefits of AI, it must educate and empower current and future doctors. This can be achieved through informative education within the undergraduate medical curriculum, protected time for current doctors to develop their understanding, and flexible opportunities for NHS doctors to explore this field.
Article
Background With large volumes of longitudinal data in electronic medical records from diverse patients, primary care is primed for disruption by artificial intelligence (AI) technology. With AI applications in primary care still at an early stage in Canada and most countries, there is a unique opportunity to engage key stakeholders in exploring how AI would be used and what implementation would look like. Objective To identify the barriers that patients, providers, and health leaders perceive in relation to implementing AI in primary care, and strategies to overcome them. Design 12 virtual deliberative dialogues. Dialogue data were thematically analyzed using a combination of rapid ethnographic assessment and interpretive description techniques. Setting Virtual sessions. Participants Participants from eight provinces in Canada, including 22 primary care service users, 21 interprofessional providers, and 5 health system leaders. Results The barriers that emerged from the deliberative dialogue sessions were grouped into four themes: (1) system and data readiness, (2) the potential for bias and inequity, (3) the regulation of AI and big data, and (4) the importance of people as technology enablers. Strategies to overcome the barriers in each of these themes were highlighted, with participatory co-design and iterative implementation voiced most strongly by participants. Limitations Only five health system leaders were included in the study, and no self-identifying Indigenous people; this is a limitation, as both groups may have provided unique perspectives on the study objective. Conclusions These findings provide insight into the barriers and facilitators associated with implementing AI in primary care settings from different perspectives. This will be vital as decisions regarding the future of AI in this space are shaped.
Article
Motivation: The price of medical treatment continues to rise due to (i) an increasing population; (ii) an aging population; (iii) disease prevalence; (iv) a rise in the frequency of patients who utilize health care services; and (v) rising prices. Objective: Artificial Intelligence (AI) is already well-known for its superiority in various healthcare applications, including the segmentation of lesions in images, speech recognition, smartphone personal assistants, navigation, ride-sharing apps, and many more. Our study is based on two hypotheses: (i) AI offers more economical solutions compared to conventional methods; (ii) AI treatment offers stronger economics compared to AI diagnosis. This novel study aims to evaluate AI technology in the context of healthcare costs, namely in the areas of diagnosis and treatment, and then compare it to the traditional or non-AI-based approaches. Methodology: PRISMA was used to select the best 200 studies for AI in healthcare with a primary focus on cost reduction, especially towards diagnosis and treatment. We defined the diagnosis and treatment architectures, investigated their characteristics, and categorized the roles that AI plays in the diagnostic and therapeutic paradigms. We experimented with various combinations of different assumptions by integrating AI and then comparing it against conventional costs. Lastly, we dwell on four powerful future concepts of AI, namely, pruning, bias, explainability, and regulatory approvals of AI systems. Conclusions: The model shows tremendous cost savings using AI tools in diagnosis and treatment. The economics of AI can be improved by incorporating pruning, reduction in AI bias, explainability, and regulatory approvals.
Article
There is a large proliferation of complex data-driven artificial intelligence (AI) applications in many aspects of our daily lives, but their implementation in healthcare is still limited. This scoping review takes a theoretical approach to examine the barriers and facilitators based on empirical data from existing implementations. We searched the major databases of relevant scientific publications for articles related to AI in clinical settings, published between 2015 and 2021. Based on the theoretical constructs of the Consolidated Framework for Implementation Research (CFIR), we used a deductive, followed by an inductive, approach to extract facilitators and barriers. After screening 2784 studies, 19 studies were included in this review. Most of the cited facilitators were related to engagement with and management of the implementation process, while the most cited barriers dealt with the intervention’s generalizability and interoperability with existing systems, as well as the inner setting’s data quality and availability. We noted per-study imbalances related to the reporting of the theoretical domains. Our findings suggest a greater need for implementation science expertise in AI implementation projects, to improve both the implementation process and the quality of scientific reporting.
Article
Objectives This study used the Technology-Organization-Environment (TOE) framework to identify the factors involved in the decisions made by integrated medical and healthcare organizations to adopt artificial intelligence (AI) elderly care service resources. Method The Decision-Making Trial and Evaluation Laboratory-Interpretive Structural Modeling (DEMATEL-ISM) method was used to construct a multilayer recursive structural model and to analyze the interrelationships between the levels. A MICMAC quadrant diagram was used for a cluster analysis. Results The ISM recursive structural model was divided into a total of seven layers. The bottom layer contained the four factors of High risk of data leakage (T1), Lack of awareness of the value and benefits of AI healthcare technology (T5), Lack of management leadership support (O1), and Government policies (E1). Having a low dependency but a high driving force, these factors are the root causes of adoption by healthcare organizations. The topmost layer contained the most direct factors influencing adoption, which had a high dependency but a low driving force: Competitive pressures (E2), Lack of patient trust (E5), and Lack of excellent partnerships (E7). Healthcare organizations are more concerned with technology and their environments when deciding to adopt intelligent healthcare resources. Conclusion Combining the DEMATEL, ISM, and MICMAC methods to construct the model provides new ideas for smart healthcare services for hospitals. The DEMATEL method favors the construction of the micro-model, while the ISM method favors the construction of the macro-model. Combining these two methods may reduce the loss of information within the system, simplify the matrix calculation workload, and improve the efficiency of operations while decomposing complex problems into several sub-problems in a more comprehensive and detailed way. Conducting cluster analysis of the adoption determinants using MICMAC quadrant diagrams may provide strong methodological guidance and decision-making recommendations for government departments, senior decision-makers in healthcare organizations, and policy-makers in associations in the senior care industry.
Article
Background Artificial intelligence (AI) is often heralded as a potential disruptor that will transform the practice of medicine. The amount of data collected and available in health care, coupled with advances in computational power, has contributed to advances in AI and an exponential growth of publications. However, the development of AI applications does not guarantee their adoption into routine practice. There is a risk that despite the resources invested, benefits for patients, staff, and society will not be realized if AI implementation is not better understood. Objective The aim of this study was to explore how the implementation of AI in health care practice has been described and researched in the literature by answering 3 questions: What are the characteristics of research on implementation of AI in practice? What types and applications of AI systems are described? What characteristics of the implementation process for AI systems are discernible? Methods A scoping review was conducted of MEDLINE (PubMed), Scopus, Web of Science, CINAHL, and PsycINFO databases to identify empirical studies of AI implementation in health care since 2011, in addition to snowball sampling of selected reference lists. Using Rayyan software, we screened titles and abstracts and selected full-text articles. Data from the included articles were charted and summarized. Results Of the 9218 records retrieved, 45 (0.49%) articles were included. The articles cover diverse clinical settings and disciplines; most (32/45, 71%) were published recently, were from high-income countries (33/45, 73%), and were intended for care providers (25/45, 56%). AI systems are predominantly intended for clinical care, particularly clinical care pertaining to patient-provider encounters. More than half (24/45, 53%) possess no action autonomy but rather support human decision-making. 
The focus of most research was on establishing the effectiveness of interventions (16/45, 35%) or related to technical and computational aspects of AI systems (11/45, 24%). Focus on the specifics of implementation processes does not yet seem to be a priority in research, and the use of frameworks to guide implementation is rare. Conclusions Our current empirical knowledge derives from implementations of AI systems with low action autonomy and approaches common to implementations of other types of information systems. To develop a specific and empirically based implementation framework, further research is needed on the more disruptive types of AI systems being implemented in routine care and on aspects unique to AI implementation in health care, such as building trust, addressing transparency issues, developing explainable and interpretable solutions, and addressing ethical concerns around privacy and data protection.
Article
Background Artificial intelligence (AI) for healthcare presents potential solutions to some of the challenges faced by health systems around the world. However, it is well established in implementation and innovation research that novel technologies are often resisted by healthcare leaders, which contributes to their slow and variable uptake. Although research on various stakeholders’ perspectives on AI implementation has been undertaken, very few studies have investigated leaders’ perspectives on the issue of AI implementation in healthcare. It is essential to understand the perspectives of healthcare leaders, because they have a key role in the implementation process of new technologies in healthcare. The aim of this study was to explore challenges perceived by leaders in a regional Swedish healthcare setting concerning the implementation of AI in healthcare. Methods The study takes an explorative qualitative approach. Individual, semi-structured interviews were conducted from October 2020 to May 2021 with 26 healthcare leaders. The analysis was performed using qualitative content analysis, with an inductive approach. Results The analysis yielded three categories, representing three types of challenge perceived to be linked with the implementation of AI in healthcare: 1) Conditions external to the healthcare system; 2) Capacity for strategic change management; 3) Transformation of healthcare professions and healthcare practice. Conclusions In conclusion, healthcare leaders highlighted several implementation challenges in relation to AI within and beyond the healthcare system in general and their organisations in particular. The challenges comprised conditions external to the healthcare system, internal capacity for strategic change management, along with transformation of healthcare professions and healthcare practice. 
The results point to the need to develop implementation strategies across healthcare organisations to address challenges to AI-specific capacity building. Laws and policies are needed to regulate the design and execution of effective AI implementation strategies. There is a need to invest time and resources in implementation processes, with collaboration across healthcare, county councils, and industry partnerships.
Article
Aim: This research was planned to identify nurse managers' opinions on artificial intelligence and robot nurses. Background: As the concepts of artificial intelligence and robot nurses become widespread in Turkey, nurse managers are expected to guide and cooperate with nurses in the future in regard to these technologies. Methods: The sample of the study consisted of 326 nurse managers, reached via an online questionnaire between September and November 2021. The Nurse Managers Information Form and the Question Form on Artificial Intelligence and Robot Nurses were used to collect data in this cross-sectional descriptive study. The descriptive statistics of the data were analyzed with numbers and percentages. The difference between knowledge of artificial intelligence and robot nurses and demographic characteristics was analyzed with the Chi-square test. Results: According to the findings, 66.9% of the nurse managers reported having previously heard the concepts of artificial intelligence and robot nurses. 67.2% stated that they thought robot nurses would benefit the nursing profession, but 86.2% voiced disbelief that robots would replace nurses. Conclusions: The majority of the participating nurse managers reported that artificial intelligence and robot nurses would not replace nurses but would be beneficial for nurses and would reduce their workload. Implications for nursing management: It should be ensured that nurse managers plan the areas in the hospital where artificial intelligence and robot nurses will be used and determine the possible risks. Awareness should be increased through in-service training, and patient safety and ethical problems regarding the use of artificial intelligence and robot nurses should be identified.
Article
Purpose This paper focuses on how the implementation of artificial intelligence (AI) algorithms challenges and changes existing communication practice in radiology, seen from a psychological, communicative, and clinical radiologist’s perspective. Method Based on a thematic literature search across radiology, management, and information systems research on AI implementation and robotics, we applied social and cognitive psychological concepts to analyse and interpret the communication challenges that the introduction of AI potentially imposes. Results and discussion We found that scepticism towards AI implementation is a well-documented reaction among medical professionals in general. We related this scepticism to AI’s potentially transformative effect on the practice of communication in radiology, and found that communication practices for including and collaborating with AI are insufficiently developed. Using multidisciplinary team meetings as an example, we propose that at least two psychological mechanisms in this insufficiently developed communication practice can be both crucial barriers to and drivers of AI implementation: 1) (loss of) sense of agency, meaning the experience of being in control in one’s job, and 2) (a threatened) self-image of being the expert when interacting with AI. Conclusion AI implementation potentially transforms the existing professional and social positions of radiologists and other medical professionals, which in multidisciplinary team meetings can hinder the intended use and benefit of the technology. We therefore recommend an increased focus on psychological and leadership processes to avoid these consequences, and call for the development of co-creating communication practices with AI.
Article
Background Although advanced analytical techniques falling under the umbrella heading of artificial intelligence (AI) may improve health care, the use of AI in health raises safety and ethical concerns. There are currently no internationally recognized governance mechanisms (policies, ethical standards, evaluation, and regulation) for developing and using AI technologies in health care. A lack of international consensus creates technical and social barriers to the use of health AI while potentially hampering market competition. Objective The aim of this study is to review current health data and AI governance mechanisms being developed or used by Global Digital Health Partnership (GDHP) member countries that commissioned this research, identify commonalities and gaps in approaches, identify examples of best practices, and understand the rationale for policies. Methods Data were collected through a scoping review of academic literature and a thematic analysis of policy documents published by selected GDHP member countries. The findings from this data collection and the literature were used to inform semistructured interviews with key senior policy makers from GDHP member countries exploring their countries’ experience of AI-driven technologies in health care and associated governance and inform a focus group with professionals working in international health and technology to discuss the themes and proposed policy recommendations. Policy recommendations were developed based on the aggregated research findings. Results As this is an empirical research paper, we primarily focused on reporting the results of the interviews and the focus group. Semistructured interviews (n=10) and a focus group (n=6) revealed 4 core areas for international collaborations: leadership and oversight, a whole systems approach covering the entire AI pipeline from data collection to model deployment and use, standards and regulatory processes, and engagement with stakeholders and the public. 
There was a broad range of maturity in health AI activity among the participants, with varying data infrastructure, application of standards across the AI life cycle, and strategic approaches to both development and deployment. A demand for further consistency at the international level and policies was identified to support a robust innovation pipeline. In total, 13 policy recommendations were developed to support GDHP member countries in overcoming core AI governance barriers and establishing common ground for international collaboration. Conclusions AI-driven technology research and development for health care outpaces the creation of supporting AI governance globally. International collaboration and coordination on AI governance for health care is needed to ensure coherent solutions and allow countries to support and benefit from each other’s work. International bodies and initiatives have a leading role to play in the international conversation, including the production of tools and sharing of practical approaches to the use of AI-driven technologies for health care.
Article
Full-text available
Achieving the United Nations Sustainable Development Goals by 2030 will be a challenge. Researchers around the world are working toward this aim across the breadth of healthcare. Technology, and more especially artificial intelligence, has the ability to propel us forward and support these goals but requires careful application. Artificial intelligence shows promise within healthcare, and there has been fast development in ophthalmology, cardiology, diabetes, and oncology. Healthcare is starting to learn from commercial industry leaders who utilize fast and continuous testing of algorithms to gain efficiency and find the optimum solutions. This article provides examples of how commercial industry is benefitting from utilizing AI and improving service delivery. The article then provides a specific example in eye health of how machine learning algorithms can be purposed to drive service delivery in a resource-limited setting by utilizing novel study designs such as response-adaptive randomization. We then provide six key considerations for researchers who wish to begin working with AI technology, which include collaboration, adopting a fast-fail culture, and developing capacity in ethics and data science.
Article
Full-text available
Background Significant efforts have been made to develop artificial intelligence (AI) solutions for health care improvement. Despite the enthusiasm, health care professionals still struggle to implement AI in their daily practice. Objective This paper aims to identify the implementation frameworks used to understand the application of AI in health care practice. Methods A scoping review was conducted using the Cochrane, Evidence Based Medicine Reviews, Embase, MEDLINE, and PsycINFO databases to identify publications that reported frameworks, models, and theories concerning AI implementation in health care. This review focused on studies published in English and investigating AI implementation in health care since 2000. A total of 2541 unique publications were retrieved from the databases and screened on titles and abstracts by 2 independent reviewers. Selected articles were thematically analyzed against the Nilsen taxonomy of implementation frameworks, and the Greenhalgh framework for the nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) of health care technologies. Results In total, 7 articles met all eligibility criteria for inclusion in the review, and 2 articles included formal frameworks that directly addressed AI implementation, whereas the other articles provided limited descriptions of elements influencing implementation. Collectively, the 7 articles identified elements that aligned with all the NASSS domains, but no single article comprehensively considered the factors known to influence technology implementation. New domains were identified, including dependency on data input and existing processes, shared decision-making, the role of human oversight, and ethics of population impact and inequality, suggesting that existing frameworks do not fully consider the unique needs of AI implementation. 
Conclusions This literature review demonstrates that understanding how to implement AI in health care practice is still in its early stages of development. Our findings suggest that further research is needed to provide the knowledge necessary to develop implementation frameworks to guide the future implementation of AI in clinical practice and highlight the opportunity to draw on existing knowledge from the field of implementation science.
Article
Full-text available
Artificial intelligence is revolutionizing, and strengthening, modern healthcare through technologies that can predict, grasp, learn, and act, whether employed to identify new relationships between genetic codes or to control surgery-assisting robots. It can detect minor patterns that humans would completely overlook. This study explores and discusses the various modern applications of AI in the health sector. In particular, the study focuses on the three most emerging areas of AI-powered healthcare: AI-led drug discovery, clinical trials, and patient care. The findings suggest that pharmaceutical firms have benefited from AI in healthcare by speeding up their drug discovery process and automating target identification. AI can also help eliminate time-consuming data monitoring methods. The findings also indicate that AI-assisted clinical trials are capable of handling massive volumes of data and producing highly accurate results. Medical AI companies develop systems that assist patients at every level. Patients' medical data are also analyzed by clinical intelligence, which provides insights to help them improve their quality of life.
Article
Full-text available
Artificial Intelligence (AI) is the notion of machines mimicking complex cognitive functions usually associated with humans, such as reasoning, predicting, planning, and problem-solving. With constantly growing repositories of data, improving algorithmic sophistication and faster computing resources, AI is becoming increasingly integrated into everyday use. In healthcare, AI represents an opportunity to increase safety, improve quality, and reduce the burden on increasingly overstretched systems. As applications expand, the need for responsible oversight and governance becomes even more important. Artificial intelligence in the delivery of healthcare carries new opportunities and challenges, including the need for greater transparency, the impact AI tools may have on a larger number of patients and families, and potential biases that may be introduced by the way an AI platform was developed and built. This study provides practical guidance in the development and implementation of AI applications in healthcare, with a focus on risk identification, management, and mitigation.
Article
Full-text available
Aim To develop a consensus paper on the central points of an international invitational think-tank on nursing and artificial intelligence (AI). Methods We established the Nursing and Artificial Intelligence Leadership (NAIL) Collaborative, comprising interdisciplinary experts in AI development, biomedical ethics, AI in primary care, AI legal aspects, philosophy of AI in health, nursing practice, implementation science, leaders in health informatics practice and international health informatics groups, a representative of patients and the public, and the Chair of the ITU/WHO Focus Group on Artificial Intelligence for Health. The NAIL Collaborative convened at a 3-day invitational think tank in autumn 2019. Activities included a pre-event survey, expert presentations, and working sessions to identify priority areas for action, opportunities, and recommendations to address these. In this paper, we summarize the key discussion points and notes from the aforementioned activities. Implications for nursing Nursing's limited current engagement with discourses on AI and health poses a risk that the profession is not part of the conversations that have potentially significant impacts on nursing practice. Conclusion There are numerous gaps and a timely need for the nursing profession to be among the leaders and drivers of conversations around AI in health systems. Impact We outline crucial gaps where focused effort is required for nursing to take a leadership role in shaping AI use in health systems. Three priorities were identified that need to be addressed in the near future: (a) nurses must understand the relationship between the data they collect and the AI technologies they use; (b) nurses need to be meaningfully involved in all stages of AI, from development to implementation; and (c) there is substantial untapped and unexplored potential for nursing to contribute to the development of AI technologies for global health and humanitarian efforts.
Article
Full-text available
Objective To identify the extent to which administrative tasks carried out by primary care staff in general practice could be automated. Design A mixed-method design including ethnographic case studies, focus groups, interviews and an online survey of automation experts. Setting Three urban and three rural general practice health centres in England selected for differences in list size and organisational characteristics. Participants Observation and interviews with 65 primary care staff in the following job roles: administrator, manager, general practitioner, healthcare assistant, nurse practitioner, pharmacy technician, phlebotomist, practice nurse, pharmacist, prescription clerk, receptionist, scanning clerk, secretary and medical summariser; together with a survey of 156 experts in automation technologies. Methods 330 hours of ethnographic observation and documentation of administrative tasks carried out by staff in each of the above job roles, followed by coding and classification; semistructured interviews with 10 general practitioners and 6 staff focus groups. The online survey of machine learning, artificial intelligence and robotics experts was analysed using an ordinal Gaussian process prediction model to estimate the automatability of the observed tasks. Results The model predicted that roughly 44% of administrative tasks carried out by staff in general practice are ‘mostly’ or ‘completely’ automatable using currently available technology. Discussions with practice staff underlined the need for a cautious approach to implementation. Conclusions There is considerable potential to extend the use of automation in primary care, but this will require careful implementation and ongoing evaluation.
Article
Full-text available
Objective The objective was to identify barriers and facilitators to the implementation of artificial intelligence (AI) applications in clinical radiology in The Netherlands. Materials and methods Using an embedded multiple case study, an exploratory, qualitative research design was followed. Data collection consisted of 24 semi-structured interviews from seven Dutch hospitals. The analysis of barriers and facilitators was guided by the recently published Non-adoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework for new medical technologies in healthcare organizations. Results Among the most important facilitating factors for implementation were the following: (i) pressure for cost containment in the Dutch healthcare system, (ii) high expectations of AI's potential added value, (iii) presence of hospital-wide innovation strategies, and (iv) presence of a "local champion." Among the most prominent hindering factors were the following: (i) inconsistent technical performance of AI applications, (ii) unstructured implementation processes, (iii) uncertain added value for clinical practice of AI applications, and (iv) large variance in acceptance and trust among direct (the radiologists) and indirect (the referring clinicians) adopters. Conclusion In order for AI applications to contribute to the improvement of the quality and efficiency of clinical radiology, implementation processes need to be carried out in a structured manner, thereby providing evidence on the clinical added value of AI applications. Key Points: (1) successful implementation of AI in radiology requires collaboration between radiologists and referring clinicians; (2) implementation of AI in radiology is facilitated by the presence of a local champion; (3) evidence on the clinical added value of AI in radiology is needed for successful implementation.
Article
Full-text available
Healthcare involves cyclic data processing to derive meaningful, actionable decisions. Rapid increases in clinical data have added to the occupational stress of healthcare workers, affecting their ability to provide quality and effective services. Health systems have to radically rethink strategies to ensure that staff are satisfied and actively supported in their jobs. Artificial intelligence (AI) has the potential to augment provider performance. This article reviews the available literature to identify AI opportunities that can potentially transform the role of healthcare providers. To leverage AI’s full potential, policymakers, industry, healthcare providers and patients have to address a new set of challenges. Optimizing the benefits of AI will require a balanced approach that enhances accountability and transparency while facilitating innovation.
Article
Full-text available
Discussions surrounding the future of artificial intelligence (AI) in healthcare often cause consternation among healthcare professionals. These feelings may stem from a lack of formal education on AI and on how to lead AI implementation in medical systems. To address this, our academic medical center hosted an educational summit exploring how to become a leader of AI in healthcare. This article presents three lessons learned from hosting this summit, thus providing guidance for developing medical curricula on the topic of AI in healthcare.
Article
Full-text available
Background: Medical education must adapt to different health care contexts, including digitalized health care systems and a digital generation of students in a hyper-connected world. The aims of this study are to identify and synthesize the values that medical educators need to implement in the curricula and to introduce representative educational programs. Methods: An integrative review was conducted to combine data from various research designs. We searched for articles on PubMed, Scopus, Web of Science, and EBSCO ERIC between 2011 and 2017. Key search terms were "undergraduate medical education," "future," "twenty-first century," "millennium," "curriculum," "teaching," "learning," and "assessment." We screened and extracted them according to inclusion and exclusion criteria from titles and abstracts. All authors read the full texts and discussed them to reach a consensus about the themes and subthemes. Data appraisal was performed using a modified Hawker's evaluation form. Results: Among the 7616 abstracts initially identified, 28 full-text articles were selected to reflect medical education trends and suggest suitable educational programs. The integrative themes and subthemes of future medical education are as follows: 1) a humanistic approach to patient safety that involves encouraging humanistic doctors and facilitating collaboration; 2) early experience and longitudinal integration through early exposure to patient-oriented integration and longitudinal integrated clerkships; 3) going beyond hospitals toward society by responding to changing community needs and showing respect for diversity; and 4) student-driven learning with advanced technology through active learning with individualization, social interaction, and resource accessibility. Conclusions: This review integrated the trends in undergraduate medical education in readiness for the anticipated changes in medical environments. 
The detailed programs introduced in this study could be useful for medical educators in the development of curricula. Further research is required to integrate the educational trends into graduate and continuing medical education, and to investigate the status or effects of innovative educational programs in each medical school or environment.
Article
Full-text available
As the efficacy of artificial intelligence (AI) in improving aspects of healthcare delivery is increasingly becoming evident, it becomes likely that AI will be incorporated in routine clinical care in the near future. This promise has led to growing focus and investment in AI medical applications both from governmental organizations and technological companies. However, concern has been expressed about the ethical and regulatory aspects of the application of AI in health care. These concerns include the possibility of biases, lack of transparency with certain AI algorithms, privacy concerns with the data used for training AI models, and safety and liability issues with AI application in clinical environments. While there has been extensive discussion about the ethics of AI in health care, there has been little dialogue or recommendation as to how to practically address these concerns in health care. In this article, we propose a governance model that aims to not only address the ethical and regulatory issues that arise out of the application of AI in health care, but also stimulate further discussion about governance of AI in health care.
Article
Full-text available
Objective: This paper explores the implications of artificial intelligence (AI) on the management of healthcare data and information and how AI technologies will affect the responsibilities and work of health information management (HIM) professionals. Methods: A literature review was conducted of both peer-reviewed literature and published opinions on current and future use of AI technology to collect, store, and use healthcare data. The authors also sought insights from key HIM leaders via semi-structured interviews conducted both on the phone and by email. Results: The following HIM practices are impacted by AI technologies: 1) automated medical coding and capturing AI-based information; 2) healthcare data management and data governance; 3) patient privacy and confidentiality; and 4) HIM workforce training and education. Discussion: HIM professionals must focus on improving the quality of coded data that is being used to develop AI applications. HIM professionals' ability to identify data patterns will be an important skill as automation advances, though additional skills in data analysis tools and techniques are needed. In addition, HIM professionals should consider how current patient privacy practices apply to AI application, development, and use. Conclusions: AI technology will continue to evolve, as will the role of HIM professionals, who are in a unique position to take on emerging roles with their depth of knowledge on the sources and origins of healthcare data. The challenge for HIM professionals is to identify leading practices for the management of healthcare data and information in an AI-enabled world.
Article
Full-text available
The complexity and rise of data in healthcare means that artificial intelligence (AI) will increasingly be applied within the field. Several types of AI are already being employed by payers and providers of care, and life sciences companies. The key categories of applications involve diagnosis and treatment recommendations, patient engagement and adherence, and administrative activities. Although there are many instances in which AI can perform healthcare tasks as well or better than humans, implementation factors will prevent large-scale automation of healthcare professional jobs for a considerable period. Ethical issues in the application of AI to healthcare are also discussed.
Article
Full-text available
Background: Since the advent of artificial intelligence (AI) in 1955, the applications of AI have increased over the years within a rapidly changing digital landscape where public expectations are on the rise, fed by social media, industry leaders, and medical practitioners. However, there was little interest in AI in medical education until the last two decades, with only a recent increase in the number of publications and citations in the field. To our knowledge, thus far, a limited number of articles have discussed or reviewed the current use of AI in medical education. Objective: This study aims to review the current applications of AI in medical education as well as the challenges of implementing AI in medical education. Methods: Medline (Ovid), EBSCOhost Education Resources Information Center (ERIC) and Education Source, and Web of Science were searched with explicit inclusion and exclusion criteria. Full text of the selected articles was analyzed using the extended Technology Acceptance Model and the Diffusion of Innovations theory. Data were subsequently pooled together and analyzed quantitatively. Results: A total of 37 articles were identified. Three primary uses of AI in medical education were identified: learning support (n=32), assessment of students' learning (n=4), and curriculum review (n=1). The main reasons for use of AI are its ability to provide feedback and a guided learning pathway and to decrease costs. Subgroup analysis revealed that medical undergraduates are the primary target audience for AI use. In addition, 34 articles described the challenges of AI implementation in medical education; two main reasons were identified: difficulty in assessing the effectiveness of AI in medical education and technical challenges while developing AI applications. Conclusions: The primary use of AI in medical education was for learning support, mainly due to its ability to provide individualized feedback. 
Little emphasis was placed on curriculum review and assessment of students' learning due to the lack of digitalization and the sensitive nature of examinations, respectively. The manipulation of big data also warrants safeguards to ensure data integrity. Methodological improvements are required to increase AI adoption by addressing the technical difficulties of creating an AI application and using novel methods to assess the effectiveness of AI. To better integrate AI into the medical profession, measures should be taken to introduce AI into the medical school curriculum for medical professionals to better understand AI algorithms and maximize their use.
Article
This Viewpoint discusses how regulators across the world should approach the legal and ethical challenges, including privacy, device regulation, competition, intellectual property rights, cybersecurity, and liability, raised by the medical use of large language models.
Article
Background: Artificial intelligence (AI) implementation in primary care is limited. Those set to be most impacted by AI technology in this setting should guide its application. We organized a national deliberative dialogue with primary care stakeholders from across Canada to explore how they thought AI should be applied in primary care. Methods: We conducted 12 virtual deliberative dialogues with participants from 8 Canadian provinces to identify shared priorities for applying AI in primary care. Dialogue data were thematically analyzed using interpretive description approaches. Results: Participants thought that AI should first be applied to documentation, practice operations, and triage tasks, in hopes of improving efficiency while maintaining person-centered delivery, relationships, and access. They viewed complex AI-driven clinical decision support and proactive care tools as impactful but recognized potential risks. Appropriate training and implementation support were the most important external enablers of safe, effective, and patient-centered use of AI in primary care settings. Interpretation: Our findings offer an agenda for the future application of AI in primary care grounded in the shared values of patients and providers. We propose that, from conception, AI developers work with primary care stakeholders as codesign partners, developing tools that respond to shared priorities.
Article
With the rapid evolution of data over the last few years, many new technologies have arisen, with artificial intelligence (AI) technologies at the top. AI, with its vast power, holds the potential to transform patient healthcare. Given the gaps revealed by the 2020 COVID-19 pandemic in healthcare systems, this research investigates the effects of using an artificial intelligence-driven public healthcare framework to enhance the decision-making process, using an extended version of Shaft and Vessey's (2006) cognitive fit model, in healthcare organizations in Saudi Arabia. The model was validated based on empirical data collected using an online questionnaire distributed to healthcare organizations in Saudi Arabia. The main sample participants were healthcare CEOs, senior managers/managers, doctors, nurses, and other relevant healthcare practitioners under the MoH involved in the decision-making process relating to COVID-19. The measurement model was validated using SEM analyses. Empirical results largely supported the proposed conceptual model, as all research hypotheses were significantly supported. This study makes several theoretical contributions. For example, it expands the theoretical horizon of Shaft and Vessey's (2006) cognitive fit theory by considering new mechanisms, such as the inclusion of G2G knowledge-based exchange, in addition to the moderating effect of experience-based decision-making (EBDM), for enhancing the decision-making process related to the COVID-19 pandemic. Research limitations and future research directions are also discussed at the end of this study.
Article
Aim: To describe nurse leaders' and digital service developers' perceptions of the future role of artificial intelligence in specialized medical care. Background: Use of artificial intelligence has rapidly increased in healthcare. However, nurse leaders' and developers' perceptions of artificial intelligence and its future in specialized medical care remain under-researched. Method: Descriptive qualitative methodology was applied. Data were collected through six focus groups, and interviews with nurse leaders (n=20) and digital service developers (n=10) conducted remotely in 2021 at a university hospital in Finland. The data were subjected to inductive content analysis. Results: The data yielded 25 sub-categories, 10 categories, and three main categories of participants' perceptions. The main categories were designated AI transforming: work, care & services, and organizations. Conclusions: According to our respondents, AI will have a significant future role in specialized medical care, but it will likely reinforce, rather than replace, clinicians or traditional care. They also believe that it may have several positive consequences for clinicians' and leaders' work as well as for organizations and patients. Implications for nursing management: Nurse leaders should be familiar with the potential of artificial intelligence, but also aware of the risks. Such leaders may provide better support for the development of artificial intelligence-based health services that improve clinicians' workflows.
Article
Aim: To investigate the influence of leader's innovation expectation on nurse's innovation behaviour, as well as the chain mediating effect of job control and creative self-efficacy between leader's innovation expectation and nurse's innovation behaviour. Background: The nurse's innovation behaviour is crucial in promoting medical artificial intelligence. Thus, clarifying the influencing factors of nurse's innovation behaviour has become a top priority. Methods: A cross-sectional survey was conducted on 263 Chinese nurses from tertiary and secondary hospitals in Hefei, Anhui province. Results: Leader's innovation expectation was positively related to nurse's innovation behaviour. Creative self-efficacy and job control each mediated the relationship between leader's innovation expectation and nurse's innovation behaviour. Furthermore, creative self-efficacy and job control played a chain mediation role between leader's innovation expectation and nurse's innovation behaviour. Conclusion: Leader's innovation expectation helps to enhance nurse's creative self-efficacy and job control, thereby enhancing nurse's enthusiasm for innovation. Implications for nursing management: Hospital managers and leaders should formulate intervention measures to increase leader's innovation expectation and nurse's creative self-efficacy and job control, and thereby encourage nurse's innovation behaviour.
Article
Artificial intelligence (AI) is increasingly adopted within human resource management (HRM) due to its potential to create value for consumers, employees, and organisations. However, recent studies have found that organisations are yet to experience the anticipated benefits from AI adoption, despite investing time, effort, and resources. The existing studies in HRM have examined the applications of AI, anticipated benefits, and its impact on the human workforce and organisations. The aim of this paper is to systematically review the multi-disciplinary literature stemming from international business, information management, operations management, general management, and HRM to provide a comprehensive and objective understanding of the organisational resources required to develop AI capability in HRM. Our findings show that organisations need to look beyond technical resources and put their emphasis on developing non-technical ones, such as human skills and competencies, leadership, team co-ordination, organisational culture and innovation mindset, governance strategy, and AI-employee integration strategies, to benefit from AI adoption. Based on these findings, we contribute five research propositions to advance AI scholarship in HRM. Theoretically, we identify the organisational resources necessary to achieve business benefits by proposing the AI capability framework, integrating resource-based view and knowledge-based view theories. From a practitioner's standpoint, our framework offers a systematic way for managers to objectively self-assess organisational readiness and develop strategies to adopt and implement AI-enabled practices and processes in HRM.
Article
Artificial intelligence (AI) in healthcare is becoming increasingly important, given its potential to generate and analyse healthcare data to improve patient care and reduce costs and clinical risk while enhancing administrative processes within organisations. AI can introduce new sources of growth, change how people work, and improve the effectiveness of their work. Consequently, implementing AI systems in healthcare can enable the optimisation of healthcare resources, facilitate a better patient experience, improve population health, reduce per capita costs, and improve the satisfaction of health professionals. To date, most studies have focused on the potential benefits of, and barriers to, implementing AI in healthcare, while only a few have explained the rational decision-making process for deploying new technologies in the healthcare system. In this study, we aim to fill this gap by investigating how AI supports the effective and efficient management of the healthcare system, examining the Humber River Hospital in Toronto using the case study methodology. Our key findings show that, to achieve the desired benefits from implementing technology in healthcare, hospitals need to undergo a business transformation that exploits technology. Finally, we conclude that only effective knowledge of technology will enable hospitals to become truly technological and digital.
Article
The increasing networking of IT systems, as well as the use of cyber-physical systems in the industrial environment, is raising the amount of data generated. To process this enormous amount of data and derive conclusions, companies use artificial intelligence (AI) more frequently. The increasing application and use of AI have a significant impact on socio-technical work systems. In particular, challenges and requirements for leaders and leadership can be identified. Accordingly, leaders and leadership are crucial for implementing and using AI successfully. This, together with the dynamic development of AI, requires further research on its impact on leaders and leadership to support companies with practice-proven guidelines and recommendations. To develop these, a comprehensive analysis of the existing literature has been conducted and will be the basis for further steps. The results of the literature analysis were grouped into four main clusters: Strategic Transformation Process, Qualification and Competencies, Culture, and Human-AI Interaction. The results are presented in detail, and an outlook on the further steps of research and development is given.
Article
Artificial intelligence (AI) is poised to broadly reshape medicine, potentially improving the experiences of both clinicians and patients. We discuss key findings from a 2-year weekly effort to track and share key developments in medical AI. We cover prospective studies and advances in medical image analysis, which have reduced the gap between research and deployment. We also address several promising avenues for novel medical AI research, including non-image data sources, unconventional problem formulations and human–AI collaboration. Finally, we consider serious technical and ethical challenges, spanning issues from data scarcity to racial bias. As these challenges are addressed, AI's potential may be realized, making healthcare more accurate, efficient and accessible for patients worldwide.
Article
This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Five literature databases were searched to support the following research question: how can the primary ethical risks presented by AI-health be categorised, and what issues must policymakers, regulators and developers consider in order to be 'ethically mindful'? A series of screening stages were carried out (for example, removing articles that focused on digital health in general, such as data sharing, data access, data privacy, surveillance/nudging, consent, ownership of health data, and evidence of efficacy), yielding a total of 156 papers that were included in the review. We find that ethical issues can be (a) epistemic, related to misguided, inconclusive or inscrutable evidence; (b) normative, related to unfair outcomes and transformative effects; or (c) related to traceability. We further find that these ethical issues arise at six levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. Finally, we outline a number of considerations for policymakers and regulators, mapping these to the existing literature and categorising each as epistemic, normative or traceability-related, at the relevant level of abstraction. Our goal is to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care, and improve the efficiency of health and care systems, whilst proactively avoiding the potential harms. We argue that if action is not swiftly taken in this regard, a new 'AI winter' could occur due to chilling effects related to a loss of public trust in the benefits of AI for health care.
Chapter
Big data and machine learning are having an impact on most aspects of modern life, from entertainment and commerce to healthcare. Netflix knows which films and series people prefer to watch, Amazon knows which items people like to buy, when and where, and Google knows which symptoms and conditions people are searching for. All this data can be used for very detailed personal profiling, which may be of great value for behavioral understanding and targeting but also has potential for predicting healthcare trends. There is great optimism that the application of artificial intelligence (AI) can provide substantial improvements in all areas of healthcare, from diagnostics to treatment. It is generally believed that AI tools will facilitate and enhance human work rather than replace the work of physicians and other healthcare staff as such. AI is ready to support healthcare personnel with a variety of tasks, from administrative workflow to clinical documentation and patient outreach, as well as specialized support such as image analysis, medical device automation, and patient monitoring. In this chapter, some of the major applications of AI in healthcare are discussed, covering both the applications that are directly associated with healthcare and those in the healthcare value chain, such as drug development and ambient assisted living.
Article
Artificial intelligence (AI) was first described in 1950; however, several limitations in early models prevented widespread acceptance and application to medicine. In the early 2000s, many of these limitations were overcome by the advent of deep learning. Now that AI systems are capable of analyzing complex algorithms and self-learning, we enter a new age in medicine in which AI can be applied to clinical practice through risk assessment models, improving diagnostic accuracy and workflow efficiency. This article presents a brief historical perspective on the evolution of AI over the last several decades and the introduction and development of AI in medicine in recent years. A brief summary of the major applications of AI in gastroenterology and endoscopy is also presented; these will be reviewed in further detail in several other articles in this issue of GIE.
Article
Increasingly, artificial intelligence (AI) algorithms are being applied to automatically assist or automate decisions. Such statistical models have been criticized in the existing literature, especially for producing cultural biases and for challenging our notions of knowledge. However, few studies have contributed to an essential understanding of the way in which algorithms are designed with particular truths to enable systematic decision-making. Drawing on an ethnographic study in a Scandinavian AI company, this article analyzes how truth is built through layered interpretative practices in applied AI for healthcare, and critically assesses how such practices shed light on the pragmatic notion of truth(s) in AI. The study identifies five practices, all of which show the difficulty of modeling fuzzy patient conditions into one firm truth. The key contribution is that truth goes from being a process of discovering a more 'right' truth to being a process of reinventing the existing truth and healthcare practice. These findings suggest that truth in applied AI is a key device for making predictive algorithms a viable business, and that developers are in a favorable position to make not only AI doable but also the very truth they intend to find and model. The study thereby shows how change is an inherent part of making AI systems, and that centralizing truth practices is a fruitful way of analyzing such changes and developers' agency. We argue for analytical awareness of how AI truth practices may prompt a world that is fit to algorithms rather than a world to which algorithms are fit.
Article
Across Canada, healthcare leaders are exploring the potential of artificial intelligence and advanced analytics to transform the healthcare system. This report shares a summary of the current state of healthcare analytics across major hospitals and public healthcare agencies in Canada. We present information on the current level of investment, data governance maturity, analytics talent, and the tools and models being leveraged across the nation. The findings point to an opportunity for enhanced collaboration in advanced analytics and the adoption of nascent artificial intelligence technologies in healthcare. The recommendations will help drive adoption in Canada, ultimately improving the patient experience and promoting better health outcomes for Canadians.
Article
Consolidation through mergers and acquisitions is occurring across health care as a strategic move to address the disruptive forces of complexity. While consolidation is improving the overall fitness and viability of health care organizations, it is having the opposite effect on the professionals working within them, who are reporting increasing rates of burnout from ongoing complexity in the health care environment. This happens in all organizations that try to respond to complexity with traditional bureaucratic leadership approaches. What is needed is to replace bureaucratic leadership with the networked approach of complexity leadership. The idea is not to "do more with less" but to "do things better." In this article, we show how to do this by applying complexity leadership to the nursing context. Complexity leadership is a framework for enabling people and organizations to adapt. It views leaders not as managerial implementers of top-down directives but as collaborators who work together to enhance the overall adaptability and fitness of the system. From a complexity leadership perspective, the role of nurse leaders should be not only to help the system run but also to help it run better by increasing organizational adaptability.
Article
The promise of artificial intelligence (AI) in health care offers substantial opportunities to improve patient and clinical team outcomes, reduce costs, and influence population health. Current data generation greatly exceeds human cognitive capacity to effectively manage information, and AI is likely to have an important and complementary role to human cognition to support delivery of personalized health care.¹ For example, recent innovations in AI have shown high levels of accuracy in imaging and signal detection tasks and are considered among the most mature tools in this domain.