Article · Literature Review

What Are Patients’ Perceptions and Attitudes Regarding the Use of Artificial Intelligence in Skin Cancer Screening and Diagnosis? Narrative Review

Article
Background Artificial intelligence (AI) is widely used in various medical fields, including diagnostic radiology, as a tool for greater efficiency, precision, and accuracy. The integration of AI as a radiological diagnostic tool has the potential to mitigate delays in diagnosis, which could, in turn, impact patients’ prognosis and treatment outcomes. The literature shows conflicting results regarding patients’ attitudes toward AI as a diagnostic tool. To the best of our knowledge, no similar study has been conducted in Saudi Arabia.
Objective The objectives of this study were to examine patients’ attitudes toward the use of AI as a tool in diagnostic radiology at King Khalid University Hospital, Saudi Arabia, and to explore potential associations between patients’ attitudes and various sociodemographic factors.
Methods This descriptive-analytical cross-sectional study was conducted in a tertiary care hospital. Data were collected from patients scheduled for radiological imaging through a validated self-administered questionnaire. The main outcome was patients’ attitudes toward the use of AI in radiology, measured as mean scores on 5 factors: distrust and accountability (factor 1), procedural knowledge (factor 2), personal interaction and communication (factor 3), efficiency (factor 4), and methods of providing information to patients (factor 5). Data were analyzed using the Student t test and one-way analysis of variance, followed by post hoc tests and multivariable analysis.
Results A total of 382 participants (n=273, 71.5% women and n=109, 28.5% men) completed the surveys and were included in the analysis. The mean age of the respondents was 39.51 (SD 13.26) years. Participants favored physicians over AI for procedural knowledge, personal interaction, and being informed, but demonstrated a neutral attitude toward distrust and accountability and toward efficiency. Marital status was associated with distrust and accountability, procedural knowledge, and personal interaction. Associations were also found between self-reported health status and being informed, and between field of specialization and distrust and accountability.
Conclusions Patients were keen to understand the work of AI in radiology but favored personal interaction with a radiologist. Patients were impartial toward AI replacing radiologists and toward the efficiency of AI, which should be a consideration in future policy development and integration. Future research involving multicenter studies in different regions of Saudi Arabia is required.
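The group comparisons reported above rely on the two-sample Student t test. As a minimal sketch with hypothetical attitude scores (illustrative values only, not the study's data), the equal-variance t statistic for two independent groups can be computed as:

```python
import math
from statistics import mean, variance

def student_t(a, b):
    """Two-sample Student t statistic with pooled variance
    (equal-variance form, as used for comparing mean factor scores)."""
    na, nb = len(a), len(b)
    # Pooled variance combines both samples' spread, weighted by degrees of freedom.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical mean attitude scores (1-5 Likert) for two marital-status groups:
group_a = [3.2, 3.8, 4.1, 2.9, 3.5]
group_b = [2.4, 2.9, 3.1, 2.6, 2.8]
t = student_t(group_a, group_b)
```

A large absolute t value (relative to the t distribution with n_a + n_b − 2 degrees of freedom) is what would flag an association such as the one with marital status reported here.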
Article
Background Cervical cancer is the fourth most frequent cancer among women, with 90% of cervical cancer-related deaths occurring in low- and middle-income countries like Cameroon. Visual inspection with acetic acid is often used in low-resource settings to screen for cervical cancer; however, its accuracy can be limited. To address this issue, the Swiss Federal Institute of Technology Lausanne and the University Hospitals of Geneva are collaborating to develop an automated smartphone-based image classifier that serves as a computer-aided diagnosis tool for cancerous lesions. The primary objective of this study is to explore the acceptability and perspectives of women in Dschang regarding the use of a cervical cancer screening tool relying on artificial intelligence. A secondary objective is to understand the preferred form and type of information women would like to receive regarding this artificial intelligence-based screening tool.
Methods A qualitative methodology was employed to gain better insight into the women’s perspectives. Participants, aged between 30 and 49, were invited from both rural and urban regions, and semi-structured interviews were conducted using a pre-tested interview guide. The focus groups were divided on the basis of level of education as well as HPV status. The interviews were audio-recorded, transcribed, and coded using the ATLAS.ti software.
Results A total of 32 participants took part in the six focus groups, and 38% of participants had a primary level of education. The perspectives identified were classified using an adapted version of the Technology Acceptance Model. Key factors influencing the acceptability of artificial intelligence include privacy concerns, perceived usefulness, trust in the competence of providers, accuracy of the tool, and the potential negative impact of smartphones.
Conclusion The results suggest that an artificial intelligence-based screening tool for cervical cancer is mostly acceptable to the women in Dschang. By ensuring patient confidentiality and providing clear explanations, acceptance can be fostered in the community and uptake of cervical cancer screening can be improved.
Trial registration Ethical Cantonal Board of Geneva, Switzerland (CCER, N°2017–0110 and CER-amendment n°4) and Cameroonian National Ethics Committee for Human Health Research (N°2022/12/1518/CE/CNERSH/SP). NCT03757299.
Article
Background Evidence-based practice (EBP) involves making clinical decisions based on three sources of information: evidence, clinical experience, and patient preferences. Despite the popularization of EBP, research has shown that there are many barriers to achieving the goals of the EBP model. The use of artificial intelligence (AI) in healthcare has been proposed as a means to improve clinical decision-making. The aim of this paper was to pinpoint key challenges pertaining to the three pillars of EBP and to investigate the potential of AI in surmounting these challenges and contributing to a more evidence-based healthcare practice. To achieve this, we conducted a selective review of the literature on EBP and the integration of AI in healthcare.
Challenges with the three components of EBP Clinical decision-making in line with the EBP model presents several challenges. The availability and existence of robust evidence sometimes pose limitations due to slow generation and dissemination processes, as well as the scarcity of high-quality evidence. Direct application of evidence is not always viable because studies often involve patient groups distinct from those encountered in routine healthcare. Clinicians need to rely on their clinical experience to interpret the relevance of evidence and contextualize it within the unique needs of their patients. Moreover, clinical decision-making might be influenced by cognitive and implicit biases. Achieving patient involvement and shared decision-making between clinicians and patients remains challenging in routine healthcare practice due to factors such as low levels of health literacy among patients and their reluctance to actively participate, barriers rooted in clinicians' attitudes, scepticism towards patient knowledge, ineffective communication strategies, busy healthcare environments, and limited resources.
AI assistance for the three components of EBP AI presents a promising solution to several challenges inherent in the research process, from conducting studies, generating evidence, synthesizing findings, and disseminating crucial information to clinicians, to implementing these findings in routine practice. AI systems have a distinct advantage over human clinicians in processing specific types of data and information, and the use of AI has shown great promise in areas such as image analysis. AI also presents promising avenues to enhance patient engagement by saving time for clinicians, and it has the potential to increase patient autonomy, although there is a lack of research on this issue.
Conclusion This review underscores AI's potential to augment evidence-based healthcare practices, potentially marking the emergence of EBP 2.0. However, there are also uncertainties regarding how AI will contribute to a more evidence-based healthcare. Hence, empirical research is essential to validate and substantiate various aspects of AI use in healthcare.
Article
This article presents global cancer statistics by world region for the year 2022 based on updated estimates from the International Agency for Research on Cancer (IARC). There were close to 20 million new cases of cancer in the year 2022 (including nonmelanoma skin cancers [NMSCs]) alongside 9.7 million deaths from cancer (including NMSC). The estimates suggest that approximately one in five men or women develop cancer in a lifetime, whereas around one in nine men and one in 12 women die from it. Lung cancer was the most frequently diagnosed cancer in 2022, responsible for almost 2.5 million new cases, or one in eight cancers worldwide (12.4% of all cancers globally), followed by cancers of the female breast (11.6%), colorectum (9.6%), prostate (7.3%), and stomach (4.9%). Lung cancer was also the leading cause of cancer death, with an estimated 1.8 million deaths (18.7%), followed by colorectal (9.3%), liver (7.8%), female breast (6.9%), and stomach (6.8%) cancers. Breast cancer and lung cancer were the most frequent cancers in women and men, respectively (both cases and deaths). Incidence rates (including NMSC) varied from four‐fold to five‐fold across world regions, from over 500 in Australia/New Zealand (507.9 per 100,000) to under 100 in Western Africa (97.1 per 100,000) among men, and from over 400 in Australia/New Zealand (410.5 per 100,000) to close to 100 in South‐Central Asia (103.3 per 100,000) among women. The authors examine the geographic variability across 20 world regions for the 10 leading cancer types, discussing recent trends, the underlying determinants, and the prospects for global cancer prevention and control. 
With demographics‐based predictions indicating that the number of new cases of cancer will reach 35 million by 2050, investments in prevention, including the targeting of key risk factors for cancer (including smoking, overweight and obesity, and infection), could avert millions of future cancer diagnoses and save many lives worldwide, bringing huge economic as well as societal dividends to countries over the forthcoming decades.
Article
Generative Artificial Intelligence (GAI) has sparked a transformative wave across various domains, including machine learning, healthcare, business, and entertainment, owing to its remarkable ability to generate lifelike data. This comprehensive survey offers a meticulous examination of the privacy and security challenges inherent to GAI. It provides five pivotal perspectives essential for a comprehensive understanding of these intricacies. The paper encompasses discussions on GAI architectures, diverse generative model types, practical applications, and recent advancements within the field. In addition, it highlights current security strategies and proposes sustainable solutions, emphasizing user, developer, institutional, and policymaker involvement.
Article
Background Artificial intelligence (AI) shows promising potential to enhance human decision-making as synergistic augmented intelligence (AuI) but requires critical evaluation for skin cancer screening in a real-world setting.
Objectives To investigate the perspectives of patients and dermatologists after skin cancer screening by human, artificial, and augmented intelligence.
Methods A prospective comparative cohort study conducted at the University Hospital Basel included 205 patients (at high risk of developing melanoma, with resected or advanced disease) and 8 dermatologists. Patients underwent skin cancer screening by a dermatologist with subsequent 2D and 3D total-body photography (TBP). Any suspicious and all melanocytic skin lesions ≥3 mm were imaged with digital dermoscopes and classified by corresponding convolutional neural networks (CNNs). Excisions were performed based on the dermatologist's melanoma suspicion, study-defined elevated CNN risk scores, and/or melanoma suspicion by AuI. Subsequently, all patients and dermatologists were surveyed about their experience using questionnaires, including quantification of patients' sense of safety following the different examinations (subjective safety score (SSS): 0–10).
Results Most patients believed AI could improve diagnostic performance (95.5%, n = 192/201). In total, 83.4% preferred AuI-based skin cancer screening over examination by AI or a dermatologist alone (3D-TBP: 61.3%; 2D-TBP: 22.1%; n = 199). Regarding the SSS, AuI induced a significantly higher feeling of safety than AI (mean SSS (mSSS): 9.5 vs. 7.7, p < 0.0001) or dermatologist screening alone (mSSS: 9.5 vs. 9.1, p = 0.001). Most dermatologists expressed high trust in AI examination results (3D-TBP: 90.2%; 2D-TBP: 96.1%; n = 205). In 68.3% of the examinations, dermatologists felt that diagnostic accuracy improved through additional AI assessment (n = 140/205). Beginners (<2 years' dermoscopic experience) in particular felt that AI facilitated their clinical work (61.8%, n = 94/152), compared with experts (>5 years' dermoscopic experience; 20.9%, n = 9/43). Conversely, in divergent risk assessments, only 1.5% of dermatologists trusted a benign CNN classification more than their personal suspicion of malignancy (n = 3/205).
Conclusions While patients already prefer AuI with 3D-TBP for melanoma recognition, dermatologists continue to rely largely on their own decision-making despite high confidence in AI results.
Trial Registration ClinicalTrials.gov (NCT04605822).
Article
Background Understanding women’s perspectives can help to create an effective and acceptable artificial intelligence (AI) implementation for triaging mammograms, ensuring a high proportion of screening-detected cancers. This study aimed to explore Swedish women’s perceptions and attitudes towards the use of AI in mammography.
Method Semistructured interviews were conducted with 16 women recruited in the spring of 2023 at Capio S:t Görans Hospital, Sweden, during an ongoing clinical trial of AI in screening (ScreenTrustCAD, NCT04778670) with Philips equipment. The interview transcripts were analysed using inductive thematic content analysis.
Results In general, women viewed AI as an excellent complementary tool to help radiologists in their decision-making rather than a complete replacement of their expertise. To trust the AI, the women requested a thorough evaluation, transparency about AI usage in healthcare, and the involvement of a radiologist in the assessment. They would rather accept the worry of being called in more often for scans than risk having a sign of cancer overlooked. They expressed substantial trust in the healthcare system if the implementation of AI were to become standard practice.
Conclusion The findings suggest that the interviewed women, in general, hold a positive attitude towards the implementation of AI in mammography; nonetheless, they expect and demand more from an AI than from a radiologist. Effective communication regarding the role and limitations of AI is crucial to ensure that patients understand the purpose and potential outcomes of AI-assisted healthcare.
Article
Background Artificial intelligence (AI) is a rapidly advancing field that is beginning to enter the practice of medicine. Primary care is a cornerstone of medicine and deals with challenges such as physician shortage and burnout, which impact patient care. AI and its application via digital health is increasingly presented as a possible solution. However, there is a scarcity of research focusing on primary care physician (PCP) attitudes toward AI. This study examines PCP views on AI in primary care, exploring its potential impact on topics pertinent to primary care such as the doctor-patient relationship and clinical workflow. By doing so, we aim to inform primary care stakeholders and encourage successful, equitable uptake of future AI tools. To our knowledge, our study is the first to explore PCP attitudes using specific primary care AI use cases rather than discussing AI in medicine in general terms.
Methods From June to August 2023, we conducted a survey among 47 primary care physicians affiliated with a large academic health system in Southern California. The survey quantified attitudes toward AI in general as well as toward two specific AI use cases. Additionally, we conducted interviews with 15 survey respondents.
Results Our findings suggest that PCPs have largely positive views of AI; however, attitudes often hinged on the context of adoption. While some concerns reported by PCPs regarding AI in primary care focused on technology (accuracy, safety, bias), many focused on people-and-process factors (workflow, equity, reimbursement, the doctor-patient relationship).
Conclusion Our study offers nuanced insights into PCP attitudes towards AI in primary care and highlights the need for primary care stakeholder alignment on the key issues raised by PCPs. AI initiatives that fail to address both the technological and the people-and-process concerns raised by PCPs may struggle to make an impact.
Article
Artificial intelligence (AI) technologies in medicine are gradually changing biomedical research and patient care. High expectations and promises from novel AI applications aiming to positively impact society raise new ethical considerations for patients and caregivers who use these technologies. Based on a qualitative content analysis of semi-structured interviews and focus groups with healthcare professionals (HCPs), patients, and family members of patients with Parkinson’s Disease (PD), the present study investigates participant views on the comparative benefits and problems of using human versus AI predictive computer vision health monitoring, as well as participants’ ethical concerns regarding these technologies. Participants presumed that AI monitoring would enhance information sharing and treatment, but voiced concerns about data ownership, data protection, commercialization of patient data, and privacy at home. They highlighted that privacy issues at home and data security issues are often linked and should be investigated together. Findings may help technologists, HCPs, and policymakers determine how to incorporate stakeholders’ intersecting but divergent concerns into developing and implementing AI PD monitoring tools.
Article
Background There is great interest in using artificial intelligence (AI) to screen for skin cancer, fueled by a rising incidence of skin cancer and an increasing scarcity of trained dermatologists. AI systems capable of identifying melanoma could save lives, enable immediate access to screenings, and reduce unnecessary care and healthcare costs. While such AI-based systems are useful from a public health perspective, past research has shown that individual patients are very hesitant about being examined by an AI system.
Objective The aim of this study was twofold: (1) to determine the relative importance of the provider (in-person physician, physician via teledermatology, AI, personalized AI), the costs of screening (free, 10€, 25€, 40€; 1€=US $1.09), and the waiting time (immediate, 1 day, 1 week, 4 weeks) as attributes contributing to patients’ choices of a particular mode of skin cancer screening; and (2) to investigate whether sociodemographic characteristics, especially age, were systematically related to participants’ individual choices.
Methods A choice-based conjoint analysis was used to examine the acceptance of medical AI for skin cancer screening from the patient’s perspective. Participants responded to 12 choice sets, each containing three screening variants, where each variant was described through the attributes of provider, costs, and waiting time. Furthermore, the impacts of sociodemographic characteristics (age, gender, income, job status, and educational background) on the choices were assessed.
Results Among the 383 clicks on the survey link, a total of 126 (32.9%) respondents completed the online survey. The conjoint analysis showed that the three attributes contributed roughly equally to the participants’ choices, with provider being the most important attribute. Inspecting the individual part-worths of the conjoint attributes showed that treatment by a physician was the most preferred modality, followed by electronic consultation with a physician and personalized AI; the lowest scores were found for the three AI levels. Concerning the relationship between sociodemographic characteristics and relative importance, only age showed a significant positive association with the importance of the provider attribute (r=0.21, P=.02): younger participants put less importance on the provider than older participants. All other correlations were not significant.
Conclusions This study adds to the growing body of research using choice-based experiments to investigate the acceptance of AI in health contexts. Future studies are needed to explore the reasons why AI is accepted or rejected and whether sociodemographic characteristics are associated with this decision.
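In conjoint analysis, the relative importance of an attribute is conventionally derived from the range of its part-worth utilities. A minimal sketch of that calculation, with hypothetical part-worths chosen only to illustrate the method (not the study's estimates):

```python
# Hypothetical part-worth utilities for the three screening attributes
# (illustrative values only, not taken from the study).
part_worths = {
    "provider":     {"in-person physician": 0.9, "teledermatology": 0.3,
                     "personalized AI": -0.4, "AI": -0.8},
    "cost":         {"free": 0.6, "10 EUR": 0.2, "25 EUR": -0.3, "40 EUR": -0.5},
    "waiting time": {"immediate": 0.5, "1 day": 0.2, "1 week": -0.2, "4 weeks": -0.5},
}

def relative_importance(pw):
    """Importance of each attribute = its part-worth range / sum of all ranges."""
    ranges = {attr: max(levels.values()) - min(levels.values())
              for attr, levels in pw.items()}
    total = sum(ranges.values())
    return {attr: r / total for attr, r in ranges.items()}

importance = relative_importance(part_worths)
for attr, imp in importance.items():
    print(f"{attr}: {imp:.1%}")  # roughly equal shares, provider largest
```

With these illustrative values the three importances come out roughly equal, with provider largest, mirroring the pattern the abstract describes.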
Article
The use of artificial intelligence as a medical device (AIaMD) in healthcare systems is increasing rapidly. In dermatology, this has been accelerated in response to increasing skin cancer referral rates, workforce shortages, and the backlog generated by the COVID-19 pandemic. Evidence regarding patient perspectives on AIaMD is currently lacking in the literature. Patient acceptability is fundamental if this novel technology is to be effectively integrated into care pathways, and patients must be confident that it is implemented safely, legally, and ethically. A prospective, single-center, single-arm, masked, non-inferiority, adaptive, group sequential design trial recruited patients referred to a teledermatology cancer pathway. AIaMD assessments of dermoscopic images were compared with the clinical or histological diagnosis to assess performance (NCT04123678). Participants completed an online questionnaire to evaluate their views regarding the use of AIaMD in the skin cancer pathway. A total of 268 responses were received between February 2020 and August 2021. The majority of respondents were female (57.5%) and had Fitzpatrick type I-II skin (81.3%); respondents ranged in age from 18 to 93 years, and all six skin types were represented. Overall, there was a positive sentiment regarding the potential use of AIaMD in skin cancer pathways. The majority of respondents felt confident in computers being used to help doctors diagnose and formulate management plans (median = 70; interquartile range (IQR) = 50–95) and as a support tool for general practitioners when assessing skin lesions (median = 85; IQR = 65–100). Respondents were comfortable having their photographs taken with a mobile phone device (median = 95; IQR = 70–100), which is similar to other studies assessing patient acceptability of teledermatology services. To the best of our knowledge, this is the first comprehensive study evaluating patient perspectives of AIaMD in skin cancer pathways in the UK. Patient involvement is essential for the development and implementation of new technologies, and continued end-user feedback will allow refinement of services to ensure patient acceptability. This study demonstrates patient acceptability of the use of AIaMD in both primary and secondary care settings.
Article
This study aimed to explore the experiences, perceptions, knowledge, concerns, and intentions of Generation Z (Gen Z) students with Generation X (Gen X) and Generation Y (Gen Y) teachers regarding the use of generative AI (GenAI) in higher education. A sample of students and teachers were recruited to investigate the above using a survey consisting of both open and closed questions. The findings showed that Gen Z participants were generally optimistic about the potential benefits of GenAI, including enhanced productivity, efficiency, and personalized learning, and expressed intentions to use GenAI for various educational purposes. Gen X and Gen Y teachers acknowledged the potential benefits of GenAI but expressed heightened concerns about overreliance, ethical and pedagogical implications, emphasizing the need for proper guidelines and policies to ensure responsible use of the technology. The study highlighted the importance of combining technology with traditional teaching methods to provide a more effective learning experience. Implications of the findings include the need to develop evidence-based guidelines and policies for GenAI integration, foster critical thinking and digital literacy skills among students, and promote responsible use of GenAI technologies in higher education.
Article
Billions of dollars are being invested into developing medical artificial intelligence (AI) systems and yet public opinion of AI in the medical field seems to be mixed. Although high expectations for the future of medical AI do exist in the American public, anxiety and uncertainty about what it can do and how it works is widespread. Continuing evaluation of public opinion on AI in healthcare is necessary to ensure alignment between patient attitudes and the technologies adopted. We conducted a representative-sample survey (total N = 203) to measure the trust of the American public towards medical AI. Primarily, we contrasted preferences for AI and human professionals to be medical decision-makers. Additionally, we measured expectations for the impact and use of medical AI in the future. We present four noteworthy results: (1) The general public strongly prefers human medical professionals make medical decisions, while at the same time believing they are more likely to make culturally biased decisions than AI. (2) The general public is more comfortable with a human reading their medical records than an AI, both now and “100 years from now.” (3) The general public is nearly evenly split between those who would trust their own doctor to use AI and those who would not. (4) Respondents expect AI will improve medical treatment but more so in the distant future than immediately.
Article
Artificial intelligence (AI) is expected to improve healthcare outcomes by facilitating early diagnosis, reducing the medical administrative burden, aiding drug development, personalising medical and oncological management, monitoring healthcare parameters on an individual basis, and allowing clinicians to spend more time with their patients. In a post-pandemic world where there is a drive to deliver healthcare efficiently and to manage the long waiting times patients face in accessing care, AI has an important role in supporting clinicians and healthcare systems to streamline care pathways and provide timely, high-quality care for patients. Despite AI technologies having been used in healthcare for some decades, and despite all the theoretical potential of AI, uptake in healthcare has been uneven and slower than anticipated, and a number of barriers, both overt and covert, have limited its incorporation. This literature review highlighted barriers in six key areas: ethical, technological, liability and regulatory, workforce, social, and patient safety barriers. Defining and understanding the barriers preventing the acceptance and implementation of AI in healthcare settings will enable clinical staff and healthcare leaders to overcome the identified hurdles and incorporate AI technologies for the benefit of patients and clinical staff.
Article
The development of artificial intelligence (AI) in healthcare is accelerating rapidly. Beyond the urge for technological optimization, public perceptions and preferences regarding the application of such technologies remain poorly understood. Risk and benefit perceptions of novel technologies are key drivers of successful implementation, so it is crucial to understand the factors that condition these perceptions. In this study, we draw on the risk perception and human-AI interaction literature to examine how explicit (i.e., deliberate) and implicit (i.e., automatic) comparative trust associations with AI versus physicians, as well as knowledge about AI, relate to likelihood perceptions of the risks and benefits of AI in healthcare and to preferences for the integration of AI in healthcare. We use survey data (N = 378) to specify a path model. Results reveal that the path from implicit comparative trust associations to relative preferences for AI over physicians is significant only through risk perceptions, not through benefit perceptions; this finding is reversed for AI knowledge. Explicit comparative trust associations relate to AI preference through both risk and benefit perceptions. These findings indicate that risk perceptions of AI in healthcare might be driven more strongly by affect-laden factors than benefit perceptions, which in turn might depend more on reflective cognition. Implications of our findings and directions for future research are discussed in light of the conceptualization of trust as a heuristic and of dual-process theories of judgment and decision-making. Regarding the design and implementation of AI-based healthcare technologies, our findings suggest that a holistic integration of public viewpoints is warranted.
Article
Background and Aims The prospect of using artificial intelligence (AI) in healthcare is bright and promising: its use can have a significant impact on cost reduction and decrease the possibility of error and negligence among healthcare workers. This study aims to investigate the level of knowledge, attitude, and acceptance of AI among Iranian physicians and nurses.
Methods This cross-sectional descriptive-analytical study was conducted on 400 physicians and nurses in eight public university hospitals located in Tehran. Convenience sampling was used, and data were collected with researcher-made questionnaires. Statistical analysis was done with SPSS 21; the mean and standard deviation were reported, and Chi-square and Fisher's exact tests were used.
Results In this study, the participants' level of knowledge was average (14.66 ± 4.53), their attitude toward AI was relatively favorable (47.81 ± 6.74), and their level of acceptance of AI was average (103.19 ± 13.70). Moreover, from the participants' perspective, AI in medicine is most widely used in increasing the accuracy of diagnostic tests (86.5%), identifying drug interactions (82.75%), and helping to analyze medical tests and imaging (80%). There was a statistically significant relationship between acceptance of AI and the participants' level of education (p = 0.028), participation in an AI training course (p = 0.022), and the hospital department where they worked (p < 0.001).
Conclusion In this study, both the participants' knowledge of AI and their acceptance of it were at an average level, and their attitude toward AI was relatively favorable, which contrasts with the very rapid and inevitable expansion of AI. Although our participants were aware of the growing use of AI in medicine, they had a cautious attitude toward it.
Article
In this comprehensive study, insights from 1389 scholars across the US, UK, Germany, and Switzerland shed light on the multifaceted perceptions of artificial intelligence (AI). AI’s burgeoning integration into everyday life promises enhanced efficiency and innovation. The Trustworthy AI principles by the European Commission, emphasising data safeguarding, security, and judicious governance, serve as the linchpin for AI’s widespread acceptance. A correlation emerged between societal interpretations of AI’s impact and elements like trustworthiness, associated risks, and usage/acceptance. Those discerning AI’s threats often view its prospective outcomes pessimistically, while proponents recognise its transformative potential. These inclinations resonate with trust and AI’s perceived singularity. Consequently, factors such as trust, application breadth, and perceived vulnerabilities shape public consensus, depicting AI as humanity’s boon or bane. The study also accentuates the public’s divergent views on AI’s evolution, underlining the malleability of opinions amidst polarising narratives.
Article
Full-text available
Artificial intelligence (AI) has the potential to improve diagnostic accuracy. Yet people are often reluctant to trust automated systems, and some patient populations may be particularly distrusting. We sought to determine how diverse patient populations feel about the use of AI diagnostic tools, and whether framing and informing the choice affects uptake. To construct and pretest our materials, we conducted structured interviews with a diverse set of actual patients. We then conducted a pre-registered (osf.io/9y26x), randomized, blinded survey experiment in factorial design. A survey firm provided n = 2675 responses, oversampling minoritized populations. Clinical vignettes were randomly manipulated in eight variables with two levels each: disease severity (leukemia versus sleep apnea), whether AI is proven more accurate than human specialists, whether the AI clinic is personalized to the patient through listening and/or tailoring, whether the AI clinic avoids racial and/or financial biases, whether the Primary Care Physician (PCP) promises to explain and incorporate the advice, and whether the PCP nudges the patient towards AI as the established, recommended, and easy choice. Our main outcome measure was selection of AI clinic or human physician specialist clinic (binary, “AI uptake”). We found that with weighting representative to the U.S. population, respondents were almost evenly split (52.9% chose human doctor and 47.1% chose AI clinic). In unweighted experimental contrasts of respondents who met pre-registered criteria for engagement, a PCP’s explanation that AI has proven superior accuracy increased uptake (OR = 1.48, CI 1.24–1.77, p < .001), as did a PCP’s nudge towards AI as the established choice (OR = 1.25, CI: 1.05–1.50, p = .013), as did reassurance that the AI clinic had trained counselors to listen to the patient’s unique perspectives (OR = 1.27, CI: 1.07–1.52, p = .008). 
Disease severity (leukemia versus sleep apnea) and other manipulations did not affect AI uptake significantly. Compared to White respondents, Black respondents selected AI less often (OR = .73, CI: .55–.96, p = .023) and Native Americans selected it more often (OR: 1.37, CI: 1.01–1.87, p = .041). Older respondents were less likely to choose AI (OR: .99, CI: .987–.999, p = .03), as were those who identified as politically conservative (OR: .65, CI: .52–.81, p < .001) or viewed religion as important (OR: .64, CI: .52–.77, p < .001). For each unit increase in education, the odds were 1.10 greater for selecting an AI provider (OR: 1.10, CI: 1.03–1.18, p = .004). While many patients appear resistant to the use of AI, accuracy information, nudges and a listening patient experience may help increase acceptance. To ensure that the benefits of AI are secured in clinical practice, future research on best methods of physician incorporation and patient decision making is required.
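The odds ratios (ORs) quoted throughout this abstract compare the odds of choosing the AI clinic between groups. As a minimal illustration of how such a ratio is computed from a 2×2 table (the counts below are hypothetical, not taken from the study):

```python
def odds_ratio(events_a, non_events_a, events_b, non_events_b):
    """Odds ratio: odds of the event in group A divided by odds in group B.

    OR > 1 means group A has higher odds of the event (here: choosing AI);
    OR < 1 means lower odds, mirroring how the abstract's ORs are read.
    """
    return (events_a / non_events_a) / (events_b / non_events_b)

# Hypothetical 2x2 table (not the study's data):
# group A: 60 chose AI, 40 chose the human specialist
# group B: 45 chose AI, 55 chose the human specialist
print(round(odds_ratio(60, 40, 45, 55), 2))  # 1.83 -> group A's odds ~83% higher
```

In regression studies like the one above, such ratios are usually obtained by exponentiating logistic-regression coefficients rather than from raw tables, but the interpretation is the same.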
Article
Full-text available
Artificial intelligence (AI) is often cited as a possible solution to current issues faced by healthcare systems. This includes the freeing up of time for doctors and facilitating person-centred doctor-patient relationships. However, given the novelty of artificial intelligence tools, there is very little concrete evidence on their impact on the doctor-patient relationship or on how to ensure that they are implemented in a way which is beneficial for person-centred care. Given the importance of empathy and compassion in the practice of person-centred care, we conducted a literature review to explore how AI impacts these two values. Besides empathy and compassion, shared decision-making, and trust relationships emerged as key values in the reviewed papers. We identified two concrete ways which can help ensure that the use of AI tools have a positive impact on person-centred doctor-patient relationships. These are (1) using AI tools in an assistive role and (2) adapting medical education. The study suggests that we need to take intentional steps in order to ensure that the deployment of AI tools in healthcare has a positive impact on person-centred doctor-patient relationships. We argue that the proposed solutions are contingent upon clarifying the values underlying future healthcare systems.
Article
Full-text available
Administrative and medical processes of healthcare organizations are rapidly changing because of the use of artificial intelligence (AI) systems. This change demonstrates the critical impact of AI on multiple activities, particularly in medical processes related to early detection and diagnosis. Previous studies suggest that AI can raise the quality of services in the healthcare industry. AI-based technologies have been reported to improve human quality of life, making life easier, safer and more productive. This study presents a systematic review of academic articles on the application of AI in the healthcare sector. The review initially considered 1,988 academic articles from major scholarly databases. After a careful review, the list was filtered down to 180 articles for full analysis to present a classification framework based on four dimensions: AI-enabled healthcare benefits, challenges, methodologies, and functionalities. It was identified that AI continues to significantly outperform humans in terms of accuracy, efficiency and timely execution of medical and related administrative processes. Benefits for patients map directly to the relevant AI functionalities in the categories of diagnosis, treatment, consultation and health monitoring for self-management of chronic conditions. Implications for future research directions are identified in the areas of value-added healthcare services for medical decision-making, security and privacy for patient data, health monitoring features, and creative IT service delivery models using AI.
Article
Full-text available
Background Artificial intelligence (AI) is steadily entering and transforming the health care and Primary Care (PC) domains. AI-based applications assist physicians in disease detection, medical advice, triage, clinical decision-making, diagnostics and digital public health. Recent literature has explored physicians' perspectives on the potential impact of digital public health on key tasks in PC. However, limited attention has been given to patients' perspectives on AI acceptance in PC, specifically during the coronavirus pandemic. Addressing this research gap, we conducted a pilot study to investigate criteria for patients' readiness to use AI-based PC applications by analyzing key factors affecting the adoption of digital public health technology. Methods The pilot study utilized a two-phase mixed methods approach. First, we conducted a qualitative study with 18 semi-structured interviews. Second, based on the Technology Readiness and Acceptance Model (TRAM), we conducted an online survey (n = 447). Results The results indicate that respondents who scored high on innovativeness had a higher level of readiness to use AI-based technology in PC during the coronavirus pandemic. Surprisingly, patients' health awareness and sociodemographic factors, such as age, gender and education, were not significant predictors of AI-based technology acceptance in PC. Conclusions This paper makes two major contributions. First, we highlight key social and behavioral determinants of acceptance of AI-enabled health care and PC applications. Second, to increase the usability of digital public health tools and accelerate patients' AI adoption in complex digital public health care ecosystems, we call for implementing adaptive, population-specific promotions of AI technologies and applications.
Article
Full-text available
Background The rhetoric surrounding clinical artificial intelligence (AI) often exaggerates its effect on real-world care. Limited understanding of the factors that influence its implementation can perpetuate this. Objective In this qualitative systematic review, we aimed to identify key stakeholders, consolidate their perspectives on clinical AI implementation, and characterize the evidence gaps that future qualitative research should target. Methods Ovid-MEDLINE, EBSCO-CINAHL, ACM Digital Library, Science Citation Index-Web of Science, and Scopus were searched for primary qualitative studies on individuals’ perspectives on any application of clinical AI worldwide (January 2014-April 2021). The definition of clinical AI includes both rule-based and machine learning–enabled or non–rule-based decision support tools. The language of the reports was not an exclusion criterion. Two independent reviewers performed title, abstract, and full-text screening with a third arbiter of disagreement. Two reviewers assigned the Joanna Briggs Institute 10-point checklist for qualitative research scores for each study. A single reviewer extracted free-text data relevant to clinical AI implementation, noting the stakeholders contributing to each excerpt. The best-fit framework synthesis used the Nonadoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework. To validate the data and improve accessibility, coauthors representing each emergent stakeholder group codeveloped summaries of the factors most relevant to their respective groups. Results The initial search yielded 4437 deduplicated articles, with 111 (2.5%) eligible for inclusion (median Joanna Briggs Institute 10-point checklist for qualitative research score, 8/10). 
Five distinct stakeholder groups emerged from the data: health care professionals (HCPs), patients, carers and other members of the public, developers, health care managers and leaders, and regulators or policy makers, contributing 1204 (70%), 196 (11.4%), 133 (7.7%), 129 (7.5%), and 59 (3.4%) of 1721 eligible excerpts, respectively. All stakeholder groups independently identified a breadth of implementation factors, with each producing data that were mapped between 17 and 24 of the 27 adapted Nonadoption, Abandonment, Scale-up, Spread, and Sustainability subdomains. Most of the factors that stakeholders found influential in the implementation of rule-based clinical AI also applied to non–rule-based clinical AI, with the exception of intellectual property, regulation, and sociocultural attitudes. Conclusions Clinical AI implementation is influenced by many interdependent factors, which are in turn influenced by at least 5 distinct stakeholder groups. This implies that effective research and practice of clinical AI implementation should consider multiple stakeholder perspectives. The current underrepresentation of perspectives from stakeholders other than HCPs in the literature may limit the anticipation and management of the factors that influence successful clinical AI implementation. Future research should not only widen the representation of tools and contexts in qualitative research but also specifically investigate the perspectives of all stakeholder groups and emerging aspects of non–rule-based clinical AI implementation. Trial Registration PROSPERO (International Prospective Register of Systematic Reviews) CRD42021256005; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=256005 International Registered Report Identifier (IRRID) RR2-10.2196/33145
Article
Full-text available
Artificial intelligence (AI) based on machine learning and convolutional neural networks (CNNs) is rapidly becoming a realistic prospect in dermatology. Non-melanoma skin cancer is the most common cancer worldwide and melanoma is one of the deadliest forms of cancer. Dermoscopy has improved physicians' diagnostic accuracy for skin cancer recognition, but it unfortunately remains comparatively low. AI could provide invaluable aid in the early evaluation and diagnosis of skin cancer. In the last decade, there has been a breakthrough in new research and publications in the field of AI. Studies have shown that CNN algorithms can classify skin lesions from dermoscopic images with superior, or at least equivalent, performance compared to clinicians. Even though AI algorithms have shown very promising results for the diagnosis of skin cancer in reader studies, their generalizability and applicability in everyday clinical practice remain elusive. Herein we attempt to summarize the potential pitfalls and challenges of AI that were underlined in reader studies and pinpoint strategies to overcome these limitations in future studies. Finally, we analyze the advantages and opportunities that lie ahead for a better future for dermatology and patients, with the potential use of AI in our practices.
Article
Full-text available
The expectations of professionals working on the development of healthcare Artificial Intelligence (AI) technologies and the patients who will be affected by them have received limited attention. This paper reports on a Foresight Workshop with professionals involved with pulmonary hypertension (PH) and a Focus Group with members of a PH patient group, to discuss expectations of AI development and implementation. We show that while professionals and patients had similar expectations of AI, with respect to the priority of early diagnosis; data risks of privacy and reuse; and responsibility, other expectations differed. One important point of difference was in the attitude toward using AI to point up other potential health problems (in addition to PH). A second difference was in the expectations regarding how much clinical professionals should know about the role of AI in diagnosis. These findings allow us to better prepare for the future by providing a frank appraisal of the complexities of AI development with foresight, and the anxieties of key stakeholders.
Article
Full-text available
The exponential increase in algorithm-based mobile health (mHealth) applications (apps) for melanoma screening is a reaction to a growing market. However, the performance of available apps remains to be investigated. In this prospective study, we investigated the diagnostic accuracy of a class 1 CE-certified smartphone app in melanoma risk stratification, along with patient and dermatologist satisfaction. Pigmented skin lesions ≥3 mm and any suspicious smaller lesions were assessed by the smartphone app SkinVision® (SkinVision® B.V., Amsterdam, the Netherlands, App-Version 6.8.1), the 2D FotoFinder ATBM® master (FotoFinder ATBM® Systems GmbH, Bad Birnbach, Germany, Version 3.3.1.0) and 3D Vectra® WB360 (Canfield Scientific, Parsippany, New Jersey, USA, Version 4.7.1) total body photography (TBP) devices, and dermatologists. The high-risk score of the smartphone app was compared with two gold standards: histological diagnosis or, if not available, the combination of dermatologists', 2D, and 3D risk assessments. A total of 1204 lesions among 114 patients (mean age 59 years; 51% female; 55 patients at high risk for developing a melanoma and 59 melanoma patients) were included. The smartphone app's sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) varied between 41.3–83.3%, 60.0–82.9%, and 0.62–0.72, respectively, according to the two study-defined reference standards. Additionally, all patients and dermatologists completed a newly created questionnaire on preference for and trust in each screening type. The smartphone app was rated as trustworthy by 36% (20/55) of patients at high risk for melanoma, 49% (29/59) of melanoma patients, and 8.8% (10/114) of dermatologists. Most patients rated the 2D TBP imaging (93% (51/55) and 88% (52/59), respectively) and the 3D TBP imaging (91% (50/55) and 90% (53/59), respectively) as trustworthy. Skin cancer screening by a combination of dermatologist and smartphone app was favored by only 1.8% (1/55) and 3.4% (2/59) of the patients, respectively; no patient preferred an assessment by a smartphone app alone. The diagnostic accuracy in clinical practice was not as reliable as previously advertised and satisfaction with smartphone apps for melanoma risk stratification was scarce. MHealth apps might be a potential medium to increase awareness for melanoma screening in the lay population, but healthcare professionals and users should be alerted to the potential harm of over-detection and poor performance. In conclusion, we suggest further robust evidence-based evaluation before including market-approved apps in self-examination for public health benefits.
Article
Full-text available
Acceptance of Artificial Intelligence (AI) may be predicted by individual psychological correlates, examined here. Study 1 reports confirmatory validation of the General Attitudes towards Artificial Intelligence Scale (GAAIS) following initial validation elsewhere. Confirmatory Factor Analysis confirmed the two-factor structure (Positive, Negative) and showed good convergent and divergent validity with a related scale. Study 2 tested whether psychological factors (Big Five personality traits, corporate distrust, and general trust) predicted attitudes towards AI. Introverts had more positive attitudes towards AI overall, likely because of algorithm appreciation. Conscientiousness and agreeableness were associated with forgiving attitudes towards negative aspects of AI. Higher corporate distrust led to negative attitudes towards AI overall, while higher general trust led to positive views of the benefits of AI. The dissociation between general trust and corporate distrust may reflect the public's attributions of the benefits and drawbacks of AI. Results are discussed in relation to theory and prior findings.
Article
Full-text available
Understanding individual differences in attitudes towards Artificial Intelligence (AI) is of importance, among others in system development. Against this background, we sought to investigate associations between personality and attitudes towards AI. Relations were investigated in samples from two countries—Germany and China—to find potentially replicable, cross-culturally applicable associations. In German (N = 367; n = 137 men) and Chinese (N = 879; n = 220 men) online surveys, participants completed items on sociodemographic variables, the Attitudes Towards Artificial Intelligence (ATAI) scale, and the Big Five Inventory. Correlational analysis revealed, among others, significant positive associations between Neuroticism and fear of AI in both samples, with similar effect sizes. In addition to a significant association between acceptance of AI and gender, regression analyses revealed a small but significant positive association between Neuroticism and fear of AI in the German sample. In the Chinese sample, regression analyses showed positive associations of acceptance of AI with age, Openness, and Agreeableness. Fear of AI was significantly negatively related only to Agreeableness in the Chinese sample. The association of fear of AI with Neuroticism just failed to reach significance in the regression model in the Chinese sample. These results offer important insights into associations between certain personality traits and attitudes towards AI. However, given the mostly small effect sizes of the relations between personality and attitudes towards AI, factors other than personality traits also seem to be relevant in explaining variance in individuals' attitudes towards AI.
Article
Full-text available
Artificial intelligence can assist providers in a variety of patient care tasks and intelligent health systems. Artificial intelligence techniques ranging from machine learning to deep learning are prevalent in healthcare for disease diagnosis, drug discovery, and patient risk identification. Numerous medical data sources are required to accurately diagnose diseases using artificial intelligence techniques, such as ultrasound, magnetic resonance imaging, mammography, genomics, computed tomography scans, etc. Furthermore, artificial intelligence has primarily enhanced the infirmary experience and sped up preparing patients to continue their rehabilitation at home. This article presents a comprehensive survey of artificial intelligence techniques used to diagnose numerous diseases such as Alzheimer's disease, cancer, diabetes, chronic heart disease, tuberculosis, stroke and cerebrovascular disease, hypertension, skin disease, and liver disease. We conducted an extensive survey including the medical imaging datasets used and their feature extraction and classification processes for prediction. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to select articles published up to October 2020 in the Web of Science, Scopus, Google Scholar, PubMed, Excerpta Medica Database, and PsycINFO for the early prediction of distinct kinds of diseases using artificial intelligence-based techniques. Based on the study of different articles on disease diagnosis, the results are also compared using various quality parameters such as prediction rate, accuracy, sensitivity, specificity, area under the curve, precision, recall, and F1-score.
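Several of the quality parameters listed above (accuracy, sensitivity, specificity, precision, recall, F1-score) derive directly from confusion-matrix counts. A brief sketch with hypothetical counts (the function and numbers are illustrative, not drawn from any study cited here):

```python
def classification_metrics(tp, fp, fn, tn):
    """Common diagnostic-accuracy metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # recall / true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    precision = tp / (tp + fp)                 # positive predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f1": f1}

# Hypothetical screening results: 90 true positives, 15 false positives,
# 10 false negatives, 135 true negatives.
metrics = classification_metrics(tp=90, fp=15, fn=10, tn=135)
print({name: round(value, 3) for name, value in metrics.items()})
```

The area under the curve, by contrast, summarizes sensitivity/specificity trade-offs across all decision thresholds and therefore requires the classifier's continuous scores, not a single confusion matrix.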
Article
Full-text available
We examined how individuals’ personality relates to various attitudes toward artificial intelligence (AI). Attitudes were organized into two dimensions of affective components (positive and negative emotions) and two dimensions of cognitive components (sociality and functionality). For personality, we focused on the Big Five personality traits (extraversion, agreeableness, conscientiousness, neuroticism, openness) and personal innovativeness in information technology. Based on a survey of 1,530 South Korean adults, we found that extraversion was related to negative emotions and low functionality. Agreeableness was associated with both positive and negative emotions, and it was positively associated with sociality and functionality. Conscientiousness was negatively related to negative emotions, and it was associated with high functionality, but also with low sociality. Neuroticism was related to negative emotions, but also to high sociality. Openness was positively linked to functionality, but did not predict other attitudes when other proximal predictors were included (e.g. prior use, personal innovativeness). Personal innovativeness in information technology consistently showed positive attitudes toward AI across all four dimensions. These findings provide mixed support for our hypotheses, and we discuss specific implications for future research and practice.
Article
Full-text available
Objectives To investigate the general population's view on artificial intelligence (AI) in medicine, with specific emphasis on 3 areas that have experienced major progress in AI research in the past few years, namely radiology, robotic surgery, and dermatology. Methods For this prospective study, the April 2020 Longitudinal Internet Studies for the Social Sciences (LISS) online panel wave was used. Of the 3117 LISS panel members contacted, 2411 completed the full questionnaire (77.4% response rate); after combining data from earlier waves, the final sample size was 1909. A total of 3 scales focusing on trust in the implementation of AI in radiology, robotic surgery, and dermatology were used. Repeated-measures analysis of variance and multivariate analysis of variance were used for comparison. Results The overall means show that respondents have slightly more trust in AI in dermatology than in radiology and surgery. The means show that higher-educated males, those employed or studying, those of Western background, and those not admitted to a hospital in the past 12 months have more trust in AI. Trust in AI in radiology, robotic surgery, and dermatology was positively associated with belief in the efficiency of AI, and these specific domains were negatively associated with distrust and accountability concerns regarding AI in general. Conclusions The general population is more distrustful of AI in medicine, unlike the overall optimistic views posed in the media. The level of trust depends on which medical area is under scrutiny. Certain demographic characteristics, and a generally positive view of AI and its efficiency, are significantly associated with higher levels of trust in AI.
Article
Full-text available
Augmented intelligence (AuI) integrates human intelligence (HI) and artificial intelligence (AI) to harness their strengths and mitigate their weaknesses. The combination of HI and AI has seen to improve both human and machine capabilities, and achieve a better performance compared to separate HI and AI approaches. In this paper, we present a survey of literature to understand how AuI has been applied in the literature, including the roles of HI and AI, AI approaches, features, and applications. Due to the limited literature related to this topic, we also present a survey of expert opinion to answer four main questions to understand the experts’ implications of AuI, including: a) the definition of AuI and the significance of HI in AuI; b) the roles of HI in AuI; c) the current and future applications of AuI in research, industry, and public, as well as the advantages and shortcomings of AuI; and d) end users’ view of the application of AuI. We also present recommendations to improve AuI, and provide a comparison between the findings from the surveys of both literature and expert opinion. The discussion of this paper shows the promising potential of AuI compared to separate HI and AI approaches.
Article
Full-text available
While there is significant enthusiasm in the medical community about the use of artificial intelligence (AI) technologies in healthcare, few research studies have sought to assess patient perspectives on these technologies. We conducted 15 focus groups examining patient views of diverse applications of AI in healthcare. Our results indicate that patients have multiple concerns, including concerns related to the safety of AI, threats to patient choice, potential increases in healthcare costs, data-source bias, and data security. We also found that patient acceptance of AI is contingent on mitigating these possible harms. Our results highlight an array of patient concerns that may limit enthusiasm for applications of AI in healthcare. Proactively addressing these concerns is critical for the flourishing of ethical innovation and ensuring the long-term success of AI applications in healthcare.
Article
Full-text available
With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence (AI) that uses deep learning (DL), an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open question to what extent we can trust these systems. The question of trust becomes more urgent as we delegate more and more decision-making to and increasingly rely on AI to safeguard significant human goods, such as security, healthcare, and safety. Models that “open the black box” by making the non-linear and complex decision process understandable by human observers are promising solutions to the black box problem in AI but are limited, at least in their current state, in their ability to make these processes less opaque to most observers. A philosophical analysis of trust will show why transparency is a necessary condition for trust and eventually for judging AI to be trustworthy. A more fruitful route for establishing trust in AI is to acknowledge that AI is situated within a socio-technical system that mediates trust, and by increasing the trustworthiness of these systems, we thereby increase trust in AI.
Article
Full-text available
Artificial intelligence (AI) promises to change health care, with some studies showing proof of concept of a provider-level performance in various medical specialties. However, there are many barriers to implementing AI, including patient acceptance and understanding of AI. Patients’ attitudes toward AI are not well understood. We systematically reviewed the literature on patient and general public attitudes toward clinical AI (either hypothetical or realised), including quantitative, qualitative, and mixed methods original research articles. We searched biomedical and computational databases from Jan 1, 2000, to Sept 28, 2020, and screened 2590 articles, 23 of which met our inclusion criteria. Studies were heterogeneous regarding the study population, study design, and the field and type of AI under study. Six (26%) studies assessed currently available or soon-to-be available AI tools, whereas 17 (74%) assessed hypothetical or broadly defined AI. The quality of the methods of these studies was mixed, with a frequent issue of selection bias. Overall, patients and the general public conveyed positive attitudes toward AI but had many reservations and preferred human supervision. We summarise our findings in six themes: AI concept, AI acceptability, AI relationship with humans, AI development and implementation, AI strengths and benefits, and AI weaknesses and risks. We suggest guidance for future studies, with the goal of supporting the safe, equitable, and patient-centred implementation of clinical AI.
Article
Full-text available
Background Advanced analytics, such as artificial intelligence (AI), increasingly gain relevance in medicine. However, patients' responses to the involvement of AI in the care process remain largely unclear. The study aims to explore whether individuals were more likely to follow a recommendation when a physician used AI in the diagnostic process considering a highly (vs. less) severe disease compared to when the physician did not use AI or when AI fully replaced the physician. Methods Participants from the USA (n = 452) were randomly assigned to a hypothetical scenario where they imagined that they received a treatment recommendation after a skin cancer diagnosis (high vs. low severity) from a physician, a physician using AI, or an automated AI tool. They then indicated their intention to follow the recommendation. Regression analyses were used to test hypotheses. Beta coefficients (β) describe the nature and strength of relationships between predictors and outcome variables; confidence intervals [CI] excluding zero indicate significant mediation effects. Results The total effects reveal the inferiority of automated AI (β = .47, p = .001 vs. physician; β = .49, p = .001 vs. physician using AI). Two pathways increase intention to follow the recommendation. When a physician performs the assessment (vs. automated AI), the perception that the physician is real and present (a concept called social presence) is high, which increases intention to follow the recommendation (β = .22, 95% CI [.09; .39]). When AI performs the assessment (vs. physician only), perceived innovativeness of the method is high, which increases intention to follow the recommendation (β = .15, 95% CI [−.28; −.04]). When physicians use AI, social presence does not decrease and perceived innovativeness increases.
Conclusion Pairing AI with a physician in medical diagnosis and treatment in a hypothetical scenario using topical therapy and oral medication as treatment recommendations leads to a higher intention to follow the recommendation than AI on its own. The findings might help develop practice guidelines for cases where AI involvement benefits outweigh risks, such as using AI in pathology and radiology, to enable augmented human intelligence and inform physicians about diagnoses and treatments.
Article
Full-text available
This study investigated the diagnostic performance, feasibility, and end-user experiences of an artificial intelligence (AI)-assisted diabetic retinopathy (DR) screening model in real-world Australian healthcare settings. The study consisted of two components: (1) DR screening of patients using an AI-assisted system and (2) in-depth interviews with health professionals involved in implementing screening. Participants with type 1 or type 2 diabetes mellitus attending two endocrinology outpatient and three Aboriginal Medical Services clinics between March 2018 and May 2019 were invited to a prospective observational study. A single 45-degree (macula centred), non-stereoscopic, colour retinal image was taken of each eye from participants and were instantly screened for referable DR using a custom offline automated AI system. A total of 236 participants, including 174 from endocrinology and 62 from Aboriginal Medical Services clinics, provided informed consent and 203 (86.0%) were included in the analysis. A total of 33 consenting participants (14%) were excluded from the primary analysis due to ungradable or missing images from small pupils (n = 21, 63.6%), cataract (n = 7, 21.2%), poor fixation (n = 2, 6.1%), technical issues (n = 2, 6.1%), and corneal scarring (n = 1, 3%). The area under the curve, sensitivity, and specificity of the AI system for referable DR were 0.92, 96.9% and 87.7%, respectively. There were 51 disagreements between the reference standard and index test diagnoses, including 29 which were manually graded as ungradable, 21 false positives, and one false negative. A total of 28 participants (11.9%) were referred for follow-up based on new ocular findings, among whom, 15 (53.6%) were able to be contacted and 9 (60%) adhered to referral. 
Of 207 participants who completed a satisfaction questionnaire, 93.7% specified they were either satisfied or extremely satisfied, and 93.2% specified they would be likely or extremely likely to use this service again. Clinical staff involved in screening most frequently noted that the AI system was easy to use and that the real-time diagnostic report was useful. Our study indicates that the AI-assisted DR screening model is accurate and well-accepted by patients and clinicians in endocrinology and indigenous healthcare settings. Future deployments of AI-assisted screening models would require consideration of downstream referral pathways.
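The screening metrics reported above follow directly from confusion-matrix counts. In the sketch below, the false-positive count (21) and false-negative count (1) are taken from the abstract, while the true-positive and true-negative counts are hypothetical values chosen only so the illustration reproduces the reported 96.9% sensitivity and 87.7% specificity; they are not the study's raw data.

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity (recall) and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of referable cases flagged
    specificity = tn / (tn + fp)  # fraction of non-referable cases passed
    return sensitivity, specificity

# fp=21 and fn=1 are reported; tp and tn are hypothetical for illustration.
sens, spec = screening_metrics(tp=31, fn=1, tn=150, fp=21)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
# → sensitivity=96.9%, specificity=87.7%
```

The same two formulas apply to any binary screening test; only the counts change between studies.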
Article
Full-text available
Skin cancer is one of the most dangerous forms of cancer. It is caused by unrepaired deoxyribonucleic acid (DNA) damage in skin cells, which generates genetic defects or mutations in the skin. Because skin cancer tends to spread gradually to other body parts, it is most curable, and therefore best detected, at early stages. The rising number of skin cancer cases, the high mortality rate, and expensive medical treatment make early diagnosis of its symptoms essential. Considering the seriousness of these issues, researchers have developed various early detection techniques for skin cancer. Lesion parameters such as symmetry, color, size, and shape are used to detect skin cancer and to distinguish benign skin lesions from melanoma. This paper presents a detailed systematic review of deep learning techniques for the early detection of skin cancer. Research papers published in well-reputed journals, relevant to the topic of skin cancer diagnosis, were analyzed. Research findings are presented as tools, graphs, tables, techniques, and frameworks for better understanding.
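Rule-based use of lesion parameters, as described above, can be sketched as a toy ABCD-style check. This is an illustration only, not a clinical rule: the threshold values and the two-criteria cutoff below are hypothetical, and real dermatological criteria (e.g. the ABCDE rule) are applied by trained clinicians, not by fixed cutoffs like these.

```python
def abcd_flag(asymmetry, border_irregularity, n_colors, diameter_mm):
    """Toy lesion check: count how many ABCD-style criteria are met.

    All thresholds are hypothetical, for illustration only.
    """
    score = (
        (asymmetry > 0.5)            # A: asymmetry score in [0, 1]
        + (border_irregularity > 0.5)  # B: border irregularity in [0, 1]
        + (n_colors >= 3)            # C: number of distinct colors
        + (diameter_mm > 6)          # D: diameter in millimetres
    )
    return score >= 2  # refer if two or more criteria are met

# Hypothetical lesions: one suspicious, one not.
print(abcd_flag(asymmetry=0.7, border_irregularity=0.2, n_colors=3, diameter_mm=7))   # → True
print(abcd_flag(asymmetry=0.1, border_irregularity=0.1, n_colors=1, diameter_mm=2))   # → False
```

Deep learning models such as those surveyed in the paper learn these kinds of criteria implicitly from labeled images rather than from hand-coded thresholds.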
Article
Skin cancer is a significant global health concern, with its early detection and diagnosis playing a pivotal role in improving patient health outcomes. In recent years, artificial intelligence (AI) has emerged as a transformative force in the field of dermatology, revolutionizing the way skin cancer is detected and diagnosed. This comprehensive survey paper delves into the realm of AI-enhanced early skin cancer diagnosis, offering a thorough examination of the state-of-the-art techniques, methodologies, and advancements in this critical domain. Our survey begins by providing a comprehensive overview of the different types of skin cancer, emphasizing the importance of early detection in preventing disease progression. It then explores the pivotal role that AI and machine learning algorithms play in automating the detection and classification of skin lesions, making dermatology more accessible and accurate. A critical analysis of various AI-driven approaches, including image-based classification, feature extraction, and deep learning models, is presented to elucidate their strengths and limitations. Furthermore, this survey examines the integration of AI into clinical practice, discussing real-world applications, challenges, and ethical considerations. It explores the potential of AI to assist dermatologists in making faster and more accurate diagnoses, ultimately enhancing patient care. The paper also addresses the need for large, diverse datasets and standardization in the development and validation of AI models for skin cancer diagnosis. In conclusion, “Revolutionizing Dermatology” presents a comprehensive synthesis of the current landscape of AI-enhanced early skin cancer diagnosis, offering insights into its transformative potential, challenges, and future directions. 
By bridging the gap between dermatology and cutting-edge AI technologies, this survey aims to facilitate informed decision-making among researchers, clinicians, and stakeholders in the pursuit of more effective skin cancer detection and treatment strategies.
Article
Purpose of review: Despite the growing scope of artificial intelligence (AI) and deep learning (DL) applications in the field of ophthalmology, most have yet to reach clinical adoption. Beyond model performance metrics, there has been an increasing emphasis on the need for explainability of proposed DL models. Recent findings: Several explainable AI (XAI) methods have been proposed and are increasingly applied in ophthalmological DL applications, predominantly in medical imaging analysis tasks. Summary: We provide an overview of the key concepts and categorize examples of commonly employed XAI methods. Specific to ophthalmology, we explore XAI from a clinical perspective: enhancing end-user trust, assisting clinical management, and uncovering new insights. Finally, we discuss its limitations and future directions for strengthening XAI for application to clinical practice.
Chapter
Melanoma and non-melanoma skin cancers are among the most common cancers in white-skinned populations, and their rising incidence and associated mortality have generated a significant need for preventative strategies. More than 100,000 cases of melanoma and 3,000,000 cases of non-melanoma skin cancer occur globally each year. Ultraviolet radiation exposure is a major risk factor for the development of skin cancers, and increased incidence of skin cancers and their precursors has been reported with the use of non-solar radiation sources such as sunlamps and tanning beds. Campaigns implemented by dermatologists worldwide focus on patient education and promote behavioral modification aimed at primary and secondary prevention of skin cancers. Early detection and evidence-based screening of skin cancer through conventional methods and breakthrough technologies are critical for improving treatment outcomes and reducing mortality.
Article
The rapid growth of artificial intelligence (AI) in radiology has led to Food and Drug Administration clearance of more than 20 AI algorithms for breast imaging. The steps involved in the clinical implementation of an AI product include identifying all stakeholders, selecting the appropriate product to purchase, evaluating it with a local data set, integrating it into the workflow, and monitoring its performance over time. Despite the potential benefits of improved quality and increased efficiency with AI, several barriers, such as high costs and liability concerns, may limit its widespread implementation. This article lists currently available AI products for breast imaging, describes the key elements of clinical implementation, and discusses barriers to clinical implementation.
Article
Recent advances in artificial intelligence (AI) in dermatology have demonstrated the potential to improve the accuracy of skin cancer detection. These capabilities may augment current diagnostic processes and improve the approach to management of skin cancer. To explain this technology, we discuss fundamental terminology, potential benefits, and limitations of AI and commercial applications relevant to dermatologists. A clearer understanding of the technology may help to reduce physician concerns about AI and promote its use in the clinical setting. Ultimately, the development and validation of AI technologies, their approval by regulatory agencies and widespread adoption by both dermatologists and other clinicians may enhance patient care. Technology-augmented detection of skin cancer has the potential to improve quality of life, reduce health care costs by reducing unnecessary procedures, and promote greater access to high quality skin assessment. Dermatologists play a critical role in the responsible development and deployment of AI capabilities applied to skin cancer.
Article
Purpose: Artificial intelligence (AI) technology is poised to revolutionize modern delivery of health care services. We set out to evaluate the patient perspective of AI use in diabetic retinal screening. Design: Survey. Methods: Four hundred thirty-eight patients undergoing diabetic retinal screening across New Zealand participated in a survey about their opinion of AI technology in retinal screening. The survey consisted of 13 questions covering topics of awareness, trust, and receptivity toward AI systems. Results: The mean age was 59 years. The majority of participants identified as New Zealand European (50%), followed by Asian (31%), Pacific Islander (10%), and Maori (5%). While 73% of participants were aware of AI, only 58% had heard of it being implemented in health care. Overall, 78% of respondents were comfortable with AI use in their care, with 53% saying they would trust an AI-assisted screening program as much as a health professional. Despite having a higher awareness of AI, younger participants had lower trust in AI systems. A higher proportion of Maori and Pacific participants indicated a preference toward human-led screening. The main perceived benefits of AI included faster diagnostic speeds and greater accuracy. Conclusions: There is low awareness of clinical AI applications among our participants. Despite this, most are receptive toward the implementation of AI in diabetic eye screening. Overall, there was a strong preference toward continued involvement of clinicians in the screening process. Key recommendations are provided to enhance public receptivity toward the incorporation of AI into retinal screening programs.
Chapter
Ophthalmology is the branch of medicine that encompasses diseases and treatments of the eye. Its technical, clinical and public health features have enabled it to be an ideal field for the development and deployment of artificial intelligence (AI). Indeed, it is within ophthalmology that many of healthcare’s most promising AI applications have emerged – from early-stage tools in development, to regulatory-approved and commercialised platforms in real-world clinical use. The field’s technical, clinical and public health contexts are described within this chapter and are further illustrated through case studies in diabetic retinopathy (DR), glaucoma and retinopathy of prematurity (ROP).
Article
Artificial intelligence (AI) is poised to broadly reshape medicine, potentially improving the experiences of both clinicians and patients. We discuss key findings from a 2-year weekly effort to track and share key developments in medical AI. We cover prospective studies and advances in medical image analysis, which have reduced the gap between research and deployment. We also address several promising avenues for novel medical AI research, including non-image data sources, unconventional problem formulations and human–AI collaboration. Finally, we consider serious technical and ethical challenges in issues spanning from data scarcity to racial bias. As these challenges are addressed, AI’s potential may be realized, making healthcare more accurate, efficient and accessible for patients worldwide. AI has the potential to reshape medicine and make healthcare more accurate, efficient and accessible; this Review discusses recent progress, opportunities and challenges toward achieving this goal.
Chapter
Accurate and efficient triaging to ophthalmology services is essential to patient care and appropriate resource allocation. Current triaging processes are both time consuming and prone to human error. The use of deep learning (DL) and natural language processing (NLP) in ophthalmology triaging is a novel application of artificial intelligence (AI) established at the South Australian Institute of Ophthalmology (SAIO), Australia. AI-assisted triaging has demonstrated early promise in the ability to identify urgent referrals with potential sight-threatening pathologies, with accuracies of up to 81%. Technical challenges in AI-assisted triaging include small dataset size, distant labels, and the presence of specialized medical vocabulary. Future research relating to AI-assisted triaging should endeavour to use larger sample sizes, specialist-guided triage allocation, and data from multiple centres.
Article
Background: Convolutional neural networks, a form of artificial intelligence (AI), are rapidly appearing within the field of dermatology, with diagnostic accuracy matching that of dermatologists. As these technologies become available to both health professionals and the general public, their uptake in healthcare will become more acceptable. NHS England recognizes the potential of AI for healthcare but emphasizes that patient-centred care should remain at the forefront of these technological advancements. Objectives: To obtain patients' opinions on the use of AI in a dermatology setting when aiding the diagnosis of skin cancers. Methods: A cross-sectional 14-point questionnaire was handed out to patients attending dermatology outpatient skin cancer clinics in two UK secondary care hospitals between March and August 2018. Results: 603 patient questionnaires were completed. 47% (n=282) of respondents were not concerned if artificial intelligence technology was used by a skin specialist to aid skin cancer diagnosis. 81% (n=491) of respondents considered it important for a dermatologist to examine and confirm a diagnosis, and to be present for the discussion of a cancer diagnosis. Conclusion: While the majority of respondents do not object to the use of AI for skin cancer diagnosis, respondents still consider it important that dermatologists be involved in the diagnosis and/or confirmation of skin cancer. Furthermore, the results demonstrate that personal interaction with a clinician is felt to be important. This is in keeping with proposals that AI be used as an adjunct, increasing the accuracy of skin cancer diagnoses, but not as a substitute for a dermatologist.