Article

Artificial intelligence in medical imaging


Abstract

The medical specialty of radiology has experienced a number of extremely important and influential technical developments in the past that have affected how medical imaging is deployed. Artificial intelligence (AI) is potentially another such development that will introduce fundamental changes into the practice of radiology. In this commentary, the historical evolution of some major changes in radiology is traced as background to how AI may also be embraced in practice. Potential new capabilities provided by AI offer exciting prospects for more efficient and effective use of medical images.


... Medical imaging has been fundamental in cancer detection, diagnosis, and treatment planning, with advancements in radiomics and texture-based imaging biomarkers significantly enhancing tumor characterization. Radiomics refers to the extraction of quantitative imaging features from medical scans, allowing for the identification of tumor heterogeneity, microenvironment, and prognostic markers (Gore, 2019). Texture-based imaging biomarkers derived from computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) have been utilized to assess tumor aggressiveness and predict treatment response (Lewis et al., 2019). ...
... Histopathological image processing has also played a pivotal role in digital pathology and AI-assisted diagnostics, enabling more precise and automated cancer classification. Traditional histopathological examination relies on microscopic evaluation of stained tissue sections, which is subject to variability in pathologist expertise and interpretation (Gore, 2019). The introduction of whole-slide imaging (WSI) and digital pathology platforms has allowed for high-resolution digitization of tissue samples, facilitating computerized image analysis (Lewis et al., 2019). ...
... Volumetric analysis of tumors using these computational tools has enabled clinicians to track tumor progression, assess treatment response, and refine therapeutic strategies, leading to better patient outcomes (Mukhopadhyay et al., 2018). Moreover, the integration of radiomics with tumor segmentation has provided new opportunities for predictive modeling, allowing for the early identification of treatment-resistant tumors (Gore, 2019). The increasing adoption of AI-driven image analysis and computational oncology has provided oncologists with powerful tools to improve cancer detection and treatment planning. ...
Article
Full-text available
The rapid advancement of predictive analytics, biomarker-driven precision medicine, genomic sequencing, nanotechnology, and immunotherapy has significantly transformed cancer diagnosis, treatment selection, and therapeutic outcomes. This systematic literature review, based on the analysis of 147 peer-reviewed studies, explores the role of these emerging technologies in reshaping oncology and evaluates the barriers limiting their widespread adoption. The study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to ensure a systematic, transparent, and rigorous review process. The findings indicate that machine learning-based predictive models are enhancing early cancer detection, prognosis, and treatment optimization, with multi-modal AI-driven approaches improving diagnostic accuracy by 15-20% compared to conventional methods. The review further highlights the growing importance of biomarker-driven liquid biopsy techniques, with circulating tumor DNA (ctDNA) and microRNA (miRNA) biomarkers proving highly effective in real-time disease monitoring, recurrence prediction, and treatment response assessment. Additionally, genomic sequencing, particularly whole-exome sequencing (WES) and whole-genome sequencing (WGS), has improved the identification of oncogenic mutations, therapy response prediction, and personalized treatment approaches, despite its high cost and accessibility limitations. The study also emphasizes the critical role of nanotechnology in cancer drug delivery, with liposomal formulations, polymeric nanoparticles, and gold-based drug carriers demonstrating significant improvements in chemotherapy bioavailability, tumor selectivity, and reduced systemic toxicity. Immunotherapy has emerged as a revolutionary cancer treatment modality, with immune checkpoint inhibitors (ICIs), CAR-T cell therapy, and tumor-infiltrating lymphocyte (TIL) therapy achieving unprecedented response rates in hematologic and solid tumors, yet remaining financially and logistically inaccessible for many patients. The economic burden of biomarker-driven therapies, the high cost of genomic sequencing, and the computational challenges of AI-based predictive analytics continue to limit equitable access to precision medicine.
... According to the study in [4], not all brain tumors are cancerous; some are benign [5]. Brain tumors can be identified by specialists (e.g., physicians or doctors) using medical imaging technologies such as magnetic resonance imaging (MRI) [6,7], computed tomography (CT) [8], and positron emission tomography (PET) [9]. These imaging technologies are used to scan and produce detailed images of the interior of the human body. ...
... These imaging technologies are used to scan and produce detailed images of the interior of the human body. The scanning is used to diagnose different health conditions, including cancer [7]. Images are processed on a computer using specialized algorithms to provide more details of the human body. ...
... Although it is possible for specialists to identify the tumors manually, this is very time-consuming. Automatic detection algorithms can assist the identification process, but there is currently no gold-standard process for image-based detection of brain tumors [7]. ...
Article
Full-text available
A brain tumor is an abnormal growth of cells in the brain and is considered one of the most dangerous diseases leading to death. Early diagnosis is important for increasing the survival rate of patients with brain tumors. Specialists can identify tumors manually, but this is very time- and effort-consuming and subject to human error, especially when dealing with large numbers of images. Applications based on automatic identification algorithms can facilitate the process. This study aimed to investigate the possibility of detecting brain cancer from images using Deep Learning (DL) techniques and statistical operations. The features were extracted using two Convolutional Neural Network (CNN) models (VGG-19 and AlexNet) and then used to generate new datasets for statistical operations. CNN is used to extract features with distinct details from brain MRI images. The data were trained in three different training–testing data splitting ratios. Then, the features were classified using KNN, RF, and SVM to find the best accuracy on brain MRI images. In the end, the obtained classification accuracy favored the statistical operations, especially Large-Value and Merge between features, using KNN (99.1) and SVM (99.1). The features extracted in this study can strongly influence classification accuracy. The results across all three training–testing data splitting ratios were almost similar, confirming that brain cancer can be identified with high accuracy even when the training dataset sizes are minimal.
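The feature-extraction-plus-classifier pipeline described in this abstract can be sketched roughly as follows. This is an illustrative sketch rather than the authors' code: the random array standing in for brain MRI slices, the 224×224 input size, and the classifier settings are all assumptions.

```python
# Minimal sketch: extract deep features with a pretrained VGG-19 backbone,
# then classify them with KNN and SVM (scikit-learn).
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data: stand-ins for preprocessed brain MRI slices and labels.
images = np.random.rand(40, 224, 224, 3).astype("float32") * 255.0
labels = np.random.randint(0, 2, size=40)          # 0 = no tumor, 1 = tumor

# Frozen VGG-19 backbone used purely as a feature extractor.
backbone = VGG19(weights="imagenet", include_top=False, pooling="avg")
features = backbone.predict(preprocess_input(images), verbose=0)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```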
... Precision medicine, often known as PM, is beginning to emerge as a viable paradigm that takes into account an individual's genetics, environment, and lifestyle in order to produce individualised treatment options (MacEachern and Forkert 2021). The rapid growth of artificial intelligence (AI), in particular machine learning (ML) and deep learning (DL), has resulted in the availability of major technologies that can assist in the advancement of PM in cancer (Gore 2020). The exponential growth of multi-omics data and clinical information, which includes genomes, proteomics, transcriptomics, electronic medical records, and medical imaging (Gore 2020;Kaczmarek, Pyman, Nanayakkara, Tuschl, et al. 20220;Hwang 2021), has led to the convergence of artificial intelligence and proteomics in the treatment of cancer. ...
... The rapid growth of artificial intelligence (AI), in particular machine learning (ML) and deep learning (DL), has resulted in the availability of major technologies that can assist in the advancement of PM in cancer (Gore 2020). The exponential growth of multi-omics data and clinical information, which includes genomes, proteomics, transcriptomics, electronic medical records, and medical imaging (Gore 2020;Kaczmarek, Pyman, Nanayakkara, Tuschl, et al. 20220;Hwang 2021), has led to the convergence of artificial intelligence and proteomics in the treatment of cancer. This is where artificial intelligence shines; it not only discovers hidden trends and offers key insights that might influence decisions about healthcare, but it also processes and analyses these huge and complex datasets (Picard, Scott-Boyer, Bodein, Perin, et al. 2021). ...
Chapter
In precision oncology, the utilization of artificial intelligence (AI) is revolutionising personalised cancer therapy. After providing an introduction to AI technology and emphasising the importance of precision oncology, this chapter investigates AI applications in oncology. The importance of AI in cancer detection and staging is emphasised, with particular attention to machine learning for precise staging, medical imaging improvements, and AI-assisted pathology. AI examines cancer genomes to identify biomarkers and targetable mutations and to predict the progression of the disease. A comprehensive description of the link between AI and genomic profiling is given. More attention is paid to AI-driven decision support systems, combination therapy optimisation, and personalised regimens, as well as a deeper examination of individualised treatment planning. The efficacy and integration of AI-based diagnostic tools such as imaging techniques, pathology, histopathology, and liquid biopsies are assessed in this chapter. The chapter also addresses the application of AI in radiation, chemotherapy, surgical decision-making, and treatment planning and optimisation. AI-assisted customised dosage and real-time treatment monitoring have enabled significant advancements in adaptive radiation, chemotherapeutic dosing, and response tracking. AI's predictive capacity for treatment outcomes and survival rates is examined, along with its use in immunotherapy and targeted therapies, such as response prediction, CAR-T cell therapy customisation, and assistance in the creation of tailored cancer drugs. This chapter also discusses the technological, ethical, and legal issues surrounding AI in cancer treatment, highlighting the necessity of strong validation procedures and interdisciplinary cooperation. The transformational potential of AI in precision oncology is ultimately highlighted in this chapter, opening the door to more efficient and individualised cancer therapies.
... Most imaging systems are operated by humans, who are not exempt from errors arising from experience level, stress, work overload, or lack of sleep. AI can have a beneficial impact on this domain by improving imaging interpretation and reducing physicians' workload and the chance that details are overlooked [5,36]. ...
... Concerning big data, databases like urogynecological care cloud platforms can receive, record, and analyze patients' data and match them with diseases in the database. WDs and hospital systems' data are uploaded into the cloud and AI allows the automated integration of EHR with images and other data, improving decision support [36,89]. ...
Article
Full-text available
Artificial intelligence (AI) is the new medical hot topic, being applied mainly in specialties with a strong imaging component. In the domain of gynecology, AI has been tested and shown vast potential in several areas with promising results, with an emphasis on oncology. However, fewer studies have been made focusing on urogynecology, a branch of gynecology known for using multiple imaging exams (IEs) and tests in the management of women’s pelvic floor health. This review aims to illustrate the current state of AI in urogynecology, namely with the use of machine learning (ML) and deep learning (DL) in diagnostics and as imaging tools, discuss possible future prospects for AI in this field, and go over its limitations that challenge its safe implementation.
... New technologies have been integrated into radiology workflow in clinical practice and education, in part driven by pressures for increased efficiency and costeffectiveness [28,29]. Considering the rapid growth of AI applications in medical imaging, some warnings were sounded regarding the possible impact of AI on radiology as a profession, estimating a progressive replacement of radiologists or at least a significant restructuring of the work processes [30][31][32]. ...
... However, the number of cases on which they can base their decisions is limited. In comparison, computers can be trained on massive datasets and have impeccable recall [28]. AI may assist in proper probe positioning and valid measurements in learning ultrasound techniques [23,24]. ...
Article
Full-text available
The digitization of medicine will play an increasingly significant role in future years. In particular, telemedicine, Virtual Reality (VR) and innovative Artificial Intelligence (AI) systems offer tremendous potential in imaging diagnostics and are expected to shape ultrasound diagnostics and teaching significantly. However, it is crucial to consider the advantages and disadvantages of employing these new technologies and how best to teach and manage their use. This paper provides an overview of telemedicine, VR and AI in student ultrasound education, presenting current perspectives and controversies.
... This low-magnetic field MR scanner is relatively mobile and can be deployed in a military hospital with minimal staff and resources in order to provide critical anatomical information on injured service members. Such portable systems are becoming much more advanced and common in large part due to the contribution of AI data processing models that can generate high-value images within the portable scanner's limitations [65][66][67][68][69]. ...
Article
Full-text available
Conducted in challenging environments such as disaster or conflict areas, operational medicine presents unique challenges for the delivery of efficient and quality healthcare. It exposes first responders and medical personnel to many unexpected health risks and dangerous situations. To tackle these issues, artificial intelligence (AI) has been progressively incorporated into operational medicine, both on the front lines and also more recently in support roles. The ability of AI to rapidly analyze high-dimensional data and make inferences has opened up a wide variety of opportunities and increased efficiency for its early adopters, notably for the United States military, for non-invasive medical imaging and for mental health applications. This review discusses the current state of AI and highlights its broad array of potential applications in operational medicine as developed for the United States military.
... Neural networks are a specific type of deep learning model that imitates the functioning of the human visual cortex. The neural network layer comprises neurons that identify various image characteristics through edge, color, and texture filters (9)(10)(11). Artificial intelligence-driven radiomics applies sophisticated computational methods to extract several investigator-defined characteristics from medical pictures (12). ...
Article
Full-text available
Background Colorectal cancer is the third most common malignant tumor, with the third highest incidence rate. Distant metastasis is the main cause of death in colorectal cancer patients. Early detection and prognostic prediction of colorectal cancer have improved with the widespread use of artificial intelligence technologies. Purpose The aim of this study was to comprehensively evaluate the accuracy and validity of AI-based imaging data for predicting distant metastasis in colorectal cancer patients. Methods A systematic literature search was conducted to find relevant studies published up to January 2024 in different databases. The quality of articles was assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 tool. The predictive value of AI-based imaging data for distant metastasis in colorectal cancer patients was assessed using pooled sensitivity and specificity. To explore the reasons for heterogeneity, subgroup analyses were performed using different covariates. Results Seventeen studies were included in the systematic evaluation. The pooled sensitivity, specificity, and AUC of AI-based imaging data for predicting distant metastasis in colorectal cancer patients were 0.86, 0.82, and 0.91. Based on QUADAS-2, risk of bias was detected in patient selection, the diagnostic tests to be evaluated, and the gold standard. Subgroup analyses found that the duration of follow-up, site of metastasis, and other covariates had a significant impact on the heterogeneity. Conclusion Imaging data analyzed with artificial intelligence algorithms have good diagnostic accuracy for predicting distant metastasis in colorectal cancer patients and have potential for clinical application. Systematic review registration https://www.crd.york.ac.uk/PROSPERO/, identifier PROSPERO (CRD42024516063).
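The pooled sensitivity and specificity reported above are typically obtained by combining per-study 2×2 counts. A much-simplified sketch of fixed-effect pooling on the logit scale is shown below; real diagnostic meta-analyses usually fit bivariate random-effects models, and the study counts here are hypothetical.

```python
# Minimal sketch of pooling per-study sensitivity and specificity on the logit
# scale (fixed-effect, inverse-variance weights). Counts below are hypothetical.
import numpy as np
from scipy.special import logit, expit

# Each row: (TP, FN, TN, FP) for one study.
studies = np.array([
    [45,  5, 80, 15],
    [30,  8, 60, 10],
    [55, 12, 95, 20],
], dtype=float)

def pooled_proportion(events, totals):
    """Inverse-variance pooled proportion on the logit scale."""
    p = events / totals
    var = 1.0 / events + 1.0 / (totals - events)   # approx. variance of logit(p)
    w = 1.0 / var
    return expit(np.sum(w * logit(p)) / np.sum(w))

tp, fn, tn, fp = studies.T
print("Pooled sensitivity:", round(pooled_proportion(tp, tp + fn), 3))
print("Pooled specificity:", round(pooled_proportion(tn, tn + fp), 3))
```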
... This data, which includes image sequences acquired through various modalities (CT, MR, PET-CT, etc.), requires fast and accurate processing. Artificial intelligence (AI) can aid in the analysis, automation of workflows, and improvement of quality assurance [4]. ...
Article
Full-text available
Background and Purpose In the field of medicine, artificial intelligence (AI) is emerging as a promising tool. In this paper, we present our experience with the integration of commercially available AI-based software into our radiotherapy contouring workflow. We also analyzed the accuracy of the automated segmentation system. Methods and Materials We analyzed contours of 19 anatomical regions from 24 patients. Comparisons between AI-generated and human-generated contours were made based on volume, Dice coefficients, and contour center of mass shifts. Results The data indicate that there are minimal differences between AI-generated and human-generated contours, such as those of the lungs. The volume differences are relatively minor, <1 cm³ (P > 0.05). Nevertheless, for certain organs, such as the small intestine, there can be considerable discrepancies, as the AI delineates the entire organ, in contrast to the RTT. Volume variations for the bowels exceeded 300 cm³. The AI completes the contouring process in approximately 2 min, whereas human experts take up to 1 h to create the structures for a given region. Conclusion The workflow can be highly automated and standardised, resulting in significant time savings. A consistent level of quality can be maintained, regardless of RTT experience. The results are comparable to those reported by Doolan et al.
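The contour comparison metrics mentioned here, the Dice coefficient and the contour center-of-mass shift, can be computed from binary masks roughly as follows; the toy masks are placeholders, not actual structure sets.

```python
# Minimal sketch: compare an AI-generated and a human-generated binary contour
# mask using the Dice coefficient and the shift of the contour center of mass.
import numpy as np
from scipy.ndimage import center_of_mass

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 3D masks standing in for organ segmentations on a CT volume.
ai_mask = np.zeros((40, 64, 64), dtype=np.uint8)
human_mask = np.zeros_like(ai_mask)
ai_mask[10:30, 20:40, 20:40] = 1
human_mask[12:30, 22:42, 20:40] = 1

print("Dice coefficient:", round(dice(ai_mask, human_mask), 3))
shift = np.array(center_of_mass(ai_mask)) - np.array(center_of_mass(human_mask))
print("Center-of-mass shift (voxels):", np.round(shift, 2))
```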
... In recent years, artificial intelligence and machine learning (AI/ML) have garnered significant attention and have been extensively applied in the field of biomedical engineering, particularly in areas such as drug development, biomedical imaging, and protein structure prediction. This widespread adoption can largely be attributed to the availability of large-scale datasets from open repositories in these domains [22][23][24]. Traditional statistical approaches typically assume that data follow a normal distribution; however, genomic datasets often do not conform to this assumption, rendering conventional parametric analyses, such as t-tests and ANOVA, unsuitable. In genomic studies, extreme values (such as gene mutations or genes with excessively high expression levels) or other characteristics deviating from normality can distort the data distribution. ...
Article
Full-text available
Background: Keratoconus (KTCN) is a multifactorial disease characterized by progressive corneal degeneration. Recent studies suggest that a gene expression analysis of corneas may uncover potential novel biomarkers involved in corneal matrix remodeling. However, identifying reliable combinations of biomarkers that are linked to disease risk or progression remains a significant challenge. Objective: This study employed multiple machine learning algorithms to analyze the transcriptomes of keratoconus patients, identifying feature gene combinations and their functional associations, with the aim of enhancing the understanding of keratoconus pathogenesis. Methods: We analyzed the GSE77938 (PRJNA312169) dataset for differential gene expression (DGE) and performed gene set enrichment analysis (GSEA) using Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways to identify enriched pathways in keratoconus (KTCN) versus controls. Machine learning algorithms were then used to analyze the gene sets, with SHapley Additive exPlanations (SHAP) applied to assess the contribution of key feature genes in the model’s predictions. Selected feature genes were further analyzed through Gene Ontology (GO) enrichment to explore their roles in biological processes and cellular functions. Results: Machine learning models, including XGBoost, Random Forest, Logistic Regression, and SVM, identified a set of important feature genes associated with keratoconus, with 15 notable genes appearing across multiple models, such as IL1R1, JUN, CYBB, CXCR4, KRT13, KRT14, S100A8, S100A9, and others. The under-expressed genes in KTCN were involved in the mechanical resistance of the epidermis (KRT14, KRT15) and in inflammation pathways (S100A8/A9, IL1R1, CYBB, JUN, and CXCR4), as compared to controls. The GO analysis highlighted that the S100A8/A9 complex and its associated genes were primarily involved in biological processes related to the cytoskeleton organization, inflammation, and immune response. Furthermore, we expanded our analysis by incorporating additional datasets from PRJNA636666 and PRJNA1184491, thereby offering a broader representation of gene features and increasing the generalizability of our results across diverse cohorts. Conclusions: The differing gene sets identified by XGBoost and SVM may reflect distinct but complementary aspects of keratoconus pathophysiology. Meanwhile, XGBoost captured key immune and chemotactic regulators (e.g., IL1R1, CXCR4), suggesting upstream inflammatory signaling pathways. SVM highlighted structural and epithelial differentiation markers (e.g., KRT14, S100A8/A9), possibly reflecting downstream tissue remodeling and stress responses. Our findings provide a novel research platform for the evaluation of keratoconus using machine learning-based approaches, offering valuable insights into its pathogenesis and potential therapeutic targets.
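A rough sketch of the kind of tree-model-plus-SHAP workflow described above is given below; the expression matrix is synthetic, the gene names are reused from the abstract purely as labels, and the model settings are assumptions rather than the study's configuration.

```python
# Minimal sketch: train a gradient-boosted classifier on a toy expression matrix
# and rank genes by mean absolute SHAP value. Gene names and data are synthetic.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_samples, genes = 60, ["IL1R1", "CXCR4", "KRT14", "S100A8", "S100A9", "JUN"]
X = rng.normal(size=(n_samples, len(genes)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# For a binary XGBoost model, TreeExplainer returns one SHAP value per sample and feature.
shap_values = shap.TreeExplainer(model).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for gene, score in sorted(zip(genes, importance), key=lambda t: -t[1]):
    print(f"{gene}: {score:.3f}")
```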
... Conversely, artificial intelligence (AI) has a significant advantage in the extraction of complex image features and the capacity to automatically and quantitatively analyze any type of data. As a result, it can produce more reliable support data for medical decision-making and reach more objective, repeatable conclusions [4]. ...
Article
Full-text available
A revolution in medical diagnosis and treatment is being driven by the use of artificial intelligence (AI) in medical imaging. The diagnostic efficacy and accuracy of medical imaging are greatly enhanced by AI technologies, especially deep learning, that performs image recognition, feature extraction, and pattern analysis. Furthermore, AI has demonstrated significant promise in assessing the effects of treatments and forecasting the course of diseases. It also provides doctors with more advanced tools for managing the conditions of their patients. AI is poised to play a more significant role in medical imaging, especially in real‐time image processing and multimodal fusion. By integrating multiple forms of image data, multimodal fusion technology provides more comprehensive disease information, whereas real‐time image analysis can assist surgeons in making more precise decisions. By tailoring treatment regimens to each patient's unique needs, AI enhances both the effectiveness of treatment and the patient experience. Overall, AI in medical imaging promises a bright future, significantly enhancing diagnostic precision and therapeutic efficacy, and ultimately delivering higher‐quality medical care to patients.
... AI has demonstrated its potential to enhance diagnostic accuracy, optimize treatment strategies, and reduce human errors in clinical research across various diseases and organs [66][67][68]. In the field of medical-engineering integration, AI has contributed to advancements in device development, with the potential to significantly improve the performance and sensitivity of label-free optical systems. ...
Article
Full-text available
Intraoperative misidentification or vascular injury to the parathyroid glands can lead to hypoparathyroidism and hypocalcemia, resulting in serious postoperative complications. Therefore, functional localization of the parathyroid glands during thyroid (parathyroid) surgery is a key focus and challenge in thyroid surgery. The current clinical prospects of various optical imaging technologies for intraoperative localization, identification, and protection of parathyroid glands vary. However, "label-free optical imaging technology" is increasingly favored by surgeons due to its simplicity, efficiency, safety, real-time capability, and non-invasiveness. This manuscript focuses on the relatively well-researched near-infrared autofluorescence (NIRAF) and NIRAF-combined studies, including those integrating laser speckle imaging, artificial intelligence (AI) optimization, hardware integration, and optical path improvements. It also briefly introduces promising technologies, including Laser-Induced Fluorescence (LIF), Hyperspectral Imaging (HSI), Fluorescence Lifetime Imaging (FLIm), Laser-Induced Breakdown Spectroscopy (LIBS), Optical Coherence Tomography (OCT), and Dynamic Optical Contrast Imaging (DOCI). While these technologies are still in early stages with limited clinical application and standardization, current research highlights their potential for improving intraoperative parathyroid identification. Future studies should focus on refining these methods for broader clinical use.
... In recent years, artificial intelligence (AI) has achieved remarkable progress in the medical field, particularly in the field of radiology, generating considerable enthusiasm and anticipation amongst healthcare professionals and the general public [1]. Initially, there was apprehension that the emergence of Artificial Intelligence (AI) in radiology could potentially jeopardize the profession, culminating in a decline in the number of radiologists [2]. ...
Article
Full-text available
Background To investigate the perspectives and expectations of faculty radiologists, residents, and medical students regarding the integration of artificial intelligence (AI) in radiology education, a survey was conducted to collect their opinions and attitudes on implementing AI in radiology education. Methods An online questionnaire was used for this survey, and participant anonymity was ensured. In total, 41 faculty radiologists, 38 residents, and 120 medical students from the authors’ institution completed the questionnaire. Results Most residents and students experience different levels of psychological stress during the initial stage of clinical practice, and this stress mainly stems from tight schedules, heavy workloads, apprehensions about making mistakes in diagnostic report writing, as well as academic or employment pressures. Although most of the respondents were not familiar with how AI is applied in radiology education, a substantial proportion of them expressed eagerness and enthusiasm for the integration of AI into radiology education. Radiologists and residents in particular expressed a desire to utilize an AI-driven online platform for practicing radiology skills, including reading medical images and writing diagnostic reports, before engaging in clinical practice. Furthermore, faculty radiologists demonstrated strong enthusiasm for the notion that AI training platforms can enhance training efficiency and boost learners’ confidence. Notably, only approximately half of the residents and medical students shared the instructors’ optimism, with the remainder expressing neutrality or concern, emphasizing the need for robust AI feedback systems and user-centered designs. Moreover, the authors’ team has developed a preliminary framework for an AI-driven radiology education training platform, consisting of four key components: imaging case sets, intelligent interactive learning, self-quiz, and online exam. Conclusions The integration of AI technology in radiology education has the potential to revolutionize the field by providing innovative solutions for enhancing competency levels and optimizing learning outcomes.
... Recently, deep learning has demonstrated substantial potential in the domain of image analysis, with convolutional neural networks (CNNs) representing one prominent methodology [11]. Deep learning demonstrates high performance in tasks such as image segmentation and disease diagnosis [12,13]. ...
Article
Full-text available
Objective The objective of this study is to investigate the value of radiomics features and deep learning features based on positron emission tomography/computed tomography (PET/CT) in predicting perineural invasion (PNI) in rectal cancer. Methods We retrospectively collected 120 rectal cancer patients (56 PNI-positive, 64 PNI-negative) with preoperative ¹⁸F-FDG PET/CT examinations and randomly divided them into training and validation sets at a 7:3 ratio. We also collected 31 rectal cancer patients from two other hospitals as an independent external validation set. The χ² test and binary logistic regression were used to analyze PET metabolic parameters. PET/CT images were utilized to extract radiomics features and deep learning features. The Mann-Whitney U test and LASSO were employed to select valuable features. Metabolic parameter, radiomics, deep learning and combined models were constructed. ROC curves were generated to evaluate the performance of the models. Results The results indicate that metabolic tumor volume (MTV) is correlated with PNI (P = 0.001). In the training set and validation set, the AUC values of the metabolic parameter model were 0.673 (95%CI: 0.572–0.773) and 0.748 (95%CI: 0.599–0.896), respectively. We selected 16 radiomics features and 17 deep learning features as valuable factors for predicting PNI. The AUC values of the radiomics model and the deep learning model were 0.768 (95%CI: 0.667–0.868) and 0.860 (95%CI: 0.780–0.940) in the training set, and 0.803 (95%CI: 0.656–0.950) and 0.854 (95%CI: 0.721–0.987) in the validation set. Finally, the combined model exhibited AUCs of 0.893 (95%CI: 0.825–0.961) in the training set and 0.883 (95%CI: 0.775–0.990) in the validation set. In the external validation set, the combined model achieved an AUC of 0.829 (95%CI: 0.674–0.984), outperforming each individual model. The decision curve analysis of these models indicated that using the combined model to guide treatment provided a substantial net benefit. Conclusions This combined model established by integrating PET metabolic parameters, radiomics features, and deep learning features can accurately predict PNI in rectal cancer.
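A minimal sketch of LASSO-style feature selection followed by ROC evaluation, in the spirit of the radiomics modeling described above, is shown below; the feature matrix, the PNI labels, and the regularization strength are synthetic assumptions, not the study's data or settings.

```python
# Minimal sketch: LASSO-style feature selection (logistic regression with an L1
# penalty) on synthetic "radiomics" features, followed by ROC AUC evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 50))                     # 120 patients, 50 radiomics features
y = (X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=120) > 0).astype(int)  # toy PNI label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
model.fit(X_tr, y_tr)

coefs = model.named_steps["logisticregression"].coef_.ravel()
print("Features retained by the L1 penalty:", int(np.sum(coefs != 0)))
print("Validation AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```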
... [1,2] The role of AI in medicine is rapidly expanding. It is believed that AI will be used by every type of clinician, ranging from paramedics to specialty doctors, as it can assist healthcare providers in various ways, such as interpreting radiology imaging, [3][4][5] pathology slides, [6,7] retinal images, [8,9] skin lesions, [10,11] electrocardiograms, [12] and endoscopy. [13,14] The reasons for expanding the role of AI in medicine are related to increased data. ...
Article
Full-text available
BACKGROUND Deep learning’s role in blood film screening is expanding, with recent advancements including algorithms for the automated detection of sickle cell anemia, malaria, and leukemia using smartphone images. OBJECTIVES This study aims to build artificial intelligence (AI) models and assess their performance in classifying blood film cells as normal or abnormal. MATERIALS AND METHODS The dataset included 171,374 images from 961 patients, which were classified by experts. These images were resized, denoised, normalized, augmented, and classified into two categories, normal and abnormal cells. Two stain normalization techniques were used in this study: the Reinhard and Macenko techniques. The data were split into training and testing sets with a ratio of 8:2. The model was built through transfer learning by using the pretrained model Inception-ResNet v2 as a backbone. Three different fine-tuning techniques were tested in this study. The training was done using Python with the Keras library on Google Colab for 10 epochs. The model was tested for accurately classifying individual blood cells as normal or abnormal and evaluated using accuracy and the area under the receiver operating characteristic curve. RESULTS The counts of the three most common cell types were as follows: Segmented neutrophils: 29,424; erythroblasts: 27,395; and lymphocytes: 26,242. The Reinhard stain normalization had better accuracy than Macenko; the best AI model achieved the highest accuracy of 96.7% and an area under the curve (AUC) of 99.87%, while the second technique achieved an accuracy of 91.46% and an AUC of 97.23% in classifying normal from abnormal cells. CONCLUSION In conclusion, AI can effectively classify blood cells as either normal or abnormal, yielding accurate results in a time-effective manner, especially with the use of transfer learning of pretrained models and fine-tuning. In this study, Inception-ResNet v2 showed good accuracy in differentiating normal from abnormal cells.
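Reinhard-style stain normalization, as used in this study, can be sketched as matching per-channel mean and standard deviation between a source and a target image. The version below works in LAB space as a common simplification (the original Reinhard method uses the lαβ colour space), and the images are random stand-ins rather than blood-film photographs.

```python
# Minimal sketch of Reinhard-style stain normalization: match the per-channel
# mean and standard deviation of a source image to a target image in LAB space.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def reinhard_normalize(source_rgb, target_rgb):
    src, tgt = rgb2lab(source_rgb), rgb2lab(target_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mean) / s_std * t_std + t_mean
    return np.clip(lab2rgb(out), 0.0, 1.0)

rng = np.random.default_rng(1)
source = rng.random((128, 128, 3))   # stand-in for an unnormalized image, RGB in [0, 1]
target = rng.random((128, 128, 3))   # stand-in for a reference-stained image
normalized = reinhard_normalize(source, target)
print("Normalized image shape:", normalized.shape)
```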
... detection, prediction of patient outcomes, and even interpretation of complex medical data [11][12][13]. These models can be trained on vast amounts of data, allowing them to learn patterns and make predictions that might be difficult or timeconsuming for human clinicians [14]. ...
Article
Background: Cardiopulmonary exercise testing (CPET) is used in the evaluation of unexplained dyspnea. However, its interpretation requires expertise that is often not available. We aim to evaluate the utility of ChatGPT (GPT) in interpreting CPET results. Research design and methods: This cross-sectional study included 150 patients who underwent CPET. Two expert pulmonologists categorized the results as normal or abnormal (cardiovascular, pulmonary, or other exercise limitations), being the gold standard. GPT versions 3.5 (GPT-3.5) and 4 (GPT-4) analyzed the same data using pre-defined structured inputs. Results: GPT-3.5 correctly interpreted 67% of the cases. It achieved a sensitivity of 75% and specificity of 98% in identifying normal CPET results. GPT-3.5 had varying results for abnormal CPET tests, depending on the limiting etiology. In contrast, GPT-4 demonstrated improvements in interpreting abnormal tests, with sensitivities of 83% and 92% for respiratory and cardiovascular limitations, respectively. Combining the normal CPET interpretations by both AI models resulted in 91% sensitivity and 98% specificity. Low work rate and peak oxygen consumption were independent predictors for inaccurate interpretations. Conclusions: Both GPT-3.5 and GPT-4 succeeded in ruling out abnormal CPET results. This tool could be utilized to differentiate between normal and abnormal results.
... Artificial Intelligence in Computer Science aims to imitate human cognition, learning, and knowledge retention. The potential for new AI capabilities opens up exciting opportunities for more effective and efficient use of medical imaging (Gore, 2020). ...
Chapter
Biotechnology is in the vanguard of scientific advancement, consistently pushing the envelope and influencing the direction of environmental sustainability, agriculture, and healthcare, while genetic engineering, the modification of an organism's genetic makeup, creates a wealth of opportunities for advancing agricultural output, addressing environmental issues, and promoting human health. With the potential to completely change the way genetic illnesses are treated, this ground-breaking technique gives hope to patients who previously had few alternatives. Artificial intelligence (AI) is the replication of human intelligence in machines built to think and learn like people. Research on genetic engineering and gene therapy, as well as biotechnology, can greatly benefit from AI. In this chapter, the authors will examine the most recent developments in genetic engineering, biotechnology, and artificial intelligence (AI), as well as their possible uses, moral implications, and promising futures.
... Of all the applications that use AI in the field of Medicine, a large part of them focuses specifically on radiology [5,6], with the number of related publications having grown exponentially in recent years [7], especially those applied in emergency radiology [8][9][10]. ...
Article
Full-text available
Background/Objectives: The growing use of artificial intelligence (AI) in musculoskeletal radiographs presents significant potential to improve diagnostic accuracy and optimize clinical workflow. However, assessing its performance in clinical environments is essential for successful implementation. We hypothesized that our AI applied to urgent bone X-rays could detect fractures, joint dislocations, and effusion with high sensitivity (Sens) and specificity (Spec). The specific objectives of our study were as follows: 1. To determine the Sens and Spec rates of AI in detecting bone fractures, dislocations, and elbow joint effusion compared to the gold standard (GS). 2. To evaluate the concordance rate between AI and radiology residents (RR). 3. To compare the proportion of doubtful results identified by AI and the RR, and the rates confirmed by GS. Methods: We conducted an observational, double-blind, retrospective study on adult bone X-rays (BXRs) referred from the emergency department at our center between October and November 2022, with a final sample of 792 BXRs, categorized into three groups: large joints, small joints, and long-flat bones. Our AI system detects fractures, dislocations, and elbow effusions, providing results as positive, negative, or doubtful. We compared the diagnostic performance of AI and the RR against a senior radiologist (GS). Results: The study population’s median age was 48 years; 48.6% were male. Statistical analysis showed Sens = 90.6% and Spec = 98% for fracture detection by the RR, and 95.8% and 97.6% by AI. The RR achieved higher Sens (77.8%) and Spec (100%) for dislocation detection compared to AI. The Kappa coefficient between RR and AI was 0.797 for fractures in large joints, and concordance was considered acceptable for all other variables. We also analyzed doubtful cases and their confirmation by GS. Additionally, we analyzed findings not detected by AI, such as chronic fractures, arthropathy, focal lesions, and anatomical variants. Conclusions: This study assessed the impact of AI in a real-world clinical setting, comparing its performance with that of radiologists (both in training and senior). AI achieved high Sens, Spec, and AUC in bone fracture detection and showed strong concordance with the RR. In conclusion, AI has the potential to be a valuable screening tool, helping reduce missed diagnoses in clinical practice.
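Metrics like the sensitivity, specificity, and Cohen's kappa reported in this study can be computed from reader labels as sketched below; the gold-standard, AI, and resident label arrays are hypothetical examples, not study data.

```python
# Minimal sketch: compute sensitivity, specificity, and Cohen's kappa agreement
# between AI and resident readings against a gold standard.
# Labels below are hypothetical (1 = fracture present, 0 = absent).
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

gold     = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0])
ai_read  = np.array([1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1])
resident = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0])

def sens_spec(truth, pred):
    tn, fp, fn, tp = confusion_matrix(truth, pred, labels=[0, 1]).ravel()
    return tp / (tp + fn), tn / (tn + fp)

for name, pred in [("AI", ai_read), ("Resident", resident)]:
    sens, spec = sens_spec(gold, pred)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")

print("AI vs resident kappa:", round(cohen_kappa_score(ai_read, resident), 3))
```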
... Its application in the field of medical imaging refers to the capability of the computer in identifying, studying, and processing medical imaging data for accurate, efficient, standardized, and reliable interpretation [31]. The integration of AI in medical imaging has the potential to revolutionize the dimension of healthcare with better patient outcomes by improving disease detection, disease characterization, predicting prognosis, and customizing treatment through AI-driven precision medicine [32][33][34][35]. ...
Article
Full-text available
Pulmonary embolism (PE) is a clinically challenging diagnosis that varies from silent to life-threatening symptoms. Timely diagnosis of the condition is subject to clinical assessment, D-dimer testing and radiological imaging. Computed tomography pulmonary angiogram (CTPA) is considered the gold standard imaging modality, although some cases can be missed due to reader dependency, resulting in adverse patient outcomes. Hence, it is crucial to implement faster and more precise diagnostic strategies to help clinicians diagnose and treat PE patients promptly and mitigate morbidity and mortality. Machine learning (ML) and artificial intelligence (AI) are newly emerging tools in the medical field, including in radiological imaging, potentially improving diagnostic efficacy. Our review of the studies showed that computer-aided detection (CAD) and AI tools displayed similar or superior sensitivity and specificity in identifying PE on CTPA as compared to radiologists. Several tools demonstrated potential in identifying minor PE on radiological scans, showing promising ability to aid clinicians in reducing missed cases substantially. However, it is imperative to design sophisticated tools and conduct large clinical trials to integrate AI use in everyday clinical settings and establish guidelines for its ethical applicability. ML and AI can also potentially help physicians in formulating individualized management strategies to enhance patient outcomes.
... Chatbots have been frequently used in many different areas in recent years, such as diagnosis and imaging, treatment, patient follow-up and support, health promotion, customer service, sales, marketing, information and technical support [21][22][23][24][25][26]. Despite this, there are still many question marks on the responses given by chatbots, and researchers are evaluating the readability, understandability and accuracy of the responses given by chatbots [20,[27][28][29][30][31][32]. ...
Article
Full-text available
Objective: Chatbots have been frequently used in many different areas in recent years, such as diagnosis and imaging, treatment, patient follow-up and support, health promotion, customer service, sales, marketing, information and technical support. The aim of this study is to evaluate the readability, comprehensibility, and accuracy of the answers that artificial intelligence chatbots give to queries made by researchers in the field of health on biostatistics topics. Methods: A total of 10 questions on basic biostatistics topics frequently asked by researchers in the field of health were determined by 4 experts. The determined questions were addressed to the artificial intelligence chatbots by one of the experts and the answers were recorded. In this study, the free versions of the most widely preferred chatbots, ChatGPT-4, Gemini, and Copilot, were used. The recorded answers were independently evaluated as “Correct”, “Partially correct” and “Wrong” by three experts who were blinded to which chatbot the answers belonged to. These experts then examined the answers together and made the final evaluation by reaching a consensus on accuracy levels. The readability and understandability of the answers were evaluated with the Ateşman readability formula, Sönmez formula, Çetinkaya-Uzun readability formula and Bezirci-Yılmaz readability formulas. Results: According to the answers given to the questions addressed to the artificial intelligence chatbots, it was determined that the answers were at the “difficult” level according to the Ateşman readability formula, “insufficient reading level” according to the Çetinkaya-Uzun readability formula, and “academic level” according to the Bezirci-Yılmaz readability formula. On the other hand, the Sönmez formula gave the result of “the text is understandable” for all chatbots. It was determined that there was no statistically significant difference (p=0.819) in terms of accuracy rates of the answers given by the artificial intelligence chatbots to the questions. Conclusion: It was determined that although the chatbots tended to provide accurate information, the answers given were not readable or understandable, and their accuracy levels were not high.
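For illustration, the Ateşman readability formula mentioned above is commonly cited as score = 198.825 − 40.175 × (average syllables per word) − 2.610 × (average words per sentence). The sketch below applies that formula with a simple vowel-count syllable heuristic; the constants (as commonly cited), the heuristic, and the sample sentences are assumptions for illustration, not material from this study.

```python
# Minimal sketch of the Ateşman readability formula (as commonly cited for Turkish):
# score = 198.825 - 40.175*(syllables/word) - 2.610*(words/sentence).
import re

TURKISH_VOWELS = set("aeıioöuüAEIİOÖUÜ")

def atesman_score(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    # Crude syllable count: one syllable per vowel, minimum one per word.
    syllables = sum(sum(ch in TURKISH_VOWELS for ch in w) or 1 for w in words)
    x1 = syllables / len(words)        # average syllables per word
    x2 = len(words) / len(sentences)   # average words per sentence
    return 198.825 - 40.175 * x1 - 2.610 * x2

sample = "Ortalama bir cümle budur. Okunabilirlik puanı böyle hesaplanır."
print("Ateşman readability score:", round(atesman_score(sample), 1))
```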
... In the past, artificial intelligence (AI) was defined simply as the science and engineering of making intelligent machines (10,11). However, with the development of science and technology, artificial intelligence has gradually become an umbrella term for machine learning and deep learning (12)(13)(14). Owing to the continuous innovation of deep learning and machine learning technologies, artificial intelligence has made significant progress and has been applied in ophthalmology, cancer diagnosis, drug synthesis, molecular targeting, genomic medicine, proteomics medicine and other fields, especially in image recognition and image diagnosis, where it has achieved mature clinical applications (15)(16)(17). The application of artificial intelligence in the field of ophthalmology is very broad, including the diagnosis and screening of a variety of eye diseases (18). ...
Article
Full-text available
Glaucoma is a pathologically irreversible eye illness in the realm of ophthalmic diseases. Because it is difficult to detect concealed and non-obvious progressive changes, the clinical diagnosis and treatment of glaucoma are extremely challenging. At the same time, screening and monitoring for glaucoma disease progression are crucial. Artificial intelligence technology has advanced rapidly in all fields, particularly medicine, thanks to ongoing in-depth study and algorithm extension. Simultaneously, research and applications of machine learning and deep learning in the field of glaucoma are evolving fast. Artificial intelligence, with its numerous advantages, will raise the accuracy and efficiency of glaucoma screening and diagnosis to new heights, as well as significantly cut the cost of diagnosis and treatment for the majority of patients. This review summarizes the relevant applications of artificial intelligence in the screening and diagnosis of glaucoma, reflects on the limitations and difficulties of its current application in the field of glaucoma, and presents promising prospects and expectations for the application of artificial intelligence in other eye diseases such as glaucoma.
... As such, the objective of our study was to develop a foundation model for neonatal imaging to interpret radiographs to identify common pathologies and findings relevant to neonatal intensive care. ...
Preprint
Full-text available
Importance: Artificial intelligence (AI) based on deep learning has shown promise in adult and pediatric populations in the interpretation of medical imaging to make important diagnostic and management recommendations. However, there has been little work developing new AI methods for neonatal populations. Objective: To develop a novel, deep contrastive learning model to predict a comprehensive set of pathologies from radiographs relevant to neonatal intensive care. Design, Setting, and Participants: We identified a retrospective cohort of infants who obtained a radiograph while admitted to a large neonatal intensive care unit in Boston, MA from January 2008 to December 2023. After collecting radiographs with corresponding reports and relevant demographics for all subjects, we randomized the cohort into three sets: training (80%), validation (10%), and test (10%). Interventions: We developed a deep learning model, NeoCLIP, to identify 15 unique pathologies and 5 medical devices relevant to neonatal intensive care on plain film radiographs. The pathologies were automatically extracted from radiology reports using a custom pipeline based on large language models. Main Outcomes and Measures: We compared the performance of our model, as defined by AUROC, against various baseline methods. Results: We identified 4,629 infants which were randomized into the training (3,731 infants), validation (419 infants), and test (479 infants) sets. In total, we collected 20,154 radiographs with a corresponding 15,795 reports. The AUROC of our model was greater than all baseline methods for every radiographic finding other than portal venous gas. The addition of demographics improved the AUROC of our model for all findings, but the difference was not statistically significant. Conclusions and Relevance: NeoCLIP successfully identified a broad set of pathologies and medical devices on neonatal radiographs, outperforming similar models developed for adult populations. This represents the first such application of advanced AI methodologies to interpret neonatal radiographs.
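The contrastive learning approach described above pairs images with their reports in a shared embedding space, in the style of CLIP. The sketch below shows the general idea with toy linear encoders and a symmetric cross-entropy loss; it is not the NeoCLIP architecture, and the feature dimensions and batch contents are assumptions.

```python
# Minimal sketch of a CLIP-style contrastive objective: image and report
# embeddings are projected to a shared space and matched with a symmetric
# cross-entropy loss over a batch of paired examples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyContrastiveModel(nn.Module):
    def __init__(self, image_dim=512, text_dim=768, shared_dim=256):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, shared_dim)
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.0))

    def forward(self, image_feats, text_feats):
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        logits = self.logit_scale.exp() * img @ txt.t()
        targets = torch.arange(img.size(0))           # matched pairs lie on the diagonal
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

model = ToyContrastiveModel()
image_feats = torch.randn(8, 512)   # stand-in for radiograph encoder outputs
text_feats = torch.randn(8, 768)    # stand-in for report encoder outputs
loss = model(image_feats, text_feats)
print("Contrastive loss:", float(loss))
```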
... Several recent studies have highlighted the role of AI in predicting cardiovascular events using ECG signal analysis [16][17][18][19]. Furthermore, the application of AI extends to medical imaging [20][21][22][23][24][25]. Technologies such as MRI, CT scans, and x-rays are significantly augmented by AI to improve image processing speeds, enhance resolution and clarity, and facilitate automatic and precise interpretation. ...
... Although DSLRs provide high-quality images, it is likely that smartphone photos will become more prevalent in the future, especially with the development of apps for diagnosing oral lesions. However, smartphone images can vary in quality due to differences in models, camera specifications, lighting conditions, and user handling, which could impact AI performance if not trained on a diverse dataset [4,[31][32][33]. ...
Article
Full-text available
Aim: Accurately identifying primary lesions in oral medicine, particularly elementary white lesions, is a significant challenge, especially for trainee dentists. This study aimed to develop and evaluate a deep learning (DL) model for the detection and classification of elementary white mucosal lesions (EWMLs) using clinical images. Materials and Methods: A dataset was created by collecting photographs of various oral lesions, including oral leukoplakia, OLP plaque-like and reticular forms, OLL, oral candidiasis, and hyperkeratotic lesions from the Unit of Oral Medicine. The SentiSight.AI (Neurotechnology Co.®, Vilnius, Lithuania) AI platform was used for image labeling and model training. The dataset comprised 221 photos, divided into training (n = 179) and validation (n = 42) sets. Results: The model achieved an overall precision of 77.2%, sensitivity of 76.0%, F1 score of 74.4%, and mAP of 82.3%. Specific classes, such as condyloma and papilloma, demonstrated high performance, while others like leucoplakia showed room for improvement. Conclusions: The DL model showed promising results in detecting and classifying EWMLs, with significant potential for educational tools and clinical applications. Expanding the dataset and incorporating diverse image sources are essential for improving model accuracy and generalizability.
... Through automated image analysis, particularly DL techniques, transplant histopathology, kidney tissue samples, and diagnoses can be rapidly analyzed via reproducible quantitative data, and specific features such as glomerular structure, fibrosis, tubular atrophy, and inflammation can be extracted. AI systems offer high reproducibility in diagnosing kidney diseases by minimizing intra-and inter-pathologist variability, ensuring consistent diagnostic criteria across different cases [ 80 ]. It can also facilitate the development of predictive scoring systems that estimate the risk of disease progression. ...
Article
Full-text available
Recent advancements in artificial intelligence (AI) have significantly impacted the diagnosis and treatment of kidney diseases, offering novel approaches for precise quantitative assessments of nephropathology. The collaboration between computer engineers, renal specialists, and nephropathologists has led to the development of AI-assisted technology, presenting promising avenues for renal pathology diagnoses, disease prediction, treatment effectiveness assessment, and outcome prediction. This review provides a comprehensive overview of AI applications in renal pathology, focusing on computer vision algorithms for kidney structure segmentation, specific pathological changes, diagnosis, treatment, and prognosis prediction based on images, along with the role of machine learning (ML) and deep learning (DL) in addressing global public health issues related to various nephrological conditions. Despite the transformative potential, the review acknowledges challenges such as data privacy, interpretability of AI models, the imperative need for trust in AI-driven recommendations for broad applicability, external validation, and improved clinical decision-making. Overall, the ongoing integration of AI technologies in nephrology paves the way for more precise diagnostics, personalized treatments, and improved patient care outcomes.
... Radiomics, which involves the extraction of quantitative information from medical imaging data, is an emerging field in the realm of medical imaging [15,16]. In recent years, artificial intelligence [17] has become more and more common, especially in the medical industry. ...
Article
Full-text available
Purpose Despite suffering from the same disease, each patient exhibits a distinct microbiological profile and variable reactivity to prescribed treatments. Most doctors typically use a standardized treatment approach for all patients suffering from a specific disease. Consequently, the challenge lies in the effectiveness of this standardized treatment and in adapting it to each individual patient. Personalized medicine is an emerging field in which doctors use diagnostic tests to identify the most effective medical treatments for each patient. Prognosis, disease monitoring, and treatment planning rely on manual, error-prone methods. Artificial intelligence (AI) uses predictive techniques capable of automating prognostic and monitoring processes, thus reducing the error rate associated with conventional methods. Methods This paper conducts an analysis of current literature, encompassing the period from January 2015 to 2023, based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Results In assessing 25 pertinent studies concerning predicting neoadjuvant treatment (NAT) response in breast cancer (BC) patients, the studies explored various imaging modalities (Magnetic Resonance Imaging, Ultrasound, etc.), evaluating results based on accuracy, sensitivity, and area under the curve. Additionally, the technologies employed, such as machine learning (ML), deep learning (DL), statistics, and hybrid models, were scrutinized. The presentation of datasets used for predicting complete pathological response (PCR) was also considered. Conclusion This paper seeks to unveil crucial insights into the application of AI techniques in personalized oncology, particularly in the monitoring and prediction of responses to NAT for BC patients. Finally, the authors suggest avenues for future research into AI-based monitoring systems.
... AI's integration into PET technology promises to revolutionize how we approach diagnostics and treatment planning, marking a significant leap forward in medical science. The application of AI can enhance diagnostic accuracy, improve efficiency in image processing, and support tailored patient care via sophisticated algorithmic analyses [13][14][15]. ...
Article
Full-text available
Background This study investigates the integration of Artificial Intelligence (AI) in compensating the lack of time-of-flight (TOF) of the GE Omni Legend PET/CT, which utilizes BGO scintillation crystals. Methods The current study evaluates the image quality of the GE Omni Legend PET/CT using a NEMA IQ phantom. It investigates the impact on imaging performance of various deep learning precision levels (low, medium, high) across different data acquisition durations. Quantitative analysis was performed using metrics such as contrast recovery coefficient (CRC), background variability (BV), and contrast-to-noise ratio (CNR). Additionally, patient images reconstructed with various deep learning precision levels are presented to illustrate the impact on image quality. Results The deep learning approach significantly reduced background variability, particularly for the smallest region of interest. We observed improvements in background variability of 11.8%, 17.2%, and 14.3% for low, medium, and high precision deep learning, respectively. The results also indicate a significant improvement in larger spheres when considering both background variability and contrast recovery coefficient. The high precision deep learning approach proved advantageous for short scans and exhibited potential in improving detectability of small lesions. The exemplary patient study shows that the noise was suppressed for all deep learning cases, but low precision deep learning also reduced the lesion contrast (about −30%), while high precision deep learning increased the contrast (about 10%). Conclusion This study conducted a thorough evaluation of deep learning algorithms in the GE Omni Legend PET/CT scanner, demonstrating that these methods enhance image quality, with notable improvements in CRC and CNR, thereby optimizing lesion detectability and offering opportunities to reduce image acquisition time.
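The image-quality metrics named above can be computed from region-of-interest (ROI) statistics of a phantom scan roughly as follows; the sphere-to-background ratio and the ROI values below are illustrative assumptions, not measured data, and the formulas follow the common NEMA-style definitions.

```python
# Minimal sketch: contrast recovery coefficient (CRC), background variability (BV),
# and contrast-to-noise ratio (CNR) from hypothetical NEMA IQ phantom ROI statistics.
import numpy as np

true_activity_ratio = 4.0                 # hot-sphere-to-background ratio set in the phantom
sphere_mean = 14.2                        # mean counts in the hot-sphere ROI
background_rois = np.array([4.1, 3.9, 4.0, 4.2, 3.8, 4.0])  # background ROI means
background_mean = background_rois.mean()
background_sd = background_rois.std(ddof=1)

crc = (sphere_mean / background_mean - 1.0) / (true_activity_ratio - 1.0) * 100.0
bv = background_sd / background_mean * 100.0
cnr = (sphere_mean - background_mean) / background_sd

print(f"CRC: {crc:.1f}%  BV: {bv:.1f}%  CNR: {cnr:.1f}")
```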
... Established AI tools are now available to help with diagnosis and therapy, a significant improvement over past medical care approaches. Disease risk prediction models, diagnostic imaging aids, and triage systems are all examples of such aids [87][88][89][90]. ChatGPT stands out from other AI assistants since it is able to execute all of these tasks and also provide medical advice to users. ...
Article
Full-text available
Artificial Intelligence and Natural Language Processing technology have demonstrated significant promise across several domains within the medical and healthcare sectors. One of the primary challenges in implementing ChatGPT in healthcare is the requirement for precise and up-to-date data. Because sensitive medical information is involved, concerns regarding privacy and security must be carefully addressed when using GPT in the healthcare sector. This paper outlines ChatGPT and its relevance to the healthcare industry. It discusses the important aspects of ChatGPT's workflow and highlights the usual features of ChatGPT specifically designed for the healthcare domain. The present review uses the ChatGPT model within the research domain to investigate disorders associated with the hepatic system. This review demonstrates the possible use of ChatGPT in supporting researchers and clinicians in analyzing and interpreting liver-related data, thereby improving disease diagnosis, prognosis, and patient care.
Article
Background The Ki-67 antigen, a marker of cell proliferation, serves as a biomarker for assessing tumor malignancy. However, measuring Ki-67 levels through immunohistochemistry is often challenging due to difficulties in specimen collection and individual health issues. Radiological analysis has emerged as a potential alternative for predicting Ki-67 levels, although its accuracy has been limited. This study aims to enhance the prediction of Ki-67 levels using chest X-rays by employing a refined approach that combines detailed, manually delineated radiological features with conventional imaging characteristics. Methods This study collected X-ray images and Ki-67 expression data from 109 patients diagnosed with Non-Small Cell Lung Cancer (NSCLC). Seven radiological features related to tumor progression were annotated on each image by clinical professionals. Tumor areas were delineated using Python, resulting in the generation of 5 types of data from these regions. Data integration facilitated the development of predictive models utilizing Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), and Deep Neural Networks (DNN), with feature selection processes applied. Results Using the RF, 8 predictive features were selected from the datasets, of which 7 exhibited a linear correlation with Ki-67 levels (Mantel-Haenszel test, P < .05). The model demonstrated robust performance metrics: Accuracy: 0.818, Precision: 0.823, Recall: 0.849, and F1 Score: 0.783. Conclusions This research underscores the effectiveness of integrating specific radiological features and manually delineated regions of interest (ROIs) with traditional imaging characteristics and machine learning techniques. This approach significantly enhances the predictive accuracy of chest X-rays for Ki-67 levels, offering a non-invasive method for Ki-67 estimation.
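To make the described workflow concrete, the sketch below reproduces its general shape on synthetic data: random-forest-driven feature selection over combined radiological features, followed by classification of high versus low Ki-67, scored with accuracy, precision, recall, and F1. Feature counts, thresholds, and labels are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch of an RF feature-selection + classification pipeline on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(42)
X = rng.normal(size=(109, 40))     # 109 patients x 40 candidate features (synthetic)
y = rng.integers(0, 2, size=109)   # 1 = high Ki-67 expression, 0 = low (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

# Keep the 8 most important features according to random-forest importances.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=300, random_state=42),
    threshold=-np.inf, max_features=8).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr_sel, y_tr)
pred = clf.predict(X_te_sel)

for name, fn in [("accuracy", accuracy_score), ("precision", precision_score),
                 ("recall", recall_score), ("f1", f1_score)]:
    print(name, round(fn(y_te, pred), 3))
```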
Article
Recent years have seen the rapid development of artificial intelligence (AI) technology, which is revolutionizing the healthcare industry, especially in the field of nursing, highlighting its great potential for application. Aside from assisting nurses in making more accurate decisions in complex clinical environments, AI also provides patients with more convenient remote care services. These trends highlight the indispensable value of AI in nursing. The current study comprehensively reviewed the literature on the application of AI in nursing environments, aiming to analyze the current status of AI technology in nursing practice and to provide a prospective outlook on its future development in the nursing field. Through this review, we hope to provide nursing practitioners and healthcare policy makers with valuable information to facilitate the further application of AI technologies in enhancing the quality and efficiency of nursing care.
Chapter
Blockchain and Artificial Intelligence (AI) together are revolutionizing healthcare by improving the safety, privacy, and security of electronic health records. Adoption of private blockchains enhances accessibility and efficiency while demonstrating the industry's dedication to data protection. In addition to addressing important sector issues, this convergence offers improved disease detection, response, and overall healthcare efficacy. Further research into cutting-edge AI features combined with blockchain promises to improve outcomes and shape how healthcare is delivered globally, while safeguarding data security, privacy, and innovation. In light of the above, this chapter looks at how blockchain technology and AI can be combined to solve current issues with healthcare data security and privacy. It offers insights into best practices and strategies for utilizing the revolutionary potential of blockchain and AI in healthcare by examining recent studies and emerging trends.
Article
Introduction: To realise the full potential of artificial intelligence (AI) systems in medical imaging, it is crucial to address challenges, such as cyberterrorism, to foster trust and acceptance. This study aimed to determine the principles that enhance trust in AI systems from the perspective of medical imaging professionals in Ghana. Methods: An anonymous, online, nationwide cross-sectional survey was conducted. The survey contained questions related to socio-demographic characteristics and AI trustworthiness principles, including “human agency and oversight”, “technical robustness and safety”, “data privacy, security and governance” and “transparency, fairness and accountability”. Results: A total of 370 respondents completed the survey. Among the respondents, 66.5 % (n = 246) were diagnostic radiographers. A considerable number of respondents (n = 121, 32.7 %) reported having little or no understanding of how medical imaging AI systems work. Overall, 54.9 % (n = 203) of the respondents agreed or strongly agreed that each of the four principles was important to enhance trust in medical imaging AI systems, with a composite mean score of 3.88 ± 0.45. Transparency, fairness and accountability had the highest rating (4.27 ± 0.58), whereas the mean score for human agency and oversight was 3.89 ± 0.53. Technical robustness and safety as well as data privacy, security and governance obtained mean scores of 3.79 ± 0.61 and 3.58 ± 0.65, respectively. Conclusion: Medical imaging professionals in Ghana agreed that human agency, technical robustness, data privacy and transparency are important principles for enhancing trust in AI systems; however, future initiatives, including educational interventions on medical imaging AI, are required to improve AI literacy among medical imaging professionals in Ghana. Implications for practice: The evidence presented should encourage organisations to design and deploy trustworthy medical imaging AI systems.
Article
In the last few years, the scientific community has seen increasing interest in the potential applications of artificial intelligence in medicine and healthcare. In this context, urology represents an area of rapid development, particularly in uro-oncology, where a wide range of applications has focused on prostate cancer diagnosis. Other urological branches are also starting to explore the potential advantages of AI in the diagnostic and therapeutic process, and functional urology and neurourology are among them. Although the experiences in this sense have been quite limited so far, some AI applications have already started to show potential benefits, especially for urodynamic and imaging interpretation, as well as for the development of AI-based predictive models for treatment response. A few experiences on the use of ChatGPT to answer questions on functional urology and neurourology topics have also been reported. Conversely, AI applications in functional urology surgery remain largely unexplored. This paper provides a critical overview of the current evidence on this topic, highlighting the potential benefits for the diagnostic workflow, therapeutic evaluation and surgical training, as well as the current limitations that need to be addressed to enable the integration of these tools into clinical practice in the future.
Article
Imaging disciplines, such as ophthalmology, offer a wide range of opportunities for the beneficial use of artificial intelligence (AI). The analysis of images and data by trained algorithms has the potential to facilitate diagnosis and patient care, and not just in ophthalmology. If AI brings about advances in clinical practice that benefit patients, this is ethically to be welcomed; however, respect for the self-determination of patients and data security must be guaranteed. Traceability and explainability of the algorithms would strengthen trust in automated decision-making and enable ultimate medical responsibility. It should be noted that algorithms are only as good and unbiased as the data used to train them. If the use of AI is likely to lead to a loss of skills on the part of doctors (deskilling), this must be counteracted, for example through improved training. Accompanying medical ethics research is necessary to identify those aspects of the use of AI that require regulation. In principle, care must be taken to ensure that AI serves people and adapts to their needs, not the other way round.
Chapter
Postmodern society in the early 21st century has been heavily impacted by the technological development of machine learning (ML), also known as artificial intelligence (AI). In the absence of the fundamental truth usually provided by science and the educational system, inherently disregarded and questioned by those who aspire to simple answers to the world's most complex problems, new and old conspiracy theories grow and engulf public discourse. In this chapter, we seek to demystify ML and AI and their current impact on higher education. In an area as sensitive as education, where critical awareness is paramount, the involvement of academics and educational experts in discussions about the applications and risks of AI is essential. Here, we discuss how AI can enable higher education experts to move beyond a cloning model based on knowledge transmission, working from Foucault's perspective, opening up possibilities of autonomy and freedom, to make a difference in the lives of students.
Article
Full-text available
Artificial intelligence (AI) integration in medical imaging is leading to more accurate, efficient, and personalized care. In this paper, I examine the progress of AI technologies, including the key inventions and techniques grouped under this category, that have advanced medical imaging toward more effective diagnosis. Yet these advancements still pose challenges such as data privacy, biased algorithms, and ethical concerns. This research outlines these challenges and argues that, for AI to be used effectively in healthcare, regulation and education are needed to ensure equitable and effective adoption. Through this analysis, we hope to gain insight into the power of AI to transform medical diagnostics in imaging and its implications for the future.
Article
Full-text available
The integration of artificial intelligence (AI) into medical education presents numerous opportunities for innovation and efficiency. However, it also introduces significant ethical concerns, including data privacy, bias in algorithms, informed consent, and the protection of student data. This paper explores these challenges and emphasizes the need for ethical oversight in AI-driven medical education. The absence of dedicated ethics committees for educational AI applications complicates the establishment of ethical guidelines, leading to gaps in regulation. The study highlights potential solutions, such as creating specialized ethics committees, improving transparency in AI algorithms, and training medical educators and students in ethical AI use. Addressing these ethical concerns will be essential to harnessing the benefits of AI while minimizing risks in medical education.
Article
Objectives Artificial intelligence (AI) represents an exciting and evolving technology that is increasingly being utilized across pain medicine. Large language models (LLMs) are one type of AI that has become particularly popular. Currently, there is a paucity of literature analyzing the impact that AI may have on trainee education. As such, we sought to assess the benefits and pitfalls that AI may have for pain medicine trainee education. Given the rapidly increasing popularity of LLMs, we particularly assessed how these LLMs may promote and hinder trainee education through a pilot quality improvement project. Materials and Methods A comprehensive search of the existing literature regarding AI within medicine was performed to identify its potential benefits and pitfalls within pain medicine. The pilot project was approved by the UPMC Quality Improvement Review Committee (#4547). Three of the most commonly utilized LLMs at the initiation of this pilot study – ChatGPT Plus, Google Bard, and Bing AI – were asked a series of multiple choice questions to evaluate their ability to assist in learner education within pain medicine. Results Potential benefits of AI within pain medicine trainee education include ease of use, imaging interpretation, procedural/surgical skills training, learner assessment, personalized learning experiences, the ability to summarize vast amounts of knowledge, and preparation for the future of pain medicine. Potential pitfalls include discrepancies between AI devices and associated cost differences, correlating radiographic findings to clinical significance, interpersonal/communication skills, educational disparities, bias/plagiarism/cheating concerns, lack of incorporation of private domain literature, and absence of training specifically for pain medicine education. Regarding the quality improvement project, ChatGPT Plus answered the highest percentage of all questions correctly (16/17). The lowest correctness scores by LLMs were in answering first-order questions, with Google Bard and Bing AI answering 4/9 and 3/9 first-order questions correctly, respectively. Qualitative evaluation of the LLM-provided explanations in answering second- and third-order questions revealed some reasoning inconsistencies (e.g., providing flawed information in selecting the correct answer). Conclusions AI represents a continually evolving and promising modality to assist trainees pursuing a career in pain medicine. Still, limitations currently exist that may hinder its independent use in this setting. Future research exploring how AI may overcome these challenges is thus required. Until then, AI should be utilized as a supplementary tool within pain medicine trainee education, and with caution.
Article
Fractures are one of the most common reasons for admission to the emergency department, affect individuals of all ages and regions worldwide, and can be misdiagnosed during radiologic examination. Accurate and timely diagnosis of fractures is crucial for patients, and artificial intelligence, which uses algorithms to imitate human intelligence to aid or enhance human performance, is a promising solution to address this issue. In the last few years, numerous commercially available algorithms have been developed to enhance radiology practice, and a large number of studies apply artificial intelligence to fracture detection. Recent contributions in the literature describe numerous advantages, showing that artificial intelligence performs better than doctors with less experience in interpreting musculoskeletal X-rays, and that assisting radiologists increases diagnostic accuracy and sensitivity, improves efficiency, and reduces interpretation time. Furthermore, algorithms perform better when they are trained with big data covering a wide range of fracture patterns and variants, and can provide standardized fracture identification across different radiologists thanks to structured reporting. In this review article, we discuss the use of artificial intelligence in fracture identification and its benefits and disadvantages. We also discuss its current potential impact on the field of radiology and radiomics.
Article
Full-text available
A novel nomogram model to predict the prognosis of hepatocellular carcinoma (HCC) treated with radiofrequency ablation and transarterial chemoembolization was recently published in the World Journal of Gastrointestinal Surgery. This model includes clinical and laboratory factors, but emerging imaging aspects, particularly from magnetic resonance imaging (MRI) and radiomics, could further enhance its predictive accuracy. Multiparametric MRI and deep learning radiomics models significantly improve prognostic predictions for the treatment of HCC. Incorporating advanced imaging features, such as peritumoral hypointensity and radiomics scores, alongside clinical factors, can refine prognostic models, aiding in personalized treatment and better predicting outcomes. This letter underscores the importance of integrating novel imaging techniques into prognostic tools to better manage and treat HCC.
Article
Full-text available
Background Artificial Intelligence (AI) is becoming integral to the health sector, particularly radiology, because it enhances diagnostic accuracy and optimizes patient care. This study aims to assess the awareness and acceptance of AI among radiology professionals in Saudi Arabia, identifying the educational and training needs to bridge knowledge gaps and enhance AI-related competencies. Methods This cross-sectional observational study surveyed radiology professionals across various hospitals in Saudi Arabia. Participants were recruited through multiple channels, including direct invitations, emails, social media, and professional societies. The survey comprised four sections: demographic details, perceptions of AI, knowledge about AI, and willingness to adopt AI in clinical practice. Results Out of 374 radiology professionals surveyed, 45.2% acknowledged AI’s significant impact on their field. Approximately 44% showed enthusiasm for AI adoption. However, 58.6% reported limited AI knowledge and inadequate training, with 43.6% identifying skill development and the complexity of AI educational programs as major barriers to implementation. Conclusion While radiology professionals in Saudi Arabia are generally positive about integrating AI into clinical practice, significant gaps in knowledge and training need to be addressed. Tailored educational programs are essential to fully leverage AI’s potential in improving medical imaging practices and patient care outcomes.
Article
Full-text available
In recent studies, neuroanatomical volume and shape asymmetries have been seen during the course of Alzheimer's Disease (AD) and could potentially be used as preclinical imaging biomarkers for the prediction of Mild Cognitive Impairment (MCI) and AD dementia. In this study, a deep learning framework utilizing Siamese neural networks trained on paired lateral inter-hemispheric regions is used to harness the discriminative power of whole-brain volumetric asymmetry. The method uses the MRICloud pipeline to yield low-dimensional volumetric features of pre-defined atlas brain structures, and a novel non-linear kernel trick to normalize these features to reduce batch effects across datasets and populations. By working with the low-dimensional features, Siamese networks were shown to yield comparable performance to studies that utilize whole-brain MR images, with the advantage of reduced complexity and computational time, while preserving the biological information density. Experimental results also show that Siamese networks perform better in certain metrics by explicitly encoding the asymmetry in brain volumes, compared to traditional prediction methods that do not use the asymmetry, on the ADNI and BIOCARD datasets.
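A minimal PyTorch sketch of the general idea follows: a shared encoder embeds the left- and right-hemisphere volumetric feature vectors, and the classifier operates on both embeddings together with their absolute difference, which makes the asymmetry explicit. Layer sizes, the 140-feature input, and the three-class target are assumptions, not the authors' exact architecture.

```python
# Hedged sketch of a Siamese network over paired hemispheric volumetric features.
import torch
import torch.nn as nn

class SiameseAsymmetryNet(nn.Module):
    def __init__(self, n_features=140, embed_dim=32, n_classes=3):
        super().__init__()
        # Shared encoder: identical weights applied to both hemispheres.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, embed_dim), nn.ReLU(),
        )
        # Classifier sees both embeddings and their absolute difference.
        self.classifier = nn.Linear(embed_dim * 3, n_classes)

    def forward(self, left, right):
        zl, zr = self.encoder(left), self.encoder(right)
        features = torch.cat([zl, zr, torch.abs(zl - zr)], dim=1)
        return self.classifier(features)

# Dummy forward pass: batch of 8 subjects, 140 regional volumes per hemisphere.
model = SiameseAsymmetryNet()
left = torch.randn(8, 140)
right = torch.randn(8, 140)
print(model(left, right).shape)   # torch.Size([8, 3])
```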
Article
Magnetic resonance (MR) images with both high resolutions and high signal-to-noise ratios (SNRs) are desired in many clinical and research applications. However, acquiring such images takes a long time, which is both costly and susceptible to motion artifacts. Acquiring MR images with good in-plane resolution and poor through-plane resolution is a common strategy that saves imaging time, preserves SNR, and provides one viewpoint with good resolution in two directions. Unfortunately, this strategy also creates orthogonal viewpoints that have poor resolution in one direction and, for 2D MR acquisition protocols, also creates aliasing artifacts. A deep learning approach called SMORE that carries out both anti-aliasing and super-resolution on these types of acquisitions using no external atlas or exemplars has been previously reported but not extensively validated. This paper reviews the SMORE algorithm and then demonstrates its performance in four applications, with the goal of demonstrating its potential for use in both research and clinical scenarios. It is first shown to improve the visualization of brain white matter lesions in FLAIR images acquired from multiple sclerosis patients. Then it is shown to improve the visualization of scarring in cardiac left ventricular remodeling after myocardial infarction. Third, its performance on multi-view images of the tongue is demonstrated, and finally it is shown to improve performance in the parcellation of the brain ventricular system. Both visual and selected quantitative metrics of resolution enhancement are demonstrated.
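The self-supervised trick at the heart of such methods can be illustrated briefly: paired training data are manufactured from the volume itself by degrading the high-resolution in-plane direction to mimic the poor through-plane resolution. The sketch below shows one simplified version of that pairing step; the blur model and scale factor are assumptions, and the published SMORE implementation differs in detail.

```python
# Simplified sketch of building self-supervised super-resolution training pairs:
# blur and downsample the high-resolution axis of the same volume so a network
# can learn low-res -> high-res, then apply it along the true low-resolution axis.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def make_training_pairs(volume, scale=4, axis=0, sigma=1.5):
    """Degrade the chosen high-resolution axis to build (low-res, high-res) pairs."""
    blurred = gaussian_filter1d(volume.astype(float), sigma=sigma, axis=axis)
    slicer = [slice(None)] * volume.ndim
    slicer[axis] = slice(None, None, scale)       # downsample along that axis
    low_res = blurred[tuple(slicer)]
    return low_res, volume                         # network learns low_res -> volume

# Example with a synthetic volume: 256 x 256 in-plane, 64 through-plane slices.
vol = np.random.rand(256, 256, 64)
lr, hr = make_training_pairs(vol, scale=4, axis=0)
print(lr.shape, hr.shape)                          # (64, 256, 64) (256, 256, 64)
```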
Article
Magnetic resonance (MR) image denoising is important in medical image analysis. Denoising methods based on deep learning have shown great promise and outperform conventional methods. However, deep-learning methods are limited by the number of training samples. In this article, using a small sample size, we applied a wider denoising neural network to MR images with Rician noise and trained several denoising models. The first model is specific to a certain noise level, while the other applies to a wide range of noise levels. We considered the noise range as one interval, two sub-intervals, three sub-intervals, or even more sub-intervals to train the corresponding models. Experimental results demonstrate that, for MR images, the proposed deep-learning models are efficient in terms of peak signal-to-noise ratio, structural similarity index, and normalized mutual information. In addition, for blind noise, the three sub-interval setting performs better than the other settings.
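Two ingredients of such experiments are easy to illustrate: simulating Rician noise on a magnitude image and scoring a result with PSNR and SSIM. The sketch below does both on synthetic data; the noise level and images are assumptions, and the trained denoising network itself is omitted.

```python
# Hedged sketch: Rician noise simulation and PSNR/SSIM scoring on synthetic data.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def add_rician_noise(image, sigma, rng=np.random.default_rng(0)):
    """Rician noise: magnitude of a complex signal with Gaussian noise on both channels."""
    real = image + rng.normal(0.0, sigma, image.shape)
    imag = rng.normal(0.0, sigma, image.shape)
    return np.sqrt(real**2 + imag**2)

clean = np.clip(np.random.rand(128, 128), 0, 1)   # stand-in for a clean MR slice
noisy = add_rician_noise(clean, sigma=0.05)

print("PSNR:", peak_signal_noise_ratio(clean, noisy, data_range=1.0))
print("SSIM:", structural_similarity(clean, noisy, data_range=1.0))
```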
Article
Diffusion magnetic resonance images typically suffer from spatial distortions due to susceptibility-induced off-resonance fields, which may affect the geometric fidelity of the reconstructed volume and cause mismatches with anatomical images. State-of-the-art susceptibility correction (for example, FSL's TOPUP algorithm) typically requires data acquired twice with reversed phase encoding directions, referred to as blip-up blip-down acquisitions, in order to estimate an undistorted volume. Unfortunately, not all imaging protocols include a blip-up blip-down acquisition, and these cannot take advantage of state-of-the-art susceptibility and motion correction capabilities. In this study, we aim to enable TOPUP-like processing with historical and/or limited diffusion imaging data that include only a structural image and a single-blip diffusion image. We utilize deep learning to synthesize an undistorted non-diffusion weighted image from the structural image, and use the non-distorted synthetic image as an anatomical target for distortion correction. We evaluate the efficacy of this approach (named Synb0-DisCo) and show that our distortion correction process results in better matching of the geometry of undistorted anatomical images, reduces variation in diffusion modeling, and is practically equivalent to having both blip-up and blip-down non-diffusion weighted images.
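The general strategy can be sketched as follows: the synthetic, undistorted b0 is stacked with the acquired b0 and passed to FSL's topup, with the synthetic volume assigned an effective readout time of zero so that it acts as the distortion-free target. In the sketch below the file names are assumptions, FSL must be installed and on the PATH, and the deep-learning synthesis step is not shown; it illustrates the pairing idea rather than the published pipeline.

```python
# Hedged sketch: pair an acquired (distorted) b0 with a synthetic (undistorted) b0
# and run FSL topup, treating the synthetic volume as the distortion-free target.
import subprocess

# Row format: phase-encode direction x y z, total readout time (s).
# Volume 1: acquired b0 (A->P encoding); volume 2: synthetic b0 with readout time 0.
with open("acqparams.txt", "w") as f:
    f.write("0 1 0 0.05\n")
    f.write("0 1 0 0.00\n")

# b0_pair.nii.gz is assumed to contain the acquired b0 followed by the synthetic b0.
subprocess.run([
    "topup",
    "--imain=b0_pair.nii.gz",
    "--datain=acqparams.txt",
    "--config=b02b0.cnf",
    "--out=topup_results",
    "--iout=b0_unwarped.nii.gz",
], check=True)
```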
Article
The complexity of modern multi-parametric MRI has increasingly challenged conventional interpretations of such images. Machine learning has emerged as a powerful approach to integrating diverse and complex imaging data into signatures of diagnostic and predictive value. It has also allowed us to progress from group comparisons to imaging biomarkers that offer value on an individual basis. We review several directions of research around this topic, emphasizing the use of machine learning in personalized predictions of clinical outcome, in breaking down broad umbrella diagnostic categories into more detailed and precise subtypes, and in non-invasively estimating cancer molecular characteristics. These methods and studies contribute to the field of precision medicine, by introducing more specific diagnostic and predictive biomarkers of clinical outcome, therefore pointing to better matching of treatments to patients.
Article
For quantitative neuroimaging studies using multi-echo gradient echo (mGRE) images, additional T1-weighted magnetization prepared rapid gradient echo (MPRAGE) images are often acquired to supplement the insufficient morphometric information of mGRE for tissue segmentation, which requires lengthened scan time and additional processing such as image registration. This study investigated the feasibility of generating synthetic MPRAGE images from mGRE images using a deep convolutional neural network. Tissue segmentation results derived from the synthetic MPRAGE showed good agreement with those from the actual MPRAGE (DSC = 0.882 ± 0.017). There was no statistically significant difference between the mean susceptibility values obtained with regions of interest from the synthetic and actual MPRAGEs, and there was a high correlation between the two measurements.
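The agreement metric quoted here, the Dice similarity coefficient (DSC), is straightforward to compute from two binary masks; a minimal sketch follows, with random masks standing in for segmentations derived from synthetic and actual MPRAGE images.

```python
# Minimal sketch of the Dice similarity coefficient between two binary masks.
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Random stand-ins for segmentations from synthetic vs. actual MPRAGE.
seg_from_synthetic = np.random.rand(64, 64, 64) > 0.5
seg_from_actual = np.random.rand(64, 64, 64) > 0.5
print("DSC:", round(dice_coefficient(seg_from_synthetic, seg_from_actual), 3))
```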
Article
This article describes a technique in which X-ray transmission readings are taken through the head at a multitude of angles: from these data, absorption values of the material contained within the head are calculated on a computer and presented as a series of pictures of slices of the cranium. The system is approximately 100 times more sensitive than conventional X-ray systems to such an extent that variations in soft tissues of nearly similar density can be displayed.
Article
The future of integrated electronics is the future of electronics itself. Integrated circuits will lead to such wonders as home computers, automatic controls for automobiles, and personal portable communications equipment. But the biggest potential lies in the production of large systems. In telephone communications, integrated circuits in digital filters will separate channels on multiplex equipment. Integrated circuits will also switch telephone circuits and perform data processing. In addition, the improved reliability made possible by integrated circuits will allow the construction of larger processing units. Machines similar to those in existence today will be built at lower costs and with faster turnaround.
Using deep Siamese neural networks for detection of brain asymmetries associated with Alzheimer's Disease and Mild Cognitive Impairment
  • Liu C.-F.
  • Padhy S.
  • Ramachandran S.
  • Wang V.X.
  • Efimov A.
  • Bernal A.
  • Shi L.
  • for the Alzheimer's Disease Neuroimaging Initiative