Research
PDF available

Abstract

We present a synopsis of publications focused on machine learning (ML) or artificial intelligence (AI) applications in healthcare for the year 2019. We appreciate the work of researchers and authors who have contributed significantly to the advancement of science in this area.
... Recently, a BERT (Bidirectional Encoder Representations from Transformers)-based language model was developed to assess the quality of AI models in medical literature 8 . We have attempted to evaluate peer-reviewed publications using BERT both quantitatively and qualitatively, using clinician-provided annotation of selected healthcare publications from 2019, 2020, and 2021 10,11,12 . We aimed to understand which areas of healthcare have the most mature models and what we can learn from them to advance AI in other healthcare areas. ...
Preprint
Full-text available
Background: An ever-increasing number of artificial intelligence (AI) models targeting healthcare applications are developed and published every day, but their use in real-world decision making is limited. Beyond a quantitative assessment, it is important to qualitatively evaluate the maturity of these publications, with additional detail on trends in the types of data used and the types of models developed across the healthcare spectrum. Methods: We assessed the maturity of selected peer-reviewed AI publications pertinent to healthcare for 2019 to 2021. Data collection was performed by PubMed search using "machine learning" OR "artificial intelligence" AND "healthcare", restricted to English-language human subject research, as of December 31 of each year. Publications from all three years were manually classified into 34 distinct medical specialties. We used the Bidirectional Encoder Representations from Transformers (BERT) neural network model to identify the maturity level of research publications based on their abstracts. We further classified each mature publication by healthcare specialty and by the geographical location of the article's senior author. Finally, we manually annotated specific details from mature publications, such as model type, data type, and disease type. Results: Of the 7062 publications relevant to AI in healthcare from 2019 to 2021, 385 were classified as mature. In 2019, 6.01 percent of publications were mature; 7.7 percent were mature in 2020, and 1.81 percent in 2021. Radiology had the most mature model publications across all specialties over the three years, followed by pathology in 2019, ophthalmology in 2020, and gastroenterology in 2021. Geographical pattern analysis revealed a non-uniform distribution. In 2019 and 2020, the United States ranked first with frequencies of 22 and 50, followed by China with 20 and 47. In 2021, China ranked first with 17 mature articles, followed by the United States with 11. Imaging-based data was the primary source, and deep learning was the most frequently used modeling technique in mature publications. Interpretation: Despite the growing number of publications of AI models in healthcare, only a few have been found to be mature, with a potentially positive impact on healthcare. Globally, there is an opportunity to leverage diverse datasets and models across the health spectrum to develop more mature models and related publications that can fully realize the potential of AI to transform healthcare.
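As a rough illustration of the abstract-classification step described above, the hedged sketch below scores abstracts with a BERT sequence classifier via the Hugging Face transformers library. The checkpoint name, the binary maturity labels, and the example abstracts are assumptions for illustration; this is not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): scoring abstract "maturity"
# with a BERT sequence classifier. Checkpoint and labels are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # a fine-tuned maturity classifier would be used in practice
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
model.eval()

abstracts = [
    "We developed and externally validated a deep learning model ...",
    "We describe a conceptual framework for AI adoption ...",
]

with torch.no_grad():
    batch = tokenizer(abstracts, padding=True, truncation=True,
                      max_length=512, return_tensors="pt")
    probs = torch.softmax(model(**batch).logits, dim=-1)

for text, p in zip(abstracts, probs[:, 1].tolist()):
    print(f"P(mature) = {p:.2f} | {text[:60]}")
```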
... However, in our study, the sinusoidal dilatation is most probably due to hepato-portal sclerosis that generates an intra-hepatic shunt. These pathological liver modifications are often described as the result of the toxic effects of various exogenous substances [23]. ...
Article
Full-text available
Organophosphates (OPs) remain widely available worldwide despite stricter regulatory measures, being used in agriculture, parks, and households and leading to daily low-dose exposure. The resulting systemic dysfunction appears partly due to acetylcholinesterase inhibition, with a primary toxic effect on the endocrine system but also on the liver and kidneys, which are responsible for metabolizing and eliminating these products. Prolonged OP exposure can be responsible for histopathological (HP) changes that can either evolve or worsen pre-existing conditions. We conducted an experimental study including six male Wistar rats divided into two groups (four rats in the study group and two in the control group). The subjects in the study group were administered chlorpyrifos 100 mg/kg (half the median lethal dose, LD50) at baseline and at 48 hours, under general anesthesia. Organs were harvested after one week. HP modifications were discovered in all kidney samples, with dystrophic changes and vacuolization of mesangial cells, dilation of renal tubules, and epithelial atrophy. Congestion of vascular structures also occurred. The liver samples showed severe alteration of both vessels and hepatocytes. Adrenal gland impairment was confirmed by an increase in vacuole number in all areas, while a decrease in colloid content was noted in the thyroid gland together with a modified foamy aspect. This study is the first to document the extent of organ injury induced by OP exposure, describing glomerular and tubular involvement in the kidneys, liver necrosis, and endocrine disturbances.
Article
Full-text available
In recent years, the successful completion of the Human Genome Project has made people realize that genetic, environmental, and lifestyle factors should be considered together when studying cancer, given the complexity and varied forms of the disease. The increasing availability and growth rate of 'big data' derived from various omics open a new window for the study and therapy of cancer. In this paper, we introduce the application of machine learning methods to handling cancer big data, including the use of artificial neural networks, support vector machines, ensemble learning, and naïve Bayes classifiers.
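As a hedged illustration of the classifier families surveyed above, the sketch below compares a neural network, an SVM, a random-forest ensemble, and a naïve Bayes classifier on scikit-learn's built-in breast cancer dataset. The dataset choice and hyperparameters are assumptions, not drawn from the paper.

```python
# Illustrative sketch only: comparing the classifier families mentioned above
# on a small public cancer dataset with 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "Ensemble (random forest)": RandomForestClassifier(n_estimators=200, random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale features before each classifier
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```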
Article
Full-text available
Computer-aided polyp detection in gastric gastroscopy has been the subject of research over the past few decades. However, despite significant advances, automatic polyp detection in real time is still an unsolved problem. In this paper, we report on a convolutional neural network (CNN) for polyp detection that is constructed on the Single Shot MultiBox Detector (SSD) architecture, which we call SSD for Gastric Polyps (SSD-GPNet). To take full advantage of the feature-map information in the feature pyramid and to acquire higher accuracy, we re-use information that is discarded by the Max-Pooling layers. In other words, we reuse the lost data from the pooling layers and concatenate that data as extra feature maps to contribute to classification and detection. Meanwhile, in the feature pyramid, we concatenate feature maps of the lower layers with feature maps deconvolved from upper layers to make the relationships between layers explicit and to effectively increase the number of channels. The results show that our enhanced SSD for gastric polyp detection can realize real-time polyp detection at 50 frames per second (FPS) and can improve the mean average precision (mAP) from 88.5% to 90.4%, with only a small loss in time performance. A further experiment shows that SSD-GPNet improves polyp detection recall by over 10% (p = 0.00053), especially for small polyps. This can help endoscopic physicians more easily find missed polyps and decrease the gastric polyp miss rate. It may be applicable in daily clinical practice to reduce the burden on physicians.
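The cross-layer fusion described above (deconvolving upper feature maps and concatenating them with lower ones) can be sketched as follows in PyTorch. The module, channel counts, and spatial sizes are illustrative assumptions, not the authors' SSD-GPNet implementation.

```python
# Minimal PyTorch sketch of the general fusion idea: an upper (coarser) feature
# map is deconvolved and concatenated with a lower (finer) one to enrich its channels.
import torch
import torch.nn as nn

class FuseUpper(nn.Module):
    def __init__(self, lower_ch, upper_ch, out_ch):
        super().__init__()
        # Transposed convolution doubles the spatial size of the upper map.
        self.deconv = nn.ConvTranspose2d(upper_ch, upper_ch, kernel_size=2, stride=2)
        self.conv = nn.Conv2d(lower_ch + upper_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, lower, upper):
        up = self.deconv(upper)                # match the lower map's resolution
        fused = torch.cat([lower, up], dim=1)  # explicit cross-layer concatenation
        return torch.relu(self.conv(fused))

lower = torch.randn(1, 256, 38, 38)   # e.g., a finer detection feature map
upper = torch.randn(1, 512, 19, 19)   # e.g., the next, coarser feature map
print(FuseUpper(256, 512, 256)(lower, upper).shape)  # torch.Size([1, 256, 38, 38])
```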
Article
Full-text available
Considering the limitations of traditional blood pressure (BP) measurement methods and existing non-invasive continuous BP measurement techniques, this study aims to establish systolic and diastolic BP estimation models based on machine learning, using pulse transit time and pulse waveform characteristics. In the process of model construction, the mean impact value method was introduced to investigate the impact of each feature on the models, and a genetic algorithm was introduced to implement parameter optimization. The experimental results showed that the proposed models could effectively describe the nonlinear relationship between the features and BP and had higher accuracy than the traditional methods, with errors of 3.27 ± 5.52 mmHg for systolic BP and 1.16 ± 1.97 mmHg for diastolic BP. Moreover, the estimation errors met the requirements of the Association for the Advancement of Medical Instrumentation and the British Hypertension Society criteria. In conclusion, this study is helpful in promoting the practical application of non-invasive continuous BP estimation models.
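The overall workflow can be sketched as below with synthetic data. Note that the paper's mean impact value analysis and genetic-algorithm tuning are approximated here by permutation importance and grid search, which are simpler stand-ins rather than the authors' methods, and the feature set and data are invented for illustration.

```python
# Hedged sketch: regress systolic BP on pulse transit time plus waveform features,
# tune hyperparameters by grid search, then inspect feature influence.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
ptt = rng.uniform(0.15, 0.35, n)                 # pulse transit time (s), synthetic
waveform = rng.normal(size=(n, 4))               # synthetic waveform features
X = np.column_stack([ptt, waveform])
sbp = 160 - 150 * ptt + waveform @ [2, -1, 0.5, 1] + rng.normal(0, 3, n)  # toy relation

X_tr, X_te, y_tr, y_te = train_test_split(X, sbp, random_state=0)
pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = GridSearchCV(pipe, {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1]}, cv=5)
search.fit(X_tr, y_tr)

err = search.predict(X_te) - y_te
print(f"SBP error: {err.mean():.2f} ± {err.std():.2f} mmHg")

imp = permutation_importance(search.best_estimator_, X_te, y_te,
                             n_repeats=10, random_state=0)
print("feature influence (permutation importance):", np.round(imp.importances_mean, 3))
```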
Article
Full-text available
Magnetic resonance imaging (MRI) offers the most detailed brain structure images available today; it can identify tiny lesions or cerebral cortical abnormalities. The primary purpose of the procedure is to confirm whether there is a structural variation that causes epilepsy, such as hippocampal sclerosis, focal cortical dysplasia, or cavernous hemangioma. Cerebrovascular disease, the second most common cause of death in the world and the fourth leading cause of death in Taiwan, most often presents as stroke. Among the most common causes are large-vessel atherosclerotic lesions, small-vessel lesions, and cardiac emboli. The purpose of this work is to establish a computer-aided diagnosis system for small-vessel lesions in brain MRI images, using convolutional neural networks and deep learning to analyze cerebral vascular occlusion. The detected blockages can help clinicians more quickly determine the probability and severity of stroke in patients. We analyzed MRI data from 50 patients, including 30 patients with stroke, 17 patients with occlusion but no stroke, and 3 patients with dementia. The system mainly helps doctors determine whether cerebral small-vessel lesions are present in brain MRI images and outputs the findings as labeled images. The annotations include the position coordinates of the small-vessel blockage, its extent, its area, and whether it may cause a stroke. Finally, all the MRI images of the patient are synthesized into a 3D display of the small blood vessels in the brain to assist the doctor in making a diagnosis or to provide an accurate lesion location for the patient.
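As a hedged sketch of the kind of CNN component such a system might use, the snippet below defines a small patch classifier that labels MRI patches as lesion or non-lesion. The architecture, patch size, and data are assumptions; the thesis' actual network is not reproduced.

```python
# Illustrative sketch only: a small CNN that classifies MRI patches into two classes.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)  # assumes 32x32 input patches

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))  # logits for lesion / no-lesion

patches = torch.randn(4, 1, 32, 32)    # synthetic single-channel MRI patches
print(PatchCNN()(patches).shape)       # torch.Size([4, 2])
```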
Article
Full-text available
Background: Various hypertension predictive models have been developed worldwide; however, there is no existing predictive model for hypertension among Chinese rural populations. Methods: This is a 6-year population-based prospective cohort in rural areas of China. Data were collected in 2007-2008 (baseline survey) and 2013-2014 (follow-up survey) from 8319 participants ranging in age from 35 to 74 years. Gender-specific hypertension predictive models were established based on multivariate Cox regression, an Artificial Neural Network (ANN), a Naive Bayes Classifier (NBC), and Classification and Regression Trees (CART) in the training set. External validation was conducted in the testing set. The established models were assessed for discrimination and calibration. Results: During the follow-up period, 432 men and 604 women developed hypertension in the training set. Assessment of the models in men suggested that the men's office-based model (M1) was better than the others. The C-index of the M1 model in the testing set was 0.771 (95% confidence interval (CI) = 0.750, 0.791), and the calibration χ2 was 6.3057 (P = 0.7090). In women, the women's office-based model (W1) and the ANN model were better than the other models assessed. The C-indexes for the W1 and ANN models in the testing set were 0.765 (95% CI = 0.746, 0.783) and 0.756 (95% CI = 0.737, 0.775), and the calibration χ2 values were 6.7832 (P = 0.1478) and 4.7447 (P = 0.3145), respectively. Conclusions: Not all machine-learning models performed better than the traditional Cox regression models. The W1 and ANN models for women and the M1 model for men have better predictive performance and could potentially be recommended for predicting hypertension risk among rural populations.
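The discrimination metric reported above, the C-index, can be computed directly as in the hedged sketch below. The data here are synthetic, tied ties in event time are ignored for brevity, and the paper's models and cohort are not reproduced.

```python
# Hedged sketch: Harrell's concordance index (C-index) for survival-style data.
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable pairs in which the higher-risk subject fails first."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # subject i must have an observed event to anchor a comparable pair
        for j in range(n):
            if time[j] > time[i]:          # j outlived i, so the pair is comparable
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5      # ties in predicted risk count as half
    return concordant / comparable

rng = np.random.default_rng(0)
time = rng.exponential(10, 200)            # synthetic follow-up times
event = rng.random(200) < 0.7              # synthetic event indicators (censoring otherwise)
risk = -time + rng.normal(0, 5, 200)       # synthetic risk loosely tied to failure time
print(f"C-index = {concordance_index(time, event, risk):.3f}")
```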
Article
Full-text available
Pathological classification through transmission electron microscopy (TEM) is essential for the diagnosis of certain nephropathies, and changes in the thickness of the glomerular basement membrane (GBM) and the presence of immune complex deposits in the GBM are often used as diagnostic criteria. Automatic segmentation of the GBM on TEM images by computerized technology can provide clinicians with clear information about glomerular ultrastructural lesions. The GBM region in a TEM image is not only complicated and changeable in shape but also has low contrast and a wide grayscale distribution. Consequently, extracting image features and obtaining excellent segmentation results are difficult. To address this problem, we introduce a random forest (RF)-based machine learning method, namely RF stacks (RFS), to realize automatic segmentation. Specifically, this work proposes a two-level integrated RFS that is more complex than a one-level integrated RF, to improve accuracy and generalization performance. The integration strategies include training integration and testing integration. Training integration derives a full-view RFS1 by simultaneously sampling several images of different grayscale ranges in the training phase. Testing integration derives a zoom-view RFS2 by separately sampling images of different grayscale ranges and integrating the results in the testing phase. Experimental results illustrate that the proposed RFS can automatically segment basement membranes of different morphologies and gray levels. Future studies on GBM thickness measurement and deposit identification will be based on this work.
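As a minimal sketch in the spirit of pixel-wise random-forest segmentation (a single RF on simple per-pixel features, not the authors' two-level RF stacks), the snippet below fits scikit-learn's RandomForestClassifier on a synthetic image; the features, image, and labels are assumptions for illustration, and the model is applied back to the same image purely to show the interface.

```python
# Illustrative sketch: per-pixel random-forest segmentation on a synthetic image.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))
mask = np.zeros((64, 64), dtype=int)
mask[20:40, 10:50] = 1                       # synthetic "membrane" region
image += 1.5 * mask                          # make the region slightly brighter

# Per-pixel features: raw intensity, local mean, local gradient magnitude.
features = np.stack([
    image,
    ndimage.uniform_filter(image, size=5),
    ndimage.gaussian_gradient_magnitude(image, sigma=1),
], axis=-1).reshape(-1, 3)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(features, mask.ravel())
pred = rf.predict(features).reshape(64, 64)
print("pixel accuracy:", (pred == mask).mean())
```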
Article
A Fully Convolutional Network (FCN)-based deep architecture called Dual Path U-Net (DPU-Net) is proposed for automatic segmentation of the lumen and media-adventitia in IntraVascular UltraSound (IVUS) frames, which is crucial for the diagnosis of many cardiovascular diseases and for facilitating 3D reconstruction of human arteries. One of the most prevalent problems in medical image analysis is the lack of training data. To overcome this limitation, we propose a twofold solution. First, we introduce a deep architecture that is able to learn from a small number of training images and still achieve a high degree of generalization. Second, we strengthen the proposed DPU-Net by having a real-time augmentor control the image augmentation process. Our real-time augmentor contains specially designed operations that simulate three types of IVUS artifacts and integrate them into the training images. We exhaustively assessed our twofold contribution on Balocco's standard publicly available IVUS 20 MHz and 40 MHz B-mode datasets, which contain 109 training images and 326 test images, and 19 training images and 59 test images, respectively. Models are trained from scratch with the training images provided and evaluated with two metrics commonly used in the IVUS segmentation literature, namely the Jaccard Measure (JM) and the Hausdorff Distance (HD). Experimental results show that DPU-Net achieves 0.87 JM, 0.82 mm HD and 0.86 JM, 1.07 mm HD over the 40 MHz dataset for segmenting the lumen and the media, respectively. Also, DPU-Net achieves 0.90 JM, 0.25 mm HD and 0.92 JM, 0.30 mm HD over the 20 MHz images for segmenting the lumen and the media, respectively. In addition, DPU-Net outperforms existing methods by 8–15% in terms of HD. DPU-Net also shows a strong generalization property when predicting images in the test sets that contain a significant number of major artifacts, such as bifurcations, shadows, and side branches, that are not common in the training set. Furthermore, DPU-Net segments each frame within 0.03 s on a single modern GPU (Nvidia GTX 1080). The proposed work leverages modern deep learning-based methods for segmentation of the lumen and media vessel walls in both 20 MHz and 40 MHz IVUS B-mode images and achieves state-of-the-art results without any manual intervention. The code is available online at https://github.com/Kulbear/IVUS-Ultrasonic.
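The two evaluation metrics quoted above, the Jaccard Measure and the Hausdorff Distance, can be computed for binary masks as in the hedged sketch below (the masks are synthetic and distances are in pixels rather than millimetres).

```python
# Hedged sketch: Jaccard Measure (JM) and Hausdorff Distance (HD) for binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def jaccard(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def hausdorff(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)   # pixel coordinates of each mask
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

pred = np.zeros((128, 128), bool); pred[30:90, 30:90] = True   # synthetic prediction
gt   = np.zeros((128, 128), bool); gt[35:95, 28:88] = True     # synthetic ground truth
print(f"JM = {jaccard(pred, gt):.3f}, HD = {hausdorff(pred, gt):.1f} px")
```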
Article
Manual image segmentation is a time-consuming task routinely performed in radiotherapy to identify each patient's targets and anatomical structures. The efficacy and safety of the radiotherapy plan require accurate segmentations, as these regions of interest are generally used to optimize and assess the quality of the plan. However, reports have shown that this process can be subject to significant inter- and intraobserver variability. Furthermore, the quality of the radiotherapy treatment, and of subsequent analyses (ie, radiomic, dosimetric), depends on the accuracy of these manual segmentations. Automatic segmentation (or auto-segmentation) of targets and normal tissues is, therefore, preferable, as it would address these challenges. Previously, auto-segmentation techniques have been clustered into 3 generations of algorithms, with multiatlas-based and hybrid techniques (third generation) considered the state of the art. More recently, however, the field of medical image segmentation has seen accelerated growth driven by advances in computer vision, particularly through the application of deep learning algorithms, suggesting we have entered the fourth generation of auto-segmentation algorithm development. In this paper, the authors review traditional (nondeep learning) algorithms particularly relevant for applications in radiotherapy. Concepts from deep learning are introduced, focusing on convolutional neural networks and fully convolutional networks, which are generally used for segmentation tasks. Furthermore, the authors provide a summary of deep learning auto-segmentation radiotherapy applications reported in the literature. Lastly, considerations for clinical deployment (commissioning and QA) of auto-segmentation software are provided.
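As a hedged illustration of the fully convolutional networks the review discusses for auto-segmentation, the sketch below maps an image to per-pixel class logits with a tiny encoder-decoder; the layer sizes and input are arbitrary assumptions, not a clinically validated model.

```python
# Illustrative sketch: a tiny fully convolutional encoder-decoder for segmentation.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))   # per-pixel logits, same H x W as input

ct_slice = torch.randn(1, 1, 128, 128)          # e.g., one image slice
print(TinyFCN()(ct_slice).shape)                # torch.Size([1, 2, 128, 128])
```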
Article
Commercially available artificial intelligence (AI) algorithms outside of health care have been shown to be susceptible to ethnic, gender, and social bias, which has important implications for the development of AI algorithms in health care and the radiologic sciences. To prevent the introduction of bias into health care AI, the physician community should work with developers and regulators to develop pathways to ensure that algorithms marketed for widespread clinical practice are safe, effective, and free of unintended bias. The ACR Data Science Institute has developed structured AI use cases with data elements that allow the development of standardized data sets for AI testing and training across multiple institutions, to promote the availability of diverse data for algorithm development. Additionally, the ACR Data Science Institute's validation and monitoring services, ACR Certify-AI and ACR Assess-AI, incorporate standards to mitigate algorithm bias and promote health equity. In addition to promoting diversity, the ACR should promote and advocate for payment models for AI that afford access to AI tools for all of our patients, regardless of socioeconomic status or the inherent resources of their health systems.
Article
This annotation briefly reviews the history of artificial intelligence and machine learning in health care and orthopaedics, and considers the role it will have in the future, particularly with reference to statistical analyses involving large datasets. Cite this article: Bone Joint J 2019;101-B:1476–1478