Article
To read the full-text of this research, you can request a copy directly from the authors.

Abstract

Purpose: The aim of this study was to evaluate the diagnostic performance of an artificial intelligence (AI) application in evaluating impacted third molar teeth on cone-beam computed tomography (CBCT) images. Material and methods: In total, 130 third molar teeth (65 patients) were included in this retrospective study. Impaction detection, impacted tooth numbers, root/canal numbers of the teeth, and the relationship with adjacent anatomical structures (inferior alveolar canal and maxillary sinus) were compared between the human observer and the AI application. Agreement between the human observer and the AI application, based on a deep-CNN system, was evaluated using kappa analysis. Results: In total, 112 teeth (86.2%) were detected as impacted by the AI. The number of roots was correctly determined in 99 teeth (78.6%) and the number of canals in 82 teeth (68.1%). There was good agreement in determining the relation of the inferior alveolar canal to the impacted mandibular third molars (kappa: 0.762), as well as in root number detection (kappa: 0.620). Similarly, there was excellent agreement on the relation of the impacted maxillary third molars to the maxillary sinus (kappa: 0.860). For maxillary molar canal number detection, moderate agreement was found between the human observer and the AI examinations (kappa: 0.424). Conclusions: The artificial intelligence (AI) application showed high accuracy in the detection of impacted third molar teeth and their relationship to anatomical structures.
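The kappa values quoted above measure chance-corrected agreement between the human observer and the AI over the same teeth. A minimal Python sketch of Cohen's kappa (the function name and toy ratings are illustrative, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items.

    po: observed agreement; pe: agreement expected by chance,
    from each rater's marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum((ca[label] / n) * (cb[label] / n) for label in set(ca) | set(cb))
    return (po - pe) / (1 - pe)
```

Values around 0.6-0.8 are conventionally read as "good" and above 0.8 as "excellent" agreement, matching the interpretation used in the abstract.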


... Zhang et al. (2018) predicted postoperative facial swelling following impacted mandibular third molar extraction using 15 patient-related factors [39]. Orhan et al. (2021) performed a segmentation task to detect third molar teeth using Cone-Beam Computed Tomography [40]. One hundred and twelve teeth were used. ...
... They used angulation of the third molar with respect to the second molar as a parameter but did not perform detection. Orhan et al. (2021) performed segmentation to detect third molar teeth with a precision value of 0.77 [40]. They used Cone-Beam Computed Tomography images and compared agreement between the human observer and AI application. ...
Article
Full-text available
Impacted third molar teeth are a common issue at all ages, possibly causing tooth decay, root resorption, and pain. This study aimed to develop a computer-assisted detection system based on deep convolutional neural networks for the detection of impacted third molar teeth using different architectures and to evaluate the potential usefulness and accuracy of the proposed solutions on panoramic radiographs. A total of 440 panoramic radiographs from 300 patients were randomly divided. As the two-stage technique, Faster R-CNN with ResNet50, AlexNet, or VGG16 as a backbone was used, along with the one-stage technique YOLOv3. As a detector, Faster R-CNN yielded a mAP@0.5 of 0.91 with the ResNet50 backbone, while VGG16 and AlexNet showed slightly lower performances: 0.87 and 0.86, respectively. The other detector, YOLOv3, provided the highest detection efficacy with a mAP@0.5 of 0.96. Recall and precision were 0.93 and 0.88, respectively, which supported its high performance. Considering the findings from the different architectures, the proposed one-stage detector YOLOv3 showed excellent performance for impacted mandibular third molar tooth detection on panoramic radiographs. These promising results showed that diagnostic tools based on state-of-the-art deep learning models are reliable and robust for clinical decision-making.
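Detector scores such as mAP@0.5, recall, and precision all rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A hedged Python sketch of that matching step (the box format and greedy strategy are assumptions for illustration; full mAP additionally sweeps confidence thresholds and averages precision over recall levels):

```python
def iou(box_a, box_b):
    """Intersection over union for boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall_at_50(preds, gts):
    """Greedy one-to-one matching of predictions to ground truth
    at the IoU >= 0.5 threshold used for mAP@0.5 scoring."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, 0.5
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    return tp / (tp + fp), tp / (tp + fn)
```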
... However, as aforementioned, panoramic radiography has limitations in describing anatomical structures as a two-dimensional (2D) imaging modality. Orhan et al. reported an AI application (Diagnocat, Inc.) based on CNN with high accuracy in detecting the M3 and determining the number of roots and their relation to adjacent anatomical structures [36]. However, the details of the classification of the M3-MC relation were not elaborated in their report. ...
... In recent years, deep learning has gained increasing attention and rapid development in dental imaging. Several studies reported the application of CNN in the evaluation of M3, showing promising results in tooth development staging [31,32], prediction of M3 eruption [33], and the detection and diagnosis of M3 [34][35][36]. Most of them were conducted on panoramic radiographs. ...
... For them, the use of CBCT would result in fewer coronectomy decisions. Orhan et al. reported a deep CNN-based AI application with high performance in detecting the M3 and determining the number of roots and their relation to adjacent anatomical structures in CBCT, in good agreement with the manual detection (kappa: 0.762) [36]. However, the details of the classification of the M3-MC relation were not elaborated. ...
Article
Full-text available
Objectives: The objective of our study was to develop and validate a deep learning approach based on convolutional neural networks (CNNs) for automatic detection of the mandibular third molar (M3) and the mandibular canal (MC) and evaluation of the relationship between them on CBCT. Materials and methods: A dataset of 254 CBCT scans with annotations by radiologists was used for the training, the validation, and the test. The proposed approach consisted of two modules: (1) detection and pixel-wise segmentation of M3 and MC based on U-Nets; (2) M3-MC relation classification based on ResNet-34. The performances were evaluated with the test set. The classification performance of our approach was compared with two residents in oral and maxillofacial radiology. Results: For segmentation performance, the M3 had a mean Dice similarity coefficient (mDSC) of 0.9730 and a mean intersection over union (mIoU) of 0.9606; the MC had a mDSC of 0.9248 and a mIoU of 0.9003. The classification models achieved a mean sensitivity of 90.2%, a mean specificity of 95.0%, and a mean accuracy of 93.3%, which was on par with the residents. Conclusions: Our approach based on CNNs demonstrated an encouraging performance for the automatic detection and evaluation of the M3 and MC on CBCT. Clinical relevance: An automated approach based on CNNs for detection and evaluation of M3 and MC on CBCT has been established, which can be utilized to improve diagnostic efficiency and facilitate the precision diagnosis and treatment of M3.
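The segmentation figures above (mDSC, mIoU) follow directly from pixel-wise overlap between the predicted and annotated masks. A small illustrative Python sketch (flat binary 0/1 masks are assumed for simplicity; real pipelines operate on image arrays):

```python
def dice_and_iou(pred, truth):
    """Dice similarity coefficient and intersection-over-union
    for two binary masks given as flat 0/1 sequences."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * inter / (p_sum + t_sum)       # 2|A∩B| / (|A| + |B|)
    iou = inter / (p_sum + t_sum - inter)    # |A∩B| / |A∪B|
    return dice, iou
```

Note that Dice is always at least as large as IoU for the same masks, which is why the mDSC values in the abstract exceed the corresponding mIoU values.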
... All seven retrospective studies involve a total of 1288 human CBCT scans. Five of the seven studies used convolutional neural network algorithms [37][38][39][40][41]; of the other two, one used statistical shape models [42] and the other tested a new automated method [43]. Despite the progress of AI within oral and maxillofacial radiology, the number of published studies testing AI algorithms for IAN/IANC detection on CBCT scans is relatively low; from 2016 until 22 August 2021, only seven studies had been published and identified. ...
... The U-Net-like algorithms implemented by the Diagnocat software (Diagnocat Inc, West Sacramento, USA) were tested by Orhan et al. [37] and Bayrakdar et al. [39], with sample sizes of 85 and 75 CBCT scans, respectively. In each study, one oral and maxillofacial radiologist was involved in performing the reference test. ...
... The sensitivity (90.2%) and specificity (95%) were reported only in the Liu et al. [38] study, while three studies [38,40,41] reported the accuracy without presenting the diagnostic odds. Kappa statistics and Kendall's coefficient were reported by Orhan et al. [37] (0.762) and Liu et al. [38] (0.901), respectively, to describe the level of agreement between the index and reference tests. Liu et al. [38] determined the reliability between the two investigators using weighted kappa (0.783), which indicated good results. ...
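The weighted kappa mentioned here differs from plain kappa in that disagreements between ordinal categories are penalized by their distance. A linearly weighted Python sketch (an illustration of the statistic, not the cited authors' implementation):

```python
from collections import Counter

def weighted_kappa(rater_a, rater_b, n_levels):
    """Linearly weighted kappa for two raters scoring ordinal levels
    0..n_levels-1; disagreements are penalized by |i - j|."""
    n = len(rater_a)
    # observed disagreement, weighted by ordinal distance
    obs = sum(abs(a - b) for a, b in zip(rater_a, rater_b)) / n
    # expected disagreement from the raters' marginal distributions
    ca, cb = Counter(rater_a), Counter(rater_b)
    exp = sum(abs(i - j) * (ca[i] / n) * (cb[j] / n)
              for i in range(n_levels) for j in range(n_levels))
    return 1 - obs / exp
```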
Article
Full-text available
This systematic review aims to identify the available semi-automatic and fully automatic algorithms for inferior alveolar canal localization and to present their diagnostic accuracy. Articles related to inferior alveolar nerve/canal localization using methods based on artificial intelligence (semi-automated and fully automated) were collected electronically from five different databases (PubMed, Medline, Web of Science, Cochrane, and Scopus). Two independent reviewers screened the titles and abstracts of the collected records, stored in EndNote X7, against the inclusion criteria. Afterward, the included articles were critically appraised to assess the quality of the studies using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Seven studies were included following deduplication and screening of the 990 initially collected articles against the exclusion criteria. In total, 1288 human cone-beam computed tomography (CBCT) scans were investigated for inferior alveolar canal localization using different algorithms, and the results were compared to those obtained from manual tracing executed by experts in the field. The reported values for the diagnostic accuracy of the algorithms were extracted. A wide range of testing measures was implemented in the analyzed studies, while some of the expected indexes were still missing in the results. Future studies should consider the new artificial intelligence guidelines to ensure proper methodology, reporting, results, and validation.
... For the AI-automated segmentation, the mandibular segment, teeth, dental sockets, and canal for the mandibular nerve were segmented with the Diagnocat online program (LLC "DIAGNOCAT", Moscow, Russia) (Figure 5). This solution then processed the CBCT scan of the mandible within five minutes and created the STL segments of the described anatomical structures with the use of a CNN [44,[49][50][51]. The STL segments were downloaded from this program and stored on a computer. ...
... The diagnostic performance of the AI application for evaluating the impacted third molar teeth in CBCT images was clinically evaluated. The AI application indicated high accuracy values in the recognition of impacted third molar teeth and their relationship to surrounding anatomical structures [50]. The effectiveness of AI in the identification of periapical pathosis on CBCT scans is impressive, as well. ...
Article
Full-text available
(1) Teeth, in humans, represent the most resilient tissues. However, exposure to concentrated acids might lead to their dissolving, thus making human identification difficult. Teeth often contain dental restorations from materials that are even more resilient to acid impact. This paper aims to introduce a novel method for the 3D reconstruction of dental patterns as a crucial step for the digital identification of dental records. (2) With a combination of modern methods, including micro-computed tomography, cone-beam computer tomography, and attenuated total reflection, in conjunction with Fourier transform infrared spectroscopy and artificial intelligence convolutional neural network algorithms, this paper presents a method for 3D-dental-pattern reconstruction, and human remains identification. Our research studies the morphology of teeth, bone, and dental materials (amalgam, composite, glass-ionomer cement) under different periods of exposure to 75% sulfuric acid. (3) Our results reveal a significant volume loss in bone, enamel, dentine, as well as glass-ionomer cement. The results also reveal a significant resistance by the composite and amalgam dental materials to the impact of sulfuric acid, thus serving as strong parts in the dental-pattern mosaic. This paper also probably introduces the first successful artificial intelligence application in automated-forensic-CBCT segmentation. (4) Interdisciplinary cooperation, utilizing the mentioned technologies, can solve the problem of human remains identification with a 3D reconstruction of dental patterns and their 2D projections over existing ante-mortem records.
... IAN damage and subsequent neurosensory disturbance is a rare but serious complication [2]. Panoramic radiography is routinely utilized for the initial assessment of possible IAN damage, with advantages including wide availability and low cost; however, it is prone to misinterpretation related to image quality [26]. CBCT scans can provide diagnostic information in different planes without overlapping anatomical structures [26]. ...
... Panoramic radiography is routinely utilized for the initial assessment of possible IAN damage, with advantages including wide availability and low cost; however, it is prone to misinterpretation related to image quality [26]. CBCT scans can provide diagnostic information in different planes without overlapping anatomical structures [26]. In the literature, it was reported that a preoperative CBCT scan did not decrease the risk of IAN damage in comparison to panoramic radiography [27]. ...
Article
Full-text available
Objectives: The aim of this retrospective study was to investigate the anatomical structure of the mandibular canal and the factors that increase the possibility of inferior alveolar nerve damage in the mandibular third molar region of a Turkish population. Material and Methods: Overall, 320 participants with 436 mandibular third molars were included from four different study centers. The following variables were measured on cone-beam computed tomography scans: type and depth of third molar impaction, position of the mandibular canal in relation to the third molars, morphology of the mandibular canal, cortication status of the mandibular canal, possible contact between the third molars and the mandibular canal, thickness and density of the superior, buccal, and lingual mandibular canal walls, and bucco-lingual and apico-coronal mandibular canal diameters. Results: Lingual mandibular canal wall density and thickness decreased significantly as the impaction depth of the mandibular third molar increased (P = 0.045 and P = 0.001, respectively). The highest buccal mandibular canal wall density and thickness were observed for the lingual position of the mandibular canal in relation to the mandibular third molar (P = 0.021 and P = 0.034, respectively). Mandibular canals with oval/round morphology had a higher apico-coronal diameter in comparison to tear-drop and dumbbell morphologies (P = 0.018). Additionally, mandibular canals with an observed cortication border and no contact with the mandibular third molar had denser and thicker lingual walls (P = 0.003 and P = 0.001, respectively). Conclusions: Buccal and lingual mandibular canal wall density, thickness, and mandibular canal diameter may be related to high-risk indicators of inferior alveolar nerve injury.
... The diagnostic performance of an AI application for evaluating impacted third molar teeth in CBCT images was clinically evaluated. The AI application showed high accuracy in the detection of impacted third molar teeth and their relationship to anatomical structures [51]. ...
... For AI-automated segmentation, the mandibular segment, teeth, dental sockets, and canal for the mandibular nerve were segmented with the Diagnocat online program (LLC "DIAGNOCAT", Moscow, Russia) (Figure 5). This solution processed the CBCT scan of the mandible within five minutes and created STL segments of the described anatomical structures with the use of a convolutional neural network [8,[50][51][52]. The STL segments were downloaded from this program and stored on a computer. ...
Preprint
Full-text available
(1) Human teeth are the most resilient tissues in the body. However, exposure to concentrated acids might lead to their obliteration, thus making human identification difficult. Teeth often contain dental restorations made from materials that are even more resilient to acid impact. This paper introduces a novel method for the 3D reconstruction of dental patterns as a crucial step for digital identification with dental records. (2) Combining modern methods of micro-computed tomography, cone-beam computed tomography, and attenuated total reflection in conjunction with Fourier-transform infrared spectroscopy and artificial intelligence convolutional neural network algorithms, the paper presents a way of 3D dental-pattern reconstruction and human remains identification. The research studies the morphology of teeth, bone, and dental materials (amalgam, composite, glass-ionomer cement) under different periods of exposure to 75% sulfuric acid. (3) The results reveal significant volume loss in bone, enamel, dentine, and glass-ionomer cement as well. The results also reveal significant resistance of the composite and amalgam dental materials to sulfuric acid impact, thus serving as strong parts in the dental-pattern mosaic. The paper also introduces probably the first successful artificial intelligence application in automated forensic CBCT segmentation. (4) Interdisciplinary cooperation utilizing the mentioned technologies can solve the problem of human remains identification with 3D reconstruction of dental patterns and their 2D projections over existing ante-mortem records.
... Some studies in dentistry applied deep learning to diagnose and make decisions [7][8][9]. Deep learning detects caries and periapical lesions based on periapical and panoramic images [10][11][12]. Applying deep learning to distinguish prosthetic restorations in PR could save more clinical time for clinicians to focus on treatment planning and prosthodontic operations. ...
Article
Full-text available
Aim. This study applied a CNN (convolutional neural network) algorithm to detect prosthetic restorations on panoramic radiographs and to automatically detect these restorations using deep learning systems. Materials and Methods. This study collected a total of 5126 panoramic radiographs of adult patients. During model training, .bmp, .jpeg, and .png files were required for the images, and .txt files containing five different types of information were required for the labels. Herein, 10% of the panoramic radiographs were used as a test dataset. Through labeling, 2988 crowns and 2969 bridges were annotated in the dataset. Results. The mAP and mAR values were obtained when the confidence threshold was set at 0.1. TP, FP, FN, precision, recall, and F1 score values were obtained when the confidence threshold was 0.25. The YOLOv4 model demonstrated that accurate results could be obtained quickly. Bridge results were found to be more successful than crown results. Conclusion. The detection of prosthetic restorations with artificial intelligence on panoramic radiography, which is widely preferred in clinical applications, provides convenience to physicians in terms of diagnosis and time management.
... Artificial intelligence (AI) is defined as the ability of a machine to perform complex tasks that mimic humans' cognitive functions, such as problem solving, recognition of objects and words, and decision making [13]. The objective here is to develop machines that can learn through data to solve problems. ...
Article
Full-text available
The present study aims to validate the diagnostic performance and evaluate the reliability of an artificial intelligence system based on the convolutional neural network method for the morphological classification of sella turcica in CBCT (cone-beam computed tomography) images. In this retrospective study, sella segmentation and classification models (CranioCatch, Eskisehir, Türkiye) were applied to sagittal slices of CBCT images, using PyTorch supported by U-Net and TensorFlow 1, and we implemented the GoogleNet Inception V3 algorithm. The AI models achieved successful results for sella turcica segmentation of CBCT images based on the deep learning models. The sensitivity, precision, and F-measure values were 1.0, 1.0, and 1.0, respectively, for segmentation of sella turcica in sagittal slices of CBCT images. The sensitivity, precision, accuracy, and F1-score were 1.0, 0.95, 0.98, and 0.84, respectively, for sella-turcica-flattened classification; 0.95, 0.83, 0.92, and 0.88, respectively, for sella-turcica-oval classification; 0.75, 0.94, 0.90, and 0.83, respectively, for sella-turcica-round classification. It is predicted that detecting anatomical landmarks with orthodontic importance, such as the sella point, with artificial intelligence algorithms will save time for orthodontists and facilitate diagnosis.
... 3D imaging is another workspace in terms of artificial intelligence for the same problem. Orhan et al. [17] designed a 3D study to evaluate the performance of an AI application in determining the impacted third molar teeth and the relationship with neighboring anatomical structures. Similar to the present study, their deep CNN algorithm was based on a U-Net-like architecture for the segmentation of the MCs on CBCT images. ...
Article
Full-text available
The study aimed to generate a fused deep learning algorithm that detects and classifies the relationship between the mandibular third molar and the mandibular canal on orthopantomographs. Radiographs (n = 1880) were randomly selected from the hospital archive. Two dentomaxillofacial radiologists annotated the data via MATLAB and classified them into four groups according to the overlap of the root of the mandibular third molar and the mandibular canal. Each radiograph was segmented using a U-Net-like architecture. The segmented images were classified by AlexNet. Accuracy, the weighted intersection over union score, the Dice coefficient, specificity, sensitivity, and area-under-curve metrics were used to quantify the performance of the models. Also, three dental practitioners were asked to classify the same test data; their success rate was assessed using the intraclass correlation coefficient. The segmentation network achieved a global accuracy of 0.99 and a weighted intersection over union score of 0.98; the average Dice score over all images was 0.91. The classification network achieved an accuracy of 0.80, per-class sensitivity of 0.74, 0.83, 0.86, 0.67, per-class specificity of 0.92, 0.95, 0.88, 0.96, and an AUC score of 0.85. The most successful dental practitioner achieved a success rate of 0.79. The fused segmentation and classification networks produced encouraging results. The final model achieved almost the same classification performance as dental practitioners. Better diagnostic accuracy of the combined artificial intelligence tools may help to improve the prediction of the risk factors, especially for recognizing such anatomical variations.
... Detect Net also showed perfect detection performance, with recall, precision, and F-measure values of 1.0. A study conducted by Orhan et al. [26] using Cone-Beam Computed Tomography (CBCT) assessed the diagnostic performance of an AI application to detect impacted third molar teeth. In this study, a total of 112 teeth (86.2%) were detected as impacted by the AI. ...
Article
Full-text available
Objectives: The goal of this study was to develop and evaluate the performance of a new deep-learning (DL) artificial intelligence (AI) model for diagnostic charting in panoramic radiography. Methods: One thousand eighty-four anonymous dental panoramic radiographs were labeled by two dento-maxillofacial radiologists for ten different dental situations: crown, pontic, root-canal treated tooth, implant, implant-supported crown, impacted tooth, residual root, filling, caries, and dental calculus. The AI model CranioCatch, developed in Eskişehir, Turkey and based on a deep CNN method, was proposed for evaluation. A Faster R-CNN Inception v2 (COCO) model implemented with the TensorFlow library was used for model development. AI model performance was assessed with sensitivity, precision, and F1 scores. Results: When the performance of the proposed AI model for detecting dental conditions in panoramic radiographs was evaluated, the best sensitivity values were obtained for the crown, implant, and impacted tooth, at 0.9674, 0.9615, and 0.9658, respectively. The worst sensitivity values were obtained for the pontic, caries, and dental calculus, at 0.7738, 0.3026, and 0.0934, respectively. The best precision values were obtained for the pontic, implant, and implant-supported crown, at 0.8783, 0.9259, and 0.8947, respectively. The worst precision values were obtained for the residual root, caries, and dental calculus, at 0.6764, 0.5096, and 0.1923, respectively. The most successful F1 scores were obtained for the implant, crown, and implant-supported crown, at 0.9433, 0.9122, and 0.8947, respectively. Conclusion: The proposed AI model has promising results at detecting dental conditions in panoramic radiographs, except for caries and dental calculus. Thanks to the improvement of AI models in all areas of dental radiology, we predict that they will help physicians in panoramic diagnosis and treatment planning, as well as in digital-based student education, especially during the pandemic period.
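The per-condition sensitivity, precision, and F1 figures reported above all derive from the same three detection counts. A short Python sketch of those formulas (the counts in the test are hypothetical, not taken from the study):

```python
def detection_metrics(tp, fp, fn):
    """Sensitivity (recall), precision, and F1 from detection counts,
    as reported per dental condition in charting studies."""
    sensitivity = tp / (tp + fn)          # share of real findings detected
    precision = tp / (tp + fp)            # share of detections that are real
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f1
```

The pattern in the abstract, i.e. low sensitivity but moderate precision for calculus and caries, corresponds to a high FN count with comparatively fewer FPs.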
... [31][32][33] Similarly, a deep learning application provided high accuracy in the detection of impacted third molars and their relationship with anatomical structures. 34,35 Ahn et al. 18 investigated the performance of a deep learning model to detect mesiodens on primary or mixed dentition panoramic radiographs and suggested that this method may help clinicians with insufficient clinical experience make more accurate and faster diagnoses. Ha et al. 19 proposed a model based on YOLOv3 to detect mesiodens on panoramic radiographs of primary, mixed and permanent dentition groups. ...
Article
Abstract Purpose: The aim of this study was to assess the performance of a deep learning system for permanent tooth germ detection on pediatric panoramic radiographs. Materials and Methods: In total, 4518 anonymized panoramic radiographs of children aged between 5 and 13 years were collected. YOLO V4, a CNN (convolutional neural network) based object detection model, was used to automatically detect permanent tooth germs. Panoramic images of children, processed in LabelImg, were trained and tested with the YOLO V4 algorithm. True positive, false positive, and false negative rates were calculated, and a confusion matrix was used to evaluate the performance of the model. Results: The YOLO V4 model, which detected permanent tooth germs on pediatric panoramic radiographs, provided an AP (average precision) value of 94.16% and an F1 value of 0.90, demonstrating the high performance of the model. YOLO V4 inference time was 90 ms on average. Conclusion: Detection of permanent tooth germs on pediatric panoramic X-rays using a deep learning-based approach may provide early diagnosis of tooth deficiency or supernumerary teeth and help dental practitioners to find more accurate treatment options while saving time and effort.
... [9][10][11] These models can be trained with clinical data sets and used for a variety of diagnostic tasks in dentistry. Considering the literature, quite a number of studies are available that assess the performance of AI algorithms on different problems in dentistry, such as tooth detection and numbering, caries and restoration detection, detection of periapical lesions and jaw pathologies, dental implant planning, impacted tooth detection, etc. [12][13][14][15][16][17][18] Moreover, AI-based automatic and semi-automatic systems, which can be an alternative to fully automatic systems with advantages such as faster and easier point identification, despite some disadvantages including loss of standardization, have great potential for developing tools that will provide significant benefits to assist orthodontists in providing standardized patient care and maximizing the chances of meeting treatment goals. Orthodontists can benefit from AI technology for better clinical decision-making. ...
Article
Objective: The aim of this study is to develop an artificial intelligence model to detect cephalometric landmarks automatically, enabling the automatic analysis of cephalometric radiographs, which have a very important place in dental practice and are used routinely in the diagnosis and treatment of dental and skeletal disorders. Methods: In this study, 1620 lateral cephalograms were obtained and 21 landmarks were included. The coordinates of all landmarks in the 1620 films were obtained to establish a labeled data set: 1360 were used as a training set, 140 as a validation set, and 180 as a testing set. A convolutional neural network-based artificial intelligence algorithm for automatic cephalometric landmark detection was developed. Mean radial error and success detection rate within the ranges of 2 mm, 2.5 mm, 3 mm, and 4 mm were used to evaluate the performance of the model. Results: The presented artificial intelligence system (CranioCatch, Eskişehir, Turkey) could detect 21 anatomic landmarks in a lateral cephalometric radiograph. The highest success detection rate scores at 2 mm, 2.5 mm, 3 mm, and 4 mm were obtained for the sella point, at 98.3, 99.4, 99.4, and 99.4, respectively. The mean radial error ± standard deviation of the sella point was 0.616 ± 0.43. The lowest success detection rate scores at 2 mm, 2.5 mm, 3 mm, and 4 mm were obtained for the gonion point, at 48.3, 62.8, 73.9, and 87.2, respectively. The mean radial error ± standard deviation of the gonion point was 8.304 ± 2.98. Conclusion: Although the success of automatic landmark detection using the developed artificial intelligence model was not insufficient for clinical use, artificial intelligence-based cephalometric analysis systems seem promising for cephalometric analysis, which provides a basis for diagnosis, treatment planning, and follow-up in clinical orthodontics practice.
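The mean radial error and success detection rate used to score such landmark models can be computed directly from predicted and reference coordinates. An illustrative Python sketch (the point pairs in the example are hypothetical; distances are assumed to be in mm):

```python
import math

def landmark_errors(pred_points, true_points, thresholds=(2.0, 2.5, 3.0, 4.0)):
    """Mean radial error (mm) and success detection rate (%) within each
    distance threshold, as used to evaluate cephalometric landmark models."""
    errs = [math.dist(p, t) for p, t in zip(pred_points, true_points)]
    mre = sum(errs) / len(errs)
    sdr = {th: 100 * sum(e <= th for e in errs) / len(errs) for th in thresholds}
    return mre, sdr
```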
... Currently, several programs on the market enable the diagnosis of dental diseases and automatic completion of medical documentation. One such program, Diagnocat, analyzes CBCT, panoramic, and periapical images for caries and rarefaction of bone tissue [22], determines the presence of impacted teeth [23], and measures the width and height of the alveolar process and locates the borders of the maxillary sinuses [24]. The program automatically creates a report on the presence of diseases for each tooth and compiles a dental formula. ...
Article
Aim. This review is devoted to the analysis of available online services and programs using artificial neural networks (ANNs) in dentistry, especially for cephalometric analysis. Materials and methods. We searched for scientific publications in the information and analytical databases PubMed, Google Scholar, and eLibrary using combinations of the following keywords: artificial intelligence, deep learning, computer vision, neural network, dentistry, orthodontics, cephalometry, cephalometric analysis. Of the 1612 articles analyzed, 23 publications were included in our review. Results. Deep machine learning based on ANNs has been successfully used in various branches of medicine as an analytical tool for processing various data. ANNs are especially successful at image recognition in radiology and histology. In dentistry, computer vision is used to diagnose diseases of the maxillofacial region and plan surgical treatment, including dental implantation, as well as for cephalometric analysis for the needs of orthodontists and maxillofacial surgeons. Conclusion. Currently, there are many programs and online services for cephalometric analysis; however, only 7 of them use ANNs for automatic landmarking and image analysis. Also, there are not enough data to evaluate their accuracy and convenience.
Article
Objectives The aim of this study is to propose an automatic caries detection and segmentation model based on convolutional neural network (CNN) algorithms for dental bitewing radiographs, using VGG-16 and U-Net architectures, and to evaluate the clinical performance of the model against a human observer. Methods A total of 621 anonymized bitewing radiographs were used to develop the artificial intelligence (AI) system (CranioCatch, Eskisehir, Turkey) for the detection and segmentation of caries lesions. The radiographs were obtained from the Radiology Archive of the Department of Oral and Maxillofacial Radiology of the Faculty of Dentistry of Ordu University. VGG-16 and U-Net models implemented with PyTorch were used for the detection and segmentation of caries lesions, respectively. Results The sensitivity, precision, and F-measure rates for caries detection and caries segmentation were 0.84 and 0.81; 0.84 and 0.86; and 0.84 and 0.84, respectively. Compared against 5 experienced observers on an external radiographic dataset, the AI models outperformed the assistant specialists. Conclusion CNN-based AI algorithms have the potential to detect and segment dental caries accurately and effectively in bitewing radiographs. AI algorithms based on deep learning methods have the potential to assist clinicians in routine clinical practice by quickly and reliably detecting tooth caries. The use of these algorithms in clinical practice can provide an important benefit to physicians as a clinical decision support system in dentistry.
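The sensitivity, precision, and F-measure reported above are all derived from true-positive, false-positive, and false-negative counts on the test set. A generic sketch; the counts below are illustrative, not the study's data:

```python
def detection_metrics(tp, fp, fn):
    """Sensitivity (recall), precision, and F-measure from detection counts."""
    sensitivity = tp / (tp + fn)   # share of real lesions that were found
    precision = tp / (tp + fp)     # share of detections that were real lesions
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f_measure

# Illustrative counts (not the study's): 84 correct detections,
# 16 false alarms, 16 missed lesions.
sens, prec, f1 = detection_metrics(tp=84, fp=16, fn=16)
# sens = 0.84, prec = 0.84, f1 = 0.84
```

The F-measure is the harmonic mean of precision and sensitivity, so it only reaches a high value when both are high at the same time.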
Article
Over the past few years, artificial intelligence (AI) technologies have been actively used in many areas of medicine, including dentistry. The aim of the study is to determine the diagnostic value of AI in the detection of caries and its complications from cone beam computed tomography (CBCT) data in comparison with clinical examination. Materials and methods. CBCT images of 15 patients with carious and periodontal lesions were analyzed by an experienced dentist, who also specializes in radiology, and by the Diagnocat AI software. The dentist also performed a visual examination of these patients. Results. Contact caries was most often detected using AI (n = 20), and occlusal caries during clinical examination (n = 10). The greatest number of periapical changes was also detected using AI (n = 22). The difference between the detection rates of pathological foci by AI and by the radiologist was statistically insignificant, which indicates the equivalence of these methods. X-ray image evaluation revealed more contact caries than clinical examination (14 vs. 7, p < 0.05), but clinical examination was superior in detecting occlusal caries (10 vs. 2, p < 0.03). Periodontal disease was more accurately diagnosed by X-ray (17 vs. 9, p < 0.05). The average time for evaluation of CBCT images by a radiologist was 21.54 ± 4.4 minutes, while the AI completed its report 4.6 ± 4.4 minutes after the CBCT upload was completed (p < 0.01). Conclusion. The use of AI technologies in the analysis of CBCT images can improve the accuracy of diagnosing caries and its complications to up to 98%, as well as significantly shorten the time to a diagnostic decision.
Article
Objectives To create and assess a deep learning model using segmentation and transfer learning methods to visualize the proximity of the mandibular canal to an impacted third molar on panoramic radiographs. Study design Panoramic radiographs containing the mandibular canal and impacted third molar were collected from two hospitals (Hospitals A and B). A total of 3200 areas were used for creating and evaluating learning models. A source model was created using the data from Hospital A, simulatively transferred to Hospital B, and trained using various amounts of data from Hospital B to create target models. The same data were then applied to the target models to calculate the Dice coefficient, Jaccard index, and sensitivity. Results The performance of target models trained using 200 or more datasets was equivalent to that of the source model tested using data obtained from the same hospital (Hospital A). Conclusions Sufficiently qualified models could delineate the mandibular canal in relation to an impacted third molar on panoramic radiographs using a segmentation technique. Transfer learning appears to be an effective method for creating such models using a relatively small number of datasets.
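The Dice coefficient and Jaccard index used above compare a predicted segmentation mask with the ground-truth mask as overlap ratios. A minimal sketch on flattened binary masks; the masks are illustrative, not the study's data:

```python
def dice_and_jaccard(pred_mask, truth_mask):
    """Dice coefficient and Jaccard index for two binary masks (flat lists of 0/1)."""
    intersection = sum(p and t for p, t in zip(pred_mask, truth_mask))
    pred_area = sum(pred_mask)
    truth_area = sum(truth_mask)
    union = pred_area + truth_area - intersection
    dice = 2 * intersection / (pred_area + truth_area)
    jaccard = intersection / union
    return dice, jaccard

# Tiny illustrative masks standing in for flattened canal segmentations.
pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
dice, jac = dice_and_jaccard(pred, truth)  # dice = 0.75, jaccard = 0.6
```

Dice is always at least as large as Jaccard for the same masks (Dice = 2J / (1 + J)), which is worth remembering when comparing scores across papers.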
Article
Objective: This study explored the feasibility of using deep learning for profiling of panoramic radiographs. Study design: Panoramic radiographs of 1000 patients were used. Patients were categorized using seven dental or physical characteristics: age, gender, mixed or permanent dentition, number of presenting teeth, impacted wisdom tooth status, implant status, and prosthetic treatment status. A Neural Network Console (Sony Network Communications Inc., Tokyo, Japan) deep learning system and the VGG-Net deep convolutional neural network were used for classification. Results: Dentition and prosthetic treatment status exhibited classification accuracies of 93.5% and 90.5%, respectively. Tooth number and implant status both exhibited 89.5% classification accuracy; impacted wisdom tooth status exhibited 69.0% classification accuracy. Age and gender exhibited classification accuracies of 56.0% and 75.5%, respectively. Conclusion: Our proposed preliminary profiling method may be useful for preliminary interpretation of panoramic images and preprocessing before the application of additional artificial intelligence techniques.
Book
Full-text available
INSAC WORLD HEALTH SCIENCES
Article
Introduction Deep learning methods have recently been applied to the processing of medical images, and they have shown promise in a variety of applications. This study aimed to develop a deep learning approach for identifying oral lichen planus lesions using photographic images. Material and Methods Anonymous retrospective photographic images of buccal mucosa with 65 healthy and 72 oral lichen planus lesions were identified using the CranioCatch program (CranioCatch, Eskişehir, Turkey). All images were re-checked and verified by Oral Medicine and Maxillofacial Radiology experts. This data set was divided into training (n = 51; n = 58), validation (n = 7; n = 7), and test (n = 7; n = 7) sets for healthy mucosa and mucosa with oral lichen planus lesions, respectively. In the study, an artificial intelligence model was developed using the Google Inception V3 architecture implemented with TensorFlow, a deep learning approach. Results The AI deep learning model classified all test images of both healthy and diseased mucosa with a 100% success rate. Conclusion AI offers a wide range of uses and applications in healthcare. Increased workload, increased job complexity, and probable physician fatigue may jeopardize diagnostic ability and outcomes. Artificial intelligence (AI) components in imaging equipment would lessen this burden and increase efficiency. They can also detect oral lesions and have access to more data than their human counterparts. Our preliminary findings show that deep learning has the potential to meet this significant challenge.
Article
In the last few years, artificial intelligence (AI) research has been rapidly developing and emerging in the field of dental and maxillofacial radiology. Dental radiography, which is commonly used in daily practices, provides an incredibly rich resource for AI development and attracted many researchers to develop its application for various purposes. This study reviewed the applicability of AI for dental radiography from the current studies. Online searches on PubMed and IEEE Xplore databases, up to December 2020, and subsequent manual searches were performed. Then, we categorized the application of AI according to similarity of the following purposes: diagnosis of dental caries, periapical pathologies, and periodontal bone loss; cyst and tumor classification; cephalometric analysis; screening of osteoporosis; tooth recognition and forensic odontology; dental implant system recognition; and image quality enhancement. Current development of AI methodology in each aforementioned application were subsequently discussed. Although most of the reviewed studies demonstrated a great potential of AI application for dental radiography, further development is still needed before implementation in clinical routine due to several challenges and limitations, such as lack of datasets size justification and unstandardized reporting format. Considering the current limitations and challenges, future AI research in dental radiography should follow standardized reporting formats in order to align the research designs and enhance the impact of AI development globally.
Article
Full-text available
Objectives: To analyze all artificial intelligence abstracts presented at the European Congress of Radiology (ECR) 2019 with regard to their topics and their adherence to the Standards for Reporting Diagnostic accuracy studies (STARD) checklist. Methods: A total of 184 abstracts were analyzed with regard to adherence to the STARD criteria for abstracts as well as the reported modality, body region, pathology, and use cases. Results: Major topics of artificial intelligence abstracts were classification tasks in the abdomen, chest, and brain with CT being the most commonly used modality. Out of the 10 STARD for abstract criteria analyzed in the present study, on average, 5.32 (SD = 1.38) were reported by the 184 abstracts. Specifically, the highest adherence with STARD for abstracts was found for general interpretation of results of abstracts (100.0%, 184 of 184), clear study objectives (99.5%, 183 of 184), and estimates of diagnostic accuracy (96.2%, 177 of 184). The lowest STARD adherence was found for eligibility criteria for participants (9.2%, 17 of 184), type of study series (13.6%, 25 of 184), and implications for practice (20.7%, 44 of 184). There was no significant difference in the number of reported STARD criteria between abstracts accepted for oral presentation (M = 5.35, SD = 1.31) and abstracts accepted for the electronic poster session (M = 5.39, SD = 1.45) (p = .86). Conclusions: The adherence with STARD for abstract was low, indicating that providing authors with the related checklist may increase the quality of abstracts.
Article
Full-text available
Accurate localisation of mandibular canals in lower jaws is important in dental implantology, in which the implant position and dimensions are currently determined manually from 3D CT images by medical experts to avoid damaging the mandibular nerve inside the canal. Here we present a deep learning system for automatic localisation of the mandibular canals by applying a fully convolutional neural network segmentation on a clinically diverse dataset of 637 cone beam CT volumes, with mandibular canals being coarsely annotated by radiologists, and using a dataset of 15 volumes with accurate voxel-level mandibular canal annotations for model evaluation. We show that our deep learning model, trained on the coarsely annotated volumes, localises the mandibular canals of the voxel-level annotated set highly accurately, with a mean curve distance and average symmetric surface distance of 0.56 mm and 0.45 mm, respectively. These unparalleled accurate results highlight that deep learning integrated into the dental implantology workflow could significantly reduce manual labour in mandibular canal annotations.
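The average symmetric surface distance (ASSD) quoted above averages, in both directions, each surface point's distance to the nearest point of the other surface. A minimal sketch on tiny point sets; real evaluations run over voxelised surfaces, and these coordinates are illustrative only:

```python
import math

def assd(points_a, points_b):
    """Average symmetric surface distance between two point sets (e.g. canal surfaces)."""
    def avg_min_dist(src, dst):
        # Mean distance from each point in src to its nearest neighbour in dst.
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    # Symmetrise: average the two directed mean distances.
    return (avg_min_dist(points_a, points_b) + avg_min_dist(points_b, points_a)) / 2

# Two illustrative surface samples in mm (not study data):
# surface b is shifted 0.5 mm from surface a.
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]
distance = assd(a, b)  # 0.5 mm
```

Symmetrising matters: a directed distance from prediction to ground truth alone would not penalise a prediction that misses part of the canal.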
Article
Full-text available
The practicability of deep learning techniques has been demonstrated by their successful implementation in varied fields, including diagnostic imaging for clinicians. In accordance with the increasing demands in the healthcare industry, techniques for automatic prediction and detection are being widely researched. Particularly in dentistry, for various reasons, automated mandibular canal detection has become highly desirable. The positioning of the inferior alveolar nerve (IAN), which is one of the major structures in the mandible, is crucial to prevent nerve injury during surgical procedures. However, automatic segmentation using Cone beam computed tomography (CBCT) poses certain difficulties, such as the complex appearance of the human skull, limited number of datasets, unclear edges, and noisy images. Using work-in-progress automation software, experiments were conducted with models based on 2D SegNet, 2D and 3D U-Nets as preliminary research for a dental segmentation automation tool. The 2D U-Net with adjacent images demonstrates higher global accuracy of 0.82 than naïve U-Net variants. The 2D SegNet showed the second highest global accuracy of 0.96, and the 3D U-Net showed the best global accuracy of 0.99. The automated canal detection system through deep learning will contribute significantly to efficient treatment planning and to reducing patients’ discomfort by a dentist. This study will be a preliminary report and an opportunity to explore the application of deep learning to other dental fields.
Article
Full-text available
Panoramic radiographs and computed tomography (CT) play a paramount role in the accurate diagnosis, treatment planning, and prognostic evaluation of various complex dental pathologies. The advent of cone-beam computed tomography (CBCT) has revolutionized the practice of dentistry, and this technique is now considered the gold standard for imaging the oral and maxillofacial area due to its numerous advantages, including reductions in exposure time, radiation dose, and cost in comparison to other imaging modalities. This review highlights the broad use of CBCT in the dentomaxillofacial region, and also focuses on future software advancements that can further optimize CBCT imaging.
Article
Full-text available
Introduction: We applied deep convolutional neural networks (CNNs) to detect apical lesions (ALs) on panoramic dental radiographs. Methods: Based on a synthesized data set of 2001 tooth segments from panoramic radiographs, a custom-made 7-layer deep neural network, parameterized by a total number of 4,299,651 weights, was trained and validated via 10 times repeated group shuffling. Hyperparameters were tuned using a grid search. Our reference test was the majority vote of 6 independent examiners who detected ALs on an ordinal scale (0, no AL; 1, widened periodontal ligament, uncertain AL; 2, clearly detectable lesion, certain AL). Metrics were the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive/negative predictive values. Subgroup analysis for tooth types was performed, and different margins of agreement of the reference test were applied (base case: 2; sensitivity analysis: 6). Results: The mean (standard deviation) tooth level prevalence of both uncertain and certain ALs was 0.16 (0.03) in the base case. The AUC of the CNN was 0.85 (0.04). Sensitivity and specificity were 0.65 (0.12) and 0.87 (0.04), respectively. The resulting positive predictive value was 0.49 (0.10), and the negative predictive value was 0.93 (0.03). In molars, sensitivity was significantly higher than in other tooth types, whereas specificity was lower. When only certain ALs were assessed, the AUC was 0.89 (0.04). Increasing the margin of agreement to 6 significantly increased the AUC to 0.95 (0.02), mainly because the sensitivity increased to 0.74 (0.19). Conclusions: A moderately deep CNN trained on a limited amount of image data showed satisfying discriminatory ability to detect ALs on panoramic radiographs.
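The link between the sensitivity/specificity pair and the predictive values above runs through disease prevalence. A sketch of the four quantities from a 2x2 confusion matrix; the counts are illustrative, chosen only to roughly reproduce the reported prevalence of 0.16, sensitivity of 0.65, and specificity of 0.87:

```python
def predictive_values(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)  # true-positive rate among diseased teeth
    specificity = tn / (tn + fp)  # true-negative rate among healthy teeth
    ppv = tp / (tp + fp)          # how often a positive call is correct
    npv = tn / (tn + fn)          # how often a negative call is correct
    return sensitivity, specificity, ppv, npv

# Illustrative: 1000 teeth at 16% prevalence -> 160 with a lesion, 840 without.
sens, spec, ppv, npv = predictive_values(tp=104, fp=109, tn=731, fn=56)
# sens = 0.65, spec ~0.87, ppv ~0.49, npv ~0.93
```

This makes the abstract's pattern concrete: at low prevalence, even a specificity near 0.87 produces enough false positives that the PPV drops to about 0.49, while the NPV stays high.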
Article
Full-text available
Purpose Artificial intelligence (AI), represented by deep learning, can be used for real-life problems and is applied across all sectors of society, including the medical and dental fields. The purpose of this study is to review articles about deep learning applied to the field of oral and maxillofacial radiology. Materials and Methods A systematic review was performed using the Pubmed, Scopus, and IEEE Xplore databases to identify articles using deep learning in the English literature. The variables extracted from 25 articles included network architecture, number of training data, evaluation results, pros and cons, study object, and imaging modality. Results A convolutional neural network (CNN) was used as the main network component. The number of published papers and the size of training datasets tended to increase, covering various fields of dentistry. Conclusion Dental public datasets need to be constructed, and data standardization is necessary for the clinical application of deep learning in the dental field.
Article
Full-text available
We propose using faster regions with convolutional neural network features (faster R-CNN) in the TensorFlow tool package to detect and number teeth in dental periapical films. To improve detection precisions, we propose three post-processing techniques to supplement the baseline faster R-CNN according to certain prior domain knowledge. First, a filtering algorithm is constructed to delete overlapping boxes detected by faster R-CNN associated with the same tooth. Next, a neural network model is implemented to detect missing teeth. Finally, a rule-base module based on a teeth numbering system is proposed to match labels of detected teeth boxes to modify detected results that violate certain intuitive rules. The intersection-over-union (IOU) value between detected and ground truth boxes are calculated to obtain precisions and recalls on a test dataset. Results demonstrate that both precisions and recalls exceed 90% and the mean value of the IOU between detected boxes and ground truths also reaches 91%. Moreover, three dentists are also invited to manually annotate the test dataset (independently), which are then compared to labels obtained by our proposed algorithms. The results indicate that machines already perform close to the level of a junior dentist.
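The first post-processing step described above, deleting overlapping boxes that Faster R-CNN produces for the same tooth, is essentially IoU-based non-maximum suppression. A minimal sketch; the boxes, scores, and 0.5 threshold are illustrative, not the paper's values:

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def filter_overlaps(boxes, scores, iou_threshold=0.5):
    """Keep only the highest-scoring box among detections covering the same tooth."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        # Keep a box only if it does not overlap any already-kept box too much.
        if all(box_iou(boxes[i], boxes[j]) < iou_threshold for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]

# Two boxes over the same tooth plus one over a distinct tooth.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 0, 30, 10)]
scores = [0.9, 0.8, 0.7]
filtered = filter_overlaps(boxes, scores)  # keeps the first and third boxes
```

The same IoU function doubles as the evaluation metric: comparing each kept box against its ground-truth box yields the per-tooth IoU values that the paper averages to 91%.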
Article
Full-text available
Objectives: Analysis of dental radiographs is an important part of the diagnostic process in daily clinical practice. Interpretation by an expert includes teeth detection and numbering. In this project, a novel solution based on convolutional neural networks (CNNs) is proposed that performs this task automatically for panoramic radiographs. Methods: A data set of 1352 randomly chosen panoramic radiographs of adults was used to train the system. The CNN-based architectures for both teeth detection and numbering tasks were analyzed. The teeth detection module processes the radiograph to define the boundaries of each tooth. It is based on the state-of-the-art Faster R-CNN architecture. The teeth numbering module classifies detected teeth images according to the FDI notation. It utilizes the classical VGG-16 CNN together with a heuristic algorithm to improve results according to the rules for the spatial arrangement of teeth. A separate testing set of 222 images was used to evaluate the performance of the system and to compare it to the expert level. Results: For the teeth detection task, the system achieves the following performance metrics: a sensitivity of 0.9941 and a precision of 0.9945. For teeth numbering, its sensitivity is 0.9800 and specificity is 0.9994. Experts detect teeth with a sensitivity of 0.9980 and a precision of 0.9998. Their sensitivity for tooth numbering is 0.9893 and specificity is 0.9997. The detailed error analysis showed that the developed software system makes errors caused by factors similar to those affecting experts. Conclusions: The performance of the proposed computer-aided diagnosis solution is comparable to the level of experts. Based on these findings, the method has the potential for practical application and further evaluation for automated dental radiograph analysis. Computer-aided teeth detection and numbering simplifies the process of filling out digital dental charts, and automation could help to save time and improve the completeness of electronic dental records.
Article
Full-text available
Machine learning (ML) is a form of artificial intelligence which is poised to transform the twenty-first century. Rapid, recent progress in its underlying architecture and algorithms and growth in the size of datasets have led to increasing computer competence across a range of fields. These include driving a vehicle, language translation, chatbots and beyond-human performance at complex board games such as Go. Here, we review the fundamentals and algorithms behind machine learning and highlight specific approaches to learning and optimisation. We then summarise the applications of ML to medicine. In particular, we showcase recent diagnostic performances, and caveats, in the fields of dermatology, radiology, pathology and general microscopy.
Article
Full-text available
Objectives Ameloblastomas and keratocystic odontogenic tumors (KCOTs) are important odontogenic tumors of the jaw. While their radiological findings are similar, the behaviors of these two types of tumors are different. Precise preoperative diagnosis of these tumors can help oral and maxillofacial surgeons plan appropriate treatment. In this study, we created a convolutional neural network (CNN) for the detection of ameloblastomas and KCOTs. Methods Five hundred digital panoramic images of ameloblastomas and KCOTs were retrospectively collected from a hospital information system, whose patient information could not be identified, and preprocessed by inverse logarithm and histogram equalization. To overcome the imbalance of data entry, we focused our study on 2 tumors with equal distributions of input data. We implemented a transfer learning strategy to overcome the problem of limited patient data. Transfer learning used a 16-layer CNN (VGG-16) of the large sample dataset and was refined with our secondary training dataset comprising 400 images. A separate test dataset comprising 100 images was evaluated to compare the performance of CNN with diagnosis results produced by oral and maxillofacial specialists. Results The sensitivity, specificity, accuracy, and diagnostic time were 81.8%, 83.3%, 83.0%, and 38 seconds, respectively, for the CNN. These values for the oral and maxillofacial specialist were 81.1%, 83.2%, 82.9%, and 23.1 minutes, respectively. Conclusions Ameloblastomas and KCOTs could be detected based on digital panoramic radiographic images using CNN with accuracy comparable to that of manual diagnosis by oral maxillofacial specialists. These results demonstrate that CNN may aid in screening for ameloblastomas and KCOTs in a substantially shorter time.
Article
Full-text available
Objectives: The purpose of this study was to assess the relationship of the maxillary third molars to the maxillary sinus using cone-beam computed tomography (CBCT) in a Turkish population. Materials and methods: A total of 300 right and 307 left maxillary third molars were examined using CBCT images obtained from 394 patients. Data including the age, gender, the angulation type, depth of the third molars, and horizontal and vertical positions of the maxillary sinus relative to the third molars were examined. Results: Among 394 patients, 215 (54.6%) were male and 179 (45.4%) were female. The most common angulation of impaction was vertical (80.2%). Based on the depth of the third molars in relation to the adjacent second molar, Class A was the most common. Regarding the relationships of the third molars with the maxillary sinus, vertical Type I (43.5%) and horizontal Type II (59.3%) were seen most frequently. There was a significant difference between the vertical and horizontal relationships (P < 0.05). Conclusions: Knowledge of the anatomical relationship between the maxillary sinus floor and maxillary third molar roots is important for removing a maxillary third molar. CBCT evaluation could be valuable when performing dental procedures involving the maxillary third molars.
Article
Full-text available
A tooth normally erupts when half to three-quarters of its final root length has developed. Tooth impaction is usually diagnosed well after this period and is generally asymptomatic. It is principally for this reason that patients seek treatment later than optimal. Tooth impaction is a common problem in daily orthodontic practice and, in most cases, it is recognized by chance in a routine dental examination. Therefore, it is very important that dental practitioners are aware of this condition, since early detection and intervention may help to prevent many harmful complications. The treatment of impacted teeth requires multidisciplinary cooperation between orthodontists, oral surgeons and sometimes periodontists. Orthodontic treatment and surgical exposure of impacted teeth are performed in order to bring the impacted tooth into the line of the arch. The treatment is long, more complicated and challenging. This article presents an overview of the prevalence, etiology, diagnosis, treatment and complications associated with the management of impacted teeth.
Article
Full-text available
Background: Radiographs, as an adjunct to clinical examination, are valuable complementary methods for dental caries detection. Recent progress in digital imaging systems has made it possible to design software for automatic dental caries detection. Objectives: The aim of this study was to develop and assess the function of diagnostic computer software designed for the evaluation of approximal caries in posterior teeth. This software should be able to indicate the depth and location of caries on digital radiographic images. Materials and methods: Digital radiographs were obtained of 93 teeth including 183 proximal surfaces. These images were used as a database for designing the software and training the software designer. In the design phase, considering the summed density of pixels in rows and columns of the images, the teeth were separated from each other and unnecessary regions, for example the root area in the alveolar bone, were eliminated. Therefore, based on summed intensities, each image was segmented such that each segment contained only one tooth. Subsequently, based on fuzzy logic, a well-known data-clustering algorithm named fuzzy c-means (FCM) was applied to the images to cluster or segment each tooth. This algorithm is referred to as a soft clustering method, which assigns data elements to one or more clusters with a specific membership function. Using the extracted clusters, the tooth border was determined and assessed for cavities. The results of histological analysis were used as the gold standard for comparison with the results obtained from the software. Depth of caries was measured, and finally the Intraclass Correlation Coefficient (ICC) and Bland-Altman plots were used to show the agreement between the methods. Results: The software diagnosed 60% of enamel caries. The ICC (for detection of enamel caries) between the computer software and histological analysis results was 0.609 (95% confidence interval [CI] = 0.159-0.849) (P = 0.006). The computer program diagnosed 97% of dentin caries, and the ICC between the software and histological analysis results for dentin caries was 0.937 (95% CI = 0.906-0.958) (P < 0.001). The Bland-Altman plot showed acceptable agreement for measuring the depth of caries in enamel and dentin. Conclusions: The designed software was able to detect a significant number of dentin caries and acceptably measure the depth of carious lesions in enamel and dentin. However, the software had limited ability to detect enamel lesions.
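Fuzzy c-means, the soft clustering step described above, assigns each sample a membership in [0, 1] to every cluster, with memberships summing to 1, by alternating the standard center and membership updates. A self-contained numpy sketch of the classic algorithm, not the paper's implementation; the 2-D points stand in for pixel feature vectors and are illustrative:

```python
import numpy as np

def fuzzy_c_means(data, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Basic fuzzy c-means: returns cluster centers and soft memberships."""
    x = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, len(x)))
    u /= u.sum(axis=0)                       # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m                          # fuzzified memberships
        centers = (um @ x) / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)                # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))        # standard membership update
        u = inv / inv.sum(axis=0)
    return centers, u

# Two well-separated illustrative clusters (e.g. two tissue intensity groups).
pts = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
centers, memberships = fuzzy_c_means(pts, n_clusters=2)
# Each column of `memberships` sums to 1; each point leans strongly
# toward the cluster whose center is nearest.
```

The fuzzifier m controls how soft the assignment is: as m approaches 1 the result tends toward hard k-means, while larger m blurs the memberships.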
Article
Full-text available
Objectives: The aim of this study is to evaluate the position of impacted third molars based on the classifications of Pell & Gregory and Winter in a sample of Iranian patients. Study design: In this retrospective study, 1020 orthopantomograms (OPGs) of patients referred to radiology clinics from October 2007 to January 2011 were evaluated. Data including age, gender, angulation type, and width and depth of impaction were evaluated by statistical tests. Results: Among the 1020 patients, 380 (27.3%) were male and 640 (62.7%) were female, a sex ratio of 1:1.7. Of the 1020 OPGs, 585 cases showed at least one impacted third molar, with a significant difference between males (205; 35.1%) and females (380; 64.9%) (P = 0.0311). Data analysis showed that impacted third molars were 1.9 times more likely to occur in the mandible than in the maxilla (P < 0.001). The most common angulation of impaction in the mandible was mesioangular (48.3%), and the most common angulation of impaction in the maxilla was vertical (45.3%). Impaction at level IIA was the most common in both the maxilla and the mandible. There was no significant difference between the right and left sides in either the maxilla or the mandible. Conclusion: The pattern of third molar impaction in the southeast region of Iran is characterized by a high prevalence of impaction, especially in the mandible. Tooth impaction was more common in females than in males. The most common angulation was mesioangular in the mandible and vertical in the maxilla. The most common level of impaction was A, and there was no significant difference between the right and left sides in either jaw. Key words: Third molar, impaction, incidence, Iran.
Article
Aim: To verify the diagnostic performance of an artificial intelligence system based on the deep convolutional neural network method in detecting periapical pathosis on cone-beam computed tomography (CBCT) images. Methodology: In total, images of 153 periapical lesions obtained from 109 patients were included. The specific area of the jaw and teeth associated with the periapical lesions was determined by a human observer. Lesion volumes were calculated using manual segmentation with Fujifilm-Synapse 3D software (Fujifilm Medical Systems, Stamford, Conn, USA). The neural network was then used to determine (1) whether the lesion could be detected; (2) if the lesion was detected, where it was localized (maxilla, mandible, or specific tooth); and (3) lesion volume. Manual segmentation and artificial intelligence (AI) (Diagnocat Inc., San Francisco, CA, USA) methods were compared using the Wilcoxon signed rank test and Bland-Altman analysis. Results: The deep convolutional neural network system was successful in detecting teeth and numbering specific teeth. Only one tooth was incorrectly identified. The AI system was able to detect 142 of a total of 153 periapical lesions. The reliability of correctly detecting a periapical lesion was 92.8%. The deep convolutional neural network volumetric measurements of the lesions were similar to those obtained with manual segmentation. There was no significant difference between the two measurement methods (p > 0.05). Conclusions: Volume measurements performed by humans and by the AI system were comparable. AI systems based on deep learning methods can be useful in detecting periapical pathosis on CBCT images for clinical application.
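The Bland-Altman analysis used above to compare manual and AI volume measurements rests on the mean difference (bias) between paired measurements and its 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch; the lesion volumes are made up for illustration:

```python
import statistics

def bland_altman(measurements_a, measurements_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diffs = [a - b for a, b in zip(measurements_a, measurements_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative lesion volumes in mm^3 (not the study's data).
manual = [120.0, 85.0, 240.0, 60.0, 150.0]
ai     = [118.0, 90.0, 235.0, 58.0, 155.0]
bias, (lower, upper) = bland_altman(manual, ai)
# A bias near zero with narrow limits indicates the methods agree.
```

Unlike a correlation coefficient, this directly answers the clinical question of how far the two methods' volume estimates can be expected to disagree for an individual lesion.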
Article
Objectives: To evaluate a fully deep learning mask region-based convolutional neural network (R-CNN) method for automated tooth segmentation using individual annotation of panoramic radiographs. Study design: In total, 846 images with tooth annotations from 30 panoramic radiographs were used for training, and 20 panoramic images as the validation and test sets. An oral radiologist manually performed individual tooth annotation on the panoramic radiographs to generate the ground truth of each tooth structure. We used the augmentation technique to reduce overfitting and obtained 1024 training samples from 846 original data points. A fully deep learning method using the mask R-CNN model was implemented through a fine-tuning process to detect and localize the tooth structures. For performance evaluation, the F1 score, mean intersection over union (IoU), and visual analysis were utilized. Results: The proposed method produced an F1 score of 0.875 (precision: 0.858, recall: 0.893) and a mean IoU of 0.877. A visual evaluation of the segmentation method showed a close resemblance to the ground truth. Conclusions: The method achieved high performance for automation of tooth segmentation on dental panoramic images. The proposed method might be applied in the first step of diagnosis automation and in forensic identification, which involves similar segmentation tasks.
Article
Objectives The aim of this study was to evaluate the use of a convolutional neural network (CNN) system for detecting vertical root fracture (VRF) on panoramic radiography. Methods Three hundred panoramic images containing a total of 330 VRF teeth with clearly visible fracture lines were selected from our hospital imaging database. Confirmation of VRF lines was performed by two radiologists and one endodontist. Eighty percent (240 images) of the 300 images were assigned to a training set and 20% (60 images) to a test set. A CNN-based deep learning model for the detection of VRFs was built using DetectNet with DIGITS version 5.0. To guard against test data selection bias and increase reliability, fivefold cross-validation was performed. Diagnostic performance was evaluated using recall, precision, and F measure. Results Of the 330 VRFs, 267 were detected. Twenty teeth without fractures were falsely detected. Recall was 0.75, precision 0.93, and F measure 0.83. Conclusions The CNN learning model has shown promise as a tool to detect VRFs on panoramic images and to function as a CAD tool.
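Fivefold cross-validation, as used above, rotates the held-out test fold so every image is tested exactly once. A generic sketch (the fold assignment here is a simple stride split, not necessarily the authors' procedure):

```python
def k_fold_splits(items, k=5):
    """Partition items into k folds and yield (train, test) pairs
    so that each fold serves once as the held-out test set."""
    folds = [items[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

images = list(range(300))  # stand-ins for the 300 panoramic images
splits = list(k_fold_splits(images, k=5))
```

With 300 images and k = 5, every split trains on 240 images and tests on 60, matching the 80/20 proportions described above.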
Article
Purpose: This study analyzes the risk factors associated with the incidence of inferior alveolar nerve (IAN) injury after surgical removal of impacted mandibular third molars (IMTMs) and evaluates the contribution of these risk factors to postoperative neurosensory deficits. Materials and methods: An exhaustive literature search was carried out in the COCHRANE Library and PubMed electronic databases from January 1990 to March 2019, supplemented by manual searching, to identify related studies. Twenty-three studies out of 693 articles from the initial search were finally included, comprising a total of 26,427 patients (44,171 teeth). Results: Our results were compared with other currently available papers in the literature that obtained similar outcomes. Among 44,171 IMTM extractions performed by various grades of operators, 1.20% developed transient and 0.28% developed permanent IAN deficits. Depth of impaction (P<0.001), contact between the mandibular canal (MC) and the IMTM (P<0.001), surgical technique (P<0.001), intra-operative nerve exposure (P<0.001), and surgeon's experience (P<0.001) were statistically significant contributing risk factors for IAN deficits. Conclusion: Radiographic findings such as depth of impaction and proximity of the tooth to the mandibular canal, together with surgical technique, intra-operative nerve exposure, and surgeon's experience, were high risk factors for IAN deficit after surgical removal of IMTMs.
Article
Objectives To investigate the current clinical applications and diagnostic performance of artificial intelligence (AI) in dental and maxillofacial radiology (DMFR). Materials and methods Studies using applications related to DMFR to develop or implement AI models were sought by searching five electronic databases and four selected core journals in the field of DMFR. The customized assessment criteria based on QUADAS-2 were adapted for quality analysis of the studies included. Results The initial electronic search yielded 1862 titles, and 50 studies were eventually included. Most studies focused on AI applications for an automated localization of cephalometric landmarks, diagnosis of osteoporosis, classification/segmentation of maxillofacial cysts and/or tumors, and identification of periodontitis/periapical disease. The performance of AI models varies among different algorithms. Conclusion The AI models proposed in the studies included exhibited wide clinical applications in DMFR. Nevertheless, it is still necessary to further verify the reliability and applicability of the AI models prior to transferring these models into clinical practice.
Article
Aim and scope: Artificial intelligence (AI) in medicine is a fast-growing field. The rise of deep learning algorithms, such as convolutional neural networks (CNNs), offers fascinating perspectives for the automation of medical image analysis. In this systematic review article, we screened the current literature and investigated the following question: "Can deep learning algorithms for image recognition improve visual diagnosis in medicine?" Materials and methods: We provide a systematic review of the articles using CNNs for medical image analysis, published in the medical literature before May 2019. Articles were screened based on the following items: type of image analysis approach (detection or classification), algorithm architecture, dataset used, training phase, test, comparison method (with specialists or other), results (accuracy, sensitivity and specificity) and conclusion. Results: We identified 352 articles in the PubMed database and excluded 327 items for which performance was not assessed (review articles) or for which tasks other than detection or classification, such as segmentation, were assessed. The 25 included papers were published from 2013 to 2019 and were related to a vast array of medical specialties. Authors were mostly from North America and Asia. Large amounts of high-quality medical images were necessary to train the CNNs, often resulting from international collaboration. The most common CNNs, such as AlexNet and GoogleNet, designed for the analysis of natural images, proved their applicability to medical images. Conclusion: CNNs are not replacement solutions for medical doctors, but will contribute to optimizing routine tasks and thus have a potential positive impact on our practice. Specialties with a strong visual component, such as radiology and pathology, will be deeply transformed. Medical practitioners, including surgeons, have a key role to play in the development and implementation of such devices.
Article
Artificial Intelligence (AI) applications have already invaded our everyday life, and the last 10 years have seen the emergence of very promising applications in the field of medicine. However, the literature dealing with the potential applications of AI in orthognathic surgery is remarkably poor to date. Yet, it is very likely that, owing to its amazing power in image recognition, AI will find tremendous applications in dento-facial deformity recognition in the near future. In this article, we point out the state-of-the-art AI applications in medicine and its potential applications in the field of orthognathic surgery. AI is a very powerful tool, and it is the responsibility of the entire medical profession to achieve a positive symbiosis between clinical sense and AI.
Article
Objectives To apply a deep-learning system for diagnosis of maxillary sinusitis on panoramic radiography, and to clarify its diagnostic performance. Methods Training data for 400 healthy and 400 inflamed maxillary sinuses were enhanced to 6000 samples in each category by data augmentation. Image patches were input into a deep-learning system, the learning process was repeated for 200 epochs, and a learning model was created. Newly-prepared testing image patches from 60 healthy and 60 inflamed sinuses were input into the learning model, and the diagnostic performance was calculated. Receiver-operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) values were obtained. The results were compared with those of two experienced radiologists and two dental residents. Results The diagnostic performance of the deep-learning system for maxillary sinusitis on panoramic radiographs was high, with accuracy of 87.5%, sensitivity of 86.7%, specificity of 88.3%, and AUC of 0.875. These values showed no significant differences compared with those of the radiologists and were higher than those of the dental residents. Conclusions The diagnostic performance of the deep-learning system for maxillary sinusitis on panoramic radiographs was sufficiently high. Results from the deep-learning system are expected to provide diagnostic support for inexperienced dentists.
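The accuracy, sensitivity, and specificity figures reported above follow directly from a 2x2 confusion matrix. A sketch using hypothetical counts chosen to be consistent with the reported test set of 60 healthy and 60 inflamed sinuses (the paper does not publish its raw confusion matrix):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, and specificity from the four cells
    of a binary confusion matrix."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
    }

# Hypothetical counts: 52 of 60 inflamed and 53 of 60 healthy
# sinuses classified correctly
m = diagnostic_metrics(tp=52, fp=7, fn=8, tn=53)
```

These counts reproduce the reported values: accuracy 87.5%, sensitivity 86.7%, specificity 88.3%.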
Article
Objectives: The distal root of the mandibular first molar occasionally has an extra root, which can directly affect the outcome of endodontic therapy. In this study, we examined the diagnostic performance of a deep learning system for classification of the root morphology of mandibular first molars on panoramic radiographs. Dental cone-beam CT (CBCT) was used as the gold standard. Methods: CBCT images and panoramic radiographs of 760 mandibular first molars from 400 patients who had not undergone root canal treatment were analyzed. Distal roots were examined on CBCT images to determine the presence of a single or extra root. Image patches of the roots were segmented from panoramic radiographs and applied to a deep learning system, and its diagnostic performance in the classification of root morphology was examined. Results: Extra roots were observed in 21.4% of distal roots on CBCT images. The deep learning system had a diagnostic accuracy of 86.9% for determining whether distal roots were single or had extra roots. Conclusions: The deep learning system showed high accuracy in the differential diagnosis of a single or extra root in the distal roots of mandibular first molars.
Article
Objectives: Deep convolutional neural networks (CNNs) are a rapidly emerging new area of medical research, and have yielded impressive results in diagnosis and prediction in the fields of radiology and pathology. The aim of the current study was to evaluate the efficacy of deep CNN algorithms for detection and diagnosis of dental caries on periapical radiographs. Materials and methods: A total of 3000 periapical radiographic images were divided into a training and validation dataset (n = 2400 [80%]) and a test dataset (n = 600 [20%]). A pre-trained GoogLeNet Inception v3 CNN network was used for preprocessing and transfer learning. The diagnostic accuracy, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were calculated for the detection and diagnostic performance of the deep CNN algorithm. Results: The diagnostic accuracies of the premolar, molar, and combined premolar-molar models were 89.0% (80.4-93.3), 88.0% (79.2-93.1), and 82.0% (75.5-87.1), respectively. The deep CNN algorithm achieved an AUC of 0.917 (95% CI 0.860-0.975) on the premolar model, 0.890 (95% CI 0.819-0.961) on the molar model, and 0.845 (95% CI 0.790-0.901) on the combined premolar-molar model. The premolar model provided the best AUC, which was significantly greater than those of the other models (P < 0.001). Conclusions: This study highlighted the potential utility of deep CNN architecture for the detection and diagnosis of dental caries. A deep CNN algorithm performed considerably well in detecting dental caries in periapical radiographs. Clinical significance: Deep CNN algorithms are expected to be among the most effective and efficient methods for diagnosing dental caries.
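An AUC like those reported above can be computed from classifier scores with a trapezoidal sweep over the ROC curve. A minimal sketch with toy scores (not the study's model outputs), assuming distinct score values:

```python
def roc_auc(scores, labels):
    """Trapezoidal area under the ROC curve. `scores` are model
    outputs, `labels` are 1 (caries) / 0 (sound)."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    prev_fpr = prev_tpr = 0.0
    auc = 0.0
    # sweep the decision threshold from high to low
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        tpr, fpr = tp / pos, fp / neg
        auc += (fpr - prev_fpr) * (tpr + prev_tpr) / 2
        prev_fpr, prev_tpr = fpr, tpr
    return auc

perfect = roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
mixed = roc_auc([0.9, 0.7, 0.6, 0.4], [1, 0, 1, 0])
```

A perfectly separating classifier yields an AUC of 1.0, chance performance 0.5; the 0.845-0.917 range above sits well clear of chance.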
Article
In this article, we apply deep learning to the medical field for the detection and classification of teeth in dental periapical radiographs, which is important for treatment and postmortem identification. We detect teeth in an input X-ray image and distinguish them by position. An adult usually has 32 teeth, some of which are similar in shape while others differ markedly, so there are 32 tooth positions to recognize, which is a challenging task. A convolutional neural network is a popular method for multi-class detection and classification, but used directly it needs a large amount of training data to achieve good results. A lack of data is common in the medical field due to patient privacy. In this work, limited by the available data, we propose a new method that uses a label tree to give each tooth several labels and decompose the task, which copes with the lack of data. A cascade network structure, with several convolutional neural networks as its basic modules, then performs automatic identification of the 32 tooth positions. Several key strategies are also utilized to improve detection and classification performance. Our method can handle many complex cases that frequently appear in patients, such as X-ray images with missing, decayed, or filled teeth. Experiments on our dataset show that, for a small training dataset, compared with the precision and recall obtained by directly training a state-of-the-art 33-class (32 teeth plus background) convolutional neural network, the proposed approach reaches a high overall precision and recall of 95.8% and 96.1%, a large improvement in such a complex task.
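The core idea of a label tree is that a 32-way position task factors into a few much smaller sub-tasks. One plausible decomposition, sketched below on universal tooth numbers, splits each position into jaw, side, and tooth type; this is an illustrative scheme, not necessarily the authors' exact label set:

```python
TYPES = ["third molar", "second molar", "first molar",
         "second premolar", "first premolar", "canine",
         "lateral incisor", "central incisor"]

def decompose(n):
    """Map a universal tooth number (1-32) to a (jaw, side, type)
    label triple, so the 32-class problem becomes three small
    classification problems (2 x 2 x 8 = 32)."""
    assert 1 <= n <= 32
    jaw = "upper" if n <= 16 else "lower"
    p = n if n <= 16 else n - 16  # position within the jaw, 1-16
    if jaw == "upper":
        side = "right" if p <= 8 else "left"
    else:
        side = "left" if p <= 8 else "right"
    # distance from the distal end gives the tooth type (1 = third molar)
    t = p if p <= 8 else 17 - p
    return jaw, side, TYPES[t - 1]
```

Each sub-classifier then needs far fewer examples per class than a flat 33-class network, which is how the decomposition mitigates the small-dataset problem.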
Article
Recent advances and future perspectives of machine learning techniques offer promising applications in medical imaging. Machine learning has the potential to improve different steps of the radiology workflow including order scheduling and triage, clinical decision support systems, detection and interpretation of findings, postprocessing and dose estimation, examination quality control, and radiology reporting. In this article, the authors review examples of current applications of machine learning and artificial intelligence techniques in diagnostic radiology. In addition, the future impact and natural extension of these techniques in radiology practice are discussed.
Article
Aim: To develop a machine learning-based model for the binary classification of chest radiography abnormalities, to serve as a retrospective tool in guiding clinician reporting prioritisation. Materials and methods: The open-source machine learning library, Tensorflow, was used to retrain a final layer of the deep convolutional neural network, Inception, to perform binary normality classification on two, anonymised, public image datasets. Re-training was performed on 47,644 images using commodity hardware, with validation testing on 5,505 previously unseen radiographs. Confusion matrix analysis was performed to derive diagnostic utility metrics. Results: A final model accuracy of 94.6% (95% confidence interval [CI]: 94.3-94.7%) based on an unseen testing subset (n=5,505) was obtained, yielding a sensitivity of 94.6% (95% CI: 94.4-94.7%) and a specificity of 93.4% (95% CI: 87.2-96.9%) with a positive predictive value (PPV) of 99.8% (95% CI: 99.7-99.9%) and area under the curve (AUC) of 0.98 (95% CI: 0.97-0.99). Conclusion: This study demonstrates the application of a machine learning-based approach to classify chest radiographs as normal or abnormal. Its application to real-world datasets may be warranted in optimising clinician workload.
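The confidence intervals quoted above are intervals on test-set proportions. One standard way to compute such an interval is the Wilson score method (a sketch — the paper does not state which interval method it used; the success count below is an illustrative figure consistent with 94.6% of 5,505 images):

```python
def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion,
    e.g. classifier accuracy on n test radiographs."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z * z / (4 * n * n)) ** 0.5) / denom
    return centre - half, centre + half

# e.g. 5208 of 5505 unseen test images classified correctly (~94.6%)
lo, hi = wilson_ci(5208, 5505)
```

With n in the thousands the interval is tight, which is why the reported accuracy CI spans less than a percentage point.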
Article
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced. Full text: https://rdcu.be/O1xz
Article
Dental records play an important role in forensic identification. To this end, postmortem dental findings and tooth conditions are recorded in a dental chart and compared with those of antemortem records. However, most dentists are inexperienced at recording the dental chart for corpses, and it is a physically and mentally laborious task, especially in large-scale disasters. Our goal is to automate the dental filing process by using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images. Regions of interest (ROIs) including single teeth were extracted from CT slices. Fifty-two CT volumes were randomly divided into 42 training and 10 test cases, and the ROIs obtained from the training cases were used for training the DCNN. For examining the sampling effect, random sampling was performed 3 times, and training and testing were repeated. We used the AlexNet network architecture provided in the Caffe framework, which consists of 5 convolution layers, 3 pooling layers, and 2 fully connected layers. To reduce overfitting, we augmented the data by image rotation and intensity transformation. The test ROIs were classified into 7 tooth types by the trained network. The average classification accuracy using the training data augmented by image rotation and intensity transformation was 88.8%. Compared with the result without data augmentation, data augmentation resulted in an approximately 5% improvement in classification accuracy. This indicates that further improvement can be expected by expanding the CT dataset. Unlike conventional methods, the proposed method is advantageous in obtaining high classification accuracy without the need for precise tooth segmentation. The proposed tooth classification method can be useful for automatic filing of dental charts for forensic identification.
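Augmentation by rotation and intensity transformation, as used above, can be illustrated on a toy ROI. Plain nested lists stand in for image arrays, and the shift values are arbitrary choices for the sketch:

```python
def rotate90(image):
    """Rotate a 2-D image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def shift_intensity(image, delta):
    """Simple intensity transformation: add a brightness offset."""
    return [[pixel + delta for pixel in row] for row in image]

def augment(image):
    """Yield rotated and intensity-shifted variants of one ROI."""
    variants = []
    current = image
    for _ in range(3):          # 90, 180, 270 degree rotations
        current = rotate90(current)
        variants.append(current)
    variants.append(shift_intensity(image, 20))   # brighter copy
    variants.append(shift_intensity(image, -20))  # darker copy
    return variants

roi = [[1, 2],
       [3, 4]]
variants = augment(roi)
```

Each original ROI thus yields five extra training samples, which is the mechanism behind the roughly 5% accuracy gain reported above.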
Article
Purpose: To determine the width and morphology of the mandible in the impacted third molar region, and to identify the location of the mandibular canal prior to planning impacted third molar operations. Methods: Cone beam computed tomography (CBCT) data of 87 mandibular third molars from 62 Japanese patients were analyzed in this study. The width of the lingual cortical bone and apex-canal distance were measured from cross-sectional images in which the cortical bone was thinnest at the lingual side in the third molar region. Images were used for measuring the space (distance between the inner border of the lingual cortical bone and outer surface of the third molar root), apex-canal distance (distance from the root of the third molar tooth to the superior border of the inferior alveolar canal) and the cortical bone (width between the inner and outer borders of the lingual cortical bone). Results: The means of the space, apex-canal distance and lingual cortical width were 0.31, 1.99, and 0.68 mm, respectively. Impacted third molar teeth (types A-C) were observed at the following frequencies: type A (angular), 37%; type B (horizontal), 42%; type C (vertical), 21%. The morphology of the mandible at the third molar region (types D-F) was observed as: type D (round), 49%; type E (lingual extended), 18%; and type F (lingual concave), 32%. Conclusions: The width and morphology of the mandible with impacted teeth and the location of the mandibular canal at the third molar region could be clearly determined using cross-sectional CBCT images.
Article
We propose a dental classification and numbering system to effectively segment, classify, and number teeth in dental bitewing radiographs. An image enhancement method that combines homomorphic filtering, homogeneity-based contrast stretching, and adaptive morphological transformation is proposed to improve both contrast and illumination evenness of the radiographs simultaneously. Iterative thresholding and integral projection are adapted to isolate teeth to regions of interest (ROIs) followed by contour extraction of the tooth and the pulp (if available) from each ROI. A binary linear support vector machine using the skew-adjusted relative length/width ratios of both teeth and pulps, and crown size as features is proposed to classify each tooth to molar or premolar. Finally, a numbering scheme that combines a missing teeth detection algorithm and a simplified version of sequence alignment commonly used in bioinformatics is presented to assign each tooth a proper number. Experimental results show that our system has accuracy rates of 95.1% and 98.0% for classification and numbering, respectively, in terms of number of teeth tested, and correctly classifies and numbers the teeth in four images that were reported either misclassified or erroneously numbered, respectively.
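The simplified sequence alignment used for numbering can be sketched as a small Needleman-Wunsch global alignment, in which a gap against the arch template flags a missing tooth. This is an illustrative reconstruction with arbitrary scoring parameters, not the authors' exact scheme:

```python
def find_missing(template, detected, gap=-1, match=1, mismatch=-1):
    """Globally align a detected tooth-class sequence against an
    arch template; return template positions left unmatched
    (i.e. candidate missing teeth)."""
    n, m = len(template), len(detected)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if template[i - 1] == detected[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,   # align pair
                              score[i - 1][j] + gap,     # skip template tooth
                              score[i][j - 1] + gap)     # skip detection
    # trace back, collecting template positions with no detection
    i, j, missing = n, m, []
    while i > 0 and j > 0:
        s = match if template[i - 1] == detected[j - 1] else mismatch
        if score[i][j] == score[i - 1][j - 1] + s:
            i, j = i - 1, j - 1
        elif score[i][j] == score[i - 1][j] + gap:
            missing.append(i)
            i -= 1
        else:
            j -= 1
    missing.extend(range(i, 0, -1))
    return sorted(missing)

# Bitewing quadrant template: three molars then two premolars
template = ["M", "M", "M", "P", "P"]
detected = ["M", "M", "P", "P"]  # one molar was not detected
gaps = find_missing(template, detected)
```

The gap positions returned by the alignment feed the numbering step: detected teeth take the numbers of their aligned template slots, and gap slots are reported as missing.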
Article
To evaluate the course of the inferior alveolar nerve and its branches, the detectable branches were investigated with dental cone beam computed tomography (CBCT). Patients in whom the lower third molar (M3) and inferior alveolar nerve canal showed overlapping in the initial panoramic image were included. One hundred twelve impacted lower M3s were extracted after examination with dental CBCT. The detection ratio, the course of the branches, and their relation with the M3 were retrospectively investigated. One hundred fifty-five branches were observed in 106 cases (94.6%, 106/112) around the M3. Most branches coursed under the M3 (55.5%, 86/155), and 85 branches (54.8%, 85/155) were in contact with the M3. The inferior alveolar nerve canal and branch(es) were mostly in contact with the M3 (57.5%, 61/106). Dental CBCT can detect most tubular structures representing branches in the impacted lower M3 region.
Article
To evaluate if the application of an artificial intelligence model, a multilayer perceptron neural network, improves the radiographic diagnosis of proximal caries. One hundred sixty radiographic images of proximal surfaces of extracted human teeth were assessed regarding the presence of caries by 25 examiners. Examination of the radiographs was used to feed the neural network, and the corresponding teeth were sectioned and assessed under optical microscope (gold standard). This gold standard served to teach the neural network to diagnose caries on the basis of the radiographic exams. To gauge the network's capacity for generalization, i.e., its performance with new cases, data were divided into 3 subgroups for training, test, and cross-validation. The area under the receiver operating characteristic (ROC) curve allowed comparison of efficacy between network and examiner diagnosis. For the best of the 25 examiners, the ROC curve area was 0.717, whereas network diagnosis achieved an ROC curve area of 0.884, indicating a sizeable improvement in proximal caries diagnosis. Considering all examiners, the diagnostic improvement using the neural network was 39.4%.