Article

Abstract

Purpose: The aim of this study was to evaluate the diagnostic performance of an artificial intelligence (AI) application in the evaluation of impacted third molar teeth on Cone-beam Computed Tomography (CBCT) images. Material and methods: In total, 130 third molar teeth (65 patients) were included in this retrospective study. Impaction detection, impacted tooth numbers, root/canal numbers of the teeth, and the relationship with adjacent anatomical structures (inferior alveolar canal and maxillary sinus) were compared between the human observer and the AI application. Agreement on the recorded parameters between the human observer and the AI application, based on a deep-CNN system, was evaluated using Kappa analysis. Results: In total, 112 teeth (86.2%) were detected as impacted by the AI. The number of roots was correctly determined in 99 teeth (78.6%) and the number of canals in 82 teeth (68.1%). There was good agreement in the determination of the relationship between the inferior alveolar canal and the mandibular impacted third molars (kappa: 0.762), as well as in root number detection (kappa: 0.620). Similarly, there was excellent agreement for the relationship between the maxillary impacted third molars and the maxillary sinus (kappa: 0.860). For maxillary molar canal number detection, moderate agreement was found between the human observer and the AI examinations (kappa: 0.424). Conclusions: The Artificial Intelligence (AI) application showed high accuracy in the detection of impacted third molar teeth and their relationship to anatomical structures.
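The Kappa analysis used throughout the abstract can be illustrated with a minimal sketch of Cohen's kappa for two raters (the data here are toy values, not the study's):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items with nominal categories."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's category marginals.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: root counts assigned by a human observer and an AI for 10 teeth.
human = [1, 2, 2, 3, 1, 2, 2, 1, 3, 2]
ai    = [1, 2, 2, 3, 1, 2, 1, 1, 3, 3]
print(round(cohens_kappa(human, ai), 3))  # chance-corrected agreement in [-1, 1]
```

By the usual convention cited in such studies, kappa above 0.8 is "excellent" agreement, 0.61–0.80 "good", and 0.41–0.60 "moderate", which matches how the abstract describes its 0.860, 0.762, and 0.424 values.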


... These include tooth numbering, assessing the relationships of third molars to the mandibular canal, planning dental implants, evaluating periapical pathologies, detecting dental caries, evaluating osteoporotic changes, and examining jaw tumors. 13,15–23 The objective of this study was to evaluate the accuracy and efficacy of an AI program in identifying dental conditions using panoramic radiographs (PRs), as well as to assess the appropriateness of its treatment recommendations. ...
... 6 Various published studies have utilized Diagnocat software to explore different aspects of dental practice. [15–17] One such study investigated the ability of deep CNNs to detect periapical lesions, and the results demonstrated success. 15 In another study, Orhan et al. 16 examined the effectiveness of AI in diagnosing impacted third molars, assessing their relationship with neighboring anatomical structures, and determining the number of root canals. ...
... [15–17] One such study investigated the ability of deep CNNs to detect periapical lesions, and the results demonstrated success. 15 In another study, Orhan et al. 16 examined the effectiveness of AI in diagnosing impacted third molars, assessing their relationship with neighboring anatomical structures, and determining the number of root canals. Using cone-beam computed tomography images as the gold standard, the study indicated that the deep CNN method was successful in detecting impacted third molars and evaluating their relationship with anatomical structures. ...
Article
Full-text available
Purpose: The objective of this study was to evaluate the accuracy and effectiveness of an artificial intelligence (AI) program in identifying dental conditions using panoramic radiographs (PRs), as well as to assess the appropriateness of its treatment recommendations. Materials and Methods: PRs from 100 patients (representing 4497 teeth) with known clinical examination findings were randomly selected from a university database. Three dentomaxillofacial radiologists and the Diagnocat AI software evaluated these PRs. The evaluations were focused on various dental conditions and treatments, including canal filling, caries, cast post and core, dental calculus, fillings, furcation lesions, implants, lack of interproximal tooth contact, open margins, overhangs, periapical lesions, periodontal bone loss, short fillings, voids in root fillings, overfillings, pontics, root fragments, impacted teeth, artificial crowns, missing teeth, and healthy teeth. Results: The AI demonstrated almost perfect agreement (exceeding 0.81) in most of the assessments when compared to the ground truth. The sensitivity was very high (above 0.8) for the evaluation of healthy teeth, artificial crowns, dental calculus, missing teeth, fillings, lack of interproximal contact, periodontal bone loss, and implants. However, the sensitivity was low for the assessment of caries, periapical lesions, pontic voids in the root canal, and overhangs. Conclusion: Despite the limitations of this study, the synthesized data suggest that AI-based decision support systems can serve as a valuable tool in detecting dental conditions, when used with PR for clinical dental applications.
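The sensitivity and agreement figures in this abstract come from per-condition confusion counts; a minimal sketch (the counts below are hypothetical, not the study's) shows how sensitivity and specificity are derived:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Per-condition sensitivity (recall) and specificity from confusion counts."""
    sens = tp / (tp + fn) if tp + fn else 0.0  # share of true positives found
    spec = tn / (tn + fp) if tn + fp else 0.0  # share of true negatives kept
    return sens, spec

# Hypothetical counts for one condition (e.g. "fillings") across ~4497 teeth.
sens, spec = sensitivity_specificity(tp=90, fp=5, fn=10, tn=4392)
print(f"sensitivity={sens:.2f}, specificity={spec:.3f}")
```

A sensitivity above 0.8, as reported for healthy teeth, crowns, and fillings, means fewer than 20% of true cases of that condition were missed by the software.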
... This pathological condition may also cause dental infections such as pericoronitis, neoplasms, orofacial pain, periodontitis, pathological fractures, TMJ disorders, and cysts in the future. The prevalence of this condition among dental problems varies between 0.8 and 3.6% [1]. ...
... There are a limited number of studies on the detection of impacted teeth. Of these studies, Orhan et al. (2021) conducted a study on the detection of impacted third molar teeth. In the study, 130 impacted third molar teeth belonging to 65 patients were used. ...
... In the study, 130 impacted third molar teeth belonging to 65 patients were used. In the application based on the deep CNN system, the number of impacted teeth was correctly determined in 86.2% of cases, the number of roots in 78.6%, and the number of canals in 68.1% [1]. Faure and Engelbrecht proposed a model for the detection of impacted teeth, using ResNet 101 and Faster R-CNN. ...
Article
Full-text available
Objective: Impacted tooth is a common problem that can occur at any age, causing tooth decay, root resorption, and pain in the later stages. In recent years, major advances have been made in medical imaging segmentation using deep convolutional neural network-based networks. In this study, we report on the development of an artificial intelligence system for the automatic identification of impacted tooth from panoramic dental X-ray images. Methods: Among existing networks, in medical imaging segmentation, U-Net architectures are widely implemented. In this article, for dental X-ray image segmentation, blocks and convolutional block structures using inverted residual blocks are upgraded by taking advantage of U-Net's network capacity-intensive connections. At the same time, we propose a method for jumping connections in which bi-directional convolution long short-term memory is used instead of a simple connection. Assessment of the proposed artificial intelligence model performance was evaluated with accuracy, F1-score, intersection over union, and recall. Results: In the proposed method, experimental results are obtained with 99.82% accuracy, 91.59% F1-score, 84.48% intersection over union, and 90.71% recall. Conclusion: Our findings show that our artificial intelligence system could help with future diagnostic support in clinical practice.
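The accuracy, F1-score, intersection over union, and recall reported for this segmentation model are all computable from a pixel-wise confusion between predicted and ground-truth masks; a minimal sketch on flat binary masks (toy data, not the study's):

```python
def mask_metrics(pred, truth):
    """Pixel-wise accuracy, recall, IoU, and F1 for two flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))          # predicted 1, truly 1
    fp = sum(p and not t for p, t in zip(pred, truth))      # predicted 1, truly 0
    fn = sum(t and not p for p, t in zip(pred, truth))      # predicted 0, truly 1
    tn = len(pred) - tp - fp - fn                           # predicted 0, truly 0
    accuracy = (tp + tn) / len(pred)
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0      # intersection over union
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, recall, iou, f1

# Toy 6-pixel masks: 1 = impacted-tooth pixel, 0 = background.
print(mask_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))
```

Note that accuracy (99.82% here) can be much higher than IoU (84.48%) simply because most pixels are background; IoU and F1 are the stricter measures of overlap quality.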
... This automation enables dental professionals to dedicate more time to patient care and less to administrative duties. Various studies have utilized the Diagnocat software to investigate different aspects of dental practice [38–40]. For instance, one study successfully explored the software's ability to detect periapical lesions. ...
... For instance, one study successfully explored the software's ability to detect periapical lesions. Another study by Orhan et al. [39] examined the effectiveness of AI in diagnosing impacted third molars, assessing their relationship with adjacent anatomical structures and determining the number of root canals. ...
Article
Full-text available
Objectives: To compare 3D cephalometric analysis performed using AI with that conducted manually by a specialist orthodontist. Methods: The CBCT scans (a field of view of 15 × 15 cm) used in the study were obtained from 30 consecutive patients, aged 18 to 50. The 3D cephalometric analysis was conducted using two methods. The first method involved manual tracing performed with the Invivo 6 software (Anatomage Inc., Santa Clara, CA, USA). The second method involved using AI for cephalometric measurements as part of an orthodontic report generated by the Diagnocat system (Diagnocat Ltd., San Francisco, CA, USA). Results: A statistically significant difference within one standard deviation of the parameter was found in the following measurements: SNA, SNB, and the left interincisal angle. Statistically significant differences within two standard deviations were noted in the following measurements: the right and left gonial angles, the left upper incisor, and the right lower incisor. No statistically significant differences were observed beyond two standard deviations. Conclusions: AI in the form of Diagnocat proved to be effective in assessing the mandibular growth direction, defining the skeletal class, and estimating the overbite, overjet, and Wits parameter.
... However, the research also indicated the need for further improving the efficiency of AI models to better support clinical decision-making and surgical planning [4]. Orhan et al. (2021) assessed the diagnostic performance of AI in identifying impacted wisdom teeth from Cone-beam CT images. The research findings demonstrated that the AI application showed high accuracy in diagnosing impacted wisdom teeth and their relationships with anatomical structures [5]. ...
... Orhan et al. (2021) assessed the diagnostic performance of AI in identifying impacted wisdom teeth from Cone-beam CT images. The research findings demonstrated that AI application showed high accuracy in diagnosing impacted wisdom teeth and their relationships with anatomical structures [5]. Procházka et al. explored the possibility of using deep learning to identify reflective data in dental medicine. ...
Article
Full-text available
In the field of dental medicine, there is an increasing exploration of the application of Artificial Intelligence (AI) to enhance the efficiency and accuracy of diagnosing, treating, and preventing oral diseases. This paper primarily investigates the current applications and future prospects of AI in the realm of dental medicine. Its purpose is to delve into the multifaceted utilization of AI in dentistry, spanning dental imaging, macrobiotics, genomics research, treatment planning, and patient management. By depicting AI applications in these domains, the article underscores its potential advantages, such as improving diagnostic accuracy, tailoring personalized treatment plans, and monitoring patient health status. Methodologically, the paper references the use of deep learning-based image recognition systems and AI technology in genomic research, highlighting the diverse applications of AI in dental medicine. Key conclusions emphasize the immense potential of AI in the dental medicine field, offering crucial support in diagnostics, treatment planning, and patient management. However, the article also points out challenges in practical implementation, including data privacy, algorithm interpretability, and clinical validation. Therefore, the paper emphasizes the need to overcome these challenges in the future to achieve a broader and more profound impact of AI in dental medicine.
... Therefore, AI algorithms have been developed to analyze panoramic radiographs and predict the eruption status and angulation of mandibular third molars [17]. Orhan et al. [19] used AI on cone beam computed tomography (CBCT) images to determine mandibular third molar positions. ...
... The AI-based model has been reported to achieve decent accuracy in helping students and residents [19,21]. However, in cases where the comparison was between an experienced radiologist and a surgeon with AI models, the diagnostic accuracy rates were not that significant [27]. ...
Article
Full-text available
Objective: This systematic review aims to summarize the evidence on the use and applicability of AI in impacted mandibular third molars. Methods: Searches were performed in the following databases: PubMed, Scopus, and Google Scholar. The study protocol is registered at the International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY202460081). The retrieved articles were subjected to an exhaustive review based on the inclusion and exclusion criteria for the study. Articles on the use of AI for diagnosis, treatment, and treatment planning in patients with impacted mandibular third molars were included. Results: Twenty-one articles were selected and evaluated using the Scottish Intercollegiate Guidelines Network (SIGN) evidence quality scale. Most of the analyzed studies dealt with using AI to determine the relationship between the mandibular canal and the impacted mandibular third molar. The average quality of the articles included in this review was 2+, which indicated that the level of evidence, according to the SIGN protocol, was B. Conclusions: Compared to human observers, AI models have demonstrated decent performance in determining the morphology, anatomy, and relationship of the impaction with the inferior alveolar nerve canal. However, the prediction of eruptions and future horizons of AI models are still in the early developmental stages. Additional studies estimating the eruption in mixed and permanent dentition are warranted to establish a comprehensive model for identifying, diagnosing, and predicting third molar eruptions and determining the treatment outcomes in the case of impacted teeth. This will help clinicians make better decisions and achieve better treatment outcomes.
... Beyond identification and segmentation, CBCT boasts irreplaceable advantages in diagnosis and detection (Table 2, Ref. [20,24–27]). CBCT stands as the gold standard for discerning the number of root canals. ...
... Moreover, CBCT is considered the gold standard for evaluating impacted third molars due to its ability to detect root canal shapes, the relationship between upper jaw teeth and the maxillary sinus, the relationship between lower jaw teeth and the inferior alveolar canal, etc. Orhan et al. [25] utilized a CNN (Diagnocat, Inc.) with annotated information on impacted teeth. The results demonstrated accurate determination of the number of roots in 99 out of 112 teeth (78.6%) and 82 out of 112 canals (68.1%). ...
... IAN damage and subsequent neurosensory disturbance is a rare but serious complication [2]. Panoramic radiography is routinely used for the initial assessment of possible IAN damage, with advantages including wide availability and low cost; however, it is prone to misinterpretation owing to image quality [26]. CBCT scans can provide diagnostic information in different planes without overlap of anatomical structures [26]. ...
... Panoramic radiography is routinely used for the initial assessment of possible IAN damage, with advantages including wide availability and low cost; however, it is prone to misinterpretation owing to image quality [26]. CBCT scans can provide diagnostic information in different planes without overlap of anatomical structures [26]. In the literature, it has been reported that a preoperative CBCT scan did not decrease the risk of IAN damage in comparison to panoramic radiography [27]. ...
Article
Full-text available
Objectives: The aim of this retrospective study was to investigate anatomical structure of mandibular canal and the factors those increase the possibility of inferior alveolar nerve damage in mandibular third molar region of Turkish population. Material and Methods: Overall 320 participants with 436 mandibular third molars were included from four different study centers. Following variables were measured: type and depth of third molar impaction, position of mandibular canal in relation to third molars, morphology of mandibular canal, cortication status of mandibular canal, possible contact between the third molars and mandibular canal, thickness and density of superior, buccal, and lingual mandibular canal wall, bucco-lingual and apico-coronal mandibular canal diameters on cone-beam computed tomography scans. Results: Lingual mandibular canal wall density and thickness were decreased significantly as the impaction depth of mandibular third molar was increased (P = 0.045, P = 0.001 respectively). Highest buccal mandibular canal wall density and thickness were observed in lingual position of mandibular canal in relation to mandibular third molar (P = 0.021, P = 0.034 respectively). Mandibular canal with oval/round morphology had higher apico-coronal diameter in comparison to tear drop and dumbbell morphologies (P = 0.018). Additionally, mandibular canals with observed cortication border and no contact with mandibular third molar had denser and thicker lingual mandibular canal wall (P = 0.003, P = 0.001 respectively). Conclusions: Buccal and lingual mandibular canal wall density, thickness and mandibular canal diameter may be related with high-risk indicators of inferior alveolar nerve injury.
... Artificial intelligence (AI) is defined as the capability of a machine to perform functions normally associated with human intelligence such as reasoning, learning, and self-improvement [6]. Deep learning (DL), a subset of AI, uses artificial neural networks and has gained increasing popularity in medicine and dentistry [7,8]. ...
Article
Full-text available
Objectives This study aimed to assess the accuracy of a two-stage deep learning (DL) model for (1) detecting mandibular third molars (MTMs) and the mandibular canal (MC), and (2) classifying the anatomical relationship between these structures (contact/no contact) on panoramic radiographs. Method MTMs and MCs were labeled on panoramic radiographs by a calibrated examiner using bounding boxes. Each bounding box contained MTM and MC on one side. The relationship of MTMs with the MC was assessed on CBCT scans by two independent examiners without the knowledge of the condition of MTM and MC on the corresponding panoramic image, and dichotomized as contact/no contact. Data were split into training, validation, and testing sets with a ratio of 80:10:10. Faster R-CNN was used for detecting MTMs and MCs and ResNeXt for classifying their relationship. AP50 and AP75 were used as outcomes for detecting MTMs and MCs, and accuracy, precision, recall, F1-score, and the area-under-the-receiver-operating-characteristics curve (AUROC) were used to assess classification performance. The training and validation of the models were conducted using the Python programming language with the PyTorch framework. Results Three hundred eighty-seven panoramic radiographs were evaluated. MTMs were present bilaterally on 232 and unilaterally on 155 radiographs. In total, 619 images were collected which included MTMs and MCs. AP50 and AP75 indicating accuracy for detecting MTMs and MCs were 0.99 and 0.90 respectively. Classification accuracy, recall, specificity, F1-score, precision, and AUROC values were 0.85, 0.85, 0.93, 0.84, 0.86, and 0.91, respectively. Conclusion DL can detect MTMs and MCs and accurately assess their anatomical relationship on panoramic radiographs.
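The 80:10:10 train/validation/test split described in the methods can be sketched in a few lines of plain Python (the seed and item count are illustrative assumptions; the study reports 619 cropped images in total):

```python
import random

def split_dataset(items, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle items reproducibly and split into train/validation/test partitions."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed keeps the split reproducible
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]  # remainder absorbs rounding
    return train, val, test

# 619 image IDs, as in the study's pooled MTM/MC dataset.
train, val, test = split_dataset(range(619))
print(len(train), len(val), len(test))  # prints: 495 61 63
```

Shuffling before splitting matters here because consecutive radiographs from the same clinic or scanner would otherwise cluster in one partition and bias the evaluation.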
... Cone-Beam Computed Tomography (CBCT) [19,38,53,64,67,93,97–104]: endodontics, orthodontics, implantology, oral surgery, and oral medicine; high-resolution 3D volumetric data. ...
Article
Full-text available
Background: Dental care has been transformed by neural networks, introducing advanced methods for improving patient outcomes. By leveraging technological innovation, dental informatics aims to enhance treatment and diagnostic processes. Early diagnosis of dental problems is crucial, as it can substantially reduce dental disease incidence by ensuring timely and appropriate treatment. The use of artificial intelligence (AI) within dental informatics is a pivotal tool that has applications across all dental specialties. This systematic literature review aims to comprehensively summarize existing research on AI implementation in dentistry. It explores various techniques used for detecting oral features such as teeth, fillings, caries, prostheses, crowns, implants, and endodontic treatments. AI plays a vital role in the diagnosis of dental diseases by enabling precise and quick identification of issues that may be difficult to detect through traditional methods. Its ability to analyze large volumes of data enhances diagnostic accuracy and efficiency, leading to better patient outcomes. Methods: An extensive search was conducted across a number of databases, including Science Direct, PubMed (MEDLINE), arXiv.org, MDPI, Nature, Web of Science, Google Scholar, Scopus, and Wiley Online Library. Results: The studies included in this review employed a wide range of neural networks, showcasing their versatility in detecting the dental categories mentioned above. Additionally, the use of diverse datasets underscores the adaptability of these AI models to different clinical scenarios. This study highlights the compatibility, robustness, and heterogeneity among the reviewed studies. This indicates that AI technologies can be effectively integrated into current dental practices. The review also discusses potential challenges and future directions for AI in dentistry. It emphasizes the need for further research to optimize these technologies for broader clinical applications. 
Conclusions: By providing a detailed overview of AI’s role in dentistry, this review aims to inform practitioners and researchers about the current capabilities and future potential of AI-driven dental care, ultimately contributing to improved patient outcomes and more efficient dental practices.
... 21 A deep CNN system was first introduced to examine the location of impacted third molar teeth, achieving 86.2% accuracy. 2 Subsequent deep learning systems utilizing models such as AlexNet, VGG-16, and DetectNet were also applied, with DetectNet achieving the highest accuracy of 96% in identifying impacted supernumerary teeth. 22 From a different angle, Zhang et al 23 evaluated the predictive accuracy of an artificial neural network for post-extraction facial swelling, reaching 98% accuracy with an improved algorithm. ...
Article
Full-text available
Objective The aim is to detect impacted teeth in panoramic radiology by refining the pretrained MedSAM model. Study design Impacted teeth are dental issues that can cause complications and are diagnosed via radiographs. We modified SAM model for individual tooth segmentation using 1016 X-ray images. The dataset was split into training, validation, and testing sets, with a ratio of 16:3:1. We enhanced the SAM model to automatically detect impacted teeth by focusing on the tooth’s center for more accurate results. Results With 200 epochs, batch size equals to 1, and a learning rate of 0.001, random images trained the model. Results on the test set showcased performance up to an accuracy of 86.73%, F1-score of 0.5350, and IoU of 0.3652 on SAM-related models. Conclusion This study fine-tunes MedSAM for impacted tooth segmentation in X-ray images, aiding dental diagnoses. Further improvements on model accuracy and selection are essential for enhancing dental practitioners’ diagnostic capabilities.
... Previous studies have demonstrated the applicability of AI in segmenting impacted teeth, such as supernumerary maxillary teeth on panoramic images [15], mandibular third molars in both panoramic [16] and CBCT images [17], and impacted canines in both panoramic [16] and CBCT images [18]. Although several studies aim to segment teeth on CBCT images, CIRR is given less attention. ...
Article
Full-text available
Objectives Canine-induced root resorption (CIRR) is caused by impacted canines and CBCT images have shown to be more accurate in diagnosing CIRR than panoramic and periapical radiographs with the reported AUCs being 0.95, 0.49, and 0.57, respectively. The aim of this study was to use deep learning to automatically evaluate the diagnosis of CIRR in maxillary incisors using CBCT images. Methods A total of 50 cone beam computed tomography (CBCT) images and 176 incisors were selected for the present study. The maxillary incisors were manually segmented and labeled from the CBCT images by two independent radiologists as either healthy or affected by root resorption induced by the impacted canines. We used five different strategies for training the model: (A) classification using 3D ResNet50 (Baseline), (B) classification of the segmented masks using the outcome of a 3D U-Net pretrained on the 3D MNIST, (C) training a 3D U-Net for the segmentation task and use its outputs for classification, (D) pretraining a 3D U-Net for the segmentation and transfer of the model, and (E) pretraining a 3D U-Net for the segmentation and fine-tuning the model with only the model encoder. The segmentation models were evaluated using the mean intersection over union (mIoU) and Dice coefficient (DSC). The classification models were evaluated in terms of classification accuracy, precision, recall, and F1 score. Results The segmentation model achieved a mean intersection over union (mIoU) of 0.641 and a DSC of 0.901, indicating good performance in segmenting the tooth structures from the CBCT images. For the main classification task of detecting CIRR, Model C (classification of the segmented masks using 3D ResNet) and Model E (pretraining on segmentation followed by fine-tuning for classification) performed the best, both achieving 82% classification accuracy and 0.62 F1-scores on the test set. 
These results demonstrate the effectiveness of the proposed hierarchical, data-efficient deep learning approaches in improving the accuracy of automated CIRR diagnosis from limited CBCT data compared to the 3D ResNet baseline model. Conclusion The proposed approaches are effective at improving the accuracy of classification tasks and are helpful when the diagnosis is based on the volume and boundaries of an object. While the study demonstrated promising results, future studies with larger sample size are required to validate the effectiveness of the proposed method in enhancing the medical image classification tasks.
... In addition, Orhan et al. [76] in 2021 explored the diagnostic effectiveness of an AI software using CBCT for impacted teeth. The Kappa analysis used for comparison between AI application and human observer was done based on a deep-CNN system. ...
Article
Full-text available
The early-stage detection of dental problems from radiolucency images is important for healthy oral function. X-ray images are the most commonly used imaging modality in diagnosis, providing detailed and overall contrast of the teeth. Medical practitioners increasingly prefer computer-aided diagnosis (CAD) systems over traditional manual systems. This review presents a comprehensive evaluation of CAD systems for dental radiographic images, with a specific emphasis on key elements including preprocessing, segmentation, and classification. The paper provides an in-depth comparative analysis of the advanced techniques used in each phase of the CAD system. This analysis aims to evaluate their efficacy and potential applications in the field of medical image analysis, especially furcation analysis. This study systematically reviews the existing literature to evaluate the performance of machine learning and deep learning-based methods in the three phases of CAD system development: preprocessing, segmentation, and classification. Periodontitis is a chronic inflammatory condition that affects the tissues surrounding and supporting the teeth, including the gums, periodontal ligament, and alveolar bone. It is one of the most prevalent dental diseases globally, with significant implications for oral health and overall well-being. A CAD system is necessary so that the accuracy of the models can be increased. Preprocessing includes various methods to deal with different aspects of the input images, which increases the usability and acceptability of the dataset. A comprehensive investigation of both machine and deep learning denoising methods was carried out for the image preprocessing stage. Segmentation is another important task, used to effectively crop or segment out the region of interest (ROI). Various AI techniques, such as U-Net, Mask R-CNN, and FCN, were critically examined for identifying and separating the ROI.
Classification is the last step of the process, for the detection and prediction of periodontal problems using features extracted from the ROI. In the classification phase, an extensive evaluation of machine learning and deep learning models was conducted. Classic classifiers, such as Support Vector Machines (SVM) and Random Forests (RF), and deep learning architectures, including Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), were analyzed and compared. The three processes (preprocessing, segmentation, and classification) can be validated and evaluated using different performance metrics, such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Dice metric, area overlap, accuracy, top-1/top-5 rate, mean Average Precision (mAP), and Mean Squared Error (MSE). The paper presents a literature survey of the methods, processes, and evaluations used for the detection or prediction of problems in the respective radiolucency image datasets, along with a detailed overview and analytical comparison of the different approaches and techniques used in the preprocessing, segmentation, and classification paradigms. A convolutional autoencoder-based U-Net provided improved results for the denoising of medical images. Deep learning-based models such as U-Net and R-CNN showed better results than the state of the art. For classification, state-of-the-art AI models using Deep Neural Networks (DNN) exhibited promising results, which increases their usability in the dental field. Further findings and challenges of the respective CAD stages are provided, followed by a discussion of the implications of AI in dentistry. Among the various approaches used for preprocessing, segmentation, and classification, deep learning approaches have an edge over the other paradigms, therefore becoming the root from which new methodologies are created. Various research issues, challenges, and future scope based on the study performed are also discussed in the paper.
... There was excellent agreement (kappa: 0.860). For the determination of the number of maxillary molar canals, moderate agreement was found between the human observer and AI examinations (kappa: 0.424) 28 . ...
Article
Background/Aim: The mandibular canal, containing the inferior alveolar nerve (IAN), is important in the extraction of the mandibular third molar tooth, one of the most frequently performed dentoalveolar surgical procedures in the mandible, and IAN paralysis is the biggest complication of this procedure. Today, deep learning, a subset of artificial intelligence, is developing rapidly and has achieved significant success in the field of dentistry. Employing deep learning algorithms on CBCT images, a rare but invaluable resource, for precise mandibular canal identification heralds a significant leap forward in the success of mandibular third molar extractions, marking a promising evolution in dental practice. Material and Methods: The CBCT images of 300 patients were obtained. The mandibular canal was labeled, and the data were divided into two parts: training (n=270) and test (n=30) sets. Using the nnU-Netv2 architecture, the training and validation data sets were used to estimate and generate appropriate algorithm weight factors. The success of the model was checked with the test data set, and the resulting DICE score gave information about the success of the model. Results: The DICE score indicates the overlap between the labeled and predicted regions, expressing how large the overlap area is relative to the entire combination. In our study, the DICE score for predicting the mandibular canal was 0.768, showing outstanding success. Conclusions: Segmentation and detection of the mandibular canal on CBCT images allows new approaches in dentistry and helps practitioners with preoperative diagnosis and the postoperative process.
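The DICE score used to evaluate this segmentation model is the standard Dice similarity coefficient, 2|A∩B| / (|A| + |B|); a minimal sketch over sets of labelled voxel indices (toy coordinates, not the study's data):

```python
def dice_score(pred_voxels, truth_voxels):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) over voxel index sets."""
    pred, truth = set(pred_voxels), set(truth_voxels)
    if not pred and not truth:
        return 1.0  # two empty segmentations agree perfectly by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy example: 3 predicted canal voxels vs. 3 labeled voxels, 2 in common.
pred  = [(0, 0, 0), (0, 0, 1), (0, 1, 0)]
truth = [(0, 0, 0), (0, 0, 1), (1, 0, 0)]
print(round(dice_score(pred, truth), 3))  # prints: 0.667
```

Dice equals the F1-score of the voxel-wise classification, so the reported 0.768 means roughly three quarters of the predicted and labeled canal volumes coincide.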
... A local adaptive threshold was used to separate the restorations. Again, for multiclass classification of the restoration, the Cubic Support Vector Machine technique and Error-Correcting Output Codes were employed in a cross-validation manner [26,27]. The aim of [10] is to create a computerized method to identify and categorize teeth in oral radiographic images, enabling the automated, structured creation of dental files. ...
Article
Full-text available
This study introduces a novel approach for the segmentation and classification of tooth types in 3D dental models, leveraging the Adaptive Enhanced GoogLeNet (AEG) classifier and a Custom Mask Convolutional Neural Network (CM-CNN). Motivated by the complexities and potential misinterpretations in manual or semi-automatic tooth segmentation, our method aims to automate and enhance the accuracy of dental image analysis. The 3D dental models are processed using CM-CNN for segmentation, followed by classification using the AEG classifier. The significance of this work lies in its potential to provide a reliable and efficient solution for precise tooth categorization, contributing to improved diagnostic procedures and treatment planning in dentistry. Comparative analyses demonstrate the superiority of the proposed method, achieving high accuracy, precision, recall, and F-measure when compared to conventional techniques such as traditional geometrical transformation and deep convolutional generative adversarial networks (TD-model) and deep CNN (DCNN). The outcomes suggest the method's practical applicability in real-world dental scenarios, offering a promising contribution to the ongoing integration of AI in dentistry. This study employs a dataset of 3D dental models, and its significance lies in addressing the challenges associated with tooth segmentation and classification in dental images. Dental X-ray pictures play a crucial role in predicting dental disorders, and precise tooth segmentation is essential for effective diagnosis and treatment planning. Manual or semi-automatic segmentation processes are prone to misinterpretations due to image noise and similarities between tooth types. The proposed method, integrating CM-CNN and AEG, aims to automate this process, providing accurate tooth categorization. 
The significance of achieving this automation is notable in enhancing the overall efficiency of dental image analysis, contributing to quicker and more accurate diagnoses. The outcomes of the study showcase the method's superiority over conventional techniques, indicating its potential impact on improving dental diagnostic procedures and treatment strategies, ultimately benefiting both dental practitioners and patients.
... Imak et al. 38 used ResMIBCU-Net to segment impacted teeth (including impacted canines) on panoramic images and achieved an accuracy of 99.82%. Orhan et al. 39 evaluated the diagnostic performance of a U-Net CNN model for detecting impacted third molar teeth on CBCT images and showed an accuracy of 86.2%. Meanwhile, the findings of the present study suggested a high score of 0.99 for both DSC and IoU. ...
Article
Full-text available
The process of creating virtual models of dentomaxillofacial structures through three-dimensional segmentation is a crucial component of most digital dental workflows. This process is typically performed using manual or semi-automated approaches, which can be time-consuming and subject to observer bias. The aim of this study was to train and assess the performance of a convolutional neural network (CNN)-based online cloud platform for automated segmentation of maxillary impacted canine on CBCT image. A total of 100 CBCT images with maxillary canine impactions were randomly allocated into two groups: a training set (n = 50) and a testing set (n = 50). The training set was used to train the CNN model and the testing set was employed to evaluate the model performance. Both tasks were performed on an online cloud-based platform, ‘Virtual patient creator’ (Relu, Leuven, Belgium). The performance was assessed using voxel- and surface-based comparison between automated and semi-automated ground truth segmentations. In addition, the time required for segmentation was also calculated. The automated tool showed high performance for segmenting impacted canines with a dice similarity coefficient of 0.99 ± 0.02. Moreover, it was 24 times faster than semi-automated approach. The proposed CNN model achieved fast, consistent, and precise segmentation of maxillary impacted canines.
... CBCT is an advanced maxillofacial imaging technology; it can display three-dimensional images from a cross-sectional view to evaluate the presence or absence of corticalization between the ITM and MC (Nasser et al., 2018). Deep learning can also identify and categorize ITM with the use of CBCT (Borgonovo et al., 2017; Liu et al., 2022; Orhan et al., 2021). On the other hand, CBCT is expensive and requires larger doses of radiation; thus, panoramic radiography remains a widely used alternative for the examination of impacted teeth (Whaites and Drage, 2014), as well as a preliminary step in determining whether CBCT imaging is necessary. ...
Article
Full-text available
Background Mandibular third molar is prone to impaction, resulting in its inability to erupt into the oral cavity. The radiographic examination is required to support the odontectomy of impacted teeth. The use of computer-aided diagnosis based on deep learning is emerging in the field of medical and dentistry with the advancement of artificial intelligence (AI) technology. This review describes the performance and prospects of deep learning for the detection, classification, and evaluation of third molar-mandibular canal relationships on panoramic radiographs. Methods This work was conducted using three databases: PubMed, Google Scholar, and Science Direct. Following the literature selection, 49 articles were reviewed, with the 12 main articles discussed in this review. Results Several models of deep learning are currently used for segmentation and classification of third molar impaction with or without the combination of other techniques. Deep learning has demonstrated significant diagnostic performance in identifying mandibular impacted third molars (ITM) on panoramic radiographs, with an accuracy range of 78.91% to 90.23%. Meanwhile, the accuracy of deep learning in determining the relationship between ITM and the mandibular canal (MC) ranges from 72.32% to 99%. Conclusion Deep learning-based AI with high performance for the detection, classification, and evaluation of the relationship of ITM to the MC using panoramic radiographs has been developed over the past decade. However, deep learning must be improved using large datasets, and the evaluation of diagnostic performance for deep learning models should be aligned with medical diagnostic test protocols. Future studies involving collaboration among oral radiologists, clinicians, and computer scientists are required to identify appropriate AI development models that are accurate, efficient, and applicable to clinical services.
... The mean DSC was 0.774, validating the segmentation performance and demonstrating the potential of the AI tool in CBCT. Orhan et al. [35] segmented the M3 and IAC using U-net to diagnose an impacted M3 in CBCT. Performance was evaluated using Kappa analysis and validated as 0.862. ...
Article
Full-text available
In recent dentistry research, deep learning techniques have been employed for various tasks, including detecting and segmenting third molars and inferior alveolar nerves, as well as classifying their positional relationships. Prior studies using convolutional neural networks (CNNs) have successfully detected the adjacent area of the third molar and automatically classified the relationship between the third molar and the inferior alveolar nerve. However, deep learning models have limitations in learning the diverse patterns of teeth and nerves due to variations in their shape, angle, and size across individuals. Moreover, unlike object classification, relationship classification is influenced by the proximity of teeth and nerves, making it challenging to accurately interpret the classified samples. To address these challenges, we propose a masking image-based classification system. The primary goal of this system is to enhance the classification performance of the relationship between the third molar and inferior alveolar nerve while providing diagnostic evidence to support the classification. Our proposed system operates by detecting the adjacent areas of the third molar, including the inferior alveolar nerve, in panoramic radiographs (PR). Subsequently, it generates masked images of the inferior alveolar nerve and third molar within the extracted regions of interest. Finally, it performs the classification of the relationship between the third molar and inferior alveolar nerve using these masked images. The system achieved a mean average precision (mAP) of 0.885 in detecting the region of interest of the third molar. Furthermore, the performance of the existing CNN-based positional relationship classification was evaluated using four classification models, resulting in an average accuracy of 0.795. For the segmentation task, the third molar and inferior alveolar nerve in the detected region of interest exhibited a dice similarity coefficient (DSC) of 0.961 and 0.820, respectively.
Regarding the proposed masking image-based classification, it demonstrated an accuracy of 0.832, outperforming the existing method by approximately 3%, thus confirming the superiority of our proposed system.
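Detection metrics such as the mAP reported above are built on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal sketch for axis-aligned boxes (illustrative coordinates, not the study's data):

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero if disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

# Two partially overlapping regions of interest: IoU = 1/7.
print(round(box_iou((0, 0, 2, 2), (1, 1, 3, 3)), 3))  # 0.143
```

A predicted region of interest typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (0.5 is a common convention), and mAP averages the resulting precision over recall levels and classes.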
... [31][32][33] Similarly, a deep learning application provided high accuracy in the detection of impacted third molars and their relationship with anatomical structures. 34,35 Ahn et al. 18 investigated the performance of a deep learning model to detect mesiodens on primary or mixed dentition panoramic radiographs and suggested that this method may help clinicians with insufficient clinical experience make more accurate and faster diagnoses. Ha et al. 19 proposed a model based on YOLOv3 to detect mesiodens on panoramic radiographs of primary, mixed and permanent dentition groups. ...
Article
Artificial intelligence (AI) based systems are used in dentistry to make the diagnostic process more accurate and efficient. The objective of this study was to evaluate the performance of a deep learning program for the detection and classification of dental structures and treatments on panoramic radiographs of pediatric patients. In total, 4821 anonymized panoramic radiographs of children aged between 5 and 13 years were analyzed by YOLO V4, a CNN (Convolutional Neural Network) based object detection model. The ability to make a correct diagnosis was tested on samples from pediatric patients examined within the scope of the study. All statistical analyses were performed using SPSS 26.0 (IBM, Chicago, IL, USA). The YOLO V4 model diagnosed immature teeth, permanent tooth germs, and brackets successfully, with high F1 scores of 0.95, 0.90, and 0.76, respectively. Although the model achieved promising results, it had certain limitations for some dental structures and treatments, including fillings, root canal treatments, and supernumerary teeth. Our architecture achieved reliable results, with some specific limitations, for detecting dental structures and treatments. Detection of dental structures and previous dental treatments on pediatric panoramic x-rays using a deep learning-based approach may provide early diagnosis of some dental anomalies and help dental practitioners find more accurate treatment options while saving time and effort.
... In their study on determining the presence of periapical pathology, Setzer et al. 56 found that the sensitivity rate (65%) was not sufficiently high, but that the accuracy rate (93%) was. Lymph node metastasis detection, osteoporosis risk assessment, Sjögren's syndrome, and oral cancer screening are other areas in which AI algorithms are used in radiology. ...
... In this way, it is possible to preserve important anatomical structures and to complete the operations in a shorter time (3,37). Using CBCT images, Orhan et al. (51) examined the accuracy of an AI model in identifying impacted third molars. The AI model performed with 86.2% accuracy in determining the relationship of these teeth to anatomical structures. ...
Chapter
Full-text available
Dear Readers, We have completed our book thanks to our friends who do valuable studies in the light of science. The concept of health, which continues to be important in every age, has once again revealed its importance with the covid-19 epidemic that has affected the world for the last few years and the earthquake disaster that our country has experienced deeply. We, the healthcare professionals who believe in the continuity of education, think that societies equipped with knowledge and using intelligence and evidence-based knowledge will pass the health exams with much less injury and loss. For this reason, the aim of the book for us is to shed some light on future studies and to illuminate the darkness by warning its readers through the known information and unknowns in it. We hope that our presented book will be easy to understand and will open new horizons for all humanity as well as supporting scientists from faculties of medicine, dentistry, pharmacy and veterinary medicine. I would like to thank the scientists working in the health sciences and our team of authors who supported our book. Dr. Enes Karaman
... The detection tasks comprised detection of periapical lesions, [9,19] temporomandibular joint (TMJ) osteoarthritis, [31] and impacted third molars. [32] Unfortunately, most of these detection studies failed to compare human intelligence with that of AI. Moreover, these studies failed to compare AI against an objective gold standard. ...
Article
Full-text available
Background: Artificial intelligence (AI) has the potential to enhance health care efficiency and diagnostic accuracy. Aim: The present study aimed to determine the current performance of AI using cone-beam computed tomography (CBCT) images for detection and segmentation. Materials and methods: A systematic search for scholarly articles written in English was conducted on June 24, 2021, in PubMed, Web of Science, and Google Scholar. Inclusion criteria were peer-reviewed articles that evaluated AI systems using CBCT images for detection and segmentation purposes and reported outcomes in terms of precision, recall, accuracy, the DICE index, and the Dice similarity coefficient (DSC). The Cochrane tool for assessing risk of bias was used to evaluate the studies included in this meta-analysis. A random-effects model was used to calculate the pooled effect size. Results: Thirteen studies were included for review and analysis. The pooled performance of the included AI models was 0.85 (95% CI: 0.73, 0.92) for the DICE index/DSC, 0.88 (0.77, 0.94) for precision, 0.93 (0.84, 0.97) for recall, and 0.83 (0.68, 0.91) for accuracy percentage. Conclusion: Some limitations were identified in our meta-analysis, such as heterogeneity of studies, risk of bias, and lack of ground truth. The application of AI for detection and segmentation using CBCT images is comparable to services offered by trained dentists and can potentially expedite and enhance the interpretive process. Implementing AI into clinical dentistry can allow a large number of CBCT studies to be analyzed and those with significant findings flagged, thus increasing efficiency. The study protocol was registered in PROSPERO, the international registry for systematic reviews (ID number CRD42021285095).
... Some studies in dentistry applied deep learning to diagnose and make decisions [7][8][9]. Deep learning detects caries and periapical lesions based on periapical and panoramic images [10][11][12]. Applying deep learning to distinguish prosthetic restorations in PR could save more clinical time for clinicians to focus on treatment planning and prosthodontic operations. ...
Article
Full-text available
Aim. This study applied a CNN (convolutional neural network) algorithm to detect prosthetic restorations on panoramic radiographs and to automatically detect these restorations using deep learning systems. Materials and Methods. This study collected a total of 5126 panoramic radiographs of adult patients. During model training, .bmp, .jpeg, and .png files for the images, and .txt files containing five different types of information for the labels, were required. Herein, 10% of the panoramic radiographs were used as a test dataset. Through labeling, 2988 crowns and 2969 bridges were annotated in the dataset. Results. The mAP and mAR values were obtained with the confidence threshold set at 0.1. TP, FP, FN, precision, recall, and F1 score values were obtained with the confidence threshold at 0.25. The YOLOv4 model demonstrated that accurate results could be obtained quickly. Bridge results were found to be more successful than crown results. Conclusion. The detection of prosthetic restorations with artificial intelligence on panoramic radiography, which is widely preferred in clinical applications, provides convenience to physicians in terms of diagnosis and time management.
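Precision, recall, and F1 follow directly from the TP/FP/FN counts collected at a fixed confidence threshold, as in the study above; a minimal sketch with illustrative counts (not the study's numbers):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from counts at a fixed confidence threshold."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for one restoration class (e.g. crowns).
p, r, f1 = detection_metrics(tp=90, fp=10, fn=30)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.9 0.75 0.818
```

Lowering the confidence threshold typically trades precision for recall, which is why the study reports metrics at specific thresholds (0.1 and 0.25).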
... Artificial intelligence (AI) is defined as the ability of a machine to perform complex tasks that mimic humans' cognitive functions, such as problem solving, recognition of objects and words, and decision making [13]. The objective here is to develop machines that can learn through data to solve problems. ...
Article
Full-text available
The present study aims to validate the diagnostic performance and evaluate the reliability of an artificial intelligence system based on the convolutional neural network method for the morphological classification of the sella turcica in CBCT (cone-beam computed tomography) images. In this retrospective study, sella segmentation and classification models (CranioCatch, Eskisehir, Türkiye) were applied to sagittal slices of CBCT images using PyTorch-supported U-Net and TensorFlow 1, and the GoogleNet Inception V3 algorithm was implemented. The AI models achieved successful results for sella turcica segmentation of CBCT images based on the deep learning models. The sensitivity, precision, and F-measure values were 1.0, 1.0, and 1.0, respectively, for segmentation of the sella turcica in sagittal slices of CBCT images. The sensitivity, precision, accuracy, and F1-score were 1.0, 0.95, 0.98, and 0.84, respectively, for sella-turcica-flattened classification; 0.95, 0.83, 0.92, and 0.88, respectively, for sella-turcica-oval classification; and 0.75, 0.94, 0.90, and 0.83, respectively, for sella-turcica-round classification. It is predicted that detecting anatomical landmarks of orthodontic importance, such as the sella point, with artificial intelligence algorithms will save time for orthodontists and facilitate diagnosis.
... 3D imaging is another workspace in terms of artificial intelligence for the same problem. Orhan et al. [17] designed a 3D study to evaluate the performance of an AI application in determining the impacted third molar teeth and the relationship with neighboring anatomical structures. Similar to the present study, their deep CNN algorithm was based on a U-Net-like architecture for the segmentation of the MCs on CBCT images. ...
Article
Full-text available
The study aimed to generate a fused deep learning algorithm that detects and classifies the relationship between the mandibular third molar and the mandibular canal on orthopantomographs. Radiographs (n = 1880) were randomly selected from the hospital archive. Two dentomaxillofacial radiologists annotated the data via MATLAB and classified them into four groups according to the overlap of the root of the mandibular third molar and the mandibular canal. Each radiograph was segmented using a U-Net-like architecture, and the segmented images were classified by AlexNet. Accuracy, the weighted intersection over union score, the dice coefficient, specificity, sensitivity, and area under the curve metrics were used to quantify the performance of the models. Three dental practitioners were also asked to classify the same test data; their success rate was assessed using the Intraclass Correlation Coefficient. The segmentation network achieved a global accuracy of 0.99 and a weighted intersection over union score of 0.98; the average dice score over all images was 0.91. The classification network achieved an accuracy of 0.80, per-class sensitivities of 0.74, 0.83, 0.86, and 0.67, per-class specificities of 0.92, 0.95, 0.88, and 0.96, and an AUC score of 0.85. The most successful dental practitioner achieved a success rate of 0.79. The fused segmentation and classification networks produced encouraging results, and the final model achieved almost the same classification performance as the dental practitioners. Better diagnostic accuracy of the combined artificial intelligence tools may help improve the prediction of risk factors, especially for recognizing such anatomical variations.
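Per-class sensitivity and specificity, as reported for the four-class model above, can be derived from a confusion matrix; an illustrative sketch (the two-class matrix below is hypothetical, not the study's data):

```python
import numpy as np

def per_class_sens_spec(cm):
    """Per-class sensitivity and specificity from a confusion matrix
    where cm[i, j] counts true-class-i samples predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp   # missed samples of each class
    fp = cm.sum(axis=0) - tp   # samples wrongly assigned to each class
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical two-class matrix (rows: ground truth, columns: prediction).
sens, spec = per_class_sens_spec([[5, 1], [2, 4]])
print(sens, spec)
```

The same function generalizes to the four overlap classes in the study: each class is scored one-vs-rest against the pooled remaining classes.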
Article
Full-text available
Objectives Artificial intelligence (AI) is an emerging field in dentistry. AI is gradually being integrated into dentistry to improve clinical dental practice. The aims of this scoping review were to investigate the application of AI in image analysis for decision‐making in clinical dentistry and identify trends and research gaps in the current literature. Material and Methods This review followed the guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses Extension for Scoping Reviews (PRISMA‐ScR). An electronic literature search was performed through PubMed and Scopus. After removing duplicates, a preliminary screening based on titles and abstracts was performed. A full‐text review and analysis were performed according to predefined inclusion criteria, and data were extracted from eligible articles. Results Of the 1334 articles returned, 276 met the inclusion criteria (consisting of 601,122 images in total) and were included in the qualitative synthesis. Most of the included studies utilized convolutional neural networks (CNNs) on dental radiographs such as orthopantomograms (OPGs) and intraoral radiographs (bitewings and periapicals). AI was applied across all fields of dentistry ‐ particularly oral medicine, oral surgery, and orthodontics ‐ for direct clinical inference and segmentation. AI‐based image analysis was use in several components of the clinical decision‐making process, including diagnosis, detection or classification, prediction, and management. Conclusions A variety of machine learning and deep learning techniques are being used for dental image analysis to assist clinicians in making accurate diagnoses and choosing appropriate interventions in a timely manner.
Article
Full-text available
Introduction Healthcare amelioration is exponential to technological advancement. In the recent era of automation, the consolidation of artificial intelligence (AI) in dentistry has rendered transformation in oral healthcare from a hardware-centric approach to a software-centric approach, leading to enhanced efficiency and improved educational and clinical outcomes. Objectives The aim of this narrative overview is to extend the succinct of the major events and innovations that led to the creation of modern-day AI and dentistry and the applicability of the former in dentistry. This article also prompts oral healthcare workers to endeavor a liable and optimal approach for effective incorporation of AI technology into their practice to promote oral health by exploring the potentials, constraints, and ethical considerations of AI in dentistry. Methods A comprehensive approach for searching the white and grey literature was carried out to collect and assess the data on AI, its use in dentistry, and the associated challenges and ethical concerns. Results AI in dentistry is still in its evolving phase with paramount applicabilities relevant to risk prediction, diagnosis, decision-making, prognosis, tailored treatment plans, patient management, and academia as well as the associated challenges and ethical concerns in its implementation. Conclusion The upsurging advancements in AI have resulted in transformations and promising outcomes across all domains of dentistry. In futurity, AI may be capable of executing a multitude of tasks in the domain of oral healthcare, at the level of or surpassing the ability of mankind. However, AI could be of significant benefit to oral health only if it is utilized under responsibility, ethicality and universality.
Article
The present report describes a rare complication, named Kuhn anemia, which occurred during local infiltration anesthesia of a maxillary wisdom tooth. A 24-year-old male was referred for a residual crown of the left maxillary wisdom tooth, which therefore required extraction. Oral examination revealed a buccally impacted left maxillary wisdom tooth (28), severe caries affecting the dental pulp, and pain on percussion. The case was diagnosed as an impacted wisdom tooth (28) with pulpitis, and the authors performed an extraction of the 28. After injection of articaine hydrochloride (68 mg:1.7 mL) containing epinephrine (1:100,000) as the local anesthetic, the patient felt pain and heat in the left cheek, and an irregularly shaped pale area appeared on the left cheek. The authors stopped the surgery and comforted him. After a 30-minute break, the cheek returned to normal, and the abnormal sensation was no longer felt.
Article
Background Panoramic radiography (PR) is available to determine the contact relationship between maxillary molar teeth (MMT) and the maxillary sinus floor (MSF). However, as PRs do not provide clear and detailed anatomical information, advanced imaging methods can be used. Aim The aim of this study was to evaluate the diagnostic performance of deep learning (DL) applications that assess the relationship of the MSF to the first maxillary molar teeth (fMMT) and second maxillary molar teeth (sMMT) on PRs with data confirmed by cone beam computed tomography (CBCT). Methods A total of 2162 fMMT and sMMT were included in this retrospective study. The contact relationship of teeth with MSF was compared among DL methods. Results DL methods, such as GoogLeNet, VGG16, VGG19, DarkNet19, and DarkNet53, were used to evaluate the contact relationship between MMT and MSF, and 85.89% accuracy was achieved by majority voting. In addition, 88.72%, 81.19%, 89.39%, and 83.14% accuracy rates were obtained in right fMMT, right sMMT, left fMMT, and left sMMT, respectively. Conclusion DL models showed high accuracy values in detecting the relationship of fMMT and sMMT with MSF.
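Majority voting over per-model predictions, as used above to combine GoogLeNet, VGG16, VGG19, DarkNet19, and DarkNet53, can be sketched as follows (the class labels are illustrative, not the study's):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions for one sample by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Five hypothetical model votes on whether a tooth contacts the sinus floor.
votes = ["contact", "contact", "no_contact", "contact", "no_contact"]
print(majority_vote(votes))  # contact
```

With an odd number of binary classifiers, ties cannot occur; the ensemble decision can then beat any single model when the models' errors are not perfectly correlated, which is consistent with the 85.89% pooled accuracy reported above.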
Chapter
Image processing is an essential component of the clinical workflow and is used at several stages to improve the diagnostic process and facilitate treatment planning. It can be broadly defined as the application of transformations to digital images, typically with the aim of enhancing the image or extracting relevant information from it. Common examples of medical image processing techniques are (1) image segmentation, i.e., the partitioning of images into several distinct regions; (2) image enhancement, e.g., noise reduction, sharpness enhancement, and artifact correction; (3) tomographic reconstruction, e.g., in computed tomography or magnetic resonance imaging; and (4) registration, i.e., spatial matching of multiple images. Recent innovations in the field of artificial intelligence, especially deep learning (DL), have resulted in a wide variety of potential applications in medical image processing. DL models, mostly trained using supervised learning, can offer several benefits over alternative (semi-)automated or manual approaches, owing to their accuracy, robustness, and speed. This chapter will provide an overview of neural networks used in image-to-image processing and will describe principles and applications of DL and unsupervised clustering in image segmentation. The next chapter will focus on DL applications in image enhancement, reconstruction, and registration.
Article
Background: Artificial intelligence (AI) technology has been increasingly developed in oral and maxillofacial imaging. The aim of this systematic review was to assess the applications and performance of the developed algorithms in different dentomaxillofacial imaging modalities. Study design: A systematic search of PubMed and Scopus databases was performed. The search strategy was set as a combination of the following keywords: "Artificial Intelligence," "Machine Learning," "Deep Learning," "Neural Networks," "Head and Neck Imaging," and "Maxillofacial Imaging." Full-text screening and data extraction were independently conducted by two independent reviewers; any mismatch was resolved by discussion. The risk of bias was assessed by one reviewer and validated by another. Results: The search returned a total of 3,392 articles. After careful evaluation of the titles, abstracts, and full texts, a total number of 194 articles were included. Most studies focused on AI applications for tooth and implant classification and identification, 3-dimensional cephalometric landmark detection, lesion detection (periapical, jaws, and bone), and osteoporosis detection. Conclusion: Despite the AI models' limitations, they showed promising results. Further studies are needed to explore specific applications and real-world scenarios before confidently integrating these models into dental practice.
Chapter
Medicine and dentistry have recently undergone a rapid digitalization process, with the integration of innovative technologies like artificial intelligence (AI). AI systems have emerged as valuable tools in various aspects of dentistry, including diagnosis, treatment planning, prognosis prediction, and education. The objective of this book chapter is to explore the role of AI in oral surgery and periodontics, based on the existing literature.
Chapter
The temporomandibular joint (TMJ) is one of the most complex joints in the human body. TMJ-related diseases are very common, and approximately one-third of adults have various symptoms related to TMJ diseases (TMDs), including jaw pain, joint noise, limited mouth opening, and tinnitus. Diagnosing and managing TMDs are complex tasks that should be performed comprehensively, using different clinical, imaging, and biological findings for an accurate diagnosis. Artificial intelligence (AI) is a technology that imitates human intelligence and can learn, think, interpret, and predict. AI has great potential for use as a diagnostic decision-support tool in dentistry. The aim of this chapter is to present the uses and benefits of AI in TMDs, considering the current literature.
Chapter
Technological changes have brought about significant advancements in the fields of medicine and dentistry. Among the most crucial innovations responsible for this transformation is the artificial intelligence (AI) technology. In the medical and dental domains, AI is expected to gain increasing preference due to its vital contributions to patient healthcare services and the convenience it offers to physicians. One of the key advantages of artificial intelligence is its ability to undergo continuous training and updates using vast datasets. Consequently, it can produce more sensitive, rapid, and reliable results compared to humans. Beyond diagnosis and treatment processes, AI is also employed in areas such as scheduling patient appointments, handling insurance and paperwork, and maintaining medical history records. This chapter provides a concise review of these aspects.
Article
Impacted tooth detection is an important step in dental practice, and an accurate detection process is of great importance for treatment planning and diagnosis. Given the limitations and error rates of traditional methods, the use of AI-based approaches such as deep learning models is becoming increasingly widespread. In this study, the performance of deep learning models on panoramic impacted-tooth images was examined. Seven different models (VGG16-Unet, VGG19-Unet, MobileNetV2, Unet-v1, Unet-v2, Unet-v3, and Unet-v4) were evaluated. The AUC (area under the curve) value of the VGG16-Unet model, at 94.87%, was higher than those of the other models. By contributing to the development of more accurate and precise segmentation methods in dentistry, this study supports more reliable results in tooth detection and treatment planning processes.
Article
Full-text available
Artificial intelligence (AI) represents the capacity of machines that imitate human cognitive functions to learn and solve problems. Unlike a single clinician, AI systems can simultaneously observe and rapidly process an unlimited amount of data. Machine learning involves computational models and algorithms called artificial neural networks (ANNs), which imitate the architecture of the biological neural networks in the brain. As in medicine, AI applications have become popular in many areas of dentistry. Based on a comprehensive literature search, this review focuses on AI applications and related studies in various fields of dentistry. AI can be used in oral and maxillofacial surgery (identification of anatomical landmarks, prediction of postoperative complications), periodontology (detection of alveolar bone loss and changes in bone density), implantology, oral and maxillofacial radiology (tooth segmentation; identification of supernumerary teeth, root fractures, or apical lesions; prediction of osteoporosis), restorative dentistry (detection of dental caries), and orthodontics (treatment analysis, skeletal classification, and determination of the growth and development period). Integrating AI technology into dentistry aims to minimize human error while saving cost and time, especially in clinics with few clinicians.
Article
Full-text available
Introduction Artificial Intelligence (AI), Machine Learning and Deep Learning are playing an increasingly significant role in the medical field in the 21st century. These recent technologies are based on the concept of creating machines with the potential to function like a human brain, which necessitates gathering large quantities of data to be processed. Once processed with AI machines, these data have the potential to streamline and improve the capabilities of the medical field in diagnosis and treatment planning, as well as in the prediction and recognition of diseases. These concepts are new to Orthodontics and are currently limited to image processing and pattern recognition. Objective This article presents and describes the different methods by which orthodontics may benefit from a more widespread adoption of these technologies. Keywords: Deep learning; Artificial intelligence; Orthodontics
Chapter
Digital radiography is the most common advanced dental technology that patients experience during diagnostic visits. Orthodontists require a cephalometric system and, when moving from film to digital, again have two choices: direct digital and indirect digital. There are several principles of radiation safety: as low as reasonably achievable (ALARA), justification, limitation, optimization, and the use of selection criteria. The use of standard intraoral and extraoral imaging for clinical dentistry has been available for many years and includes caries and periodontal diagnosis, endodontic diagnosis, detection and evaluation of oral and maxillofacial pathology, and evaluation of craniofacial developmental disorders. Quantitative light‐induced fluorescence technology measures the refractive differences between healthy enamel and demineralized, porous enamel, with areas of caries and demineralization showing less fluorescence. Cone Beam Computed Tomography is arguably the most exciting advancement in oral radiology since panoramic radiology in the 1950s and 1960s, and perhaps since Roentgen's discovery of X‐rays in 1895.
Article
Introduction Deep learning methods have recently been applied to the processing of medical images and have shown promise in a variety of applications. This study aimed to develop a deep learning approach for identifying oral lichen planus lesions using photographic images. Material and Methods Anonymized retrospective photographic images of buccal mucosa, 65 healthy and 72 with oral lichen planus lesions, were identified using the CranioCatch program (CranioCatch, Eskişehir, Turkey). All images were re-checked and verified by Oral Medicine and Maxillofacial Radiology experts. This data set was divided into training (n = 51; n = 58), verification (n = 7; n = 7), and test (n = 7; n = 7) sets for healthy mucosa and mucosa with oral lichen planus lesions, respectively. An artificial intelligence model was developed using the Google Inception V3 architecture implemented with TensorFlow, a deep learning framework. Results The AI deep learning model classified all test images of both healthy and diseased mucosa with a 100% success rate. Conclusion AI offers a wide range of uses and applications in healthcare. Increased workload, increased complexity of the job, and probable doctor fatigue may jeopardize diagnostic ability and results; AI components in imaging equipment would lessen this burden and increase efficiency. They can also detect oral lesions and draw on more data than their human counterparts. Our preliminary findings show that deep learning has the potential to handle this significant challenge.
Book
Full-text available
INSAC WORLD HEALTH SCIENCES
Article
Full-text available
Objectives: To analyze all artificial intelligence abstracts presented at the European Congress of Radiology (ECR) 2019 with regard to their topics and their adherence to the Standards for Reporting Diagnostic accuracy studies (STARD) checklist. Methods: A total of 184 abstracts were analyzed with regard to adherence to the STARD criteria for abstracts as well as the reported modality, body region, pathology, and use cases. Results: Major topics of artificial intelligence abstracts were classification tasks in the abdomen, chest, and brain, with CT being the most commonly used modality. Out of the 10 STARD for abstracts criteria analyzed in the present study, on average, 5.32 (SD = 1.38) were reported by the 184 abstracts. Specifically, the highest adherence to STARD for abstracts was found for general interpretation of results (100.0%, 184 of 184), clear study objectives (99.5%, 183 of 184), and estimates of diagnostic accuracy (96.2%, 177 of 184). The lowest STARD adherence was found for eligibility criteria for participants (9.2%, 17 of 184), type of study series (13.6%, 25 of 184), and implications for practice (20.7%, 44 of 184). There was no significant difference in the number of reported STARD criteria between abstracts accepted for oral presentation (M = 5.35, SD = 1.31) and abstracts accepted for the electronic poster session (M = 5.39, SD = 1.45) (p = .86). Conclusions: Adherence to STARD for abstracts was low, indicating that providing authors with the related checklist may increase the quality of abstracts.
Article
Full-text available
Accurate localisation of mandibular canals in lower jaws is important in dental implantology, in which the implant position and dimensions are currently determined manually from 3D CT images by medical experts to avoid damaging the mandibular nerve inside the canal. Here we present a deep learning system for automatic localisation of the mandibular canals by applying fully convolutional neural network segmentation on a clinically diverse dataset of 637 cone beam CT volumes, with mandibular canals coarsely annotated by radiologists, and using a dataset of 15 volumes with accurate voxel-level mandibular canal annotations for model evaluation. We show that our deep learning model, trained on the coarsely annotated volumes, localises the mandibular canals of the voxel-level annotated set highly accurately, with the mean curve distance and average symmetric surface distance being 0.56 mm and 0.45 mm, respectively. These unparalleled results highlight that deep learning integrated into the dental implantology workflow could significantly reduce manual labour in mandibular canal annotation.
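The surface-distance metrics reported above can be sketched for two point sets; this is an illustrative toy version, not the authors' pipeline, which extracts surface voxels from CBCT segmentation masks:

```python
import math

def assd(surface_a, surface_b):
    """Average symmetric surface distance between two point sets (e.g. in mm).

    Simplified sketch: each surface is a list of (x, y, z) points rather than
    a voxelized segmentation boundary.
    """
    def mean_min_dist(src, dst):
        # For each point in src, distance to its nearest neighbour in dst
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)

    # Symmetric: average the A-to-B and B-to-A mean nearest-neighbour distances
    return 0.5 * (mean_min_dist(surface_a, surface_b) +
                  mean_min_dist(surface_b, surface_a))
```

For identical surfaces the distance is zero; the reported 0.45 mm means that, on average, each predicted canal surface point lies under half a millimetre from the ground-truth surface.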
Article
Full-text available
The practicability of deep learning techniques has been demonstrated by their successful implementation in varied fields, including diagnostic imaging for clinicians. In accordance with the increasing demands in the healthcare industry, techniques for automatic prediction and detection are being widely researched. Particularly in dentistry, for various reasons, automated mandibular canal detection has become highly desirable. The positioning of the inferior alveolar nerve (IAN), which is one of the major structures in the mandible, is crucial to prevent nerve injury during surgical procedures. However, automatic segmentation using cone beam computed tomography (CBCT) poses certain difficulties, such as the complex appearance of the human skull, limited number of datasets, unclear edges, and noisy images. Using work-in-progress automation software, experiments were conducted with models based on 2D SegNet and 2D and 3D U-Nets as preliminary research for a dental segmentation automation tool. The 2D U-Net with adjacent images demonstrates a higher global accuracy of 0.82 than naïve U-Net variants. The 2D SegNet showed the second highest global accuracy of 0.96, and the 3D U-Net showed the best global accuracy of 0.99. An automated canal detection system based on deep learning will contribute significantly to efficient treatment planning and to reducing patient discomfort. This study is a preliminary report and an opportunity to explore the application of deep learning to other dental fields.
Article
Full-text available
Panoramic radiographs and computed tomography (CT) play a paramount role in the accurate diagnosis, treatment planning, and prognostic evaluation of various complex dental pathologies. The advent of cone-beam computed tomography (CBCT) has revolutionized the practice of dentistry, and this technique is now considered the gold standard for imaging the oral and maxillofacial area due to its numerous advantages, including reductions in exposure time, radiation dose, and cost in comparison to other imaging modalities. This review highlights the broad use of CBCT in the dentomaxillofacial region, and also focuses on future software advancements that can further optimize CBCT imaging.
Article
Full-text available
Introduction: We applied deep convolutional neural networks (CNNs) to detect apical lesions (ALs) on panoramic dental radiographs. Methods: Based on a synthesized data set of 2001 tooth segments from panoramic radiographs, a custom-made 7-layer deep neural network, parameterized by a total number of 4,299,651 weights, was trained and validated via 10 times repeated group shuffling. Hyperparameters were tuned using a grid search. Our reference test was the majority vote of 6 independent examiners who detected ALs on an ordinal scale (0, no AL; 1, widened periodontal ligament, uncertain AL; 2, clearly detectable lesion, certain AL). Metrics were the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive/negative predictive values. Subgroup analysis for tooth types was performed, and different margins of agreement of the reference test were applied (base case: 2; sensitivity analysis: 6). Results: The mean (standard deviation) tooth level prevalence of both uncertain and certain ALs was 0.16 (0.03) in the base case. The AUC of the CNN was 0.85 (0.04). Sensitivity and specificity were 0.65 (0.12) and 0.87 (0.04), respectively. The resulting positive predictive value was 0.49 (0.10), and the negative predictive value was 0.93 (0.03). In molars, sensitivity was significantly higher than in other tooth types, whereas specificity was lower. When only certain ALs were assessed, the AUC was 0.89 (0.04). Increasing the margin of agreement to 6 significantly increased the AUC to 0.95 (0.02), mainly because the sensitivity increased to 0.74 (0.19). Conclusions: A moderately deep CNN trained on a limited amount of image data showed satisfying discriminatory ability to detect ALs on panoramic radiographs.
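The diagnostic-accuracy metrics quoted in abstracts like this one follow directly from confusion-matrix counts; a minimal helper (the counts below are illustrative, chosen to mirror the sensitivity and specificity reported above):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy metrics from confusion-matrix counts.

    tp/fp/tn/fn: true/false positives and true/false negatives.
    """
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true lesions detected
        "specificity": tn / (tn + fp),  # fraction of healthy cases cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

Note that sensitivity and specificity are properties of the classifier, whereas PPV and NPV depend on lesion prevalence in the sample, which is why the abstract's PPV (0.49) is low despite decent specificity at a prevalence of 0.16.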
Article
Full-text available
Purpose Artificial intelligence (AI), represented by deep learning, can be used for real-life problems and is applied across all sectors of society, including the medical and dental fields. The purpose of this study is to review articles about deep learning applied to the field of oral and maxillofacial radiology. Materials and Methods A systematic review was performed using the Pubmed, Scopus, and IEEE Xplore databases to identify articles using deep learning in the English literature. The variables from 25 articles included network architecture, number of training data, evaluation result, pros and cons, study object, and imaging modality. Results A convolutional neural network (CNN) was used as the main network component. The number of published papers and the size of training datasets tended to increase, covering various fields of dentistry. Conclusion Dental public datasets need to be constructed, and data standardization is necessary for clinical application of deep learning in the dental field.
Article
Full-text available
We propose using faster regions with convolutional neural network features (faster R-CNN) in the TensorFlow tool package to detect and number teeth in dental periapical films. To improve detection precision, we propose three post-processing techniques to supplement the baseline faster R-CNN according to prior domain knowledge. First, a filtering algorithm is constructed to delete overlapping boxes detected by faster R-CNN associated with the same tooth. Next, a neural network model is implemented to detect missing teeth. Finally, a rule-based module based on a teeth numbering system is proposed to match labels of detected teeth boxes and modify detected results that violate certain intuitive rules. The intersection-over-union (IOU) value between detected and ground-truth boxes is calculated to obtain precision and recall on a test dataset. Results demonstrate that both precision and recall exceed 90%, and the mean IOU between detected boxes and ground truths also reaches 91%. Moreover, three dentists were invited to independently annotate the test dataset manually, and their annotations were then compared to the labels obtained by our proposed algorithms. The results indicate that machines already perform close to the level of a junior dentist.
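The IOU criterion and the overlap-filtering step described above can be sketched as follows; the filter is a generic greedy suppression, not necessarily the paper's exact algorithm:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def filter_overlaps(boxes, scores, threshold=0.5):
    """Greedy suppression of overlapping detections: keep the
    highest-scoring box in each overlap group (cf. the paper's first
    post-processing step for duplicate boxes on the same tooth)."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < threshold for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]
```

A detection is typically counted as correct when its IOU with a ground-truth box exceeds a fixed threshold (commonly 0.5), which is how the precision and recall figures above are computed.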
Article
Full-text available
Objectives: Analysis of dental radiographs is an important part of the diagnostic process in daily clinical practice. Interpretation by an expert includes teeth detection and numbering. In this project, a novel solution based on convolutional neural networks (CNNs) is proposed that performs this task automatically for panoramic radiographs. Methods: A data set of 1352 randomly chosen panoramic radiographs of adults was used to train the system. The CNN-based architectures for both teeth detection and numbering tasks were analyzed. The teeth detection module processes the radiograph to define the boundaries of each tooth. It is based on the state-of-the-art Faster R-CNN architecture. The teeth numbering module classifies detected teeth images according to the FDI notation. It utilizes the classical VGG-16 CNN together with a heuristic algorithm to improve results according to the rules for spatial arrangement of teeth. A separate testing set of 222 images was used to evaluate the performance of the system and to compare it to the expert level. Results: For the teeth detection task, the system achieves the following performance metrics: a sensitivity of 0.9941 and a precision of 0.9945. For teeth numbering, its sensitivity is 0.9800 and specificity is 0.9994. Experts detect teeth with a sensitivity of 0.9980 and a precision of 0.9998. Their sensitivity for tooth numbering is 0.9893 and specificity is 0.9997. The detailed error analysis showed that the developed software system makes errors caused by factors similar to those affecting experts. Conclusions: The performance of the proposed computer-aided diagnosis solution is comparable to the level of experts. Based on these findings, the method has the potential for practical application and further evaluation for automated dental radiograph analysis. Computer-aided teeth detection and numbering simplifies the process of filling out digital dental charts. Automation could help to save time and improve the completeness of electronic dental records.
Article
Full-text available
Machine learning (ML) is a form of artificial intelligence which is poised to transform the twenty-first century. Rapid, recent progress in its underlying architecture and algorithms and growth in the size of datasets have led to increasing computer competence across a range of fields. These include driving a vehicle, language translation, chatbots, and beyond-human performance at complex board games such as Go. Here, we review the fundamentals and algorithms behind machine learning and highlight specific approaches to learning and optimisation. We then summarise the applications of ML to medicine. In particular, we showcase recent diagnostic performances, and caveats, in the fields of dermatology, radiology, pathology and general microscopy.
Article
Full-text available
Objectives Ameloblastomas and keratocystic odontogenic tumors (KCOTs) are important odontogenic tumors of the jaw. While their radiological findings are similar, the behaviors of these two types of tumors are different. Precise preoperative diagnosis of these tumors can help oral and maxillofacial surgeons plan appropriate treatment. In this study, we created a convolutional neural network (CNN) for the detection of ameloblastomas and KCOTs. Methods Five hundred digital panoramic images of ameloblastomas and KCOTs, with patient information de-identified, were retrospectively collected from a hospital information system and preprocessed by inverse logarithm and histogram equalization. To overcome the imbalance of data entry, we focused our study on 2 tumors with equal distributions of input data. We implemented a transfer learning strategy to overcome the problem of limited patient data. Transfer learning used a 16-layer CNN (VGG-16) pretrained on a large sample dataset and refined with our secondary training dataset comprising 400 images. A separate test dataset comprising 100 images was evaluated to compare the performance of the CNN with diagnosis results produced by oral and maxillofacial specialists. Results The sensitivity, specificity, accuracy, and diagnostic time were 81.8%, 83.3%, 83.0%, and 38 seconds, respectively, for the CNN. These values for the oral and maxillofacial specialist were 81.1%, 83.2%, 82.9%, and 23.1 minutes, respectively. Conclusions Ameloblastomas and KCOTs could be detected based on digital panoramic radiographic images using a CNN with accuracy comparable to that of manual diagnosis by oral and maxillofacial specialists. These results demonstrate that CNNs may aid in screening for ameloblastomas and KCOTs in a substantially shorter time.
Article
Full-text available
Objectives: The purpose of this study was to assess the relationship of the maxillary third molars to the maxillary sinus using cone-beam computed tomography (CBCT) in a Turkish population. Materials and methods: A total of 300 right and 307 left maxillary third molars were examined using CBCT images obtained from 394 patients. Data including the age, gender, the angulation type, depth of the third molars, and horizontal and vertical positions of the maxillary sinus relative to the third molars were examined. Results: Among 394 patients, 215 (54.6%) were male and 179 (45.4%) were female. The most common angulation of impaction was vertical (80.2%). Based on the depth of the third molars in relation to the adjacent second molar, Class A was the most common. Regarding the relationships of the third molars with the maxillary sinus, vertical Type I (43.5%) and horizontal Type II (59.3%) were seen most frequently. There was a significant difference between the vertical and horizontal relationships (P < 0.05). Conclusions: Knowledge of the anatomical relationship between the maxillary sinus floor and maxillary third molar roots is important for removing a maxillary third molar. CBCT evaluation could be valuable when performing dental procedures involving the maxillary third molars.
Article
Full-text available
A tooth normally erupts when half to three-quarters of its final root length has developed. Tooth impaction is usually diagnosed well after this period and is generally asymptomatic. It is principally for this reason that patients seek treatment later than optimal. Tooth impaction is a common problem in daily orthodontic practice and, in most cases, it is recognized by chance in a routine dental examination. Therefore, it is very important that dental practitioners are aware of this condition, since early detection and intervention may help to prevent many harmful complications. The treatment of impacted teeth requires multidisciplinary cooperation between orthodontists, oral surgeons and sometimes periodontists. Orthodontic treatment and surgical exposure of impacted teeth are performed in order to bring the impacted tooth into the line of the arch. The treatment is long, more complicated and challenging. This article presents an overview of the prevalence, etiology, diagnosis, treatment and complications associated with the management of impacted teeth.
Article
Full-text available
Background: Radiographs, adjunct to clinical examination, are always valuable complementary methods for dental caries detection. Recently, progress in digital imaging systems has made it possible to design software for automatic dental caries detection. Objectives: The aim of this study was to develop and assess the function of diagnostic computer software designed for the evaluation of approximal caries in posterior teeth. This software should be able to indicate the depth and location of caries on digital radiographic images. Materials and methods: Digital radiographs were obtained of 93 teeth including 183 proximal surfaces. These images were used as a database for designing and training the software. In the design phase, considering the summed density of pixels in rows and columns of the images, the teeth were separated from each other, and unnecessary regions, for example, the root area in the alveolar bone, were eliminated. Based on summed intensities, each image was segmented such that each segment contained only one tooth. Subsequently, a well-known fuzzy-logic data-clustering algorithm named fuzzy c-means (FCM) was applied to the images to cluster or segment each tooth. This algorithm is referred to as a soft clustering method, which assigns data elements to one or more clusters with a specific membership function. Using the extracted clusters, the tooth border was determined and assessed for cavities. The results of histological analysis were used as the gold standard for comparison with the results obtained from the software. Depth of caries was measured, and finally the Intraclass Correlation Coefficient (ICC) and Bland-Altman plots were used to show the agreement between the methods. Results: The software diagnosed 60% of enamel caries. The ICC (for detection of enamel caries) between the computer software and histological analysis results was 0.609 (95% confidence interval [CI] = 0.159-0.849) (P = 0.006). The computer program diagnosed 97% of dentin caries, and the ICC between the software and histological analysis results for dentin caries was 0.937 (95% CI = 0.906-0.958) (P < 0.001). The Bland-Altman plot showed acceptable agreement for measuring the depth of caries in enamel and dentin. Conclusions: The designed software was able to detect a significant number of dentin caries and acceptably measure the depth of carious lesions in enamel and dentin. However, the software had limited ability in detecting enamel lesions.
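The fuzzy c-means (FCM) step described above assigns each data element a graded membership in every cluster rather than a hard label. A minimal 1-D sketch of the classic algorithm (illustrative only, not the study's software, which clusters pixel intensities in radiographs):

```python
import random

def fuzzy_c_means(data, c=2, m=2.0, iters=50, seed=0):
    """Minimal 1-D fuzzy c-means: returns sorted cluster centers and the
    membership matrix u, where u[i][k] is point i's membership in cluster k.

    m > 1 is the fuzzifier; m = 2 is the common default.
    """
    rng = random.Random(seed)
    centers = rng.sample(list(data), c)
    u = []
    for _ in range(iters):
        # Update memberships: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = []
        for x in data:
            dists = [abs(x - ck) or 1e-12 for ck in centers]
            u.append([1.0 / sum((dk / dj) ** (2 / (m - 1)) for dj in dists)
                      for dk in dists])
        # Update centers as membership-weighted means
        centers = [
            sum(u[i][k] ** m * x for i, x in enumerate(data))
            / sum(u[i][k] ** m for i in range(len(data)))
            for k in range(c)
        ]
    return sorted(centers), u
```

Each point's memberships sum to 1 across clusters; thresholding the membership of the "tooth" cluster is one way such soft clusters can be turned into a segmentation boundary.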
Article
Full-text available
Objectives: The aim of this study is to evaluate the position of impacted third molars based on the classifications of Pell & Gregory and Winter in a sample of Iranian patients. Study design: In this retrospective study, 1020 orthopantomograms (OPGs) of patients referred to radiology clinics from October 2007 to January 2011 were evaluated. Data including age, gender, angulation type, and width and depth of impaction were evaluated by statistical tests. Results: Among 1020 patients, 380 (37.3%) were male and 640 (62.7%) were female, a sex ratio of 1:1.7. Of the 1020 OPGs, 585 cases showed at least one impacted third molar, with a significant difference between males (205; 35.1%) and females (380; 64.9%) (P = 0.0311). Data analysis showed that impacted third molars were 1.9 times more likely to occur in the mandible than in the maxilla (P = 0.000). The most common angulation of impaction in the mandible was mesioangular (48.3%), and the most common angulation in the maxilla was vertical (45.3%). Impaction at level IIA was the most common in both the maxilla and the mandible. There was no significant difference between the right and left sides in either the maxilla or the mandible. Conclusion: The pattern of third molar impaction in the southeast region of Iran is characterized by a high prevalence of impaction, especially in the mandible. Females more than males have tooth impaction. The most common angulation was mesioangular in the mandible and vertical in the maxilla. The most common level of impaction was A, and there was no significant difference between the right and left sides in either jaw. Key words: Third molar, impaction, incidence, Iran.
Article
Aim: To verify the diagnostic performance of an artificial intelligence system based on the deep convolutional neural network method for detecting periapical pathosis in Cone-Beam Computed Tomography (CBCT) images. Methodology: In total, images of 153 periapical lesions obtained from 109 patients were included. The specific area of the jaw and the teeth associated with the periapical lesions were determined by a human observer. Lesion volumes were calculated by manual segmentation using Fujifilm-Synapse 3D software (Fujifilm Medical Systems, Stamford, CT, USA). The neural network was then used to determine (1) whether the lesion could be detected; (2) if the lesion was detected, where it was localized (maxilla, mandible, or specific tooth); and (3) lesion volume. Manual segmentation and artificial intelligence (AI) (Diagnocat Inc., San Francisco, CA, USA) methods were compared using the Wilcoxon signed rank test and Bland-Altman analysis. Results: The deep convolutional neural network system was successful in detecting teeth and numbering specific teeth. Only one tooth was incorrectly identified. The AI system was able to detect 142 of a total of 153 periapical lesions. The reliability of correctly detecting a periapical lesion was 92.8%. The deep convolutional neural network volumetric measurements of the lesions were similar to those obtained with manual segmentation; there was no significant difference between the two measurement methods (p > 0.05). Conclusions: Volume measurements performed by humans and by AI systems were comparable. AI systems based on deep learning methods can be useful for detecting periapical pathosis in CBCT images in clinical application.
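The Bland-Altman comparison used above reduces to a bias (mean difference between methods) and 95% limits of agreement; a minimal sketch with made-up volume pairs (the actual lesion volumes are not reported in the abstract):

```python
from statistics import mean, stdev

def bland_altman_limits(x, y):
    """Bland-Altman bias and 95% limits of agreement between paired
    measurements from two methods (e.g. manual vs AI lesion volumes)."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)
    # Under approximate normality, ~95% of differences fall within bias ± 1.96 SD
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Agreement is judged by whether the bias is near zero and the limits are narrow enough to be clinically acceptable, which is the sense in which the two volume-measurement methods above were found comparable.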
Article
Objectives: To evaluate a fully deep learning mask region-based convolutional neural network (R-CNN) method for automated tooth segmentation using individual annotation of panoramic radiographs. Study design: In total, 846 images with tooth annotations from 30 panoramic radiographs were used for training, and 20 panoramic images as the validation and test sets. An oral radiologist manually performed individual tooth annotation on the panoramic radiographs to generate the ground truth of each tooth structure. We used the augmentation technique to reduce overfitting and obtained 1024 training samples from 846 original data points. A fully deep learning method using the mask R-CNN model was implemented through a fine-tuning process to detect and localize the tooth structures. For performance evaluation, the F1 score, mean intersection over union (IoU), and visual analysis were utilized. Results: The proposed method produced an F1 score of 0.875 (precision: 0.858, recall: 0.893) and a mean IoU of 0.877. A visual evaluation of the segmentation method showed a close resemblance to the ground truth. Conclusions: The method achieved high performance for automation of tooth segmentation on dental panoramic images. The proposed method might be applied in the first step of diagnosis automation and in forensic identification, which involves similar segmentation tasks.
Article
Objectives The aim of this study was to evaluate the use of a convolutional neural network (CNN) system for detecting vertical root fracture (VRF) on panoramic radiography. Methods Three hundred panoramic images containing a total of 330 VRF teeth with clearly visible fracture lines were selected from our hospital imaging database. Confirmation of VRF lines was performed by two radiologists and one endodontist. Eighty percent (240 images) of the 300 images were assigned to a training set and 20% (60 images) to a test set. A CNN-based deep learning model for the detection of VRFs was built using DetectNet with DIGITS version 5.0. To guard against test data selection bias and increase reliability, fivefold cross-validation was performed. Diagnostic performance was evaluated using recall, precision, and F measure. Results Of the 330 VRFs, 267 were detected. Twenty teeth without fractures were falsely detected. Recall was 0.75, precision 0.93, and F measure 0.83. Conclusions The CNN learning model has shown promise as a tool to detect VRFs on panoramic images and to function as a CAD tool.
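The F measure is the harmonic mean of precision and recall; plugging in the values reported above reproduces the abstract's 0.83:

```python
def f_measure(precision, recall):
    """F measure (F1): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With the precision (0.93) and recall (0.75) reported in the abstract:
# f_measure(0.93, 0.75) ≈ 0.83
```

Because the harmonic mean is dominated by the smaller of the two values, the modest recall (0.75) pulls the F measure well below the precision of 0.93.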
Article
Purpose: This study analyzes the risk factors associated with the incidence of inferior alveolar nerve (IAN) injury after surgical removal of impacted mandibular third molars (IMTMs) and evaluates the contribution of these risk factors to postoperative neurosensory deficits. Materials and methods: An exhaustive literature search was carried out in the COCHRANE library and PubMed electronic databases from January 1990 to March 2019, supplemented by manual searching to identify related studies. Twenty-three studies out of 693 articles from the initial search were finally included, comprising a total of 26,427 patients (44,171 teeth). Results: Our results are consistent with other currently available papers in the literature that obtained similar outcomes. Among 44,171 IMTM extractions performed by various grades of operators, 1.20% developed transient IAN deficit and 0.28% developed permanent IAN deficit. Depth of impaction (P<0.001), contact between the mandibular canal (MC) and the IMTM (P<0.001), surgical technique (P<0.001), intra-operative nerve exposure (P<0.001), and surgeon's experience (P<0.001) were statistically significant contributing risk factors for IAN deficits. Conclusion: Radiographic findings such as depth of impaction and proximity of the tooth to the mandibular canal, together with surgical technique, intra-operative nerve exposure, and surgeon's experience, were high risk factors for IAN deficit after surgical removal of IMTMs.
Article
Objectives To investigate the current clinical applications and diagnostic performance of artificial intelligence (AI) in dental and maxillofacial radiology (DMFR). Materials and methods Studies using applications related to DMFR to develop or implement AI models were sought by searching five electronic databases and four selected core journals in the field of DMFR. The customized assessment criteria based on QUADAS-2 were adapted for quality analysis of the studies included. Results The initial electronic search yielded 1862 titles, and 50 studies were eventually included. Most studies focused on AI applications for an automated localization of cephalometric landmarks, diagnosis of osteoporosis, classification/segmentation of maxillofacial cysts and/or tumors, and identification of periodontitis/periapical disease. The performance of AI models varies among different algorithms. Conclusion The AI models proposed in the studies included exhibited wide clinical applications in DMFR. Nevertheless, it is still necessary to further verify the reliability and applicability of the AI models prior to transferring these models into clinical practice.
Article
Aim and scope: Artificial intelligence (AI) in medicine is a fast-growing field. The rise of deep learning algorithms, such as convolutional neural networks (CNNs), offers fascinating perspectives for the automation of medical image analysis. In this systematic review article, we screened the current literature and investigated the following question: "Can deep learning algorithms for image recognition improve visual diagnosis in medicine?" Materials and methods: We provide a systematic review of the articles using CNNs for medical image analysis, published in the medical literature before May 2019. Articles were screened based on the following items: type of image analysis approach (detection or classification), algorithm architecture, dataset used, training phase, test, comparison method (with specialists or other), results (accuracy, sensitivity and specificity) and conclusion. Results: We identified 352 articles in the PubMed database and excluded 327 items for which performance was not assessed (review articles) or which assessed tasks other than detection or classification, such as segmentation. The 25 included papers were published from 2013 to 2019 and were related to a vast array of medical specialties. Authors were mostly from North America and Asia. Large amounts of high-quality medical images were necessary to train the CNNs, often resulting from international collaboration. The most common CNNs, such as AlexNet and GoogLeNet, designed for the analysis of natural images, proved their applicability to medical images. Conclusion: CNNs are not replacement solutions for medical doctors, but will contribute to optimizing routine tasks and thus have a potential positive impact on our practice. Specialties with a strong visual component, such as radiology and pathology, will be deeply transformed. Medical practitioners, including surgeons, have a key role to play in the development and implementation of such devices.
Article
Artificial Intelligence (AI) applications have already invaded our everyday life, and the last 10 years have seen the emergence of very promising applications in the field of medicine. However, the literature dealing with the potential applications of AI in orthognathic surgery is remarkably poor to date. Yet it is very likely that, owing to its power in image recognition, AI will find tremendous applications in the recognition of dento-facial deformities in the near future. In this article, we point out the state-of-the-art AI applications in medicine and their potential applications in the field of orthognathic surgery. AI is a very powerful tool, and it is the responsibility of the entire medical profession to achieve a positive symbiosis between clinical sense and AI.
Article
Objectives To apply a deep-learning system for diagnosis of maxillary sinusitis on panoramic radiography, and to clarify its diagnostic performance. Methods Training data for 400 healthy and 400 inflamed maxillary sinuses were enhanced to 6000 samples in each category by data augmentation. Image patches were input into a deep-learning system, the learning process was repeated for 200 epochs, and a learning model was created. Newly-prepared testing image patches from 60 healthy and 60 inflamed sinuses were input into the learning model, and the diagnostic performance was calculated. Receiver-operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) values were obtained. The results were compared with those of two experienced radiologists and two dental residents. Results The diagnostic performance of the deep-learning system for maxillary sinusitis on panoramic radiographs was high, with accuracy of 87.5%, sensitivity of 86.7%, specificity of 88.3%, and AUC of 0.875. These values showed no significant differences compared with those of the radiologists and were higher than those of the dental residents. Conclusions The diagnostic performance of the deep-learning system for maxillary sinusitis on panoramic radiographs was sufficiently high. Results from the deep-learning system are expected to provide diagnostic support for inexperienced dentists.
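The AUC values reported in studies like the one above can be computed directly from per-image scores via the Mann-Whitney rank formulation: AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative case, counting ties as one half. A minimal sketch, using illustrative scores rather than data from the study:

```python
# ROC AUC from classifier scores via the Mann-Whitney U statistic:
# AUC = P(score of positive case > score of negative case),
# counting ties as 1/2. Scores below are illustrative only.
def roc_auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores for inflamed (positive) vs. healthy (negative) sinuses
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.7, 0.4, 0.3, 0.2]
print(roc_auc(pos, neg))  # 0.9375
```

The O(n·m) pairwise loop is fine for small test sets; a rank-based computation scales better for large ones.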
Article
Objectives: The distal root of the mandibular first molar occasionally has an extra root, which can directly affect the outcome of endodontic therapy. In this study, we examined the diagnostic performance of a deep learning system for classification of the root morphology of mandibular first molars on panoramic radiographs. Dental cone-beam CT (CBCT) was used as the gold standard. Methods: CBCT images and panoramic radiographs of 760 mandibular first molars from 400 patients who had not undergone root canal treatment were analyzed. Distal roots were examined on CBCT images to determine the presence of a single or extra root. Image patches of the roots were segmented from panoramic radiographs and applied to a deep learning system, and its diagnostic performance in the classification of root morphology was examined. Results: Extra roots were observed in 21.4% of distal roots on CBCT images. The deep learning system had a diagnostic accuracy of 86.9% for determining whether distal roots were single or had extra roots. Conclusions: The deep learning system showed high accuracy in the differential diagnosis of a single or extra root in the distal roots of mandibular first molars.
Article
Objectives: Deep convolutional neural networks (CNNs) are a rapidly emerging new area of medical research, and have yielded impressive results in diagnosis and prediction in the fields of radiology and pathology. The aim of the current study was to evaluate the efficacy of deep CNN algorithms for detection and diagnosis of dental caries on periapical radiographs. Materials and methods: A total of 3000 periapical radiographic images were divided into a training and validation dataset (n = 2400 [80%]) and a test dataset (n = 600 [20%]). A pre-trained GoogLeNet Inception v3 CNN network was used for preprocessing and transfer learning. The diagnostic accuracy, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were calculated for the detection and diagnostic performance of the deep CNN algorithm. Results: The diagnostic accuracies of the premolar, molar, and combined premolar-molar models were 89.0% (80.4-93.3), 88.0% (79.2-93.1), and 82.0% (75.5-87.1), respectively. The deep CNN algorithm achieved an AUC of 0.917 (95% CI 0.860-0.975) on the premolar model, an AUC of 0.890 (95% CI 0.819-0.961) on the molar model, and an AUC of 0.845 (95% CI 0.790-0.901) on the combined premolar-molar model. The premolar model provided the best AUC, which was significantly greater than those of the other models (P < 0.001). Conclusions: This study highlighted the potential utility of deep CNN architecture for the detection and diagnosis of dental caries. A deep CNN algorithm achieved good performance in detecting dental caries in periapical radiographs. Clinical significance: Deep CNN algorithms are expected to be among the most effective and efficient methods for diagnosing dental caries.
Article
In this article, we apply deep learning to the medical field for teeth detection and classification in dental periapical radiographs, which is important for medical treatment and postmortem identification. We detect teeth in an input X-ray image and distinguish their positions. An adult usually has 32 teeth, some of which are similar in shape while others differ greatly, so there are 32 tooth positions to recognize, which is a challenging task. A convolutional neural network is a popular method for multi-class detection and classification, but if used directly it needs a lot of training data to achieve good results. The lack of data is common in the medical field due to patients' privacy. In this work, limited by the available data, we propose a new method that uses a label tree to give each tooth several labels and decompose the task, which copes with the lack of data. We then use a cascade network structure, built from several convolutional neural networks as basic modules, to automatically identify the 32 tooth positions. Meanwhile, several key strategies are utilized to improve detection and classification performance. Our method can deal with many complex cases that frequently appear in patients, such as X-ray images with tooth loss, decayed teeth and filled teeth. Experiments on our dataset show that, for a small training dataset, the proposed approach reaches an overall precision of 95.8% and recall of 96.1%, a large improvement over directly training a state-of-the-art 33-class (32 teeth plus background) convolutional neural network on such a complex task.
Article
Recent advances and future perspectives of machine learning techniques offer promising applications in medical imaging. Machine learning has the potential to improve different steps of the radiology workflow including order scheduling and triage, clinical decision support systems, detection and interpretation of findings, postprocessing and dose estimation, examination quality control, and radiology reporting. In this article, the authors review examples of current applications of machine learning and artificial intelligence techniques in diagnostic radiology. In addition, the future impact and natural extension of these techniques in radiology practice are discussed.
Article
Aim: To develop a machine learning-based model for the binary classification of chest radiography abnormalities, to serve as a retrospective tool in guiding clinician reporting prioritisation. Materials and methods: The open-source machine learning library, Tensorflow, was used to retrain a final layer of the deep convolutional neural network, Inception, to perform binary normality classification on two, anonymised, public image datasets. Re-training was performed on 47,644 images using commodity hardware, with validation testing on 5,505 previously unseen radiographs. Confusion matrix analysis was performed to derive diagnostic utility metrics. Results: A final model accuracy of 94.6% (95% confidence interval [CI]: 94.3-94.7%) based on an unseen testing subset (n=5,505) was obtained, yielding a sensitivity of 94.6% (95% CI: 94.4-94.7%) and a specificity of 93.4% (95% CI: 87.2-96.9%) with a positive predictive value (PPV) of 99.8% (95% CI: 99.7-99.9%) and area under the curve (AUC) of 0.98 (95% CI: 0.97-0.99). Conclusion: This study demonstrates the application of a machine learning-based approach to classify chest radiographs as normal or abnormal. Its application to real-world datasets may be warranted in optimising clinician workload.
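The study above reports each metric with a 95% confidence interval. One common way to obtain such an interval for a proportion (accuracy, sensitivity, etc.) is the normal-approximation (Wald) interval; the sketch below uses back-calculated illustrative counts (5208 correct of 5505 approximates the reported 94.6% accuracy), and the study does not state which interval method it actually used:

```python
import math

# Normal-approximation (Wald) 95% confidence interval for a
# proportion such as accuracy or sensitivity. The counts are
# illustrative assumptions, not taken from the study.
def wald_ci(successes, n, z=1.96):
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# ~94.6% accuracy on an unseen test set of 5,505 radiographs
p, lo, hi = wald_ci(successes=5208, n=5505)
print(round(p, 3), round(lo, 3), round(hi, 3))
```

For proportions near 0 or 1, or small n, a Wilson or exact (Clopper-Pearson) interval is usually preferred over the Wald form.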
Article
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced. Full text: https://rdcu.be/O1xz
Article
Dental records play an important role in forensic identification. To this end, postmortem dental findings and teeth conditions are recorded in a dental chart and compared with those of antemortem records. However, most dentists are inexperienced at recording the dental chart for corpses, and it is a physically and mentally laborious task, especially in large-scale disasters. Our goal is to automate the dental filing process by using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images. Regions of interest (ROIs) including single teeth were extracted from CT slices. Fifty-two CT volumes were randomly divided into 42 training and 10 test cases, and the ROIs obtained from the training cases were used for training the DCNN. To examine the sampling effect, random sampling was performed 3 times, and training and testing were repeated. We used the AlexNet network architecture provided in the Caffe framework, which consists of 5 convolution layers, 3 pooling layers, and 2 fully connected layers. To reduce overtraining, we augmented the data by image rotation and intensity transformation. The test ROIs were classified into 7 tooth types by the trained network. The average classification accuracy using the training data augmented by image rotation and intensity transformation was 88.8%. Compared with the result without data augmentation, augmentation yielded an approximately 5% improvement in classification accuracy. This indicates that further improvement can be expected by expanding the CT dataset. Unlike conventional methods, the proposed method is advantageous in obtaining high classification accuracy without the need for precise tooth segmentation. The proposed tooth classification method can be useful in the automatic filing of dental charts for forensic identification.
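Augmentation by rotation and intensity transformation, as used in the study above, can be sketched in a few lines of NumPy. The ROI below is a random stand-in for a real tooth image patch, and the specific angles and gamma values are assumptions for illustration:

```python
import numpy as np

# Data augmentation by image rotation and intensity (gamma)
# transformation, to reduce overtraining on a small dataset.
# Rotation angles and gamma values are illustrative choices.
def augment(roi, quarter_turns=(1, 2, 3), gammas=(0.8, 1.2)):
    out = [roi]
    for k in quarter_turns:               # 90/180/270 degree rotations
        out.append(np.rot90(roi, k))
    for g in gammas:                      # gamma intensity transforms
        out.append(np.clip(roi, 0.0, 1.0) ** g)
    return out

roi = np.random.rand(64, 64)              # stand-in for a tooth ROI patch
samples = augment(roi)
print(len(samples))  # 6: the original plus 5 augmented variants
```

Real pipelines typically use small arbitrary-angle rotations with interpolation (e.g. via scipy.ndimage) rather than quarter turns, but the principle of multiplying each labeled ROI into several training samples is the same.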
Article
Purpose: To determine the width and morphology of the mandible in the impacted third molar region, and to identify the location of the mandibular canal prior to planning impacted third molar operations. Methods: Cone beam computed tomography (CBCT) data of 87 mandibular third molars from 62 Japanese patients were analyzed in this study. The width of the lingual cortical bone and apex-canal distance were measured from cross-sectional images in which the cortical bone was thinnest at the lingual side in the third molar region. Images were used for measuring the space (distance between the inner border of the lingual cortical bone and outer surface of the third molar root), apex-canal distance (distance from the root of the third molar tooth to the superior border of the inferior alveolar canal) and the cortical bone (width between the inner and outer borders of the lingual cortical bone). Results: The means of the space, apex-canal distance and lingual cortical width were 0.31, 1.99, and 0.68 mm, respectively. Impacted third molar teeth (types A-C) were observed at the following frequencies: type A (angular), 37%; type B (horizontal), 42%; type C (vertical), 21%. The morphology of the mandible at the third molar region (types D-F) was observed as: type D (round), 49%; type E (lingual extended), 18%; and type F (lingual concave), 32%. Conclusions: The width and morphology of the mandible with impacted teeth and the location of the mandibular canal at the third molar region could be clearly determined using cross-sectional CBCT images.
Article
We propose a dental classification and numbering system to effectively segment, classify, and number teeth in dental bitewing radiographs. An image enhancement method that combines homomorphic filtering, homogeneity-based contrast stretching, and adaptive morphological transformation is proposed to improve both contrast and illumination evenness of the radiographs simultaneously. Iterative thresholding and integral projection are adapted to isolate teeth to regions of interest (ROIs) followed by contour extraction of the tooth and the pulp (if available) from each ROI. A binary linear support vector machine using the skew-adjusted relative length/width ratios of both teeth and pulps, and crown size as features is proposed to classify each tooth to molar or premolar. Finally, a numbering scheme that combines a missing teeth detection algorithm and a simplified version of sequence alignment commonly used in bioinformatics is presented to assign each tooth a proper number. Experimental results show that our system has accuracy rates of 95.1% and 98.0% for classification and numbering, respectively, in terms of number of teeth tested, and correctly classifies and numbers the teeth in four images that were reported either misclassified or erroneously numbered, respectively.
Article
To evaluate the course of the inferior alveolar nerve and its branches, the detectable branches were investigated with dental cone beam computed tomography (CBCT). Patients in whom the lower third molar (M3) and inferior alveolar nerve canal showed overlapping in the initial panoramic image were included. One hundred twelve impacted lower M3s were extracted after examination with dental CBCT. The detection ratio, the course of the branches, and their relation with the M3 were retrospectively investigated. One hundred fifty-five branches were observed in 106 cases (94.6%, 106/112) around the M3. Most branches coursed under the M3 (55.5%, 86/155), and 85 branches (54.8%, 85/155) were in contact with the M3. The inferior alveolar nerve canal and branch(es) were mostly in contact with the M3 (57.5%, 61/106). Dental CBCT can detect most tubular structures representing branches in the impacted lower M3 region.
Article
To evaluate if the application of an artificial intelligence model, a multilayer perceptron neural network, improves the radiographic diagnosis of proximal caries. One hundred sixty radiographic images of proximal surfaces of extracted human teeth were assessed regarding the presence of caries by 25 examiners. Examination of the radiographs was used to feed the neural network, and the corresponding teeth were sectioned and assessed under optical microscope (gold standard). This gold standard served to teach the neural network to diagnose caries on the basis of the radiographic exams. To gauge the network's capacity for generalization, i.e., its performance with new cases, data were divided into 3 subgroups for training, test, and cross-validation. The area under the receiver operating characteristic (ROC) curve allowed comparison of efficacy between network and examiner diagnosis. For the best of the 25 examiners, the ROC curve area was 0.717, whereas network diagnosis achieved an ROC curve area of 0.884, indicating a sizeable improvement in proximal caries diagnosis. Considering all examiners, the diagnostic improvement using the neural network was 39.4%.
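The three-subgroup division described above (training, test, and cross-validation) amounts to a shuffled partition of the examination data. A minimal sketch; the 70/15/15 proportions and the 160-case count echoing the study's sample size are assumptions for illustration:

```python
import random

# Divide cases into three subgroups for training, test, and
# cross-validation, as described in the study. The 70/15/15
# proportions are an illustrative assumption.
def three_way_split(items, seed=0, frac_train=0.70, frac_test=0.15):
    rng = random.Random(seed)             # fixed seed for reproducibility
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * frac_train)
    n_test = int(len(shuffled) * frac_test)
    train = shuffled[:n_train]
    test = shuffled[n_train:n_train + n_test]
    validation = shuffled[n_train + n_test:]
    return train, test, validation

cases = list(range(160))                  # 160 proximal surfaces
train, test, validation = three_way_split(cases)
print(len(train), len(test), len(validation))  # 112 24 24
```

Holding out a subset that never influences training is what lets the ROC comparison against the examiners measure generalization to new cases rather than memorization.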