Chapter (full-text PDF available)

Machine learning in expert systems for disease diagnostics in human healthcare

Abstract

Expert systems are a rapidly emerging technology in the field of artificial intelligence (AI) with an immense impact on human healthcare. The main objective of a medical expert system is to help medical professionals arrive at a correct diagnosis. Information gathering is an important part of disease diagnosis, and traditional diagnostic methods are time-consuming and require a high level of expertise. Expert systems are computer systems that attempt to imitate human diagnostic decision-making: they encode knowledge about diseases, take facts about patients as input, and suggest diagnoses using machine-learning methods. Expert systems offer suggestions to physicians and domain experts, improving the expert's ability and increasing the consistency and quality of diagnosis. Such systems are also very helpful for patients who cannot reach a doctor because of cost, remoteness, or embarrassment about discussing their circumstances. An expert system also helps to improve decision quality, reduce cost, and maintain the consistency, reliability, and speed of diagnosis. Various learning-based expert systems have been developed for different diseases to help doctors and serve in disease diagnosis. This chapter discusses important existing expert systems for human disease diagnosis in detail and provides a brief evaluation of the various techniques used in their development.
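As a toy illustration (not taken from the chapter itself), the knowledge-plus-facts structure described above can be sketched as a forward-chaining rule matcher: diagnostic knowledge is stored as if-then rules, patient findings are the facts, and every rule whose conditions are satisfied fires. All symptom and disease names below are hypothetical examples.

```python
# Minimal forward-chaining diagnostic expert system (illustrative sketch).
# Each rule maps a set of required findings to a candidate diagnosis;
# all symptom and disease names below are hypothetical.

RULES = [
    ({"fever", "cough", "shortness_of_breath"}, "possible pneumonia"),
    ({"polyuria", "polydipsia", "high_blood_glucose"}, "possible diabetes"),
    ({"fever", "cough"}, "possible respiratory infection"),
]

def diagnose(findings):
    """Return candidate diagnoses whose conditions are all present,
    most specific (largest rule) first."""
    matched = [(cond, dx) for cond, dx in RULES if cond <= set(findings)]
    matched.sort(key=lambda m: len(m[0]), reverse=True)
    return [dx for _, dx in matched]

print(diagnose({"fever", "cough", "shortness_of_breath"}))
# → ['possible pneumonia', 'possible respiratory infection']
```

Real medical expert systems extend this skeleton with certainty factors, probabilistic reasoning, or rules learned from data rather than hand-written ones.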
References
Article (full-text available)
Background: About 90% of patients with diabetes suffer from type 2 diabetes mellitus (T2DM). Many studies suggest that lncRNAs can play a significant role in improving the diagnosis of T2DM. Machine learning and data mining techniques can improve the analysis, interpretation, and extraction of knowledge from data, and may thereby enhance the prognosis and diagnosis of diseases such as T2DM. We applied four classification models, K-nearest neighbors (KNN), support vector machine (SVM), logistic regression, and artificial neural networks (ANN), to the diagnosis of T2DM and compared their diagnostic power. We ran the algorithms on six lncRNA variables (LINC00523, LINC00995, HCG27_201, TPT1-AS1, LY86-AS1, DKFZP) and demographic data. Results: To select the best-performing model, we considered the AUC, sensitivity, and specificity, and plotted the ROC curve with its average and range. The mean AUC for the KNN algorithm was 91% with a standard deviation (SD) of 0.09; the mean sensitivity and specificity were 96% and 85%, respectively. For the SVM algorithm, the mean AUC after stratified 10-fold cross-validation was 95% with an SD of 0.05; the mean sensitivity and specificity were 95% and 86%. For the ANN, the mean AUC and SD were 93% and 0.03, with mean sensitivity and specificity of 78% and 85%. Finally, for logistic regression, the mean AUC was 95% with an SD of 0.05, and the mean sensitivity and specificity were 92% and 85%. According to the ROC curves, logistic regression and SVM had a better area under the curve than the other models. Conclusion: We aimed to find the best data mining approach for predicting T2DM using the expression of six lncRNAs. The highest AUCs belonged to SVM and logistic regression; KNN and ANN also had high mean AUCs with small standard deviations. KNN had the highest mean sensitivity, and the highest specificity belonged to SVM. These results could improve our knowledge about the early detection and diagnosis of T2DM using lncRNAs as biomarkers.
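The metrics this study reports (sensitivity, specificity, AUC) can be computed on toy data with a few lines of standard-library Python; the labels and scores below are invented for illustration and are not the study's data:

```python
# Illustrative computation of sensitivity, specificity, and AUC from
# predicted scores, using only the standard library. Toy data only.

def sensitivity_specificity(labels, preds):
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """Probability that a random positive scores higher than a random
    negative (ties count 0.5) -- equal to the area under the ROC curve."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
preds = [1 if s >= 0.5 else 0 for s in scores]
sens, spec = sensitivity_specificity(labels, preds)
# On this toy data: sens == spec == 2/3, AUC == 8/9
```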
Article (full-text available)
Background: Several studies highlight the effects of artificial intelligence (AI) systems on healthcare delivery. AI-based tools may improve prognosis, diagnostics, and care planning. AI is expected to become an integral part of healthcare services in the near future and to be incorporated into many aspects of clinical care; accordingly, many technology companies and government projects have invested in producing AI-based clinical tools and medical applications. Patients can be among the most important beneficiaries and users of AI-based applications, and their perceptions may affect how widely such tools are adopted. Patients need assurance that AI-based devices will not harm them and that they will instead benefit from AI technology for healthcare purposes. Although AI can enhance healthcare outcomes, the possible dimensions of concern and risk should be addressed before it is integrated with routine clinical care. Methods: We developed a model based mainly on value perceptions, reflecting the specificity of the healthcare field. This study examines the perceived benefits and risks of AI medical devices with clinical decision support (CDS) features from consumers' perspectives, using an online survey of 307 individuals in the United States. Results: The proposed model identifies the sources of motivation and pressure for patients in the development of AI-based devices. The results show that technological, ethical (trust), and regulatory concerns contribute significantly to the perceived risks of using AI applications in healthcare. Of the three categories, technological concerns (i.e., performance and communication features) are the most significant predictors of risk beliefs. Conclusions: This study sheds light on the factors affecting perceived risks and offers recommendations on how to reduce these concerns in practice. The findings have implications for research and practice in the area of AI-based CDS. Regulatory agencies, in cooperation with healthcare institutions, should establish normative standards and evaluation guidelines for the implementation and use of AI in healthcare. Regular audits and ongoing monitoring and reporting can be used to continuously evaluate the safety, quality, transparency, and ethics of AI-based services.
Article (full-text available)
Smart cities have emerged as a possible solution to the sustainability problems arising from rapid urbanization and are considered imperative for a sustainable future. Despite their recent popularity, the literature reveals a lack of conceptual clarity around the term "smart city", owing to the plethora of existing definitions. This comprehensive literature review identified 43 smart city definitions, assessed according to the dimensions of sustainability they consider (environmental, economic, or social) and the priority they accord to the concept of sustainability. The study revealed the common and opposing characteristics of the definitions according to the sustainability dimensions they consider, and discussed their limitations. These limitations appear to relate to citizen accessibility, misrepresentation, and the particularity of existing urban fabrics. Taking these issues into account, as well as the gap between the smart city vision and its actual implementation, a new updated definition is proposed. The findings of the present study contribute to knowledge and practice by aiding conceptual clarity and, in particular, by drawing attention to underlying assumptions about the role of sustainability in smart city development.
Article (full-text available)
Data mining (DM) has prodigious potential for examining and analyzing the vague, noisy data of the medical domain, where such data are used in clinical prognosis and diagnosis. However, raw medical data are widely scattered, heterogeneous, and voluminous, and need to be accumulated in an organized structure. DM technology gives users a systematic way to discover novel and hidden patterns in the data. The advantages of using DM in medicine are considerable and its applications abundant; most importantly, it can lead to better medical treatment at lower cost. Consequently, DM algorithms are widely used in cancer detection and treatment, providing a learning-rich environment that can help improve the quality of clinical decisions. Many studies have been published on the use of DM in different areas of the medical field. This paper provides an elaborated study of the use of DM in cancer prediction and classification, and introduces the main features of, and challenges in, this research to help novice and early-career scientists see the key issues that remain open in the area.
Article (full-text available)
Precision medicine is one of the most recent and powerful developments in medical care, with the potential to improve the traditional symptom-driven practice of medicine by allowing earlier interventions using advanced diagnostics and by tailoring better, more economical, personalized treatments. Identifying the best pathway to personalized and population medicine requires the ability to analyze comprehensive patient information, together with broader contextual data, to monitor and distinguish between sick and relatively healthy people; this will lead to a better understanding of the biological indicators that can signal shifts in health. While the complexity of disease at the individual level has made it difficult to use healthcare information in clinical decision-making, technological advancements have greatly reduced some of the existing constraints. To implement effective precision medicine with an enhanced ability to positively impact patient outcomes and provide real-time decision support, it is important to harness the power of electronic health records by integrating disparate data sources and discovering patient-specific patterns of disease progression. Useful analytic tools, technologies, databases, and approaches are required to support the networking and interoperability of clinical, laboratory, and public health systems, and ethical and social issues related to the privacy and protection of healthcare data must be addressed with an effective balance. Developing multifunctional machine learning platforms for clinical data extraction, aggregation, management, and analysis can support clinicians by efficiently stratifying subjects to understand specific scenarios and optimize decision-making. Implementing artificial intelligence in healthcare is a compelling vision with the potential to lead to significant improvements in achieving the goals of real-time, better personalized, population-scale medicine at lower cost. In this study, we focus on analyzing and discussing published artificial intelligence and machine learning solutions, approaches, and perspectives, aiming to advance academic solutions that pave the way for a new data-centric era of discovery in healthcare.
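The patient-stratification idea mentioned above can be sketched with a toy example that computes a risk score from a few clinical features and bins subjects into tiers; the features, weights, and cut-offs are invented for illustration only:

```python
# Toy patient-stratification sketch: compute a risk score from a few
# hypothetical clinical features and bin patients into tiers.
# Weights and cut-offs are invented for illustration only.

WEIGHTS = {"age": 0.03, "bmi": 0.05, "hba1c": 0.4}

def risk_score(patient):
    return sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)

def stratify(patients, low=3.0, high=5.0):
    tiers = {"low": [], "medium": [], "high": []}
    for pid, features in patients.items():
        s = risk_score(features)
        tier = "low" if s < low else "medium" if s < high else "high"
        tiers[tier].append(pid)
    return tiers

patients = {
    "p1": {"age": 40, "bmi": 22, "hba1c": 5.0},   # score 4.3 -> medium
    "p2": {"age": 65, "bmi": 31, "hba1c": 8.0},   # score 6.7 -> high
}
tiers = stratify(patients)
```

Real stratification platforms would learn the weights from outcome data (e.g., by regression or clustering) rather than fixing them by hand.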
Article (full-text available)
The rekindled fascination with machine learning (ML) observed over the last few decades has also percolated into the natural sciences and engineering. ML algorithms are now used in scientific computing as well as in data mining and processing. In this paper, we review the state of the art in ML for computational science and engineering. We discuss ways of using ML to speed up or improve the quality of simulation techniques such as computational fluid dynamics, molecular dynamics, and structural analysis. We explore the ability of ML to produce computationally efficient surrogate models of physical applications that circumvent the need for the more expensive simulation techniques entirely. We also discuss how ML can be used to process large amounts of data, drawing examples from many scientific fields, such as engineering, medicine, astronomy, and computing. Finally, we review how ML has been used to create more realistic and responsive virtual reality applications.
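The surrogate-model idea reviewed here can be sketched minimally: fit a cheap model to samples of an "expensive" function by least squares, then evaluate the cheap model instead. In this standard-library sketch, a known quadratic stands in for a real simulator (e.g., a CFD run):

```python
# Illustrative surrogate model: replace an "expensive" simulation with a
# cheap polynomial fitted by least squares (pure standard library).
# The quadratic target below is a stand-in for a real simulator.

def expensive_simulation(x):
    return 3.0 * x * x - 2.0 * x + 1.0   # stand-in for e.g. a CFD run

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    n = degree + 1
    # Build A^T A and A^T y for the Vandermonde matrix A.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (aty[i] - sum(ata[i][j] * coeffs[j]
                                  for j in range(i + 1, n))) / ata[i][i]
    return coeffs  # coeffs[i] multiplies x**i

xs = [i / 10 for i in range(-20, 21)]
ys = [expensive_simulation(x) for x in xs]
surrogate = fit_poly(xs, ys, degree=2)   # recovers ~[1.0, -2.0, 3.0]
```

In practice the surrogate would be a richer model (Gaussian process, neural network) trained on a modest number of genuinely expensive simulation runs.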
Article (full-text available)
Given the huge amount of biological and medical data available today, along with well-established machine learning algorithms, the design of largely automated drug development pipelines can now be envisioned. These pipelines may guide or speed up drug discovery; provide a better understanding of diseases and the associated biological phenomena; help plan preclinical wet-lab experiments; and even inform future clinical trials. This automation of the drug development process might be key to addressing the low productivity that pharmaceutical companies currently face. In this survey, we focus on two classes of methods that are active areas of biomedical research: sequential learning and recommender systems.
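Sequential learning in this setting can be sketched with a minimal epsilon-greedy bandit that allocates trials among candidate "compounds" with unknown success rates, balancing exploration against exploitation; all probabilities and numbers below are invented for illustration:

```python
# Minimal epsilon-greedy bandit: a toy model of sequential experiment
# selection, where each "arm" is a hypothetical candidate compound with
# an unknown success probability. All numbers are invented.
import random

def run_bandit(true_probs, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_probs)
    values = [0.0] * len(true_probs)   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_probs))                # explore
        else:
            arm = max(range(len(true_probs)),
                      key=lambda a: values[a])                  # exploit
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]     # update mean
    return counts, values

counts, values = run_bandit([0.2, 0.5, 0.8])
# The best arm (success rate 0.8) ends up with the most trials.
```

The survey's sequential-learning methods are far more sophisticated (contextual models, batch selection), but the explore/exploit trade-off is the same.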
Chapter (full-text available)
Artificial intelligence (AI) has the potential to detect significant interactions in a dataset and is widely used in several clinical settings to predict outcomes and to support treatment and diagnosis. AI is being used or trialed for a variety of healthcare and research purposes, including the detection of disease, management of chronic conditions, delivery of health services, and drug discovery. In this chapter, we discuss the application of AI in the modern healthcare system and the challenges this system faces in detail. Different types of AI devices are described, along with a discussion of their working mechanisms. Alginate, a naturally occurring polymer found in the cell wall of brown algae, is used in tissue engineering because of its biocompatibility, low cost, and easy gelation; it is composed of α-L-guluronic and β-D-mannuronic acid. To improve cell-material interaction and counter erratic degradation, alginate is blended with other polymers. Here, we also discuss the relationship between AI and alginate in the tissue engineering field.
Article (full-text available)
An improved computer-aided diagnosis (CAD) system is proposed for the early diagnosis of Alzheimer's disease (AD), based on the fusion of anatomical (magnetic resonance imaging (MRI)) and functional (18F-fluorodeoxyglucose positron emission tomography (FDG-PET)) multimodal images, which helps to address the strong ambiguity or uncertainty in brain images. The merit of this fusion is that it provides anatomical information for the accurate detection of pathological areas that appear in functional imaging as physiological abnormalities. First, brain tissue volumes are quantified using a fusion scheme with three successive steps: modeling, fusion, and decision. (1) Modeling consists of three sub-steps: initialization of the tissue-cluster centroids with the bias-corrected fuzzy c-means (FCM) clustering algorithm; optimization of the initial partition with genetic algorithms; and creation of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) tissue maps with the possibilistic FCM clustering algorithm. (2) Fusion applies a possibilistic operator to merge the maps of the MRI and PET images, highlighting redundancies and managing ambiguities. (3) Decision produces more representative anatomo-functional fusion images. Second, a support vector data description (SVDD) classifier is used that must reliably distinguish AD from normal aging and automatically detect outliers. A "divide and conquer" strategy then speeds up the SVDD process and reduces the computational load and cost. The robustness of the tissue quantification process is demonstrated against noise (20% level), partial volume effects, and high spatial intensity inhomogeneities. The superiority of the SVDD classifier over competing conventional systems is also demonstrated using 10-fold cross-validation on synthetic datasets (Alzheimer's Disease Neuroimaging Initiative (ADNI) and Open Access Series of Imaging Studies (OASIS)) and real images. Classification performance in terms of accuracy (%), sensitivity (%), specificity (%), and area under the ROC curve was 93.65%, 90.08%, 92.75%, and 0.973 for ADNI; 91.46%, 92%, 91.78%, and 0.967 for OASIS; and 85.09%, 86.41%, 84.92%, and 0.946 for the real images.
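The FCM update rules used in the modeling step can be sketched on 1-D toy data (this sketch deliberately omits the paper's bias correction, genetic optimization, and possibilistic extensions):

```python
# Minimal fuzzy c-means (FCM) sketch on 1-D toy data, illustrating the
# alternating membership/centroid updates. Real tissue segmentation adds
# bias correction, genetic optimization, and possibilistic clustering.

def fcm(data, c=2, m=2.0, iters=50):
    centroids = [min(data), max(data)][:c]        # simple initialization
    u = [[0.0] * len(data) for _ in range(c)]     # u[i][k]: membership
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        for k, x in enumerate(data):
            dists = [abs(x - v) or 1e-12 for v in centroids]
            for i in range(c):
                u[i][k] = 1.0 / sum((dists[i] / d) ** (2.0 / (m - 1.0))
                                    for d in dists)
        # Centroid update: v_i = sum_k u_ik^m x_k / sum_k u_ik^m
        for i in range(c):
            w = [u[i][k] ** m for k in range(len(data))]
            centroids[i] = sum(wk * x for wk, x in zip(w, data)) / sum(w)
    return centroids, u

data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]   # two obvious 1-D clusters
centroids, memberships = fcm(data)       # centroids converge near 1.0, 5.0
```

Unlike hard k-means, each point keeps a graded membership in every cluster, which is what lets the full pipeline model partial volume effects at tissue boundaries.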
Chapter
Extracting meaningful information from biological big data, or omics data, remains a challenge in bioinformatics. Deep learning methods, which can be used to predict hidden information from biological data, are widely used in industry and academia. The authors discuss the similarities and differences among the models most widely used in deep learning studies. They first describe the basic structure of the various models and then their applications from a biological perspective. They also discuss suggestions for, and the limitations of, deep learning. They expect this chapter to serve as a significant perspective for the continued development of deep learning theory, algorithms, and applications in the established bioinformatics domain.
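The basic building block shared by all the deep models such chapters survey is a differentiable unit trained by gradient descent; a minimal, self-contained sketch is a single logistic unit fitted to toy data. The "expression values" below are invented for illustration:

```python
# Minimal gradient-descent training of a single logistic unit -- the
# basic building block that deep learning models stack into many layers.
# Toy task: classify samples by two made-up expression values.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b)
            g = p - yi                    # gradient of log-loss wrt logit
            w[0] -= lr * g * xi[0]
            w[1] -= lr * g * xi[1]
            b -= lr * g
    return w, b

X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.7, 0.9)]
y = [0, 0, 1, 1]
w, b = train(X, y)
preds = [1 if sigmoid(w[0] * a + w[1] * c + b) >= 0.5 else 0 for a, c in X]
# preds matches y on this separable toy data
```

Deep networks repeat exactly this pattern, layer upon layer, with the gradients propagated backwards through the stack (backpropagation).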