Article

Cloud-based decision support system for the detection and classification of malignant cells in breast cancer using breast cytology images

... Traditional computing infrastructure does not support the storage, analysis, and collaborative use of the data, or care coordination [26], and advanced computational technologies, such as CC, are required for these purposes [18]. Previous studies have shown improved cancer care using CC technology [27][28][29][30][31]. ...
... Just half of the studies included in the current review focused on the use of cloud technology in the cancer management of clinical data [27][28][29][30][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62][63][64][65] followed by image data (30%) [31,[66][67][68][69][70][71][72][73][74][75][76][77], and genomic data (20%) [21,23,[78][79][80][81][82][83][84]. The estimation of cancer progression and prognosis requires access to various types of cancer data (i.e. ...
... MCC is the new paradigm for integrating cloud resources into the mobile technology environment to overcome the computing and storage limitations of mobile devices [89]. The objectives of combining CC technology with mobile environments include improving the accuracy and predictability of cancer [31,47] and promoting physician-patient interactions to enhance medication adherence by patients [48]. Hence, MCC-based systems combine CC with mobile devices to enhance mobile storage and computational resources aimed at facilitating communications, sharing patients' data among authorized stakeholders, and improving care provided to patients. ...
Article
Full-text available
Objective There has been exponential growth in the volume, variety, and velocity of cancer data. Cloud computing (CC) plays a pivotal role in managing cancer big data by providing powerful processing, infinite storage, accessibility, scalability, and cost-effective computational resources. This study aims to systematically review and summarize studies on CC applications in cancer information management. Material and methods A structured search was conducted according to the PRISMA statement guidelines in the Web of Science, PubMed, Scopus, and IEEE databases, as well as Google Scholar, for studies published until July 7, 2021. Of the 498 studies yielded, 48 articles were included in the review, and their results were classified and coded. The bottom-up grounded theory method was used to analyze the data of the final articles. According to axial and open coding approaches, three categories, eight subcategories and 47 themes were identified. Results Half of the studies focused on cancer clinical data. Medical imaging management was the most common application, and software as a service (SaaS) was the most prevalent service model of CC in the cancer domain. The majority of the studies deployed public cloud for cancer management. Data processing was the most common cloud usage in managing cancer data. Accurate and early cancer diagnosis was the most frequent reason for using CC in cancer care management. Physicians were the main users of cancer data in the cloud platform. Breast cancer and lung cancer were the most prevalent types of cancer managed using the cloud-based technology. Technical and organizational opportunities were the main incentives for applying CC in cancer information management. Conclusions CC, by providing available, ubiquitous, distributed, scalable, and flexible resource pooling, unlimited storage, and an approach of lower costs, enhances computational functionalities, enables collaborative analysis and care, and finally improves the cancer care continuum. Furthermore, the results of this study, by summarizing the scientific literature related to CC in cancer information management and introducing the application areas, the role of CC in improving cancer diagnosis and treatment, and the technology users, provide comprehensive information to acquire basic knowledge for using CC in cancer information management.
... In the case of X-ray images, a neural network can extract significant characteristics through a transfer process [10]. Transfer learning feature engineering also allows researchers to customize pre-trained networks for specific tasks, such as identifying lung nodules or detecting pneumonia [11]. Overall, transfer learning feature engineering [12] has shown promising results in improving the accuracy of X-ray image recognition and is likely to continue to be a valuable tool in medical imaging research. ...
... However, by applying our CRK feature engineering approach, the feature space becomes more conducive to linear separation, as Figure 12(b) visualizes. The performance improvement achieved through the CRK approach in this study is confirmed by the feature ...
Article
Full-text available
Osteoarthritis is a deteriorating joint disease affecting millions worldwide. Osteoarthritis is a chronic condition that develops over time due to joint wear and tear. The degeneration of joint cartilage is the underlying cause of osteoarthritis, resulting in bone-to-bone contact and contributing to stiffness, discomfort, and restricted movement. People with osteoarthritis struggle to perform simple tasks such as walking, standing, or climbing stairs. Moreover, osteoarthritis can also cause psychological distress, including depression and anxiety, due to the chronic pain and disability associated with the condition. Improving the quality of life requires the development of efficient methods for early detection. Our study aims to create a model that can effectively diagnose osteoarthritis in knee X-ray images at an early stage. An advanced deep learning-based Convolutional Neural Network (CNN) and several machine learning-based techniques are applied and compared. A novel transfer learning-based feature engineering technique, CRK (CNN Random forest K-neighbors), is proposed to detect osteoarthritis with high performance. Using a 2D-CNN, the proposed CRK smartly extracts the spatial features from the X-ray images. The spatial features are input to the random forest and k-neighbors techniques, creating a probabilistic feature set. The probabilistic feature set is utilized to build the applied machine learning-based techniques. Extensive experiments demonstrate that the proposed model outperformed the others with a 99% accuracy score for predicting osteoarthritis. The performance of each applied model is validated using hyperparameter optimization and k-fold cross-validation. The proposed study has the potential to revolutionize the prediction of osteoarthritis from X-ray images with high performance scores.
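A minimal sketch of the CRK-style idea described above (a small CNN extracts spatial features, random forest and k-neighbors turn them into class-probability features, and a final classifier is trained on those probabilities). The layer sizes, classifier settings, and the random stand-in data are illustrative assumptions, not the authors' exact configuration.

```python
# CRK-style pipeline sketch: CNN features -> RF/KNN probabilities -> final classifier.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in grayscale "X-ray" batch: 200 images of 64x64, two classes (assumed data).
X_img = torch.randn(200, 1, 64, 64)
y = np.random.randint(0, 2, size=200)

# Small 2D-CNN used only as a spatial feature extractor.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
with torch.no_grad():
    feats = cnn(X_img).numpy()          # (200, 32) spatial features

X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.25, random_state=0)

# Random forest and k-neighbors convert spatial features into probabilistic features.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
prob_tr = np.hstack([rf.predict_proba(X_tr), knn.predict_proba(X_tr)])
prob_te = np.hstack([rf.predict_proba(X_te), knn.predict_proba(X_te)])

# Any downstream ML model can then be trained on the probabilistic feature set.
clf = LogisticRegression(max_iter=1000).fit(prob_tr, y_tr)
print("held-out accuracy:", clf.score(prob_te, y_te))
```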
... The model was trained using the cross-entropy loss function and claimed 82.24%, 77.89%, and 78.38% F1-score, sensitivity, and accuracy, respectively. Saba et al. [22] produced a cloud-based decision support system for the identification and categorization of malignant cells in breast cancer by extracting shape-based information from breast cytology images. To identify breast cancer, naive Bayesian and artificial neural networks are used and 98% accuracy is reported. ...
... Ref | Dataset | Classifiers | Accuracy (%)
[2] | WBCD | ANN | 98.68
[18] | S-WAVE | Random forest and support vector machine | 95
[19] | MIAS | CNN models | 90.47
[20] | - | YOLO and RetinaNet models | 91
[21] | MIAS | UNet (DenseNet) with attention gates (AG) | 78.38
[22] | - | Naive Bayesian and artificial neural networks | 98
[23] | OASBUD | Decision tree, KNN | 97
... method is universally accepted for all images because every imaging technology and procedure has specific limitations [33]. The region growing method is used for tumor segmentation in the proposed technique. ...
Article
Full-text available
Breast cancer is the primary health issue that women may face at some point in their lifetime. This may lead to death in severe cases. A mammography procedure is used for finding suspicious masses in the breast. Teleradiology is employed for online treatment and diagnostic processes due to the unavailability and shortage of trained radiologists in backward and remote areas. The availability of online radiologists is uncertain due to inadequate network coverage in rural areas. In such circumstances, the Computer-Aided Diagnosis (CAD) framework is useful for identifying breast abnormalities without expert radiologists. This research presents a decision-making system based on IoMT (Internet of Medical Things) to identify breast anomalies. The proposed technique encompasses the region growing algorithm to segment the tumor and extract the suspicious part. Then, texture and shape-based features are employed to characterize breast lesions. The extracted features include first and second-order statistics, the center-symmetric local binary pattern (CS-LBP), a histogram of oriented gradients (HOG), and shape-based techniques used to obtain various features from the mammograms. Finally, a fusion of machine learning algorithms including K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Linear Discriminant Analysis (LDA) is employed to classify breast cancer using composite feature vectors. The experimental results exhibit the proposed framework's efficacy in separating cancerous lesions from benign ones using 10-fold cross-validation. The accuracy, sensitivity, and specificity attained are 96.3%, 94.1%, and 98.2%, respectively, through shape-based features from the MIAS database. Finally, this research contributes a model with the ability for earlier and more accurate breast tumor detection.
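A small illustrative sketch of texture/shape feature extraction plus a fused KNN/SVM/LDA vote, in the spirit of the pipeline above. The feature settings, the soft-voting fusion, and the synthetic ROI patches are assumptions; the paper's exact CS-LBP variant and region-growing step are not reproduced.

```python
# HOG + LBP-style texture features with a fused KNN/SVM/LDA classifier (sketch).
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
patches = rng.random((120, 64, 64))          # stand-in segmented mammogram ROIs
labels = rng.integers(0, 2, size=120)        # 0 = benign, 1 = malignant (assumed)

def features(img):
    h = hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    img_u8 = (img * 255).astype(np.uint8)
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    stats = [img.mean(), img.std()]          # simple first-order statistics
    return np.concatenate([h, lbp_hist, stats])

X = np.array([features(p) for p in patches])

# Soft-voting fusion of the three classifiers named in the abstract.
fusion = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier(5)),
                ("svm", SVC(kernel="rbf", probability=True)),
                ("lda", LinearDiscriminantAnalysis())],
    voting="soft",
)
print("10-fold CV accuracy:", cross_val_score(fusion, X, labels, cv=10).mean())
```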
... Lastly, a Deep Belief Network was fused with an extreme learning machine, and 99.99% and 99.12% accuracy was claimed using the Breast Cancer Wisconsin Original (WBCO) and WDBC datasets. Saba et al. [15] addressed the application of cytology images to breast cancer detection and classification using naive Bayesian and artificial neural network classifiers. They claimed 98% accuracy on breast cytology images. ...
... The whole process is carried out on the cloud for cross-validation by experts. Moreover, patients from remote areas will be able to input radiology images into the cloud system, which is ...
Reference | Method | Accuracy (%)
[22] | Xception and DeepLabv3+ | 95+
Saba et al. [20] | AlexNet and DenseNet201 | 92.8
Mohiyuddin et al. [21] | YOLOv5 | 96.5
Darweesh et al. [38] | LBP + random forest | 85
Yu et al. [39] | VGG16 | 89.06
Shi et al. [40] | CNN | 83.6
Saba et al. [15] | Naive Bayesian and artificial neural network | 98
Saraswathi et al. [41] | Swarm intelligence | 92
...
Article
Full-text available
Breast cancer is common among women all over the world. Early identification of breast cancer lowers death rates. However, it is difficult to determine whether lesions are cancerous or noncancerous due to their inconsistencies in image appearance. Machine learning techniques are widely employed in imaging analysis as a diagnostic method for breast cancer classification. However, patients in remote areas cannot take advantage of these systems, as they are unavailable on clouds. Thus, breast cancer detection for remote patients is indispensable, which is only possible through cloud computing. The user is allowed to feed images into the cloud system, which is further investigated through the computer aided diagnosis (CAD) system. Such systems could also be used to track patients, especially older adults with disabilities, particularly in remote areas of developing countries that do not have medical facilities and paramedic staff. In the proposed CAD system, a fusion of AlexNet architecture and GLCM (gray-level co-occurrence matrix) features is used to extract distinguishable texture features from breast tissues. Finally, to attain higher precision, an ensemble of MK-SVM is used. For testing purposes, the proposed model is applied to the MIAS dataset, a commonly used breast image database, and achieved 96.26% accuracy.
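A hedged sketch of GLCM texture descriptors combined with deep features for an SVM, loosely following the AlexNet+GLCM idea above; the AlexNet activations are replaced here by a random placeholder vector so the snippet stays self-contained, a single-kernel SVC stands in for MK-SVM, and scikit-image >= 0.19 is assumed for the graycomatrix API.

```python
# GLCM texture features fused with placeholder deep features, classified by an SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def glcm_features(img_u8):
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Stand-in 8-bit breast-tissue patches and labels (assumed data).
X_img = (rng.random((80, 64, 64)) * 255).astype(np.uint8)
y = rng.integers(0, 2, size=80)

deep_stub = rng.random((80, 128))             # placeholder for AlexNet activations
X = np.hstack([np.array([glcm_features(i) for i in X_img]), deep_stub])

svm = SVC(kernel="rbf").fit(X[:60], y[:60])   # single-kernel stand-in for MK-SVM
print("test accuracy:", svm.score(X[60:], y[60:]))
```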
... The development of computer and Internet-oriented tools has changed the nature of healthcare services using cloud computing [19]-[21]. Conventional patient-physician detection has evolved into the stronger, cutting-edge notion of electronic health, in which remote online/offline therapy and detection are possible [22], [23]. Therefore, the advent of the information age has produced novel opportunities for data gathering, computing, and representation in cancer studies [24], [25]. ...
... Saba et al. [22] have introduced a framework using a cloud-oriented DSS to diagnose and categorize malignant cells in breast cancer employing breast cytology images. They have used shape-oriented characteristics for the diagnosis of tumor cells. ...
Article
Full-text available
The advances in wireless-based technologies and intelligent diagnostics and forecasting such as cloud computing have significantly affected our lifestyle, as observed in many fields, especially healthcare. Also, since the number of new cases of cancer has become very high, there is a need to investigate this matter deeply. Still, there is no systematic review on the application or implementation of the cloud in cancer-care services. Hence, this paper has introduced a comprehensive review of cloud-centered healthcare systems that emphasizes treatment approaches for different types of cancer up to Sep 2021. The results have shown that the largest share of studies on the relationship between cancer and the cloud was associated with breast cancer. Also, the results have shown that cloud computing eases data protection, privacy, and medical record access. Using cloud computing in hospitals, physicians will use advanced programs and tools, and nurses will quickly access patients' information with new wireless-based technologies. A strong understanding of the practical features of cloud computing will help scientists navigate the massive data ecosystems in cancer research. So, by highlighting the advantages and drawbacks of the analyzed articles, this study provides a comprehensive and up-to-date report on the field of cloud-based cancer studies to fill the previous gaps.
... Cancer is a major global problem that affects people worldwide, and breast cancer is the second most common cancer in the United States (Ferlay et al., 2010). In 2018, the Global Research Agency recorded 9.6 million malignancy deaths and 18.1 million new cancer cases (Ferlay et al., 2019; Saba et al., 2019). Breast cancer is a widespread cancer and a leading cause of death; 627,000 people died out of 2.1 million cases recorded (Daniluk et al., 2020; Makki, 2015). ...
... While several datasets are available for IDC (Lopez et al., 2013), our findings show there is no reported dataset for IDC histologic scoring. As a result, the aim/objective of this study was to create a well-organized microscopy histopathological image dataset for IDC classification (Ramzan et al., 2020; Saba et al., 2019). The images were taken of H&E-stained breast tissues and are labeled according to their grade and level of magnification. ...
Article
The visual inspection of histopathological samples is the benchmark for detecting breast cancer, but it is a strenuous and complicated process that takes a long time even for experienced pathologists. Deep learning models have shown excellent outcomes in clinical diagnosis and image processing and advances in various fields, including drug development, frequency simulation, and optimization techniques. However, the resemblance of histopathologic images of breast cancer and the inclusion of stable and infected tissues in different areas make detecting and classifying tumors on whole slide images more difficult. In breast cancer, a correct diagnosis is needed for complete care in a limited amount of time. An effective detection can relieve the pathologist's workload and mitigate diagnostic subjectivity. Therefore, this research work investigates an improved semantic segmentation model based on the pre-trained Xception and DeepLabv3+ designs. The model has been trained on input images with ground-truth masks, with tuned parameters that significantly improve the segmentation of ultrasound breast images into the respective classes, that is, benign/malignant. The segmentation model delivered an accuracy of greater than 99% to prove the model's effectiveness. The segmented images and histopathological breast images are transferred to a 4-qubit quantum circuit with a six-layered architecture to detect breast malignancy. The proposed framework achieved remarkable performance as contrasted to currently published methodologies. Highlights: This research proposed a hybrid semantic model using pre-trained Xception and DeepLabv3+ for breast microscopic cancer classification into benign and malignant classes, with 95% classification accuracy and 99% accuracy for detection of breast malignancy.
... Fine needle aspiration breast cytology was used for outpatients in 89.6 % of cases. ...
... According to the results of evaluating the effectiveness of the cytological method for diagnosing breast neoplasms, the sensitivity of the method is 80-100 %, its specificity is 99 %, the positive predictive value is 99 %, the negative predictive value is 96 %, and the accuracy is 98 % [1,[3][4][5][6][7]. ...
Article
Full-text available
A retrospective analysis of the usage of fine needle aspiration breast cytology is presented in this work. The potential of cytological diagnostics according to the Yokohama system, with characteristics of the C1–C5 categories, was estimated. The results of the cytological conclusions of 4778 patients with breast lesions who had been examined in the Altay oncological dispensary during the year were studied. Fine needle aspiration breast cytology was used for outpatients in 89.6 % of cases. The largest number of patients with pathological changes in the breast was noted in category C2 with benign processes (75.7 % of all cases). Cases difficult for cytological study, where the method could not guarantee the accuracy of the diagnosis, belong to the C3 and C4 categories (1.9 % of all cases); for these, the cytological conclusion recommended the compulsory use of core biopsy. Malignant tumors were identified in 853 (19.9 %) patients with an indication of the histological type of the tumors. Thus, the cytological technique (as a part of the Triple test) should be chosen for outpatients with breast diseases using the Yokohama reporting system (C1–C5 categories) of fine needle aspiration cytology.
... Technology also involves creating support for the systems [78] and developing mobile devices that can be used in the healthcare sector, such as wearable sensors [79]. Further, it includes the use of Internet of Things (IoT) [80], cloud storage [81], and Big Data [82] in the areas of e-health. ...
... It is vital that users are confident that the systems used are protected [100] and authenticated to ensure the privacy of stored data [51,101]. The cloud system facilitates data storage at a low cost while also being secure and private [42,81,84]. ...
Article
Full-text available
E-health can be defined as a set of technologies applied with the help of the internet, in which healthcare services are provided to improve quality of life and facilitate healthcare delivery. As there is a lack of similar studies on the topic, this analysis uses a systematic literature review of articles published from 2014 to 2019 to identify the most common e-health practices used worldwide, as well as the main services provided, diseases treated, and the associated technologies that assist in e-health practices. Some of the key results were the identification of the four most common practices used (mhealth or mobile health; telehealth or telemedicine; technology; and others) and the most widely used technologies associated with e-health (IoT, cloud computing, Big Data, security, and systems).
... Furthermore, machine learning algorithms are widely employed in healthcare to diagnose and predict infectious diseases at an early stage. Another feature that makes machine learning techniques popular among practitioners is their capacity to handle complicated and extensive datasets [15,16]. Healthcare academics and industry use cutting-edge statistical tools to help and even direct practitioners. ...
... This setup allowed us to directly compare against prior studies, providing a benchmark for the community. Many medical researchers have used this split validation approach [35][36][37][38]. The images in the dataset have a size of 176 × 208 pixels. ...
Article
Full-text available
Alzheimer’s disease is a common neurological disorder and mental disability that causes memory loss and cognitive decline, presenting a major challenge to public health due to its impact on millions of individuals worldwide. It is crucial to diagnose and treat Alzheimer’s in a timely manner to improve the quality of life of both patients and caregivers. In the recent past, machine learning techniques have shown potential in detecting Alzheimer’s disease by examining neuroimaging data, especially Magnetic Resonance Imaging (MRI). This research proposes an attention-based mechanism that employs the vision transformer approach to detect Alzheimer’s using MRI images. The presented technique applies preprocessing to the MRI images and forwards them to a vision transformer network for classification. This network is trained on the publicly available Kaggle dataset, and it illustrated impressive results with an accuracy of 99.06%, precision of 99.06%, recall of 99.14%, and F1-score of 99.1%. Furthermore, a comparative study is also conducted to evaluate the performance of the proposed method against various state-of-the-art techniques on diverse datasets. The proposed method demonstrated superior performance, outperforming other published methods when applied to the Kaggle dataset.
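A hedged sketch of a vision-transformer classifier for four-class MRI slices, using torchvision's ViT-B/16 as a stand-in for the attention-based network described above. The class count, image size, optimizer, and training step are assumptions, and a recent torchvision (>= 0.13, which provides the `weights` argument) is assumed.

```python
# Vision-transformer classification sketch for preprocessed MRI slices.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

NUM_CLASSES = 4                      # e.g. four dementia stages (assumed labels)
model = vit_b_16(weights=None)       # pretrained weights could be used instead
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

# MRI slices are assumed resized to 224x224 and replicated to 3 channels upstream.
x = torch.randn(8, 3, 224, 224)      # stand-in preprocessed batch
y = torch.randint(0, NUM_CLASSES, (8,))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
optimizer.zero_grad()
loss = loss_fn(model(x), y)          # forward pass and cross-entropy loss
loss.backward()
optimizer.step()
print("one training step done, loss =", float(loss))
```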
... 2) Malignant tumor: Carcinoma cancer cells are the building blocks of malignant brain tumors, which often do not have well-defined borders. The rapid development of these tumors and their ability to invade adjacent brain tissue [4] has led experts to the grim conclusion that they pose a significant risk of death. Malignant tumors are characterized by quickly spreading malignancy, with broad metastases in both the spinal cord and the brain. ...
... Healthcare Cyber-Physical Systems (HCPS) integrate a network of medical equipment in a way that's essential to patient care [1,2]. These systems are being implemented in hospitals one by one to provide a consistently high level of healthcare [3,4]. An integrated CPS in contemporary healthcare includes IoT, cloud storage, and interconnected devices [5,6]. ...
... Data visualization is the representation of data and information through graphs and maps to gain deeper insights [34,31]. It can be a powerful tool for analyzing and interpreting data related to ischemic disease. ...
Article
Full-text available
Ischemic Cardiovascular diseases are one of the deadliest diseases in the world. However, the mortality rate can be significantly reduced if we can detect the disease precisely and effectively. Machine Learning (ML) models offer substantial assistance to individuals requiring early treatment and disease detection in the realm of cardiovascular health. In response to this critical need, this study developed a robust system to predict ischemic disease accurately using ML-based algorithms. The dataset obtained from Kaggle encompasses a comprehensive collection of over 918 observations, encompassing 12 essential features crucial for predicting ischemic disease. In contrast, much-existing research relies primarily on datasets comprising only 303 instances from the UCI repository. Six ML-based algorithms, including K Nearest Neighbors (KNN), Random Forest (RF), Logistic Regression (LR), Support Vector Machine (SVM), Gaussian Naïve Bayes (GNB), and Decision Trees (DT), are trained on the ischemic heart data. The effectiveness of the proposed methodologies is meticulously evaluated and benchmarked against cutting-edge techniques, employing a range of performance criteria. The empirical findings manifest that the KNN classifier produced optimized results with 91.8% accuracy, 91.4% recall, 91.9% F1 score, 92.5% precision, and AUC of 90.27%.
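An illustrative sketch of the KNN setup described above, using a synthetic table in place of the Kaggle heart-failure data (918 rows, 12 fields). The column handling, scaling step, and choice of k are assumptions rather than the study's exact configuration.

```python
# Scaled KNN classifier on a stand-in ischemic-heart-disease table.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.random((918, 11))                       # 11 predictors + 1 target = 12 fields
y = rng.integers(0, 2, size=918)                # 1 = ischemic heart disease (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                           stratify=y, random_state=42)

# Feature scaling matters for distance-based models such as KNN.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))
knn.fit(X_tr, y_tr)
print(classification_report(y_te, knn.predict(X_te)))
```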
... In the study, Eq. 1 is used to assess the accuracy of each ML Model. Accuracy, defined as the proximity of a measurement to the true value being sought, is a crucial metric in evaluating the models [49]. ...
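Eq. 1 itself is not reproduced in the snippet above; presumably it is the standard classification-accuracy ratio of correct predictions to all predictions (an assumption, since the original equation is not shown here):

```latex
\mathrm{Accuracy} \;=\; \frac{TP + TN}{TP + TN + FP + FN}
```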
Preprint
Full-text available
Numerous educational institutions utilize data mining techniques to manage student records, particularly those related to academic achievements, which are essential in improving learning experiences and overall outcomes. Educational Data Mining (EDM) is a thriving research field that employs data mining and machine learning methods to extract valuable insights from educational databases, primarily focused on predicting students’ academic performance. This study proposes a novel Federated Learning (FL) standard that ensures the confidentiality of the dataset and allows predicting student grades, categorized into four levels: Low, Good, Average, and Drop. To enhance model precision, optimized features are incorporated into the training process. This study evaluates the optimized dataset using five machine learning (ML) algorithms, namely Support Vector Machine (SVM), Decision Tree, Naïve Bayes, K-Nearest Neighbours, and the proposed Federated Learning Model. The models’ performance is assessed regarding accuracy, precision, recall, and F1 score, followed by a comprehensive comparative analysis. The results reveal that FL and SVM outperform the alternative models, demonstrating superior predictive performance for student grade classification. This study showcases the potential of Federated Learning in effectively utilizing educational data from various institutes while maintaining data privacy, contributing to educational data mining and machine learning advancements for student performance prediction.
... The machine-assisted system's most crucial stage for improving accuracy and lowering false positives for the presence of abnormality is segmentation [42]. The GLCM approach was advocated by many studies [43,44,45] to define texture-based features. Like the previous method, LBP is a wonderful tool for separating benign from malignant tumors using texture extraction [46]. ...
Preprint
Computer-aided diagnosis (CAD), a vibrant medical imaging research field, is expanding quickly. Because errors in medical diagnostic systems might lead to seriously misleading medical treatments, major efforts have been made in recent years to improve computer-aided diagnostics applications. The use of machine learning in computer-aided diagnosis is crucial. A simple equation may result in a false indication of items like organs. Therefore, learning from examples is a vital component of pattern recognition. Pattern recognition and machine learning in the biomedical area promise to increase the precision of disease detection and diagnosis. They also support the decision-making process's objectivity. Machine learning provides a practical method for creating elegant and autonomous algorithms to analyze high-dimensional and multimodal bio-medical data. This review article examines machine-learning algorithms for detecting diseases, including hepatitis, diabetes, liver disease, dengue fever, and heart disease. It draws attention to the collection of machine learning techniques and algorithms employed in studying conditions and the ensuing decision-making process.
... The naïve Bayes (NB) algorithm is a supervised learning method for classification issues that is based on the Bayes theorem [41]. The Bayes theorem, commonly referred to as Bayes' law or Bayes' rule, is used to calculate the likelihood of a hypothesis given certain prior information. ...
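For reference, Bayes' rule as used by the naive Bayes classifier described above, with the standard conditional-independence assumption over the features (the formulation here is the textbook one, not necessarily the notation of the cited work):

```latex
P(y \mid x_1,\dots,x_n)
\;=\; \frac{P(y)\,P(x_1,\dots,x_n \mid y)}{P(x_1,\dots,x_n)}
\;\approx\; \frac{P(y)\prod_{i=1}^{n} P(x_i \mid y)}{P(x_1,\dots,x_n)}
```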
Article
Full-text available
Nowadays, social media has become a forum for people to express their views on issues such as sexual orientation, legislation, and taxes. Sexual orientation refers to the individuals to whom you are attracted and with whom you wish to be engaged. In the world, many people are regarded as having different sexual orientations. People categorized as lesbian, gay, bisexual, transgender, queer, and many more (LGBTQ+) have many sexual orientations. Because of the public stigmatization of LGBTQ+ persons, many turn to social media to express themselves, sometimes anonymously. The present study aims to use natural language processing (NLP) and machine learning (ML) approaches to assess the experiences of LGBTQ+ persons. To train the data, the study used lexicon-based sentiment analysis (SA) and six distinct machine classifiers, including logistic regression (LR), support vector machine (SVM), naïve Bayes (NB), decision tree (DT), random forest (RF), and gradient boosting (GB). Individuals are positive about LGBTQ concerns, according to the SA results; yet, prejudice and harsh statements against LGBTQ people persist in many regions where they live, according to the negative sentiment ratings. Furthermore, using LR, SVM, NB, DT, RF, and GB, the ML classifiers attained considerable accuracy values of 97%, 96%, 88%, 100%, 92%, and 91%, respectively. The performance assessment metrics used obtained significant recall and precision values. This study will assist the government, non-governmental organizations, and rights advocacy groups in making educated decisions about LGBTQ+ concerns in order to ensure a sustainable future and peaceful coexistence.
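A toy sketch of the pipeline described above: a sentiment lexicon assigns labels to posts, and a supervised classifier (logistic regression here, as one of the six listed models) is then trained on TF-IDF features. The mini-lexicon and example texts are invented for illustration and are not from the study's data.

```python
# Lexicon-labeled sentiment data feeding a TF-IDF + logistic regression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "proud to support equal rights for everyone",
    "this community deserves respect and love",
    "such hateful and awful comments online",
    "terrible discrimination reported again today",
]
lexicon = {"proud": 1, "love": 1, "respect": 1, "equal": 1,
           "hateful": -1, "awful": -1, "terrible": -1, "discrimination": -1}

def lexicon_label(text):
    # Sum word polarities; non-negative totals count as positive sentiment.
    score = sum(lexicon.get(w, 0) for w in text.lower().split())
    return 1 if score >= 0 else 0          # 1 = positive, 0 = negative

labels = [lexicon_label(t) for t in texts]

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vec.transform(["so much love and respect for this community"])))
```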
... It consists of interconnected nodes or "neurons" organized into layers. ANN has been used for various breast cancer diagnostic tasks, such as classification, prediction, and risk assessment [51], [52]. Deep neural networks, a type of ANN with multiple hidden layers, have shown particular promise in breast cancer diagnosis, especially when applied to medical imaging analysis. ...
Article
Breast cancer is a significant health concern worldwide, and early and accurate diagnosis plays a crucial role in improving patient outcomes. Machine learning algorithms have emerged as powerful tools for analyzing complex medical data and aiding in the diagnosis of breast cancer. This paper provides an overview of the application of machine learning algorithms in breast cancer diagnosis. The findings indicate that machine learning algorithms, such as support vector machines (SVM), random forests, artificial neural networks (ANN), and deep learning models, have been extensively explored for breast cancer diagnosis. These algorithms leverage the vast amounts of available data, including patient demographics, medical history, imaging data (mammography, ultrasound, MRI), and genetic profiles, to identify patterns and make predictions. One of the primary applications of machine learning algorithms in breast cancer diagnosis is the classification of tumors as malignant or benign. By training on labeled datasets, these algorithms can learn to differentiate between cancerous and non-cancerous cases, thus assisting in accurate tumor diagnosis. Additionally, machine learning algorithms can be used to predict the likelihood of cancer recurrence, which helps guide treatment decisions and post-treatment monitoring. Feature selection and extraction techniques also play a vital role in breast cancer diagnosis using machine learning algorithms. These techniques aim to identify the most relevant features or biomarkers associated with breast cancer, reducing the dimensionality of the data and enhancing the performance of the models. Feature selection algorithms, such as recursive feature elimination and correlation-based feature selection, contribute to the identification of critical indicators for accurate diagnosis. Furthermore, the integration of different data sources and modalities, such as combining clinical data with imaging data or genetic data, has shown promise in improving breast cancer diagnosis accuracy. By fusing multiple types of information, machine learning algorithms can leverage the complementary nature of these data sources to enhance diagnostic capabilities. Despite the advancements made, challenges remain in the field of breast cancer diagnosis using machine learning algorithms. Issues such as data quality, interpretability of models, and generalizability to diverse populations need to be addressed to ensure the reliable and equitable application of these algorithms in clinical practice.
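A small sketch of recursive feature elimination, one of the feature-selection techniques mentioned above, on the Wisconsin diagnostic breast cancer data bundled with scikit-learn. The estimator choice and the number of selected features are assumptions made for illustration.

```python
# Recursive feature elimination inside a scikit-learn pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipe = make_pipeline(
    StandardScaler(),
    RFE(LogisticRegression(max_iter=5000), n_features_to_select=10),
    LogisticRegression(max_iter=5000),
)
print("5-fold CV accuracy with 10 selected features:",
      cross_val_score(pipe, X, y, cv=5).mean())
```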
... Although many conversion techniques have been developed and reported in the state of the art, they still pose several critical problems, such as [8,9]: ...
Article
Full-text available
Color information is useless for distinguishing significant edges and features in numerous applications. In image processing, a gray image discards much unrequired data present in a color image. The primary drawback of color-to-gray conversion is the elimination of visually significant image pixels. This work proposes a novel approach for transforming an RGB image into a grayscale image based on singular value decomposition (SVD). A specific factor magnifies one of the color channels (Red, Green, or Blue). The vector of three values (Red, Green, Blue) of each pixel in the image is decomposed using SVD into three matrices. The norm of the diagonal matrix is determined and then divided by a specific factor to obtain the gray value of the corresponding pixel. The contribution of the proposed method is that it gives the user high flexibility to produce many versions of gray images with varying contrasts, which is very helpful in many applications. Furthermore, SVD allows for image reconstruction by combining the weighting of each channel with the singular value matrix. This results in a grayscale image that more accurately captures the actual intensity values of the image and preserves more color information than traditional grayscale conversion methods, which result in a loss of color information. The proposed method was compared with a similar color-to-grayscale conversion method and was found to be the most efficient.
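A rough sketch of the SVD-based conversion as described above: each pixel's (R, G, B) vector is decomposed with SVD, and the norm of its singular values, divided by a factor, gives the gray level. The boosted channel, its magnification factor, and the divisor are assumptions; the paper's exact factor values are not specified here.

```python
# Per-pixel SVD-based grayscale conversion (illustrative sketch).
import numpy as np

def svd_grayscale(rgb, boost_channel=1, boost=1.5, divisor=2.0):
    """rgb: HxWx3 float array in [0, 1]; returns an HxW gray image in [0, 1]."""
    img = rgb.astype(float).copy()
    img[..., boost_channel] *= boost              # magnify one color channel
    h, w, _ = img.shape
    gray = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # SVD of the 1x3 pixel vector; its singular value equals the vector norm.
            s = np.linalg.svd(img[i, j].reshape(1, 3), compute_uv=False)
            gray[i, j] = np.linalg.norm(s) / divisor
    return np.clip(gray, 0.0, 1.0)

demo = np.random.default_rng(0).random((4, 4, 3))
print(svd_grayscale(demo))
```

Varying `boost` and `divisor` produces the differently contrasted gray versions the abstract refers to.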
... Thus, precise automated systems for rice disease diagnosis are in crucial demand [6]. To enhance rice crop production, several studies have used computer vision technologies to control crop diseases and proposed traditional/ deep learning models in the literature [7,8]. ...
... In general, automated breast cancer detection methods can be divided into two categories: 1) methods based on classification, and 2) methods based on segmentation, masking, or localization. The former category mainly focuses on classifying cancerous masses as benign or malignant, and in some cases just on detecting whether cancerous masses exist in an image [34][35][36][37]; whereas the latter category segments the nuclei or masks them for various purposes [38,39]. In the following, some of the works of both categories are reviewed in chronological order and the datasets they used are mentioned along with the review. ...
Article
Cancer detection in its early stages may allow patients to receive the proper treatment, saving lives and allowing a return to routine lifestyles. Breast cancer is one of the top leading causes of mortality among women all around the globe. A source for finding these cancerous nuclei is the analysis of histopathology images. These images, however, are very complex and large. Thus, locating the cancerous nuclei in them is very challenging. Hence, if an expert fails to diagnose their patients via these images, the situation may be exacerbated. Therefore, this study aims to introduce a method to mask as many cancer nuclei on histopathology images as possible with a high visual aesthetic, to make them easily distinguishable by experts. A tailored residual fully convolutional encoder-decoder neural network based on end-to-end learning is proposed to address the matter. The proposed method is evaluated quantitatively and qualitatively on the ER + BCa H&E-stained dataset. The average detection accuracy achieved by the method is 98.61%, which is much better than that of competitors.
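A toy residual fully convolutional encoder-decoder in the spirit of the masking network described above; the depth, channel counts, residual blocks, and training data here are illustrative assumptions, not the authors' architecture.

```python
# Minimal residual encoder-decoder for per-pixel nucleus masking (sketch).
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))       # residual connection

class MaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                 ResBlock(32), nn.MaxPool2d(2),
                                 nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                 ResBlock(64), nn.MaxPool2d(2))
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                                 ResBlock(32),
                                 nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))  # per-pixel nucleus logit
    def forward(self, x):
        return self.dec(self.enc(x))

net = MaskNet()
x = torch.randn(2, 3, 128, 128)                    # stand-in H&E patches
mask = torch.randint(0, 2, (2, 1, 128, 128)).float()
loss = nn.BCEWithLogitsLoss()(net(x), mask)        # end-to-end pixel-wise training
loss.backward()
print("output shape:", net(x).shape, "loss:", float(loss))
```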
... It is challenging to compare different ML and deep-learning-based facial recognition strategies due to the different setups, datasets and machines used [95,96]. However, the latest comparison of different approaches is presented in Table 3. Tables 3 and 4, deep-learning-based approaches outperform conventional approaches. ...
Article
Full-text available
Facial emotion recognition (FER) is an emerging and significant research area in the pattern recognition domain. In daily life, the role of non-verbal communication is significant, and in overall communication, its involvement is around 55% to 93%. Facial emotion analysis is efficiently used in surveillance videos, expression analysis, gesture recognition, smart homes, computer games, depression treatment, patient monitoring, anxiety, detecting lies, psychoanalysis, paralinguistic communication, detecting operator fatigue and robotics. In this paper, we present a detailed review on FER. The literature is collected from different reputable research published during the current decade. This review is based on conventional machine learning (ML) and various deep learning (DL) approaches. Further, different FER datasets for evaluation metrics that are publicly available are discussed and compared with benchmark results. This paper provides a holistic review of FER using traditional ML and DL methods to highlight the future gap in this domain for new researchers. Finally, this review work is a guidebook and very helpful for young researchers in the FER area, providing a general understating and basic knowledge of the current state-of-the-art methods, and to experienced researchers looking for productive directions for future work.
... It has been shown that in invasive ductal carcinomas the apparent lymphocyte response is related to negative ER and PR, and as such, it may be useful in determining treatment choices in the neoadjuvant setting. However, in many investigations, this criterion did not offer adequate cyto-histocorrelation 21 . Sampling inaccuracy has an impact on the assessment of the different criteria for categorising aspirate swabs. ...
Article
Introduction: Breast cancer is the most common cancer in women throughout the globe, and the modern available therapeutic options have been widely researched. For a developing nation with limited resources, such as Pakistan, a quick, affordable, and minimally invasive technique like FNAC may give useful information. Aim: The aims of this study were: (1) to perform and evaluate cytomorphological classification of all breast cancer patients using Robinson's three-tier grading system; (2) histopathological evaluation and classification of all MRM samples using the modified BR classification system; (3) correlation of the histo-morphological and cytomorphological classification systems. Place and Duration: In the Pathology department of Khyber Teaching Hospital Peshawar, for a duration of six months, from February 2018 to July 2018. Material and methods: A prospective study was conducted in which two independent observers classified all breast cancer patients on the basis of cytology and histology of the tissue, with cyto-histological correlation. Results: This study provided good individual correlation and inter-observer agreement in the cytopathological classification of breast cancer aspirates. Conclusion: Cytopathological classification may be useful for these patients and should therefore be included in routine reports. Keywords: Breast carcinoma; Cytopathology; Robinson classification system
... These techniques are helpful for learning the deep spatial features of input images from the training dataset (Denton, Chintala, Szlam, & Fergus, 2015), the cycle adversarial generative network (CycleGAN) for low-to-high-resolution and image-to-image transformation using non-paired input data, and an unsupervised model for image-to-image translation (UNIT) (Liu & Tuzel, 2016). Various medical imaging problems have been covered by using GAN-based models (Fahad, Khan, Saba, Rehman, & Iqbal, 2018; Ullah et al., 2019; Saba, Khan, Islam, et al., 2019). ...
Article
Full-text available
With the evolution of deep learning technologies, computer vision-related tasks have achieved tremendous success in the biomedical domain. Supervised deep learning training needs a large number of labeled examples, and obtaining them is challenging. The limited availability of data makes it difficult to achieve and enhance an automated disease diagnosis model's performance. To synthesize data and improve the disease diagnosis model's accuracy, we proposed a novel approach for the generation of images for three different stages of Alzheimer's disease using deep convolutional generative adversarial networks. The proposed model performs well in the synthesis of brain positron emission tomography images for all three stages of Alzheimer's disease. The three stages of Alzheimer's disease are normal control, mild cognitive impairment, and Alzheimer's disease. The model performance is measured using a classification model that achieved an accuracy of 72% against synthetic images. We also experimented with quantitative measures, that is, peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). We achieved average PSNR score values of 82 for AD, 72 for CN, and 73 for MCI, and SSIM average score values of 25.6 for AD, 22.6 for CN, and 22.8 for MCI. The proposed approach detects three different stages of Alzheimer's disease using deep convolutional generative adversarial networks and performs well in the synthesis of brain PET images for all Alzheimer's disease stages on the ADNI dataset.
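A quick sketch of the PSNR/SSIM quality checks mentioned above, using scikit-image on synthetic stand-ins for real and generated PET slices; the image sizes and noise level are illustrative assumptions.

```python
# PSNR and SSIM between a "real" image and a perturbed "synthetic" image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
real = rng.random((128, 128))                                    # stand-in real slice
fake = np.clip(real + rng.normal(scale=0.05, size=real.shape), 0, 1)  # stand-in GAN output

print("PSNR:", peak_signal_noise_ratio(real, fake, data_range=1.0))
print("SSIM:", structural_similarity(real, fake, data_range=1.0))
```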
... Later, the three algorithms that had the best accuracy were ensembled in the cloud environment. Saba et al. [27] discussed a framework in which breast cancer cells can be detected and classified using cytology images. Furthermore, features that incorporate shape were used to detect tumor cells using ANNs and an NB classifier. ...
Article
Full-text available
Globally, breast cancer is one of the most significant causes of death among women. Early detection accompanied by prompt treatment can reduce the risk of death due to breast cancer. Currently, machine learning in cloud computing plays a pivotal role in disease diagnosis, predominantly for people living in remote areas where medical facilities are scarce. Diagnosis systems based on machine learning act as secondary readers and assist radiologists in the proper diagnosis of diseases, whereas cloud-based systems can support telehealth services and remote diagnostics. Techniques based on artificial neural networks (ANN) have attracted many researchers to explore their capability for disease diagnosis. The extreme learning machine (ELM) is one of the variants of ANN that has huge potential for solving various classification problems. The framework proposed in this paper amalgamates three research domains: firstly, ELM is applied for the diagnosis of breast cancer; secondly, to eliminate insignificant features, the gain ratio feature selection method is employed; lastly, a cloud computing-based system for remote diagnosis of breast cancer using ELM is proposed. The performance of the cloud-based ELM is compared with some state-of-the-art technologies for disease diagnosis. The results achieved on the Wisconsin Diagnostic Breast Cancer (WBCD) dataset indicate that the cloud-based ELM technique outperforms other results. The best performance results of ELM were found for both the standalone and cloud environments, which were compared. The important findings of the experimental results indicate that the accuracy achieved is 0.9868, the recall is 0.9130, the precision is 0.9054, and the F1-score is 0.8129.
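A sketch of an extreme learning machine on the scikit-learn copy of the Wisconsin diagnostic data, in the spirit of the framework above. Feature ranking uses mutual information as a stand-in for the paper's gain-ratio step (gain ratio is not in scikit-learn), and the hidden-layer size and activation are assumptions.

```python
# Extreme learning machine: random hidden layer + closed-form output weights.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = SelectKBest(mutual_info_classif, k=15).fit_transform(X, y)   # drop weak features
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rng = np.random.default_rng(0)
n_hidden = 200
W = rng.normal(size=(X.shape[1], n_hidden))        # random, untrained input weights
b = rng.normal(size=n_hidden)

def hidden(A):                                     # sigmoid hidden-layer activations
    return 1.0 / (1.0 + np.exp(-(A @ W + b)))

H = hidden(X_tr)
beta = np.linalg.pinv(H) @ y_tr                    # output weights in closed form
pred = (hidden(X_te) @ beta > 0.5).astype(int)
print("ELM test accuracy:", (pred == y_te).mean())
```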
... In their review study, Long and Ehrenfeld (2020) claimed that prediction through artificial intelligence methods might reduce the effects of this pandemic crisis. Accordingly, several automatic classifications of infection detections and forecasting models are reported in the literature with different scope (Saba, Bokhari, Sharif, Yasmin, & Raza, 2018;Saba, Khan, et al., 2019;Mashood Nasir, et al., 2020). ...
Article
Full-text available
COVID-19 has impacted the world in many ways, including loss of lives, economic downturn and social isolation. COVID-19 emerged due to SARS-CoV-2, a highly infectious pandemic virus. Every country tried to control the COVID-19 spread by imposing different types of lockdowns. Therefore, there is an urgent need to forecast the daily confirmed infected cases and deaths under different types of lockdown, to select the most appropriate lockdown strategies to control the intensity of this pandemic and reduce the burden on hospitals. Currently, three types of lockdown (partial, herd, complete) are imposed in different countries. In this study, three countries from every type of lockdown were studied by applying time-series and machine learning models, namely random forests, K-nearest neighbors, SVM, decision trees (DTs), polynomial regression, Holt-Winters, ARIMA, and SARIMA, to forecast daily confirmed infected cases and deaths due to COVID-19. The models' accuracy and effectiveness were evaluated by error based on three performance criteria. A single forecasting model could not capture all data sets' trends due to the varying nature of the data sets and lockdown types. The three top-ranked models were used to predict the confirmed infected cases and deaths; the best-performing models were also adopted for out-of-sample prediction and obtained results very close to the actual values of cumulative infected cases and deaths due to COVID-19. This study has proposed auspicious models for forecasting and the best lockdown strategy to mitigate the casualties of COVID-19. An optimized data set model is proposed to predict lung infection and death cases due to COVID-19 ten days ahead. The model predicted results very close to the actual values. The best policy to control infection and death cases was the herd lockdown strategy.
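A minimal ARIMA forecasting sketch for a daily case series, in the spirit of the time-series models listed above; the (p, d, q) order and the synthetic cumulative-case series are assumptions made for illustration only.

```python
# 10-day-ahead ARIMA forecast of a synthetic cumulative confirmed-case series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
days = pd.date_range("2020-03-01", periods=120, freq="D")
cases = pd.Series(np.cumsum(rng.poisson(50, size=120)).astype(float), index=days)

fit = ARIMA(cases, order=(2, 1, 2)).fit()   # order is an illustrative assumption
print(fit.forecast(steps=10))               # 10-day-ahead forecast
```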
Article
Diabetic retinopathy (DR) is a prevalent cause of global visual impairment, contributing to approximately 4.8% of blindness cases worldwide as reported by the World Health Organization (WHO). The condition is characterized by pathological abnormalities in the retinal layer, including microaneurysms, vitreous hemorrhages, and exudates. Microscopic analysis of retinal images is crucial in diagnosing and treating DR. This article proposes a novel method for early DR screening using segmentation and unsupervised learning techniques. The approach integrates a neural network energy‐based model into the Fuzzy C‐Means (FCM) algorithm to enhance convergence criteria, aiming to improve the accuracy and efficiency of automated DR screening tools. The evaluation of results includes the primary dataset from the Shiva Netralaya Centre, IDRiD, and DIARETDB1. The performance of the proposed method is compared against FCM, EFCM, FLICM, and M‐FLICM techniques, utilizing metrics such as accuracy in noiseless and noisy conditions and average execution time. The results showcase auspicious performance on both primary and secondary datasets, achieving accuracy rates of 99.03% in noiseless conditions and 93.13% in noisy images, with an average execution time of 16.1 s. The proposed method holds significant potential in medical image analysis and could pave the way for future advancements in automated DR diagnosis and management. Research Highlights: A novel approach is proposed in the article, integrating a neural network energy‐based model into the FCM algorithm to enhance the convergence criteria and the accuracy of automated DR screening tools. By leveraging the microscopic characteristics of retinal images, the proposed method significantly improves the accuracy of lesion segmentation, facilitating early detection and monitoring of DR. The evaluation of the method's performance includes primary datasets from reputable sources such as the Shiva Netralaya Centre, IDRiD, and DIARETDB1, demonstrating its effectiveness in comparison to other techniques (FCM, EFCM, FLICM, and M‐FLICM) in terms of accuracy in both noiseless and noisy conditions. It achieves impressive accuracy rates of 99.03% in noiseless conditions and 93.13% in noisy images, with an average execution time of 16.1 s.
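A plain fuzzy C-means baseline for pixel clustering, as a reference point for the FCM variants compared above; the neural-network energy term of the proposed method is not reproduced here, and the fuzzifier m, cluster count c, iteration budget, and synthetic pixel data are assumptions.

```python
# Basic fuzzy C-means clustering of pixel feature vectors (sketch).
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))             # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Stand-in retinal-image intensities flattened to 1-D feature vectors.
pixels = np.random.default_rng(1).random((5000, 1))
centers, memberships = fcm(pixels, c=3)
labels = memberships.argmax(axis=1)                # hard segmentation labels
print("cluster centres:", centers.ravel())
```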
Article
Cancer is a fatal illness frequently caused by a variety of obsessive changes and genetic disorders. Cancer cells, known as abnormal cells, can grow in any part of the human body. A preliminary diagnosis of cancer is necessary, as cancer is one of the most alarming diseases. Detecting cancer and treating it in the initial stage can decrease the death rate. The aim of this study is to analyze and review various relevant research papers published over the last 5 years on cancer detection using Machine Learning (ML) and Deep Learning (DL) techniques. We have mainly considered the techniques developed for brain tumor detection, cervical cancer detection, breast cancer detection, skin cancer detection and lung cancer detection. Recent statistics show that these cancers are causing higher mortality rates among men and women in comparison to other types of cancers. In this review article, various recent ML and DL models developed to detect these cancers are analyzed and discussed with respect to the most important metrics such as accuracy, specificity, sensitivity, F-score, precision, recall, etc., which are tested on several datasets in the literature. Lastly, open research challenges in each cancer category are also pointed out for the purpose of future research work opportunities.
Article
Wireless communication systems offer a dynamic infrastructure with efficient data sensing and forwarding services using digital networks and the Internet of Things (IoT). Many schemes have been proposed to cope with a smart communication system by integrating medical devices. However, lowering the processing overheads with efficient utilization of network services are challenging tasks. Thus, this paper presents a smart health analysis system using machine learning techniques for IoT network, which aims to handle big data with balancing the communication load for green technologies. Firstly, using regression prediction, the proposed system offers quality-aware services, secondly, by exploring intelligent methods it provides a delay-tolerant scheme to give the least overhead communication paradigm using mobile agents. Finally, the big data is secured using cryptographic techniques and collaborative devices to maintain its trustworthiness with the cloud. The proposed system has revealed a noteworthy performance in terms of network parameters against existing studies.
Chapter
Full-text available
Tuberculosis is a major health threat in many regions of the world. Opportunistic infections in immunocompromised HIV/AIDS patients and multi-drug-resistant bacterial strains have exacerbated the problem, while diagnosing tuberculosis remains challenging. Medical images have made a high impact on medicine, diagnosis, and treatment. The most important part of image processing is image segmentation. This chapter presents a novel lung X-ray segmentation method using the U-net model. First, we construct the U-net, which combines the lungs and mask. Then, we convert the problem of positive and negative TB lungs into the segmentation of the lungs, and extract the lungs by subtracting the chest from the radiography. In experiments, the proposed model achieves 97.62% on the public dataset collected by Shenzhen Hospital, China and the Montgomery County X-ray Set.
Chapter
Full-text available
Melanoma is one of the riskiest diseases; it extensively influences the quality of life and can be dangerous or even fatal. Skin lesion classification methods have faced challenges in varying scenarios. The available hand-crafted features could not generate better results when the skin lesion images have low contrast or are under- or over-segmented. The hand-crafted features for skin lesions did not discriminate well between two significantly different densities. The pigmented network feature vector and deep feature vector have been fused using a parallel fusion method to increase classification accuracy. This optimized fused feature vector has been fed to machine learning classifiers that accurately classify the dermoscopic images into two categories, benign and malignant melanoma. The statistical performance measures were used to assess the proposed fused feature vector on three skin lesion datasets (ISBI 2016, ISIC 2017, and PH2). The proposed fused feature vector accurately classified the skin lesions with the highest accuracy of 99.8% for ISBI 2016.
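A minimal illustration of fusing a hand-crafted pigmented-network descriptor with a deep feature vector before classification, as in the chapter above. Plain concatenation is used here as a simple stand-in for the parallel fusion method, and both feature matrices are random placeholders.

```python
# Fusing hand-crafted and deep feature vectors for dermoscopic lesion classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
handcrafted = rng.random((150, 40))    # e.g. pigmented-network / texture descriptors
deep = rng.random((150, 256))          # e.g. CNN activations for the same lesions
y = rng.integers(0, 2, size=150)       # 0 = benign, 1 = malignant melanoma (assumed)

fused = np.hstack([handcrafted, deep]) # concatenation as a stand-in for parallel fusion
print("CV accuracy on fused features:",
      cross_val_score(SVC(kernel="rbf"), fused, y, cv=5).mean())
```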
Chapter
COVID-19 is a respiratory illness that is extremely infectious and is currently spreading at an alarming rate. Chest radiography images play an important part in the automated diagnosis of COVID-19, which is accomplished via the use of several machine learning approaches. This chapter examines prognostic models for COVID-19 patients' survival prediction based on clinical data and lung/lesion radiometric characteristics retrieved from chest imaging. While it seems that there are various early indicators of prognosis, we will discuss prognostic models or scoring systems that are useful exclusively for individuals who have received confirmation of their diagnosis. A summary of some of the research work and strategies based on machine learning and computer vision that have been applied for the identification of COVID-19 is presented in this chapter. Strategies based on pre-processing, segmentation, handmade features, deep features, and classification have been discussed, as well as some other techniques. Apart from that, a few relevant datasets have been provided, along with a few research gaps and challenges in the respective sector that have been identified, all of which will be useful for future study efforts. Keywords: Brain tumor; Deep learning and transfer learning; Healthcare; Health risks; Public health
Article
Cancer is among the deadliest diseases, and early detection of cancer cells is critical for saving human lives and ensuring successful treatment. Removing abnormal cells at an early, still benign stage can prevent cancer from spreading to other parts of the body, whereas malignant tumours, which have an aggressive spreading capacity, affect various other parts of the body and are difficult to control. This paper presents a study of five different types of cancer, namely breast, brain, lung, liver, and skin cancers, together with various published techniques for detecting them, which helps researchers understand current techniques and develop new structures that provide better and more accurate results. It also covers cancer cell segmentation (ACM, PSO, UNet, watershed, and so on), feature extraction, and feature reduction (PCA, LDA, SVD), and discusses different cancer classification methods such as machine learning and clustering (SVM, KNN, Bayesian, neuro-fuzzy, k-means, GANs, and so on), deep learning (CNN, ResNet, VGG, and so on), and different evaluation methods.
Article
Full-text available
Cancer is one of the most dangerous diseases to humans, and yet no permanent cure has been developed for it. Breast cancer is one of the most common cancer types. According to the National Breast Cancer Foundation, in 2020 alone, more than 276,000 new cases of invasive breast cancer and more than 48,000 non-invasive cases were diagnosed in the US. To put these figures in perspective, 64% of these cases are diagnosed early in the disease's cycle, giving patients a 99% chance of survival. Artificial intelligence and machine learning have been used effectively in detection and treatment of several dangerous diseases, helping in early diagnosis and treatment, and thus increasing the patient's chance of survival. Deep learning has been designed to analyze the most important features affecting detection and treatment of serious diseases. For example, breast cancer can be detected using genes or histopathological imaging. Analysis at the genetic level is very expensive, so histopathological imaging is the most common approach used to detect breast cancer. In this research work, we systematically reviewed previous work done on detection and treatment of breast cancer using genetic sequencing or histopathological imaging with the help of deep learning and machine learning. We also provide recommendations to researchers who will work in this field.
Article
Lung cancer is the leading cause of cancer-related deaths worldwide, and its mortality rate and incidence have recently increased sharply. Many patients with lung cancer are diagnosed late, so the survival rate is low. Machine learning approaches have been widely used to increase the effectiveness of cancer detection at an early stage. Even though these methods are efficient in detecting specific forms of cancer, no known technique can be used universally and consistently to identify new malignancies, so cancer diagnosis via machine learning algorithms remains a fresh area of research. Computed tomography (CT) images are frequently employed for early cancer detection and diagnosis because they contain significant information. In this research, an automated lung cancer detection and classification framework is proposed that consists of preprocessing followed by extraction and fusion of three-patch local binary pattern (TP-LBP), local binary pattern (LBP), and histogram of oriented gradients (HOG) features. The fast learning network (FLN) is a novel machine-learning technique that is fast to train and economical in terms of processing resources; however, its internal parameters (weights and biases) are randomly initialized, which makes it unstable. Therefore, to enhance accuracy, the FLN is hybridized with K-nearest neighbors to classify texture- and appearance-based features of lung chest CT scans from a Kaggle dataset into cancerous and non-cancerous images. The proposed model is evaluated using accuracy, sensitivity, and specificity on the Kaggle benchmark dataset and is found comparable with the state of the art while using simple machine learning strategies. Research highlights: a fast learning network and K-nearest neighbor hybrid classifier is proposed for the first time for lung cancer classification using handcrafted features, including three-patch local binary pattern, local binary pattern, and histogram of oriented gradients; promising results are obtained from this novel, simple combination.
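The handcrafted part of such a pipeline can be sketched as follows: LBP and HOG descriptors are extracted per slice, fused, and classified with k-NN. This is a stand-in for the FLN/KNN hybrid above (the FLN itself is not a standard library), and the demo slice is synthetic.

```python
# Handcrafted LBP + HOG features fused and classified with k-NN, as a stand-in
# for the FLN/KNN hybrid described above (the FLN itself is not a standard library).
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.neighbors import KNeighborsClassifier

def extract_features(gray_slice):
    """gray_slice: 2-D float array (a CT slice scaled to [0, 1])."""
    lbp = local_binary_pattern(gray_slice, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(gray_slice, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

demo = np.random.default_rng(1).random((128, 128))
print(extract_features(demo).shape)

# Assumed inputs: a list of 2-D slices and 0/1 labels (non-cancerous / cancerous).
# X = np.stack([extract_features(s) for s in slices])
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
```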
Article
Early breast cancer detection is vital to enhance the human survival rate. This study therefore aims to segment cancer-affected areas using a Transfer Learning based Cancer Segmentation (TL-CAN-Seg) methodology, to extract the relevant features and store them in the cloud, and to introduce a novel multi-layer perceptron (MLP) tuned with the Levenberg-Marquardt (LM) algorithm to classify the breast cancer affected area effectively. Initially, the mammogram image is taken and pre-processed. Subsequently, segmentation is carried out using the novel TL-CAN-Seg, and the extracted features are stored in the cloud. Finally, classification is performed to identify breast cancer. The proposed MLP with LM converges faster, and TL-CAN-Seg is able to learn complex image patterns, which together increase the accuracy of breast cancer diagnosis over traditional methods, as confirmed through the comparative analysis in the results.
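Since scikit-learn's MLPClassifier does not offer a Levenberg-Marquardt solver, the following hedged sketch uses its standard solvers as a stand-in for the tuned-LM MLP described above; the feature matrix is a random placeholder for features retrieved from cloud storage.

```python
# Stand-in sketch: scikit-learn's MLPClassifier (which uses adam/lbfgs/sgd solvers,
# not Levenberg-Marquardt) classifying cloud-stored mammogram features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 40))       # placeholder for TL-CAN-Seg features
labels = rng.integers(0, 2, size=300)       # 0 = normal, 1 = cancerous

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), solver="lbfgs", max_iter=500)
mlp.fit(X_tr, y_tr)
print("test accuracy:", mlp.score(X_te, y_te))
```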
Article
Full-text available
Artificial intelligence (AI) is the use of scientific techniques to simulate human intellectual skills and to tackle complex medical issues involving complicated genetic defects such as cancer. The rapid expansion of AI in the past decade has paved the way to better decision-making where the human brain is constrained in handling large amounts of information in a limited period. Cancer is a complicated ailment with several genomic variants, and AI-centred systems carry enormous potential for detecting these genomic alterations and abnormal protein interactions at a very early phase. Contemporary biomedical research is also dedicated to bringing AI expertise to hospitals securely and ethically. AI-centred support for diagnosticians and doctors can be a big step forward for predicting disease threat, identification, diagnosis, and therapy. Applications of AI and machine learning (ML) in the identification of cancer and its therapy have the potential to provide therapeutic support for quicker planning of a novel therapy for each person, and by utilizing AI-based methods scientists can collaborate in real time and distribute their expertise digitally to potentially treat billions. In this review, the focus is on linking biology with AI and describing how AI-centred support could assist oncologists in delivering accurate therapy. It is essential to identify new biomarkers that induce drug resistance and to discover medicinal targets to improve treatment methods. The advent of next-generation sequencing (NGS) programs addresses these challenges and has transformed the prospects of Precision Oncology (PO). NGS delivers numerous medical functions that are vital for hazard prediction, initial diagnosis of disease, sequence identification, medical imaging (MI), precise diagnosis, biomarker detection, and recognition of medicinal targets for innovation in medicine. NGS creates a huge repository that requires specific bioinformatics resources to examine the information that is pertinent and medically important. Malignancy diagnostics and analytical forecasting are improved with NGS and MI, which provide superior-quality images via AI technology. Irrespective of the advancements in technology, AI faces a few problems and constraints, and the clinical application of NGS continues to be validated; however, with the steady progress of innovation and expertise, the prospects of AI and PO look promising. The purpose of this review was to assess, evaluate, classify, and tackle the present developments in cancer diagnosis utilizing AI methods for breast, lung, liver, and skin cancer and leukaemia. The research emphasizes how cancer identification and the treatment procedure are aided by utilizing AI with supervised, unsupervised, and deep learning (DL) methods. Numerous AI methods were assessed on benchmark datasets with respect to accuracy, sensitivity, specificity, and false-positive (FP) metrics. Lastly, challenges and future work were discussed.
Chapter
Breast cancer (BC) is one of the leading malignancies among women globally; one in eight women is affected during her lifespan. The malignancy develops in the breast glandular epithelium. Outcomes for breast cancer patients improve substantially with early detection and diagnosis, yet despite vast medical progress breast cancer remains the second leading cause of mortality; hence, initial diagnosis plays a preeminent role in decreasing the mortality rate. Breast cancer detection procedures include computed tomography, mammography, X-rays, ultrasound, magnetic resonance imaging, thermography, and more. Deep learning techniques have been commonly used for medical image recognition over the past several years; the convolutional neural network (CNN) in particular was developed for accurate image recognition and classification, including breast cancer images, owing to its automatic detection and disease classification capability. This chapter aims to provide an extensive analysis of breast cancer, its abnormalities, diagnosis, treatments, and prevention strategies using image processing and CNN techniques. Additionally, descriptions of available datasets and a critical statistical analysis based on CNNs and different modalities are highlighted for further study, along with future challenges.
Article
Females are approximately half of the total population worldwide, and a large number of them are affected by breast cancer (BC). Computer-aided diagnosis (CAD) frameworks can help radiologists assess breast density (BD), which in turn supports precise BC detection. This research detects BD automatically from mammogram images on Internet of Medical Things (IoMT) supported devices. Two pretrained deep convolutional neural network models, DenseNet201 and ResNet50, were applied through a transfer learning approach. A total of 322 mammogram images containing 106 fatty, 112 dense, and 104 glandular cases were obtained from the Mammographic Image Analysis Society dataset. Preprocessing prunes irrelevant regions and enhances the target regions. An overall classification accuracy of 90.47% for the BD task is accomplished with the DenseNet201 model. Such a framework is beneficial for identifying BD more rapidly, assisting radiologists and patients without delay. In summary, the study proposed and evaluated IoMT-based breast density detection from mammogram images using two pretrained CNN models, DenseNet201 and ResNet50, applied through transfer learning, with promising results achieved on a benchmark dataset.
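A hedged sketch of the transfer-learning setup follows: a frozen, ImageNet-pretrained DenseNet201 backbone with a small new head for the three breast-density classes. The input size, head layers, and training settings are illustrative assumptions, not the paper's exact configuration.

```python
# Transfer learning with a pretrained DenseNet201 backbone for three
# breast-density classes (fatty, dense, glandular); head size and training
# settings are illustrative, not the paper's exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze ImageNet weights, train only the new head

inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(3, activation="softmax")(x)  # fatty / dense / glandular

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```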
Chapter
Crucial to the development of a technological solution for managing insulin-dependent patients is an appropriate software engineering (SE) process. The SE process supports the design and implementation of a high-quality system by engaging patients and other healthcare professionals to establish the exact specification for development. Insulin-dependent patients face a major challenge in determining the right dose of insulin to inject according to measured glucose levels and the carbohydrate estimated from food intake; the result is often hypoglycaemia or hyperglycaemia, which may lead to coma and even death. The objective of this research work is to develop an adaptive model dedicated to the adjustment of insulin dose requirements in patients with type 1 diabetes. A mathematical model is formulated based on insulin sensitivity, basal insulin, bolus insulin, correction factor, carbohydrate counts, bolus-on-board equations, and continuous blood glucose levels. The model is evaluated using root-mean-square error (RMSE) and mean absolute error (MAE). The results showed correct insulin dose values within the range of 0.18-4.76 IU for the fifty-two (52) patients with T1DM. Consequently, the developed model suggests a reduction in the incidence of hypoglycaemia and can be used to predict the next insulin dose.
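For orientation, the generic bolus-calculator relation commonly used in the diabetes literature (carbohydrate dose plus correction dose minus insulin on board) can be written as below; this is a hedged illustration with made-up parameter values, not the chapter's adaptive model.

```python
# Generic bolus estimate: carbohydrate dose + correction dose - insulin on board.
# Parameter values below are illustrative only, not the chapter's model.
def bolus_dose(carbs_g, bg_mgdl, target_bg_mgdl, icr, isf, iob):
    """
    carbs_g        grams of carbohydrate about to be eaten
    bg_mgdl        current blood glucose (mg/dL)
    target_bg_mgdl desired blood glucose (mg/dL)
    icr            insulin-to-carbohydrate ratio (grams covered per unit)
    isf            insulin sensitivity factor (mg/dL drop per unit)
    iob            insulin on board from earlier boluses (units)
    """
    carb_dose = carbs_g / icr
    correction = max(bg_mgdl - target_bg_mgdl, 0) / isf
    return max(carb_dose + correction - iob, 0.0)

print(round(bolus_dose(60, 180, 110, icr=10, isf=50, iob=1.0), 2))  # units of insulin
```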
Article
Full-text available
Background: In digital mammography, accurate breast profile segmentation is considered a challenging task. The presence of the pectoral muscle may mislead the diagnosis of cancer due to its high similarity to the breast body; its appearance in the mammogram also hampers estimation of the density level and assessment of cancer cells. The discrete differentiation operator has been shown to eliminate the pectoral muscle before the analysis processing. Methods: We propose a novel approach to remove the pectoral muscle from mediolateral-oblique views of a mammogram using a discrete differentiation operator, which detects the edge boundaries and approximates the gradient of the intensity function. Further refinement is achieved using a convex hull technique. The method is implemented on the dataset provided by MIAS and on 20 contrast-enhanced digital mammographic images. Results: To assess the performance of the proposed method, visual inspections by a radiologist as well as calculations based on well-known metrics are used; for the performance metrics, the pixels in the pectoral muscle region of the input scans serve as ground truth. Conclusions: Our approach tolerates an extensive variety of pectoral muscle geometries with a lower risk of bias in the breast profile than existing techniques.
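The general idea of gradient-based edge detection followed by a convex-hull refinement can be sketched as below; the threshold and the way the pectoral region is isolated (assumed to sit in the upper-left corner of the MLO view) are simplifications, not the paper's exact procedure.

```python
# Sketch of gradient-based edge detection plus a convex-hull refinement on a
# mediolateral-oblique mammogram; the threshold and region selection are simplified.
import numpy as np
from skimage.filters import sobel
from skimage.morphology import convex_hull_image

def pectoral_mask(mlo_image):
    """mlo_image: 2-D float array in [0, 1], pectoral muscle in the upper-left corner."""
    gradient = sobel(mlo_image)                 # discrete differentiation operator
    strong_edges = gradient > gradient.mean() + 2 * gradient.std()
    # Restrict to the upper-left quadrant where the pectoral muscle usually lies.
    h, w = mlo_image.shape
    region = np.zeros_like(strong_edges)
    region[: h // 2, : w // 2] = strong_edges[: h // 2, : w // 2]
    return convex_hull_image(region)            # convex refinement of the muscle region

demo = np.random.default_rng(0).random((128, 128))
print(pectoral_mask(demo).sum())
```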
Article
Full-text available
Auscultation of the heart enables identification of cardiac valve conditions. An electronic stethoscope is used to acquire heart murmurs, which are then classified as normal or abnormal. Heart sound segmentation involves a discrete wavelet transform to obtain the individual components of the heart signal and its separation into systole and diastole intervals. This research presents a novel scheme for a semi-automatic cardiac valve disorder diagnosis system in which features are extracted using the wavelet transform and spectral analysis of the input signals. The proposed classification scheme is a fusion of an adaptive neuro-fuzzy inference system (ANFIS) and an HMM; both classifiers are trained on the extracted features to correctly identify normal and abnormal heart murmurs. Experimental results show that the proposed system furnishes promising classification accuracy with excellent specificity and sensitivity, while requiring fewer classification errors, fewer computations, and a lower-dimensional feature set to build an intelligent system for the detection and classification of heart murmurs.
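The wavelet feature-extraction step can be illustrated with PyWavelets, decomposing a phonocardiogram into sub-bands and computing a few simple descriptors per band; the ANFIS/HMM fusion itself is not reproduced, and the demo signal is synthetic.

```python
# Discrete wavelet decomposition of a heart-sound signal into sub-bands and a few
# simple statistical descriptors per band; the ANFIS + HMM classifier fusion
# from the paper is not reproduced here.
import numpy as np
import pywt

def wavelet_features(pcg, wavelet="db4", level=4):
    """pcg: 1-D phonocardiogram samples."""
    coeffs = pywt.wavedec(pcg, wavelet, level=level)
    feats = []
    for band in coeffs:                       # approximation + detail bands
        feats += [np.mean(np.abs(band)), np.std(band), np.sum(band ** 2)]
    return np.array(feats)

fs = 2000
t = np.arange(0, 2, 1 / fs)
demo_murmur = np.sin(2 * np.pi * 60 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
print(wavelet_features(demo_murmur).shape)    # 3 descriptors per sub-band
```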
Article
Full-text available
A tumor can be found in any area of the brain and can be of any size, shape, and contrast, and multiple tumors of different types may exist in a human brain at the same time. Accurate tumor area segmentation is considered the primary step for the treatment of brain tumors. Deep learning comprises a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part of a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over other approaches in this area of research.
Article
Full-text available
Medical imaging plays an integral role in the identification, segmentation, and classification of brain tumors, and the invention of MRI has opened new horizons for brain-related research. Recently, researchers have shifted their focus towards applying digital image processing techniques to extract, analyze, and categorize brain tumors from MRI. Categorization of brain tumors is defined in a hierarchical way, moving from major to minor categories. A plethora of work can be found in the literature on classifying brain tumors into categories such as benign and malignant; however, only a few works report multiclass classification of brain images in which each part of the image containing a tumor is tagged with major and minor categories. Precise classification is difficult to achieve due to ambiguities in the images and the overlapping characteristics of different types of tumors. The current study provides a comprehensive review of recent research on multiclass classification of brain tumors using MRI. These multiclass classification studies are categorized into two major groups, XX and YY, and each group is further divided into three sub-groups. A set of common parameters from the reviewed works is extracted and compared to highlight the merits and demerits of individual works. Based on our analysis, we provide a set of recommendations for researchers and professionals working in the area of brain tumor classification.
Article
Full-text available
Currently, medical imaging equipment produces 3D results for better visualization and treatment, and the need for capable tools to represent these 3D medical images is growing, since viewing human tissues or organs directly is an immense support to medical staff. Consequently, 3D reconstruction from 2D images has attracted several researchers. CT images are normally composed of bone, soft tissue, and background. This paper presents 3D segmentation of the leg bones in computed tomography (CT) images: the bone section is extracted from each 2D slice and placed in 3D space, and surface rendering is applied to the 2D bone slices. The data is visualized via multi-planar reformatting, surface rendering, and hardware-accelerated volume rendering. Finally, the paper presents an enhanced technique to reconstruct 3D bone segmentations in medical images that exhibits promising outcomes compared to the techniques reported in the literature. The proposed technique involves contour extraction, image enhancement, segmentation, outlier reduction, and 3D modelling.
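One common way to obtain a renderable bone surface from a CT volume is to threshold at a bone-like intensity and run marching cubes; the sketch below does exactly that on a synthetic volume, and the 300 HU threshold is a common rule of thumb rather than the paper's value. The full pipeline (contour extraction, outlier reduction, rendering) is not reproduced.

```python
# Threshold a CT volume at a bone-like intensity and extract a triangle mesh with
# marching cubes; the demo volume is synthetic.
import numpy as np
from skimage import measure

rng = np.random.default_rng(0)
volume = rng.normal(loc=0, scale=100, size=(64, 64, 64))   # stand-in for HU values
volume[20:40, 20:40, 20:40] += 800                         # embed a bright "bone" block

verts, faces, normals, values = measure.marching_cubes(volume, level=300)
print(f"{len(verts)} vertices, {len(faces)} triangular faces")
# The (verts, faces) mesh can then be rendered, e.g. with matplotlib's Poly3DCollection.
```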
Article
Full-text available
With the advent of voluminous medical databases, healthcare analytics on big data has become a major research area. Healthcare analytics addresses big data analysis issues by predicting valuable information through data mining and machine learning techniques; such predictions help physicians make the right decisions for successful diagnosis and prognosis of various diseases. In this paper, an evolution-based hybrid methodology is used to develop a healthcare analytic model exploiting data mining and machine learning algorithms: Support Vector Machine (SVM), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The proposed model may assist physicians in diagnosing various types of heart disease and identifying the associated risk factors with high accuracy. The developed model is evaluated against results reported in the literature for diagnosing heart disease, taking the Cleveland heart disease database as a case study. A great prospect of this research is to diagnose any disease in less time using fewer factors or symptoms. The proposed healthcare analytic model is capable of reducing the search space significantly while analyzing big data, so fewer computing resources are consumed.
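To show the flavour of evolution-based feature selection, here is a tiny genetic algorithm whose fitness is SVM cross-validation accuracy. This is a generic sketch on a synthetic dataset, not the paper's GA/PSO hybrid or the Cleveland data.

```python
# Tiny genetic algorithm for feature-subset selection with SVM accuracy as fitness;
# a generic sketch (not the paper's GA/PSO hybrid) on a synthetic classification set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(12, X.shape[1])).astype(bool)     # random feature masks
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]                       # keep the best half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)     # uniform crossover
        flip = rng.random(X.shape[1]) < 0.05                     # mutation
        children.append(np.logical_xor(child, flip))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))
```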
Article
Full-text available
With advances in digital imaging and computing power, computationally intelligent technologies are in high demand for ophthalmology care and treatment. In the current research, a Retina Image Analysis (RIA) tool is developed for optometrists at the Eye Care Center of Management and Science University. This research aims to analyze the retina through vessel detection. RIA assists in the analysis of retinal images, and specialists are offered various options for saving, processing, and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in selecting a vessel segment and processing it by calculating its diameter, standard deviation, and length, and displaying the detected vessel on the retina. The Agile Unified Process is adopted as the development methodology. In conclusion, Retina Image Analysis may help the optometrist gain a better understanding when analyzing the patient's retina. The Retina Image Analysis tool is developed using MATLAB (R2011b), and promising results are attained that are comparable with the state of the art.
Article
Full-text available
Making deductions and predictions about the weather has been a challenge throughout mankind's history; accurate meteorological guidance helps to foresee and handle problems well in time. Different strategies using various machine learning techniques have been investigated in reported forecasting systems, and the current research treats climate as a major challenge for machine information mining and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting; the hybrid design targets the particular characteristics of climate-forecasting frameworks. The study concentrates on data representing Saudi Arabian weather. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed, and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE, and scatter index are the standard yardsticks adopted for measuring forecast accuracy. Individually, the MLP forecasting results are better than those of the RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
Article
Full-text available
Segmentation of objects from a noisy and complex image is still a challenging task that needs to be addressed. This article proposes a new method to detect and segment nuclei in order to determine whether they are malignant or not: the region of interest is determined, noise is removed, the image is enhanced, candidate detection is applied to the centroid transform to evaluate the centroid of each object, and the level set (LS) method is applied to segment the nuclei. The proposed method consists of three main stages: preprocessing, seed detection, and segmentation. The preprocessing stage prepares the image so that it meets the segmentation requirements; seed detection finds the seed points to be used in the segmentation stage, which segments the nuclei using the LS method. In this research work, 58 H&E breast cancer images from the UCSB Bio-Segmentation Benchmark dataset are evaluated. The proposed method shows high performance and accuracy in comparison with the techniques reported in the literature, and the experimental results are also consistent with the ground truth images.
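A hedged sketch of the level-set stage, using skimage's morphological Chan-Vese implementation as a stand-in, is shown below; the paper's seed-detection step is simplified here to the default checkerboard initialization, and the image is synthetic.

```python
# Level-set style nuclei segmentation with skimage's morphological Chan-Vese
# implementation; the paper's seed-detection stage is replaced here by the
# default checkerboard initialization for brevity.
import numpy as np
from skimage.segmentation import morphological_chan_vese
from skimage.filters import gaussian

rng = np.random.default_rng(0)
image = rng.random((128, 128)) * 0.2
image[40:70, 50:85] += 0.7                      # synthetic bright "nucleus"
image = gaussian(image, sigma=2)                # preprocessing: denoise / smooth

mask = morphological_chan_vese(image, 50, init_level_set="checkerboard",
                               smoothing=3)
print("segmented nucleus pixels:", int(mask.sum()))
```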
Article
Full-text available
Malaria parasitemia is the quantitative measurement of parasites in the blood used to grade the degree of infection. Light microscopy is the best-known method for examining the blood for parasitemia quantification, but visual quantification of malaria parasitemia is laborious, time-consuming, and subjective. Although automating the process is a good solution, the available techniques are unable to handle cases such as anemia and hemoglobinopathies, where the RBCs deviate from normal morphology. The main aim of this research is to examine microscopic images of stained thin blood smears using a variety of computer vision techniques, grading malaria parasitemia on factors independent of RBC morphology. The proposed methodology is based on an inductive approach and colour segmentation of malaria parasites through an adaptive Gaussian mixture model (GMM) algorithm. The quantification accuracy of RBCs is improved by splitting occluded RBCs using the distance transform and local maxima, and infected and non-infected RBCs are classified to grade parasitemia properly. Training and evaluation have been carried out on an image dataset with respect to ground truth data, determining the degree of infection with a sensitivity of 98% and a specificity of 97%. The accuracy and efficiency of the proposed automated scheme were proved experimentally, surpassing other state-of-the-art schemes; in addition, the approach addresses the process with factors independent of RBC morphology. This can be considered a low-cost solution for malaria parasitemia quantification in mass examinations.
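The colour-segmentation idea can be illustrated by fitting a Gaussian mixture over pixel RGB values and taking the darker component as the stained class; this is a generic sketch on a synthetic image, not the paper's full adaptive GMM pipeline.

```python
# Colour-based parasite/background separation with a Gaussian mixture model over
# pixel RGB values; a generic sketch, not the paper's full adaptive GMM pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "thin smear" image: pale background with a purple-ish stained region.
image = np.ones((100, 100, 3)) * np.array([0.85, 0.80, 0.82])
image += rng.normal(0, 0.02, (100, 100, 3))
image[40:55, 40:55] = [0.45, 0.25, 0.55]        # roughly Giemsa-stained parasite colour

pixels = image.reshape(-1, 3)
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels).reshape(100, 100)

# The darker component is taken as the stained (parasite) class.
parasite_class = np.argmin(gmm.means_.sum(axis=1))
print("parasite pixels:", int((labels == parasite_class).sum()))
```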
Article
Full-text available
Purpose: Due to the large quantity of data recorded in energy-efficient buildings, understanding the behavior of the various underlying operations has become a complex and challenging task. This paper proposes a method to support the analysis of energy systems and validates it using operational data from a cold-water chiller; the method automatically detects various operation patterns in the energy system. Methods: k-means clustering is proposed to automatically identify the On (operational) cycles of a system operating with a duty cycle. The cycle data is subsequently transformed into symbolic representations using the symbolic aggregate approximation (SAX) method. Afterwards, the symbols are converted to a bag-of-words representation (BoWR) for hierarchical clustering, a gap statistic method is used to find the best number of clusters in the data, and finally the operation patterns of the energy system are grouped together in each cluster. An adsorption chiller operating under real-life conditions supplies the reference data for validation. Results: The proposed method has been compared with the dynamic time warping (DTW) method using cophenetic coefficients, and the BoWR produced better results than DTW. The BoWR results are further investigated, gap statistics are used to find the optimal number of clusters, and interesting patterns of each cluster are discussed in detail. Conclusion: The main goal of this research work is to provide analysis algorithms that automatically find the various patterns in the energy system of a building with as little configuration or field knowledge as possible; a bag-of-words representation method with hierarchical clustering has been proposed to assess the performance of a building energy system.
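For illustration, the SAX step alone can be sketched as below: z-normalize a duty-cycle segment, apply piecewise aggregate approximation, and map segment means to symbols using standard-normal breakpoints. The k-means On/Off detection, bag-of-words, and hierarchical clustering steps are omitted, and the signal is synthetic.

```python
# SAX discretization of one On-cycle: z-normalize, piecewise-aggregate, then map
# segment means to symbols via standard-normal breakpoints (alphabet size 4).
import numpy as np

def sax(series, n_segments=8, breakpoints=(-0.6745, 0.0, 0.6745), alphabet="abcd"):
    z = (series - series.mean()) / (series.std() + 1e-12)        # z-normalization
    segments = np.array_split(z, n_segments)                     # PAA segments
    paa = np.array([seg.mean() for seg in segments])
    symbols = np.searchsorted(breakpoints, paa)                  # bin index per segment
    return "".join(alphabet[i] for i in symbols)

t = np.linspace(0, 1, 200)
chiller_power = 5 + 2 * np.sin(2 * np.pi * 3 * t)                # synthetic duty-cycle signal
print(sax(chiller_power))                                        # a short symbolic "word"
```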
Article
Full-text available
Purpose: Histopathology is the clinical standard for tissue diagnosis; however, it requires tissue processing, laboratory personnel and infrastructure, and a highly trained pathologist to diagnose the tissue. Optical microscopy can provide real-time diagnosis, which could be used to inform the management of breast cancer. The goal of this work is to obtain images of tissue morphology through fluorescence microscopy and vital fluorescent stains and to develop a strategy to segment and quantify breast tissue features in order to enable automated tissue diagnosis. Methods: We combined acriflavine staining, fluorescence microscopy, and a technique called sparse component analysis to segment nuclei and nucleoli, which are collectively referred to as acriflavine positive features (APFs). A series of variables, which included the density, area fraction, diameter, and spacing of APFs, were quantified from images taken from clinical core needle breast biopsies and used to create a multivariate classification model. The model was developed using a training data set and validated using an independent testing data set. Results: The top performing classification model included the density and area fraction of smaller APFs (those less than 7 µm in diameter, which likely correspond to stained nucleoli). When applied to the independent testing set composed of 25 biopsy panels, the model achieved a sensitivity of 82%, a specificity of 79%, and an overall accuracy of 80%. Conclusions: These results indicate that our quantitative microscopy toolbox is a potentially viable approach for detecting the presence of malignancy in clinical core needle breast biopsies.
Conference Paper
Full-text available
To achieve energy efficiency in buildings, a lot of raw data is recorded during their operation. This recorded raw data is then used to analyze the performance of buildings and their different components, e.g. Heating, Ventilation and Air-Conditioning (HVAC). To save time and energy, it is necessary to ensure the resilience of the data by detecting and replacing outliers (i.e. data samples that are not plausible) before detailed analysis. This paper discusses the steps involved in detecting outliers in data obtained from an absorption chiller using its On/Off state information, and proposes a method for automatic detection of the On/Off and/or Missing Data status of the chiller. The technique uses two-layer K-Means clustering to detect the On/Off as well as the Missing Data state of the chiller. After automatic detection of the chiller On/Off cycle, a method for outlier detection is proposed that applies Z-score normalization based on the On/Off cycle state of the chiller and clusters outliers with the Expectation Maximization clustering algorithm. Moreover, the results of filling missing values with regression and with linear interpolation, for short and long periods respectively, are elaborated. All proposed methods are applied to real building data and the results are discussed.
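The Z-score flagging step, taken in isolation, looks roughly like the following; the two-layer K-Means On/Off detection is assumed to have already been done, and the signal is synthetic.

```python
# Z-score outlier flagging within the detected On cycles of a chiller signal;
# the two-layer K-Means On/Off detection itself is assumed to have been done.
import numpy as np

def zscore_outliers(values, threshold=3.0):
    """Return a boolean mask of samples whose |z-score| exceeds the threshold."""
    z = (values - values.mean()) / (values.std() + 1e-12)
    return np.abs(z) > threshold

rng = np.random.default_rng(0)
on_cycle_power = rng.normal(loc=10.0, scale=0.5, size=500)
on_cycle_power[[42, 300]] = [25.0, -3.0]                  # injected implausible samples
print(np.flatnonzero(zscore_outliers(on_cycle_power)))    # indices of the injected outliers
```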
Article
Full-text available
This paper investigates the performance of a machine learning tool, the Naive Bayes classifier, with a new weighted approach for classifying breast cancer. Naive Bayes is one of the most effective classification algorithms, but in many decision-making systems ranking performance is a more interesting and desirable property than plain classification. To extend traditional Naive Bayes and improve its performance, a weighting concept is therefore incorporated, and domain-knowledge-based weight assignment is explored on the breast cancer dataset from the UCI machine learning repository. Breast cancer is considered the second leading cause of death in women today. The experiments show that the weighted Naive Bayes approach outperforms standard Naive Bayes.
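A minimal Gaussian Naive Bayes in which each feature's log-likelihood contribution is scaled by a per-feature weight illustrates the weighted-NB idea; the weights below are arbitrary placeholders, not the paper's domain-knowledge assignment.

```python
# Minimal weighted Gaussian Naive Bayes: each feature's log-likelihood term is
# scaled by a per-feature weight (weights below are arbitrary placeholders).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
weights = np.ones(X.shape[1])
weights[:5] = 2.0                      # pretend domain knowledge favours the first features

classes = np.unique(y_tr)
priors = np.array([np.mean(y_tr == c) for c in classes])
means = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
vars_ = np.array([X_tr[y_tr == c].var(axis=0) + 1e-9 for c in classes])

def predict(X_new):
    # score(c) = log P(c) + sum_j w_j * log N(x_j | mean_cj, var_cj)
    ll = -0.5 * (np.log(2 * np.pi * vars_)[None] +
                 (X_new[:, None, :] - means[None]) ** 2 / vars_[None])
    scores = np.log(priors)[None] + (ll * weights).sum(axis=2)
    return classes[np.argmax(scores, axis=1)]

print("weighted NB accuracy:", np.mean(predict(X_te) == y_te))
```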
Article
Full-text available
We present a workflow for data sanitation and analysis of operation data with the goal of increasing energy efficiency and reliability in the operation of building-related energy systems. The workflow makes use of machine learning algorithms and innovative visualizations. The environment, in which monitoring data for energy systems are created, requires low configuration effort for data analysis. Therefore the focus lies on methods that operate automatically and require little or no configuration. As a result a generic workflow is created that is applicable to various energy-related time series data; it starts with data accessibility, followed by automated detection of duty cycles where applicable. The detection of outliers in the data and the sanitation of gaps ensure that the data quality is sufficient for an analysis by domain experts, in our case the analysis of system energy efficiency. To prove the feasibility of the approach, the sanitation and analysis workflow is implemented and applied to the recorded data of a solar driven adsorption chiller.
Article
Full-text available
Application of personalized medicine requires integration of different data to determine each patient’s unique clinical constitution. The automated analysis of medical data is a growing field where different machine learning techniques are used to minimize the time-consuming task of manual analysis. The evaluation, and often training, of automated classifiers requires manually labelled data as ground truth. In many cases such labelling is not perfect, either because of the data being ambiguous even for a trained expert or because of mistakes. Here we investigated the interobserver variability of image data comprising fluorescently stained circulating tumor cells and its effect on the performance of two automated classifiers, a random forest and a support vector machine. We found that uncertainty in annotation between observers limited the performance of the automated classifiers, especially when it was included in the test set on which classifier performance was measured. The random forest classifier turned out to be resilient to uncertainty in the training data while the support vector machine’s performance is highly dependent on the amount of uncertainty in the training data. We finally introduced the consensus data set as a possible solution for evaluation of automated classifiers that minimizes the penalty of interobserver variability.
Article
Full-text available
A framework for automated detection and classification of cancer from microscopic biopsy images using clinically significant and biologically interpretable features is proposed and examined. The stages involved in the proposed methodology include enhancement of the microscopic images, segmentation of background cells, feature extraction, and finally classification; an appropriate and efficient method is employed in each design step after a comparative analysis of commonly used methods in each category. For highlighting the details of tissues and structures, contrast limited adaptive histogram equalization (CLAHE) is used. For the segmentation of background cells, the k-means segmentation algorithm is used because it performs better than other commonly used segmentation methods. In the feature extraction phase, various biologically interpretable and clinically significant shape- and morphology-based features are extracted from the segmented images, including gray-level texture features, color-based features, color gray-level texture features, Law's Texture Energy based features, Tamura's features, and wavelet features. Finally, the K-nearest neighborhood method is used to classify images into normal and cancerous categories because it performs better than other commonly used methods for this application. The performance of the proposed framework is evaluated using well-known parameters for four fundamental tissues (connective, epithelial, muscular, and nervous) on 1000 randomly selected microscopic biopsy images.
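The first two stages (CLAHE enhancement and k-means pixel clustering) can be sketched as below on a synthetic image; the texture/shape feature extraction and the KNN classifier over whole images are indicated only in comments.

```python
# CLAHE enhancement followed by k-means pixel clustering to separate background
# cells, as a simplified stand-in for the framework's first two stages.
import numpy as np
from skimage import exposure
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
biopsy = rng.random((128, 128)) * 0.3
biopsy[30:60, 40:90] += 0.5                       # synthetic darker/denser tissue region
biopsy = np.clip(biopsy, 0, 1)

enhanced = exposure.equalize_adapthist(biopsy, clip_limit=0.02)   # CLAHE
pixels = enhanced.reshape(-1, 1)
segmented = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
segmented = segmented.reshape(enhanced.shape)
print("pixels per cluster:", np.bincount(segmented.ravel()))
# Texture/shape features (GLCM, Tamura, wavelets, ...) would then be computed per
# segmented region and classified with KNeighborsClassifier into normal vs cancerous.
```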
Article
Full-text available
Health telematics is a growing field that is becoming a major improvement to patients' lives, especially for the elderly, disabled, and chronically ill. In recent years, improvements in information and communication technologies, along with the mobile Internet offering anywhere and anytime connectivity, play a key role in modern healthcare solutions. In this context, mobile health (m-Health) delivers healthcare services, overcoming geographical, temporal, and even organizational barriers. M-Health solutions address emerging problems in health services, including the increasing number of chronic diseases related to lifestyle, the high costs of existing national health services, the need to empower patients and families to self-care and handle their own healthcare, and the need to provide direct access to health services regardless of time and place. This paper presents a comprehensive review of the state of the art in m-Health services and applications. It surveys the most significant research work and presents a deep analysis of the top and novel m-Health services and applications proposed by industry. A discussion of the European Union and United States approaches to the m-Health paradigm and the directives already published is also included, and open and challenging issues in emerging m-Health solutions are proposed for further work.
Article
Full-text available
Offline cursive script recognition and its associated issues remain fresh despite the last few decades of research. This paper presents an annotated comparison of proposed and recently published preprocessing techniques with reported work in offline cursive script recognition. Normally, in offline script analysis, the input is a paper image, a word, or a digit, and the desired output is ASCII text. This task involves several preprocessing steps, some of which are quite hard, such as line removal from text, skew removal, reference line detection (lower/upper baselines), slant removal, scaling, noise elimination, contour smoothing, and skeletonization. Moreover, the subsequent stages of segmentation (if any) and recognition are highly dependent on these preprocessing techniques. This paper presents an analysis and annotated comparison of the latest preprocessing techniques proposed by the authors with those reported in the literature on the IAM/CEDAR benchmark databases. Finally, future work and persistent problems are highlighted.
Article
Full-text available
This paper presents a new, simple, and fast approach for character segmentation of unconstrained handwritten words. The proposed approach first seeks the possible character boundaries based on an analysis of the characters' geometric features. However, due to inherent ambiguity and a lack of context, a few characters are over-segmented. To increase the efficiency of the proposed approach, an Artificial Neural Network is trained with a significant number of valid segmentation points for cursive handwritten words; the trained neural network then removes incorrect segmentation points efficiently and at high speed. For fair comparison, the benchmark CEDAR database is used. The experimental results are promising from both complexity and accuracy points of view.
Article
Full-text available
Cloud computing is a powerful technology to perform massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Massive growth in the scale of data or big data generated through cloud computing has been observed. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The rise of big data in cloud computing is reviewed in this study. The definition, characteristics, and classification of big data along with some discussions on cloud computing are introduced. The relationship between big data and cloud computing, big data storage systems, and Hadoop technology are also discussed. Furthermore, research challenges are investigated, with focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Lastly, open research issues that require substantial research efforts are summarized.
Article
Full-text available
Medical images have made a great impact on medicine, diagnosis, and treatment, and the most important part of image processing is image segmentation. This paper describes the latest segmentation methods applied in medical image analysis. The advantages and disadvantages of each method are described, and each algorithm is examined with its application in Magnetic Resonance Imaging and Computed Tomography image analysis. Each algorithm is explained separately, with its capabilities and features for the analysis of grey-level images. In order to evaluate the segmentation results, some popular benchmark measurements are presented in the final section.
Article
Full-text available
Mobile Cloud Computing (MCC) is aimed at integrating mobile devices with cloud computing. It is one of the most important concepts that have emerged in the last few years. Mobile devices, in the traditional agent-client architecture of MCC, only utilize resources in the cloud to enhance their functionalities. However, modern mobile devices have many more resources than before. As a result, researchers have begun to consider the possibility of mobile devices themselves sharing resources. This is called the cooperation-based architecture of MCC. Resource discovery is one of the most important issues that need to be solved to achieve this goal. Most of the existing work on resource discovery has adopted a fixed choice of centralized or flooding strategies. Many improved versions of energy-efficient methods based on both strategies have been proposed by researchers due to the limited battery life of mobile devices. This paper proposes a novel adaptive method of resource discovery from a different point of view to distinguish it from existing work. The proposed method automatically transforms between centralized and flooding strategies to save energy according to different network environments. Theoretical models of both energy consumption and the quality of response information are presented in this paper. A heuristic algorithm was also designed to implement the new adaptive method of resource discovery. The results from simulations demonstrated the effectiveness of the strategy and the efficiency of the proposed heuristic method.
Article
Visual inspection for the quantification of malaria parasitaemia (MP) and classification of its life cycle stage is hard and time-consuming. Automated techniques for MP quantification and classification have been reported in the literature; however, the reported techniques are either imperfect or cannot deal with special cases such as anemia and hemoglobinopathies caused by clumps of red blood cells (RBCs). The focus of the current work is to examine Giemsa-stained thin blood smear microscopic images using digital image processing techniques, grading MP on factors independent of RBC morphology and classifying its life cycle stage. For classification of the malaria parasite's life cycle stage, k-nearest neighbor, Naive Bayes, and multi-class support vector machine classifiers are employed, using histogram of oriented gradients and local binary pattern features. The proposed methodology is based on an inductive technique, segmenting malaria parasites through adaptive machine learning techniques. The quantification accuracy of RBCs is enhanced by splitting RBC clumps through analysis of concavity regions for focal points, and infected and non-infected RBCs are classified to grade MP precisely. Training and testing of the proposed approach on a benchmark dataset with respect to ground truth data yield 96.75% MP sensitivity and 94.59% specificity. Additionally, the proposed approach addresses the process with factors independent of RBC morphology, making it an economical solution for MP grading in mass testing.
Article
Glaucoma is a neurodegenerative illness and is considered one of the most widely recognized causes of visual impairment. Nerve degeneration is an irreversible process, so diagnosis of the illness at an early stage is an absolute requirement to avoid permanent loss of vision. Glaucoma is caused mainly by increased intraocular pressure; if it is not detected and treated early, it can result in visual impairment. There are not always evident symptoms of glaucoma, so patients often seek treatment only when the severity of the disease has advanced significantly. Diagnosis of glaucoma often comprises a review of the structural deterioration of the nerve in conjunction with an examination of visual function. This article presents a detailed account of glaucoma, its symptoms, and the people potentially prone to this disease. The essence of this article is the different classification methods being utilized and proposed by various scientists for the identification of glaucoma; it reviews several segmentation and classification methodologies that are exceptionally useful for the recognition, identification, and diagnosis of glaucoma. Research related to the findings and the treatment is likewise evaluated in this article.
Article
Identifying abnormalities in breast mammography is a challenging task for radiologists due to the nature of the images. A more consistent and precise imaging-based CAD system plays a vital role in the classification of doubtful breast masses. In the proposed CAD system, pre-processing is performed to suppress noise in the mammographic image. Segmentation then locates the tumor in the mammogram using a cascade of Fuzzy C-Means (FCM) and region-growing (RG) algorithms called FCMRG. The feature extraction step identifies important and distinct elements using the Local Binary Pattern Gray-Level Co-occurrence Matrix (LBP-GLCM) and Local Phase Quantization (LPQ), and hybrid features are obtained from these techniques. The mRMR algorithm is employed to choose suitable features from the individual and hybrid feature sets. The selected feature sets are analysed with various machine learning procedures to separate malignant tumors from benign ones. The classifiers are probed on 109 and 72 images of the MIAS and DDSM databases respectively using k-fold (10-fold) cross-validation. An enhanced classification accuracy of 98.2% is achieved for the MIAS dataset using hybrid features classified by a Decision Tree, whereas 95.8% accuracy is obtained for the DDSM dataset using a KNN classifier applied to LPQ features.
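One texture descriptor from such a pipeline, gray-level co-occurrence matrix (GLCM) statistics, can be sketched as below and fed to a Decision Tree; the LBP-GLCM fusion, LPQ, FCMRG segmentation, and mRMR selection are not reproduced, and the ROIs and labels are random placeholders.

```python
# GLCM texture statistics per ROI classified with a Decision Tree; a simplified
# stand-in for the LBP-GLCM / LPQ + mRMR pipeline described above.
# (Older scikit-image versions spell these functions greycomatrix/greycoprops.)
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier

def glcm_features(roi_uint8):
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
rois = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, size=40)              # 0 = benign, 1 = malignant (placeholder)

X = np.stack([glcm_features(r) for r in rois])
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```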
Article
Background: Knee bone diseases are rare but can be highly destructive. Magnetic Resonance Imaging (MRI) is the main approach for identifying knee cancer and planning its treatment. Normally, knee cancers are detected with the help of different MRI analysis techniques, and image analysis strategies subsequently assess these images. Discussion: Computer-based medical image analysis is attracting researchers' interest due to its advantages in speed and accuracy over traditional techniques. The focus of the current research is MRI-based medical image analysis for knee bone disease detection; accordingly, several approaches for feature extraction and segmentation of knee bone cancer are analyzed and compared on a benchmark database. Conclusion: Finally, the current state of the art is investigated and future directions are proposed.
Article
Malaria parasitemia diagnosis and grading are hard and still far from perfect, and inaccurate diagnosis and grading cause a tremendous death rate, particularly among young children worldwide. The current research reviews in depth automated malaria parasitemia diagnosis and grading in thin blood smear digital images through image analysis and computer vision techniques. The state of the art reveals that currently proposed practices present only partial or morphology-dependent solutions to the problem of computer-vision-based microscopy diagnosis of malaria parasitemia. Accordingly, a deep appraisal of current practices is conducted, compared, and analyzed on benchmark datasets; the open gaps are highlighted and future directions are laid down for a complete automated microscopy diagnosis of malaria parasitemia based on factors that are not affected by other diseases. Moreover, a general computer vision framework to perform malaria parasitemia estimation and grading is constructed along universal lines. Finally, remaining problems are highlighted and possible directions are suggested. Research highlights: the current research reviews microscopic malaria parasitemia diagnosis and grading in thin blood smear digital images through image analysis and computer vision techniques; the open gaps are highlighted and future directions for a complete automated microscopy diagnosis of malaria parasitemia are given.
Article
Splitting rouleaux RBCs (chains of RBCs) away from single RBCs, and their further subdivision, is a challenging area in computer-assisted diagnosis of blood; the task arises in complete blood counts and in anemia, leukemia, and malaria tests. Several automated techniques are reported in the state of the art for this task, but they suffer from either under- or over-splitting. The current research presents a novel approach to split rouleaux RBCs precisely, which are frequently observed in thin blood smear images, addressing the rouleaux-splitting problem in a realistic, efficient, and automated way by considering the distance transform and the local maxima of the rouleaux RBCs. Rouleaux RBCs are split by taking their local maxima as the centres from which circles are drawn using the mid-point circle algorithm; the resulting circles are then mapped to the single RBCs in the rouleaux to preserve their original shape at high accuracy. The results of the proposed approach on a standard dataset are presented and analyzed statistically, achieving an average recall of 0.059, an average precision of 0.067, and an F-measure of 0.063 against ground truth with visual inspection.
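The distance-transform-plus-local-maxima idea for separating touching round cells can be sketched with a marker-controlled watershed, as below; the paper's mid-point-circle drawing and shape-mapping steps are not reproduced, and the mask is synthetic.

```python
# Split touching round cells: distance transform -> local maxima as markers ->
# marker-controlled watershed. The paper's mid-point-circle mapping is not shown.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic binary mask of two overlapping "RBCs".
yy, xx = np.mgrid[0:100, 0:100]
mask = ((xx - 40) ** 2 + (yy - 50) ** 2 < 15 ** 2) | ((xx - 62) ** 2 + (yy - 50) ** 2 < 15 ** 2)

distance = ndi.distance_transform_edt(mask)
peaks = peak_local_max(distance, labels=mask.astype(int), min_distance=10)
markers = np.zeros_like(mask, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one marker per cell centre

split = watershed(-distance, markers, mask=mask)
print("cells found:", split.max())                       # expected: 2
```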
Article
Arabic writer identification and its associated tasks remain fresh due to the huge variety of Arabic writers' styles. The current research presents a fusion of statistical features, extracted from fragments of Arabic handwriting samples, to identify the writer using a fuzzy ARTMAP classifier. Fuzzy ARTMAP is a supervised neural model especially suited to classification problems; it is fast to train and needs fewer training epochs to "learn" from the input data for generalization. The extracted features are fed to fuzzy ARTMAP for training and testing. Fuzzy ARTMAP is employed for the first time, along with a novel fusion of statistical features, for Arabic writer identification. The entire IFN/ENIT database is used in the experiments, such that 75% of the handwritten Arabic words from 411 writers are employed for training and 25% for testing, selected at random. Several combinations of extracted features are tested using the fuzzy ARTMAP classifier, and one combination exhibits a promising accuracy of 94.724% for Arabic writer identification on the IFN/ENIT benchmark database.
Article
Building Your Next Big Thing with Google Cloud Platform shows you how to take advantage of the Google Cloud Platform technologies to build all kinds of cloud-hosted software and services for both public and private consumption. Whether you need a simple virtual server to run your legacy application or you need to architect a sophisticated high-traffic web application, Cloud Platform provides all the tools and products required to create innovative applications and a robust infrastructure to manage them. Google is known for the scalability, reliability, and efficiency of its various online products, from Google Search to Gmail, and the results are impressive: Google Search, for example, returns results literally within fractions of a second. How is this possible? Google custom-builds both hardware and software, including servers, switches, networks, data centers, the operating system's stack, application frameworks, applications, and APIs. Have you ever imagined what you could build if you were able to tap the same infrastructure that Google uses to create and manage its products? Now you can. Using this book as your compass, you can navigate your way through the Google Cloud Platform and turn your ideas into reality. The authors, both Google Developer Experts in Google Cloud Platform, systematically introduce the various Cloud Platform products one at a time and discuss their strengths and the scenarios where they are a suitable fit. Rather than a manual-like "tell all" approach, the emphasis is on how to get things done so that you get up to speed with Google Cloud Platform as quickly as possible. You will learn how to use the following technologies, among others:
• Google Compute Engine
• Google App Engine
• Google Container Engine
• Google App Engine Managed VMs
• Google Cloud SQL
• Google Cloud Storage
• Google Cloud Datastore
• Google BigQuery
• Google Cloud Dataflow
• Google Cloud DNS
• Google Cloud Pub/Sub
• Google Cloud Endpoints
• Google Cloud Deployment Manager
• Author on Google Cloud Platform
• Google APIs and Translate API
Using real-world examples, the authors first walk you through the basics of cloud computing, cloud terminologies, and public cloud services. Then they dive right into Google Cloud Platform and how you can use it to tackle your challenges, build new products, analyze big data, and much more. Whether you're an independent developer, a startup, or a Fortune 500 company, you have never had easier access to world-class production, product development, and infrastructure tools. Google Cloud Platform is your ticket to leveraging your skills and knowledge into making reliable, scalable, and efficient products, just like how Google builds its own products.