Article

Cloud-based decision support system for the detection and classification of malignant cells in breast cancer using breast cytology images

... Technology also involves creating support for the systems [78] and developing mobile devices that can be used in the healthcare sector, such as wearable sensors [79]. Further, it includes the use of Internet of Things (IoT) [80], cloud storage [81], and Big Data [82] in the areas of e-health. ...
... It is vital that users are confident that the systems used are protected [100] and authenticated to ensure the privacy of stored data [51,101]. The cloud system facilitates data storage at a low cost while also being secure and private [42,81,84]. ...
Article
Full-text available
E-health can be defined as a set of technologies applied with the help of the internet, in which healthcare services are provided to improve quality of life and facilitate healthcare delivery. As there is a lack of similar studies on the topic, this analysis uses a systematic literature review of articles published from 2014 to 2019 to identify the most common e-health practices used worldwide, as well as the main services provided, diseases treated, and the associated technologies that assist in e-health practices. Some of the key results were the identification of the four most common practices used (mhealth or mobile health; telehealth or telemedicine; technology; and others) and the most widely used technologies associated with e-health (IoT, cloud computing, Big Data, security, and systems).
... Medical imaging analysis plays a significant role in detecting abnormalities in different organs of the body, such as blood cancer [1][2][3][4][5][6][7], skin cancer [10,[22][23][24][25][26], breast cancer [11,16,17,19], brain tumor [8,9,[13][14][15]18,20,77], lung cancer [12,21], retina [27][28][29]50], etc. Organ abnormalities often result in rapid tumor growth, which is a leading cause of death worldwide [30]. ...
... The process of segmentation is the most important step of a machine-assisted system for enhancing accuracy and reducing false positives in detecting abnormalities [17]. Numerous studies recommend the GLCM method for describing texture-based features [11,40,64]. Similarly, LBP is another remarkable texture-extraction mechanism used to isolate benign masses from malignant ones [112]. ...
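The GLCM texture descriptor named in the snippet above can be illustrated in a few lines of NumPy. This is a minimal sketch of the general technique (a co-occurrence matrix plus two Haralick-style statistics), not the cited papers' implementations; the offsets, gray-level count, and feature choices are illustrative assumptions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Minimal gray-level co-occurrence matrix: counts how often gray level i
    co-occurs with gray level j at offset (dx, dy), normalized to probabilities."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_features(g):
    """Two classic texture statistics derived from a GLCM."""
    i, j = np.indices(g.shape)
    contrast = np.sum(g * (i - j) ** 2)   # high for rough texture
    energy = np.sum(g ** 2)               # high for uniform texture
    return float(contrast), float(energy)

# A perfectly uniform patch has zero contrast and maximal energy.
img = np.zeros((8, 8), dtype=int)
contrast, energy = glcm_features(glcm(img, levels=4))
```

In a full pipeline, such statistics (often computed at several offsets and angles) would form the feature vector passed to a classifier.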
Article
Full-text available
Cancer is a fatal illness often caused by an aggregation of genetic disorders and a variety of pathological changes. Cancerous cells are abnormal growths that may arise in any part of the human body and are life-threatening. Cancer, also known as a tumor, must be detected quickly and correctly in its initial stage to identify what might be beneficial for its cure. Complicated histories, improper diagnostics, and improper treatment remain major causes of death. The aim of this research is to analyze, review, categorize, and address current developments in human-body cancer detection using machine learning techniques for breast, brain, lung, liver, and skin cancer and leukemia. The study highlights how cancer diagnosis and the cure process are assisted by machine learning with supervised, unsupervised, and deep learning techniques. Several state-of-the-art techniques are categorized under the same cluster, and results are compared on benchmark datasets using accuracy, sensitivity, specificity, and false-positive metrics. Finally, challenges are highlighted for possible future work.
... Fine needle aspiration breast cytology was used for outpatients in 89.6 % of cases. ...
... According to the results of evaluating the effectiveness of the cytological method for diagnosing breast neoplasms, the sensitivity of the method is 80–100 %, specificity 99 %, positive predictive value 99 %, negative predictive value 96 %, and accuracy 98 % [1,[3][4][5][6][7]. ...
Article
A retrospective analysis of the use of fine needle aspiration breast cytology is presented in this work. The potential of cytological diagnostics according to the Yokohama system, with characteristics of categories C1–C5, was estimated. The cytological conclusions of 4,778 patients with breast lesions who had been examined at the Altay oncological dispensary during one year were studied. Fine needle aspiration breast cytology was used for outpatients in 89.6 % of cases. The largest number of patients with pathological changes in the breast fell into category C2, benign processes (75.7 % of all cases). Cases difficult for cytological study, where the method could not guarantee diagnostic accuracy, belong to categories C3 and C4 (1.9 % of all cases); for these, the cytological conclusion recommended compulsory use of core biopsy. Malignant tumors were identified in 853 (19.9 %) patients, with an indication of the histological tumor type. Thus, the cytological technique (as part of the Triple test) should be chosen for outpatients with breast diseases using the Yokohama reporting system (categories C1–C5) of fine needle aspiration cytology.
... Giemsa mainly stains nuclei, staining RBCs in pink and giving a blue tone to chromatin (Abbas et al., 2015). In several fields, it has been recorded that manual microscopy is not considered a dependable screening method when carried out by nonexperts (Rad, Rahim, Rehman, Altameem, & Saba, 2013; Saba et al., 2019; Soleimanizadeh, Mohamad, Saba, & Rehman, 2015; Ullah et al., 2019). ...
Article
Malaria is a serious worldwide disease caused by the bite of a female Anopheles mosquito. The parasite goes through a complex life cycle in which it grows and reproduces in the human body. The detection and recognition of Plasmodium species are made possible and efficient through a staining process (Giemsa). Staining slightly colorizes the red blood cells (RBCs) but highlights Plasmodium parasites, white blood cells, and artifacts. Giemsa stains nuclei and chromatin in blue tones and RBCs in pink. It has been reported in numerous studies that manual microscopy is not a trustworthy screening technique when performed by nonexperts. Malaria parasites host in RBCs once they enter the bloodstream. This paper presents segmentation of the Plasmodium parasite from thin blood smears based on region growing and a dynamic convolution-based filtering algorithm. After segmentation, the malaria parasite is classified into four Plasmodium species: Plasmodium falciparum, Plasmodium ovale, Plasmodium vivax, and Plasmodium malariae. Random forest and K-nearest neighbor classifiers are used, based on local binary pattern (LBP) and hue-saturation-value (HSV) features. The sensitivity for malaria parasitemia (MP) is 96.75 % in training and testing of the proposed approach, while specificity is 94.59 %. In addition, a comparison of the two features is included, with a sensitivity of 83.60 % and a specificity of 94.90 % obtained through the random forest classifier based on the LBP feature alone.
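The LBP descriptor used in this abstract can be sketched compactly: each pixel is encoded by comparing it with its eight neighbours, and a histogram of the resulting codes serves as the texture feature vector fed to a random forest or K-NN classifier. This is a generic, simplified sketch of LBP, not the paper's exact implementation; image sizes and parameters are illustrative.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern codes for the interior pixels.
    Each neighbour >= centre contributes one bit to an 8-bit code."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for k, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(int) << k
    return codes

def lbp_histogram(img, bins=256):
    """Normalized 256-bin histogram of LBP codes, used as a feature vector."""
    hist, _ = np.histogram(lbp_codes(img), bins=bins, range=(0, 256))
    return hist / hist.sum()

# Toy grayscale patch standing in for a stained blood-smear region.
img = np.random.default_rng(0).integers(0, 255, (32, 32))
hist = lbp_histogram(img)   # 256-dim feature vector, sums to 1
```

The resulting histograms (optionally concatenated with HSV color statistics) would then be stacked into a feature matrix for training the classifiers named in the abstract.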
... Deep learning is an emerging technology in machine learning and has attracted significant attention in medical image processing, especially for brain tumor detection (Iqbal, Ghani, Saba, & Rehman, 2018; Khan et al., 2020). Additionally, in the area of medical image processing, machine learning has attracted high attention and achieved notable performance for tumor detection across various modalities such as dermoscopy (Afza, Khan, Sharif, & Rehman, 2019; Javed, Rahim, Saba, & Rehman, 2020), MRI (Nazir, Khan, Saba, & Rehman, 2019), and mammography (Marie-Sainte, Saba, Alsaleh, Alotaibi, & Bin, 2019; Saba, Khan, Islam, et al., 2019). Saba, Mohamed, et al. (2020) introduced a deep learning model for segmentation of different brain modalities. ...
Article
Brain tumor is one of the most dreadful forms of cancer and has caused a huge number of deaths among children and adults over the past few years. According to WHO figures, about 700,000 people are living with a brain tumor and around 86,000 have been diagnosed since 2019, while the total number of deaths due to brain tumors since 2019 is 16,830 and the average survival rate is 35 %. Therefore, automated techniques are needed to grade brain tumors precisely from MRI scans. In this work, a new deep learning-based method is proposed for microscopic brain tumor detection and tumor type classification. A 3D convolutional neural network (CNN) architecture is designed in the first step to extract the brain tumor, and the extracted tumors are passed to a pretrained CNN model for feature extraction. The extracted features are transferred to a correlation-based selection method, and the best features are selected as output. These selected features are validated through a feed-forward neural network for final classification. Three BraTS datasets (2015, 2017, and 2018) are utilized for experiments and validation, achieving accuracies of 98.32 %, 96.97 %, and 92.67 %, respectively. A comparison with existing techniques shows the proposed design yields comparable accuracy.
... Experiments were conducted on the IFN/ENIT database and an accuracy of 94.72 % was reported. Given the complexities of Arabic cursive handwriting styles, writing speed, touching and overlapping characters, and other inherent properties, a preprocessing strategy is needed alongside intelligent techniques such as neural networks [33][34][35][36][37], GA [38][39][40], and SVM [41][42][43] to enhance accuracy. However, detecting character boundaries in touched and overlapped consecutive characters makes this issue more crucial. ...
Preprint
Online Arabic cursive character recognition is still a big challenge due to existing complexities, including Arabic cursive script styles, writing speed, writer mood, and so forth. Due to these unavoidable constraints, the accuracy of online Arabic character recognition is still low and leaves room for improvement. In this research, an enhanced method is proposed for detecting the desired critical points from vertical and horizontal direction-length stroke features of handwriting for online Arabic script recognition. Each extracted stroke feature divides every isolated character into meaningful patterns known as tokens. A minimal feature set is extracted from these tokens for classification of characters using a multilayer perceptron with a back-propagation learning algorithm and a modified sigmoid activation function. In this work, two milestones are achieved: first, attaining a fixed number of tokens; second, minimizing the number of the most repetitive tokens. For the experiments, handwritten Arabic characters are selected from the OHASD benchmark dataset to test and evaluate the proposed method. The proposed method achieves an average accuracy of 98.6 %, comparable to state-of-the-art character recognition techniques.
... In a global network, more security issues may arise due to Internet of Things (IoT)-enabled devices, as online systems are more attractive to intruders; devices such as microwaves, blenders, wearables, clothing, and cognitive buildings access the internet openly. Therefore, extensive research in IoT security is the need of the day. This requires intelligent systems to ensure reliability and to protect network infrastructure against intrusion. ...
Article
Network security is the main issue in Internet of Things networks because most manufacturers do not focus on security standards during design. The performance of firewalls, intrusion prevention systems, and intrusion detection systems depends upon accuracy; thus it is essential to enhance the detection rate and reduce false alarms. The objective of a firewall is to find violations according to predefined rules and block incoming risky traffic. However, it is very difficult to differentiate between malicious and regular traffic due to advanced attack techniques. To address these issues, a two-stage hybrid method is proposed. First, a genetic algorithm (GA) is applied to select appropriate features to improve the accuracy of the proposed framework. Next, well-known machine learning (ML) algorithms, including the support vector machine (SVM), an ensemble classifier, and a decision tree, are employed. The achieved accuracy is 99.8 % through 10-fold cross-validation on the multiclass NSL-KDD database.
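The first stage of such a pipeline, GA-based feature selection, can be sketched as a tiny genetic algorithm over binary feature masks. This is a heavily simplified sketch under stated assumptions: the data is synthetic (not NSL-KDD), and a class-mean-separation score stands in for the classifier-accuracy fitness a real system would use.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic traffic records: only the first 3 of 10 features are informative.
X0 = rng.normal(0, 1, (100, 10)); X0[:, :3] += 2.0   # "attack" class
X1 = rng.normal(0, 1, (100, 10))                     # "normal" class

def fitness(mask):
    """Score a feature subset by class-mean separation, minus a small
    penalty per selected feature (stand-in for classifier accuracy)."""
    sel = mask.astype(bool)
    if not sel.any():
        return 0.0
    d = X0[:, sel].mean(0) - X1[:, sel].mean(0)
    return float(np.linalg.norm(d)) - 0.2 * sel.sum()

# Minimal GA: elitist selection plus bit-flip mutation on feature masks.
pop = rng.integers(0, 2, (20, 10))
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]       # keep the fittest half
    children = parents.copy()
    children[rng.random(children.shape) < 0.1] ^= 1   # mutate children
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]   # selected feature mask
```

In the second stage, the columns picked out by `best` would be fed to an SVM, ensemble, or decision-tree classifier. A production version would add crossover and use cross-validated accuracy as the fitness.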
... In their review study, Long and Ehrenfeld (2020) claimed that prediction through artificial intelligence methods might reduce the effects of this pandemic crisis. Accordingly, several automatic infection-detection classifiers and forecasting models with different scopes are reported in the literature (Saba, Bokhari, Sharif, Yasmin, & Raza, 2018; Saba, Khan, et al., 2019; Mashood Nasir, et al., 2020). ...
Article
COVID‐19 has impacted the world in many ways, including loss of lives, economic downturn, and social isolation. COVID‐19 emerged from SARS‐CoV‐2, a highly infectious virus that caused a pandemic. Every country tried to control the spread of COVID‐19 by imposing different types of lockdowns. Therefore, there is an urgent need to forecast the daily confirmed infected cases and deaths under different types of lockdown, in order to select the most appropriate lockdown strategies to control the intensity of this pandemic and reduce the burden on hospitals. Currently, three types of lockdown (partial, herd, complete) are imposed in different countries. In this study, three countries from each type of lockdown were studied by applying time‐series and machine learning models, namely random forests, K‐nearest neighbors, SVM, decision trees (DTs), polynomial regression, Holt‐Winters, ARIMA, and SARIMA, to forecast daily confirmed infected cases and deaths due to COVID‐19. The models' accuracy and effectiveness were evaluated by error based on three performance criteria. A single forecasting model could not capture the trends of all datasets due to the varying nature of the datasets and lockdown types. The three top‐ranked models were used to predict the confirmed infected cases and deaths; the outperforming models were also adopted for out‐of‐sample prediction and obtained results very close to the actual values of cumulative infected cases and deaths due to COVID‐19. This study proposes promising models for forecasting and the best lockdown strategy to mitigate the casualties of COVID‐19. An optimized dataset model is proposed to predict lung infection and death cases due to COVID‐19 ten days ahead, and the predicted results are very close to the actual values. The best policy for controlling infection and death cases was the herd lockdown strategy.
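The out-of-sample setup described above (fit on history, forecast ten days ahead, score by error) can be illustrated with the simplest of the listed models, polynomial regression. The data below is synthetic and purely illustrative, not real COVID-19 counts; the degree and horizon are assumptions.

```python
import numpy as np

# Synthetic cumulative-case curve: roughly quadratic growth plus noise.
rng = np.random.default_rng(2)
t = np.arange(60, dtype=float)
cases = 50.0 * t + 0.8 * t ** 2 + rng.normal(0.0, 20.0, 60)

# Fit a degree-2 polynomial regression on the first 50 days, then
# forecast the next 10 days out-of-sample.
coef = np.polyfit(t[:50], cases[:50], deg=2)
pred = np.polyval(coef, t[50:])

# Mean absolute percentage error on the 10 held-out days.
mape = float(np.mean(np.abs(pred - cases[50:]) / np.abs(cases[50:]))) * 100.0
```

The same fit/hold-out/score loop, repeated over ARIMA, SARIMA, Holt-Winters, and the ML models, is what allows the top-ranked models per country to be identified.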
... These techniques are helpful for learning deep spatial features of input images from the training dataset (Denton, Chintala, Szlam, & Fergus, 2015), e.g., the cycle adversarial generative network (CycleGAN) for low-to-high-resolution and image-to-image transformation using unpaired input data, and an unsupervised model for image-to-image translation (UNIT) (Liu & Tuzel, 2016). Various medical imaging problems have been addressed using GAN-based models (Fahad, Khan, Saba, Rehman, & Iqbal, 2018; Ullah et al., 2019; Saba, Khan, Islam, et al., 2019). ...
Article
Full-text available
With the evolution of deep learning technologies, computer vision-related tasks have achieved tremendous success in the biomedical domain. Supervised deep learning training requires a large number of labeled samples, and obtaining a large labeled dataset is challenging. This limited availability of data makes it difficult to build and enhance an automated disease diagnosis model's performance. To synthesize data and improve a disease diagnosis model's accuracy, we propose a novel approach for generating images for three different stages of Alzheimer's disease using deep convolutional generative adversarial networks. The proposed model performs well in synthesizing brain positron emission tomography (PET) images for all three stages of Alzheimer's disease: normal control (CN), mild cognitive impairment (MCI), and Alzheimer's disease (AD). The model's performance is measured using a classification model that achieved an accuracy of 72 % against synthetic images. We also experimented with quantitative measures, namely peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). We achieved average PSNR scores of 82 for AD, 72 for CN, and 73 for MCI, and average SSIM scores of 25.6 for AD, 22.6 for CN, and 22.8 for MCI. The proposed approach detects three different stages of Alzheimer's disease using deep convolutional generative adversarial networks and performs well in synthesizing brain PET images for all Alzheimer's disease stages on the ADNI dataset.
... In the area of machine learning (ML), the performance of a system depends on the extraction of the most discriminative features [41][42][43][44][45]. In medical imaging, mostly color, texture, and shape features are used for the classification process [46][47][48][49][50], but these techniques do not perform well when the processed data is very large. ...
Article
Full-text available
Cancer has been one of the leading causes of death over the last two decades. It is diagnosed as either malignant or benign, depending upon the severity of the infection and the current stage. The conventional methods require a detailed physical inspection by an expert dermatologist, which is time-consuming and imprecise. Therefore, several computer vision methods have been introduced lately, which are cost-effective and reasonably accurate. In this work, we propose a new automated approach for skin lesion detection and recognition using a deep convolutional neural network (DCNN). The proposed cascaded design incorporates three fundamental steps: (a) contrast enhancement through fast local Laplacian filtering (FlLpF) along with HSV color transformation; (b) lesion boundary extraction using a color CNN approach following an XOR operation; (c) in-depth feature extraction by applying transfer learning using the Inception V3 model, prior to feature fusion using a hamming distance (HD) approach. An entropy-controlled feature selection method is also introduced for the selection of the most discriminant features. The proposed method is tested on the PH2 and ISIC 2017 datasets, whereas the recognition phase is validated on the PH2, ISBI 2016, and ISBI 2017 datasets. From the results, it is concluded that the proposed method outperforms several existing methods, attaining accuracies of 98.4 % on the PH2 dataset, 95.1 % on the ISBI 2016 dataset, and 94.8 % on the ISBI 2017 dataset.
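The entropy-controlled feature selection step mentioned in this abstract can be sketched in its simplest form: score each feature column by its Shannon entropy and keep the top-k. This is a generic, simplified stand-in for the paper's method (whose exact control scheme is not specified here); the data and parameters are illustrative.

```python
import numpy as np

def entropy(col, bins=16):
    """Shannon entropy (bits) of one feature column via a histogram estimate."""
    counts, _ = np.histogram(col, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def select_by_entropy(X, k):
    """Keep the k highest-entropy feature columns; constant (zero-entropy)
    columns carry no information and are dropped first."""
    scores = np.array([entropy(X[:, j]) for j in range(X.shape[1])])
    return np.argsort(scores)[-k:], scores

rng = np.random.default_rng(3)
X = np.hstack([rng.normal(0, 1, (500, 4)),   # spread-out, informative columns
               np.zeros((500, 2))])          # constant columns, entropy 0
idx, scores = select_by_entropy(X, 4)        # picks the 4 Gaussian columns
```

In the cited pipeline, a step of this kind would rank the deep features extracted by the pretrained network before the final classifier is trained.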
... Later, the three algorithms that had the best accuracy were ensembled in the cloud environment. Saba et al. [27] discussed a framework in which breast cancer cells can be detected and classified using cytology images. Furthermore, features that incorporate shape were used to detect tumor cells using ANNs and an NB classifier. ...
Article
Full-text available
Globally, breast cancer is one of the most significant causes of death among women. Early detection accompanied by prompt treatment can reduce the risk of death due to breast cancer. Currently, machine learning in cloud computing plays a pivotal role in disease diagnosis, predominantly for people living in remote areas where medical facilities are scarce. Diagnosis systems based on machine learning act as secondary readers and assist radiologists in the proper diagnosis of diseases, whereas cloud-based systems can support telehealth services and remote diagnostics. Techniques based on artificial neural networks (ANN) have attracted many researchers to explore their capability for disease diagnosis. The extreme learning machine (ELM) is one of the variants of ANN with huge potential for solving various classification problems. The framework proposed in this paper amalgamates three research domains: first, ELM is applied for the diagnosis of breast cancer; second, the gain ratio feature selection method is employed to eliminate insignificant features; last, a cloud computing-based system for remote diagnosis of breast cancer using ELM is proposed. The performance of the cloud-based ELM is compared with some state-of-the-art technologies for disease diagnosis. The results achieved on the Wisconsin Diagnostic Breast Cancer (WBCD) dataset indicate that the cloud-based ELM technique outperforms the others. The best ELM performance results were found and compared for both the standalone and cloud environments. The key experimental findings indicate an accuracy of 0.9868, a recall of 0.9130, a precision of 0.9054, and an F1-score of 0.8129.
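The ELM named in this abstract is simple enough to sketch in full: a random, untrained hidden layer followed by output weights solved in closed form by least squares (no backpropagation). The sketch below uses synthetic two-class data as a stand-in for the WBCD features; the hidden size and data shapes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for the 30 WBCD features.
X = rng.normal(0, 1, (200, 30))
y = (X[:, :5].sum(axis=1) > 0).astype(float)   # toy linear decision rule
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]

def elm_train(X, y, hidden=64):
    """Extreme learning machine: random input weights, closed-form
    output weights via the Moore-Penrose pseudoinverse."""
    W = rng.normal(0, 1, (X.shape[1], hidden))   # random, never updated
    b = rng.normal(0, 1, hidden)
    H = np.tanh(X @ W + b)                       # random hidden activations
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(float)

W, b, beta = elm_train(Xtr, ytr)
acc = float((elm_predict(Xte, W, b, beta) == yte).mean())
```

Training reduces to one matrix pseudoinverse, which is why ELM is attractive for the fast, repeated retraining a cloud diagnosis service might need; the real system would first apply gain ratio feature selection to the WBCD columns.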
... The results show that the automated DL method using K-means clustering with MSVM performs better than the decision tree model. This study [18] proposed a framework for a cloud-based decision support system for the detection and classification of malignant cells in breast cancer using breast cytology images. The proposed system utilized shape-based features for tumor cell detection. ...
Article
Full-text available
Breast cancer is becoming a serious problem and is a main cause of death among women around the world. Hence, various devices are being utilized for the detection of breast cancer at an earlier stage, and diagnosing it early may even result in a complete cure of the disease. Among the wide range of devices available, the mammogram is one of the most commonly employed and most effective approaches for the detection of breast cancer. It records the affected area in the form of mammogram images, and these images are processed through image processing techniques for the detection of cancer-affected regions. In this paper, novelties are given in all the image processing aspects: filtering, segmentation, feature extraction, and classification. The salt-and-pepper noise in the mammogram images is eliminated by a novel decision-based partial median filter. The filtered images are then segmented using a novel technique formed by integrating the deep learning techniques of VGG-16 and a series network. Features of the segmented images are extracted through BAT-SURF feature extraction, where the orientation of the interest points is obtained using the Bat optimization algorithm along with SURF (Speeded-Up Robust Features). It extracts the most important key points from the SURF features, and the extracted image is then classified using a novel gradient descent decision tree classifier, in which a stable learning path is provided for easy convergence. The performance of the proposed system is analyzed using metrics such as accuracy, specificity, sensitivity, recall, precision, Jaccard coefficient, F-score, and missed classification. Based on the results obtained, the proposed work attains enhanced results compared with other state-of-the-art approaches.
The accuracy of the proposed hybrid VGG-16 and series network segmentation technique is 96.45 %, and the accuracy of the proposed gradient descent decision tree classification technique is 95.15 %.
... To test the performance of the designed model, different brain tumor datasets have been used by researchers (Abbas et al., 2018b; Saba et al., 2019; Saba, 2017; Jamal et al., 2017; Mughal, Muhammad, et al., 2017a; Mughal, Muhammad, Sharif, Rehman, & Saba, 2018; Mughal, Sharif, et al., 2017b; Muhsin, Rehman, Altameem, Saba, & Uddin, 2014; Rad, Rahim, Rehman, Altameem, & Saba, 2013; Waheed et al., 2016). Such models include encoder-decoder models (Castillo et al., 2017; Saba, 2018; Saba, Rehman, & Sulong, 2011a; Sharif et al., 2017), multiple networks to extract different levels of features (Sedlar, 2017), models using convolutional networks with some postprocessing (Shaikh et al., 2017), and hybrid architectures with varying numbers of layers (Mundher, Muhamad, Rehman, Saba, & Kausar, 2014; Neamah, Mohamad, Saba, & Rehman, 2014; Saba & Alqahtani, 2013). ...
Article
Automatic and precise segmentation and classification of tumor areas in medical images is still a challenging task in medical research. Most conventional neural network-based models use fully connected or convolutional neural networks to perform segmentation and classification. In this research, we present deep learning models using long short-term memory (LSTM) and convolutional neural networks (ConvNet) for accurate brain tumor delineation from benchmark medical images. The two different models, ConvNet and LSTM networks, are trained on the same dataset and combined to form an ensemble to improve the results. We used the publicly available MICCAI BRATS 2015 brain cancer dataset, consisting of MRI images in four modalities: T1, T2, T1c, and FLAIR. To enhance the quality of the input images, multiple combinations of preprocessing methods such as noise removal, histogram equalization, and edge enhancement were formulated, and the best-performing combination was applied. To cope with the class imbalance problem, class weighting is used in the proposed models. The trained models are tested on a validation dataset taken from the same image set, and results from each model are reported. The individual score (accuracy) of ConvNet is 75 %, the LSTM-based network produced 80 %, and the ensemble fusion produced 82.29 % accuracy. ConvNet, LSTMNet, and their fusion are employed to improve tumor segmentation of human brain MRI images taken from the publicly available multimodal MICCAI BRATS 2015 training dataset. The ensemble network provided the best results.
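The ensemble fusion step can be illustrated with the most common scheme for combining two segmentation networks: average their per-pixel probability maps before thresholding. The maps below are random placeholders (the paper's trained ConvNet and LSTM are not reproduced, and its exact fusion rule is not detailed in the abstract).

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in per-pixel tumor probabilities from two hypothetical models,
# for a batch of 2 images of size 64x64.
p_convnet = rng.random((2, 64, 64))
p_lstm = rng.random((2, 64, 64))

# Fuse by averaging the probability maps, then threshold into a
# binary tumor mask.
p_fused = 0.5 * (p_convnet + p_lstm)
mask = p_fused > 0.5
```

Averaging tends to cancel the uncorrelated errors of the two models, which is consistent with the fused accuracy (82.29 %) exceeding either individual model's score.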
... These segmentation points are divided into correct and incorrect categories and stored in a training file. The data is preprocessed prior to its use for BPN training, as a BPN requires uniform data [39][40][41][42][43]. ...
Preprint
The cursive nature of multilingual characters makes segmentation and recognition of the Arabic, Persian, and Urdu languages an attractive problem for researchers in academia and industry. However, despite several decades of research, multilingual character classification accuracy is still not up to the mark. This paper presents an automated approach for multilingual character segmentation and recognition. The proposed methodology analyzes characters based on their geometric features. However, due to uncertainty and the absence of dictionary support, a few characters are over-segmented. To improve the performance of the proposed methodology, a BPN is trained with a large number of segmentation points for cursive multilingual characters. The trained BPN effectively filters out incorrectly segmented points, rapidly improving character recognition accuracy. For fair comparison, only benchmark datasets are utilized.
... (M. A. Khan, Akram, Sharif, Awais, et al., 2018; M. A. Khan, Akram, Sharif, Shahzad, et al., 2018; Nasir et al., 2018; Rehman, Abbas, Saba, Mahmood, & Kolivand, 2018; Rehman, Abbas, Saba, Rahman, et al., 2018; Saba et al., 2019; Yousaf et al., 2019). The majority of previous research has focused on the early detection of lung cancer using texture-based interpretation of chest CTs. ...
Article
The emergence of cloud infrastructure has the potential to provide significant benefits in a variety of areas in the medical imaging field. The driving force behind the extensive use of cloud infrastructure for medical image processing is the exponential increase in the size of computed tomography (CT) and magnetic resonance imaging (MRI) data. The size of a single CT/MRI image has increased manifold since the inception of these imaging techniques. This demands the introduction of effective and efficient frameworks for extracting the most relevant and suitable information (features) from these sizeable images. As early detection of lung cancer can significantly increase a patient's chances of survival, an effective and efficient nodule detection system can play a vital role. In this article, we propose a novel framework for lung nodule classification with a low false positive rate (FPR), high accuracy and sensitivity, and low computational cost, using a small set of features while preserving edge and texture information. The proposed framework comprises multiple phases, including image contrast enhancement, segmentation, and feature extraction, followed by the use of these features for the training and testing of a selected classifier. Image preprocessing and feature selection are the primary steps and play a vital role in achieving improved classification accuracy. We have empirically tested the efficacy of our technique on the well-known Lung Image Database Consortium (LIDC) dataset. The results prove that the technique is highly effective at reducing FPRs, with an impressive sensitivity rate of 97.45 %.
Keywords: computed tomography, feature selection, lung segmentation, pulmonary nodules, wavelet features
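The contrast-enhancement phase of such a framework can be illustrated with global histogram equalization, a standard preprocessing step for low-contrast CT slices. This is a generic sketch of that kind of enhancement, not the cited framework's exact method; the image values are synthetic.

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization: remap each gray level through the
    image's cumulative distribution so intensities spread over the full range."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / hist.sum()                  # cumulative distribution
    return np.floor(cdf[img] * (levels - 1)).astype(np.uint8)

# A low-contrast patch (values 100-119) gets stretched toward 0-255.
rng = np.random.default_rng(5)
low = rng.integers(100, 120, (32, 32))
out = hist_equalize(low)
```

Stretching the intensity range in this way makes nodule boundaries easier for the subsequent segmentation and feature-extraction phases to pick up.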
... Different error measures are used to evaluate the schemes' performance, and their schemes achieve better accuracy than existing ones. A computer-based data mining approach for medical science is presented in [91], in which the authors use shape-based features of tumor cells to diagnose patients. A classification accuracy of 98 % is achieved with ANN and naive Bayes classifiers, compared with physical methods for the detection of malignant cells. ...
Thesis
The energy demand is increasing every day due to the huge amount of residential appliance’s energy consumption. As the energy demand for consumption is comparably higher than the generation of energy, which produce the shortage of energy. Industrial and commercial areas are also consuming large amount of energy, however residential energy demand is more flexible as compared to other two. Many of the existing schemes are developed to fulfill the consumer demand. Nowadays, many of the techniques are presented for scheduling of residential appliances to reduce the Peak to Average Ratio (PAR) and consumer delay time. However, they did not consider the total cost of electricity and waiting time for the consumer. This critical issue has been resolved by Demand Side Management (DSM) in the Smart Grid (SG), Smart Home (SH) is considered to manage with many solutions. Home Energy Management System (HEMS) is able to use different kind of meta-heuristic techniques for scheduling of home appliances. In this thesis, we reduce the cost through load shifting techniques. In order to achieve above objective, we employed some feature of Jaya Algorithm (JA) on Bat Algorithm (BA) to develop Candidate Solution Updation Algorithm (CSUA). Simulation was conducted to compare the result of existing BA and JA for single smart home with 15 smart appliances. We used Time of Use (ToU) and Critical Peak Price (CPP) for electricity cost estimation. The result depicts that successful achievement of load shifting from higher price time slot to lower price time slot, which basically bring out the reduction in electricity bills. In this thesis, we also proposed our another meta-heuristic algorithm Runner Updation Optimization Algorithm (RUOA) to arrange the consumption behavior of residential appliances. We compared the results of our scheme with other meta-heuristic algorithms Strawberry Algorithm (SBA) and Firefly Algorithm (FA). 
CPP and Real Time Price (RTP) are the two electricity pricing schemes used in this thesis to calculate the cost of electricity. The main objective of this thesis is to reduce electricity cost and PAR; however, consumer comfort is compromised. Once an efficient optimization is achieved on the consumption side, attention turns to the utility side, where electricity must be generated with better and more accurate price prediction. In electricity markets, prices have complicated features such as high volatility, non-linearity, and non-stationarity, which make accurate price prediction very difficult; nevertheless, accurate price prediction is necessary for markets and companies. In this thesis, we enhance forecasting accuracy through a combined approach of the Kernel Extreme Learning Machine (KELM) and the Autoregressive Moving Average (ARMA) model, along with unique and enhanced features of both models. A wavelet transform is applied to the price series to decompose it; a test is then performed on the decomposed series so that stationary series are provided to the ARMA model and non-stationary series to the KELM model. Finally, the series are combined with our enhanced price-prediction approach. The performance of the enhanced combined method is evaluated on an electricity price dataset from the New South Wales (NSW), Australian market. The simulation results show that the combined method predicts more accurately than the individual methods.
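The load-shifting idea in the thesis abstract above can be sketched in a few lines: move a schedulable appliance's run from a high-price Time-of-Use (ToU) slot to the cheapest available slot and compare the cost. The hourly prices, appliance load, and durations below are invented for illustration, not the thesis's actual tariff data.

```python
# Hypothetical ToU tariff, cents/kWh for each hour of the day.
tou_price = [8, 8, 8, 8, 8, 12, 12, 20, 20, 20, 12, 12,
             12, 12, 12, 20, 20, 20, 20, 12, 12, 8, 8, 8]

def run_cost(start_hour, duration_h, load_kw):
    """Cost (dollars) of running an appliance for duration_h hours."""
    return sum(tou_price[(start_hour + h) % 24] * load_kw
               for h in range(duration_h)) / 100.0

def cheapest_start(duration_h, load_kw):
    """Exhaustive search over all 24 start hours for the lowest-cost slot."""
    return min(range(24), key=lambda s: run_cost(s, duration_h, load_kw))

peak_cost = run_cost(17, 2, 1.5)        # 1.5 kW appliance at evening peak
best = cheapest_start(2, 1.5)           # scheduler picks an off-peak slot
shifted_cost = run_cost(best, 2, 1.5)   # bill after load shifting
```

Meta-heuristics such as BA or JA replace the exhaustive `min` search when many appliances with constraints must be co-scheduled, but the cost objective they minimize has this same shape.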
... The authors of [78] proposed a relevant data selection methodology for energy consumption prediction. In [79], the authors used two classifiers to train the data: ANN and Naive Bayesian classifiers are run in the cloud-based system. ...
Thesis
Electricity is a basic demand of consumers, and with the passage of time the demand for electricity is increasing day by day. The smart grid (SG) has evolved to satisfy this demand. To shift electricity load from peak hours to off-peak hours, consumers need to control their appliances through a home energy management system (HEMS), which schedules the appliances according to customers' needs. Energy management using demand-side management (DSM) techniques plays an important role in the SG domain. Smart meters (SM) and energy management controllers (EMC) are important components of the SG. Intelligent energy optimization techniques play a vital role in reducing the electricity bill via scheduling of home appliances; through appliance scheduling, the consumer obtains a feasible cost for consumed electricity. DSM provides the facility for consumers to schedule their appliances for the reduction of power price and rebates on peak loads. HEMS can remotely shut down appliances in emergency conditions through direct load control programs. Meta-heuristic algorithms have been used to optimize user energy consumption efficiently. Electricity load forecasting plays a vital role in improving energy use by enabling customers to make decisions efficiently; accurate load prediction is challenging because of randomness and noise disturbance. In this thesis, efficient algorithms are proposed to control the load in residential units. Our proposed schemes minimize the user's comfort delay time; the customer's waiting time is inversely proportional to the total cost and peak-to-average ratio (PAR). The aim of the current research is to manage the power of residential units in an optimized way and to predict the load accurately. Simulation results show minimal user waiting time; however, the total cost is compromised due to the high load demand, while the load is predicted accurately for users.
In this thesis, we propose new schemes to lower the electricity price, PAR, and user discomfort on the electricity consumption side. The proposed schemes perform better than existing benchmark schemes, using a real-time price (RTP) signal to calculate the electricity cost and PAR. Simulation results show that the proposed algorithms meet the objectives of DSM. For prediction, the proposed scheme also performs better than the benchmark schemes and predicts the electricity load accurately.
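The peak-to-average ratio (PAR) objective that both theses above minimize has a simple definition: the maximum hourly load divided by the mean hourly load. A flat profile has PAR 1. The two example profiles below are synthetic.

```python
def par(load_profile):
    """Peak-to-average ratio of an hourly load profile (kW values)."""
    avg = sum(load_profile) / len(load_profile)
    return max(load_profile) / avg

unscheduled = [1, 1, 1, 2, 6, 6, 2, 1]   # heavy evening peak
scheduled   = [2, 2, 3, 3, 3, 3, 2, 2]   # same total energy, flattened

# Scheduling moves energy between slots; it does not change total demand.
assert sum(unscheduled) == sum(scheduled)
```

Lowering PAR matters to the utility because generation capacity must be sized for the peak, not the average; a scheduler that flattens the profile reduces that peak without reducing the energy delivered.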
... Later, the three algorithms that had the best accuracy were ensembled in the cloud environment. Saba et al. [27] discussed a framework in which breast cancer cells can be detected and classified using cytology images. Furthermore, features that incorporate shape were used to detect tumor cells using ANNs and an NB classifier. ...
Article
Full-text available
Globally, breast cancer is one of the most significant causes of death among women. Early detection accompanied by prompt treatment can reduce the risk of death due to breast cancer. Currently, machine learning in cloud computing plays a pivotal role in disease diagnosis, particularly for people living in remote areas where medical facilities are scarce. Diagnosis systems based on machine learning act as secondary readers and assist radiologists in the proper diagnosis of diseases, whereas cloud-based systems can support telehealth services and remote diagnostics. Techniques based on artificial neural networks (ANN) have attracted many researchers to explore their capability for disease diagnosis. The extreme learning machine (ELM) is a variant of the ANN with huge potential for solving various classification problems. The framework proposed in this paper amalgamates three research domains: first, ELM is applied for the diagnosis of breast cancer; second, the gain ratio feature selection method is employed to eliminate insignificant features; and third, a cloud computing-based system for remote diagnosis of breast cancer using ELM is proposed. The performance of the cloud-based ELM is compared with some state-of-the-art technologies for disease diagnosis. The results achieved on the Wisconsin Diagnostic Breast Cancer (WBCD) dataset indicate that the cloud-based ELM technique outperforms the others; the best ELM results were compared for both the standalone and cloud environments. The experimental results indicate an accuracy of 0.9868, a recall of 0.9130, a precision of 0.9054, and an F1-score of 0.8129.
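The extreme learning machine used in the abstract above can be sketched compactly: hidden-layer weights are drawn at random and never trained, and only the output weights are solved in closed form via the Moore-Penrose pseudoinverse. The data below is a synthetic two-class problem standing in for the WBCD feature vectors, and the hidden-layer size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian blobs as stand-ins for benign/malignant feature vectors.
X = np.vstack([rng.normal(-2, 1, size=(60, 5)),
               rng.normal(+2, 1, size=(60, 5))])
y = np.array([0] * 60 + [1] * 60)

n_hidden = 40
W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (fixed)
b = rng.normal(size=n_hidden)                # random biases (fixed)

def hidden(X):
    return np.tanh(X @ W + b)                # hidden-layer activations

H = hidden(X)
beta = np.linalg.pinv(H) @ y                 # closed-form output weights

pred = (hidden(X) @ beta > 0.5).astype(int)  # threshold the regression output
train_acc = (pred == y).mean()
```

Because training is a single pseudoinverse rather than iterative backpropagation, ELM is orders of magnitude faster to fit than a conventional ANN, which is part of its appeal for cloud-hosted diagnosis services.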
Article
The number of patients diagnosed with melanoma is drastic, and melanoma contributes many deaths annually among young people. Approximately 192,310 new cases of skin cancer were diagnosed in 2019, which shows the importance of automated systems for the diagnosis process. Accordingly, this article presents an automated method for skin lesion detection and recognition using pixel-based fusion of seed-segmented images and multilevel feature reduction. The proposed method involves four key steps: (a) a mean-based function is implemented and its output is fed to top-hat and bottom-hat filters, which are later fused for contrast stretching; (b) lesions are segmented by seed region growing and a graph-cut method, and both segmented lesions are fused through pixel-based fusion; (c) multilevel features such as histogram of oriented gradients (HOG), speeded up robust features (SURF), and color are extracted and simply concatenated; and (d) finally, variance precise entropy-based feature reduction and classification through an SVM with a cubic kernel function are performed. Two different experiments are performed to evaluate this method. Segmentation performance is evaluated on PH2, ISBI2016, and ISIC2017 with accuracies of 95.86, 94.79, and 94.92%, respectively. Classification performance is evaluated on the PH2 and ISBI2016 datasets with accuracies of 98.20 and 95.42%, respectively. The results of the proposed automated system are outstanding compared to current techniques reported in the state of the art, which demonstrates the validity of the proposed method. The current research proposes a new automated system for skin lesion detection and recognition using pixel-based fusion of seed-segmented images and multilevel feature reduction; finally, skin lesions are classified using an SVM with a cubic kernel function.
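Step (c) above, extracting several feature types and simply concatenating them, can be illustrated with a small sketch: a per-channel color histogram plus a gradient-orientation histogram (a crude stand-in for HOG) computed on a synthetic image and joined into one vector. The bin counts and image are arbitrary assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(32, 32, 3)).astype(float)  # fake RGB patch

def color_hist(img, bins=8):
    """Per-channel intensity histogram, normalized, concatenated."""
    feats = []
    for c in range(3):
        h, _ = np.histogram(img[:, :, c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())
    return np.concatenate(feats)

def grad_orientation_hist(img, bins=9):
    """Histogram of gradient orientations on the gray image (HOG-like)."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    angles = np.arctan2(gy, gx)                     # in [-pi, pi]
    h, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return h / h.sum()

# Simple concatenation: 3 channels x 8 bins + 9 orientation bins = 33 dims.
feature_vector = np.concatenate([color_hist(img), grad_orientation_hist(img)])
```

Feature-reduction stages like the entropy-based one in the paper exist precisely because such concatenations grow quickly; the real HOG and SURF descriptors are far higher-dimensional than this toy vector.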
Article
A brain tumor is an uncontrolled development of brain cells that can lead to brain cancer if not detected at an early stage. Early brain tumor diagnosis plays a crucial role in treatment planning and patients' survival rate. Brain tumors have distinct forms, properties, and therapies; therefore, manual brain tumor detection is complicated, time-consuming, and vulnerable to error, and automated computer-assisted diagnosis at high precision is currently in demand. This article presents segmentation through a U-Net architecture with ResNet50 as a backbone on the Figshare dataset, achieving an intersection over union (IoU) of 0.9504. Preprocessing and data augmentation were introduced to enhance the classification rate. Multi-classification of brain tumors is performed using evolutionary algorithms and reinforcement learning through transfer learning; other deep learning methods such as ResNet50, DenseNet201, MobileNetV2, and InceptionV3 are also applied. The results obtained show that the proposed research framework performed better than reported in the state of the art. Different CNN models applied for tumor classification, MobileNetV2, InceptionV3, ResNet50, DenseNet201, and NASNet, attained accuracies of 91.8, 92.8, 92.9, 93.1, and 99.6%, respectively; NASNet exhibited the highest accuracy. Highlights: Two transfer learning processes, freeze and fine-tune, are performed to extract significant features from MRI slices. Brain tumor multi-classification is performed using transfer learning, ResNet50-UNet, and the NASNet architecture.
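The intersection-over-union (IoU) score of 0.9504 reported above is the standard segmentation metric: the overlap between the predicted and ground-truth masks divided by their union. A minimal sketch on tiny synthetic binary masks:

```python
import numpy as np

def iou(pred, gt):
    """|pred AND gt| / |pred OR gt| for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # two empty masks: perfect match

gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1                  # 16-pixel ground-truth tumor region
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1                # prediction shifted one row down

score = iou(pred, gt)             # overlap 12 pixels, union 20 -> 0.6
```

Note how sensitive IoU is to small misalignments: a one-row shift of a 4x4 region already drops the score to 0.6, which is why values above 0.95, as in the abstract, indicate very tight masks.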
Article
Lung cancer is the most common cause of cancer-related death globally. Currently, lung nodule detection and classification are performed by radiologist-assisted computer-aided diagnosis systems. Emerging artificially intelligent techniques such as neural networks, support vector machines, and HMMs have improved the detection and classification of cancer in any part of the human body. Such automated methods and their possible combinations could assist radiologists in the early detection of lung nodules, which could reduce treatment cost and death rate. The literature reveals that classification based on voting of classifiers exhibits better performance in the detection and classification process. Accordingly, this article presents an automated approach for lung nodule detection and classification that consists of multiple steps, including lesion enhancement, segmentation, and feature extraction from each candidate lesion. Moreover, multiple classifiers, logistic regression, multilayer perceptron, and voted perceptron, are tested for lung nodule classification using a k-fold cross-validation process. The proposed approach is evaluated on the publicly available Lung Image Database Consortium (LIDC) benchmark dataset. Based on the performance evaluation, the proposed method outperformed the state of the art and achieved an overall accuracy rate of 100%. This research presents a lung nodule detection and classification framework; multiple classifiers, logistic regression, multilayer perceptron, and voted perceptron, tested with k-fold cross-validation on the LIDC dataset show promising accuracy results.
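The "classification based on voting of classifiers" idea above reduces to: each base model emits a label and the majority wins. The sketch below uses three trivial hand-made rules as stand-ins for the trained logistic regression, multilayer perceptron, and voted perceptron models; the feature names and thresholds are invented for illustration.

```python
from collections import Counter

def clf_size(x):           # stand-in for a trained base classifier
    return 1 if x["diameter_mm"] > 10 else 0

def clf_texture(x):        # stand-in for a second base classifier
    return 1 if x["spiculated"] else 0

def clf_growth(x):         # stand-in for a third base classifier
    return 1 if x["growth_rate"] > 0.2 else 0

def vote(x, classifiers):
    """Majority vote over the base classifiers' predicted labels."""
    preds = [c(x) for c in classifiers]
    return Counter(preds).most_common(1)[0][0]

# Hypothetical nodule: two of three rules say malignant (label 1).
nodule = {"diameter_mm": 14, "spiculated": False, "growth_rate": 0.3}
label = vote(nodule, [clf_size, clf_texture, clf_growth])
```

An odd number of voters avoids ties, and the ensemble is only useful when the base classifiers make at least partly independent errors; otherwise the vote just echoes the strongest single model.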
Article
Full-text available
Image processing plays a major role in neurologists' clinical diagnosis in the medical field. Several types of imagery are used for diagnostics, tumor segmentation, and classification. Magnetic resonance imaging (MRI) is favored among all modalities due to its noninvasive nature and better representation of internal tumor information; indeed, early diagnosis may increase the chances of saving a life. However, manual delineation and classification of brain tumors based on MRI is an error-prone, time-consuming, and formidable task. Consequently, this article presents a deep learning approach to classify brain tumors using MRI data analysis to assist practitioners. The recommended method comprises three main phases: preprocessing; brain tumor segmentation using k-means clustering; and finally, classifying tumors into their respective categories (benign/malignant) using a fine-tuned VGG19 (19-layered Visual Geometry Group) model. Moreover, for better classification accuracy, synthetic data augmentation is introduced to increase the available data size for classifier training. The proposed approach was evaluated on the BraTS 2015 benchmark datasets through rigorous experiments. The results endorse the effectiveness of the proposed strategy, which achieved better accuracy compared to previously reported state-of-the-art techniques.
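The k-means segmentation phase above can be sketched on intensities alone: cluster the gray-level pixel values into k groups and take the brightest cluster as the candidate tumor region. The 1-D k-means below and the synthetic image are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(60, 5, size=(16, 16))            # dark background tissue
img[5:9, 5:9] = rng.normal(200, 5, size=(4, 4))   # bright "tumor" patch

def kmeans_1d(values, k=2, iters=20):
    """Plain Lloyd's algorithm on scalar intensities."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        # Assign each value to its nearest center, then re-estimate centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

labels, centers = kmeans_1d(img.ravel(), k=2)
# Brightest cluster = candidate tumor mask.
tumor_mask = (labels == np.argmax(centers)).reshape(img.shape)
```

Real MRI segmentation clusters feature vectors (intensity across modalities, sometimes position) rather than raw scalars, but the assign/re-estimate loop is identical.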
Article
Cancer detection in its early stages may allow patients to receive proper treatment, saving lives and restoring routine lifestyles. Breast cancer is one of the leading causes of mortality among women around the globe. One source for finding cancerous nuclei is the analysis of histopathology images. These images, however, are very complex and large, so locating the cancerous nuclei in them is very challenging, and if an expert fails to diagnose a patient via these images, the situation may be exacerbated. Therefore, this study aims to introduce a method to mask as many cancer nuclei on histopathology images as possible with high visual aesthetics, making them easily distinguishable by experts. A tailored residual fully convolutional encoder-decoder neural network based on end-to-end learning is proposed to address the matter. The proposed method is evaluated quantitatively and qualitatively on the ER+ BCa H&E-stained dataset. The average detection accuracy achieved by the method is 98.61%, which is much better than that of its competitors.
Article
Skin covers the entire body and is its largest organ. Skin cancer is one of the most dreadful cancers, primarily triggered by sensitivity to ultraviolet rays from the sun; the riskiest form is melanoma, although it starts in a few different ways. Patients are often unaware of malignant skin growth at the initial stage. The literature shows that various handcrafted and automatic deep learning features are employed to diagnose skin cancer using traditional machine learning and deep learning techniques. The current research presents a comparison of skin cancer diagnosis techniques using handcrafted and non-handcrafted features. Additionally, clinical features such as the Menzies method, seven-point detection, asymmetry, border, color, and diameter, visual textures (GRC), local binary patterns, Gabor filters, Markov random fields, fractal dimension, and orientation histograms are also explored in the process of skin cancer detection. Several parameters, such as the Jaccard index, accuracy, Dice coefficient, precision, sensitivity, and specificity, are compared on benchmark datasets to assess the reported techniques. Finally, publicly available skin cancer datasets are described and the remaining issues are highlighted. Computer vision-based traditional and deep learning techniques for skin cancer diagnosis are explored with handcrafted and non-handcrafted features, respectively; results are compared on several benchmark datasets.
Article
Full-text available
Artificial intelligence (AI) is the usage of scientific techniques to simulate human intellectual skills and to tackle complex medical issues involving complicated genetic defects such as cancer. The rapid expansion of AI in the past era has paved the way to optimum judgment-making by superior intellect, where the human brain is constrained in managing large information in a limited period. Cancer is a complicated ailment with several genomic variants. AI-centred systems carry enormous potential for detecting these genomic alterations and abnormal protein communications at a very initial phase. Contemporary biomedical study is also dedicated to bringing AI expertise to hospitals securely and ethically. AI-centred support for diagnosticians and doctors can be the big surge ahead for the forecast of illness threat, identification, diagnosis, and therapies. Applications of AI and Machine Learning (ML) in the identification of cancer and its therapy possess the potential to provide therapeutic support for quicker planning of a novel therapy for each person. Through the utilization of AI-based methods, scientists can work together in real time and distribute their expertise digitally to possibly cure billions. In this review, the focus was on linking biology with AI and describing how AI-centred support could assist oncologists in accurate therapy. It is essential to identify new biomarkers that induce drug resistance and to discover medicinal targets to improve medication methods. The advent of "next-generation sequencing" (NGS) programs resolves these challenges and has transformed the prospect of "Precision Oncology" (PO). NGS delivers numerous medical functions which are vital for hazard prediction, initial diagnosis of infection, "Sequence" identification and "Medical Imaging" (MI), precise diagnosis, "biomarker" detection, and recognition of medicinal goals for innovation in medicine.
NGS creates a huge repository that requires specific "bioinformatics" resources to examine the information that is pertinent and medically important. Malignancy diagnostics and analytical forecasting are improved with NGS and MI, which provide superior-quality images via AI technology. Irrespective of the advancements in technology, AI faces a few problems and constraints, and the clinical application of NGS remains to be validated. Through the steady progress of invention and expertise, the prospects of AI and PO look promising. The purpose of this review was to assess, evaluate, classify, and tackle the present developments in cancer diagnosis utilizing AI methods for breast, lung, liver, and skin cancer and leukaemia. The research emphasizes how cancer identification and the treatment procedure are aided by AI with supervised, unsupervised, and deep learning (DL) methods. Numerous AI methods were assessed on benchmark datasets with respect to "accuracy", "sensitivity", "specificity", and "false-positive" (FP) metrics. Lastly, challenges and future work were discussed.
Article
Cancer is one of the deadliest diseases. Early cancer cell detection is critical for saving human lives and ensuring successful treatment. By removing benign cells (cancer cells in the early stage), cancer can be prevented from spreading to other parts of the body. Malignant tumours, on the other hand, which have an aggressive spreading capacity, affect various other parts of the body and are uncontrollable. This paper presents a study on five different types of cancer, namely breast, brain, lung, liver, and skin cancers, as well as various published techniques for detecting these cancers, which aids researchers in understanding current techniques and in developing new structures that provide better, more accurate results. It also focuses on cancer cell segmentation (ACM, PSO, UNet, watershed, and so on), feature extraction, and feature reduction (PCA, LDA, SVD), and discusses different cancer classification methods such as machine learning and clustering (SVM, KNN, Bayesian, neuro-fuzzy, k-means, GANs, and so on), deep learning (CNN, ResNet, VGG, and so on), and different evaluation methods.
Article
Full-text available
Cancer is one of the most dangerous diseases to humans, and yet no permanent cure has been developed for it. Breast cancer is one of the most common cancer types. According to the National Breast Cancer Foundation, in 2020 alone, more than 276,000 new cases of invasive breast cancer and more than 48,000 non-invasive cases were diagnosed in the US. To put these figures in perspective, 64% of these cases are diagnosed early in the disease's cycle, giving patients a 99% chance of survival. Artificial intelligence and machine learning have been used effectively in detection and treatment of several dangerous diseases, helping in early diagnosis and treatment, and thus increasing the patient's chance of survival. Deep learning has been designed to analyze the most important features affecting detection and treatment of serious diseases. For example, breast cancer can be detected using genes or histopathological imaging. Analysis at the genetic level is very expensive, so histopathological imaging is the most common approach used to detect breast cancer. In this research work, we systematically reviewed previous work done on detection and treatment of breast cancer using genetic sequencing or histopathological imaging with the help of deep learning and machine learning. We also provide recommendations to researchers who will work in this field.
Article
Lung cancer is the leading cause of cancer-related deaths worldwide, and cancer mortality and incidence rates have recently increased sharply. Many patients with lung cancer are diagnosed late, so the survival rate is low. Machine learning approaches have been widely used to increase the effectiveness of cancer detection at an early stage. Even though these methods are efficient at detecting specific forms of cancer, there is no known technique that could be used universally and consistently to identify new malignancies; as a result, cancer diagnosis via machine learning algorithms is still a fresh area of research. Computed tomography (CT) images are frequently employed for early cancer detection and diagnosis because they contain significant information. In this research, an automated lung cancer detection and classification framework is proposed that consists of preprocessing followed by feature extraction, in which three-patch local binary pattern encoding, local binary pattern (LBP), and histogram of oriented gradients (HOG) features are extracted and fused. The fast learning network (FLN) is a novel machine-learning technique that is fast to train and economical in terms of processing resources; however, the FLN's internal parameters (weights and biases) are randomly initialized, making it an unstable algorithm. Therefore, to enhance accuracy, the FLN is hybridized with K-nearest neighbors to classify texture- and appearance-based features of lung chest CT scans from the Kaggle dataset into cancerous and non-cancerous images. The proposed model's performance is evaluated using accuracy, sensitivity, and specificity on the Kaggle benchmark dataset and found comparable to the state of the art using simple machine-learning strategies.
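The local binary pattern (LBP) encoding named above re-codes each pixel by comparing its eight neighbours against it, producing an 8-bit texture code whose histogram is then used as a feature. A minimal sketch (border pixels are skipped for simplicity; the real three-patch variant compares patches, not single pixels):

```python
import numpy as np

def lbp(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a 2-D array."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit     # set bit when neighbour >= centre
            out[i - 1, j - 1] = code
    return out

flat = np.full((5, 5), 7)   # perfectly flat patch: every neighbour == centre
codes = lbp(flat)           # all comparisons succeed -> every code is 255
```

The code itself is translation of local contrast into bits, which is why LBP histograms are robust to monotonic illumination changes, a useful property for CT texture features.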
Article
Early breast cancer detection is vital to enhance the human survival rate. Hence, this study aimed to segment cancer-affected areas using the Transfer Learning based Cancer Segmentation (TL-CAN-Seg) methodology. It also extracts the relevant features and stores them in the cloud. In addition, it introduces a novel multi-layer perceptron (MLP) with a tuned Levenberg-Marquardt (LM) algorithm to effectively classify the breast cancer-affected area. Initially, the mammogram image is taken and pre-processed; subsequently, segmentation is carried out using the novel TL-CAN-Seg, features are extracted and stored in the cloud, and finally classification is performed to classify the breast cancer. The proposed MLP with LM converges faster, and the proposed TL-CAN-Seg can learn complex image patterns, which together increase the accuracy of breast cancer diagnosis over traditional methods. This is confirmed through the comparative analysis in the results.
Article
Visual inspection of histopathological samples is the benchmark for detecting breast cancer, but it is a strenuous, complicated process that takes much of a pathologist's time. Deep learning models have shown excellent outcomes in clinical diagnosis and image processing, and advances in various fields, including drug development, frequency simulation, and optimization techniques. However, the resemblance among histopathologic images of breast cancer and the inclusion of stable and infected tissues in different areas make detecting and classifying tumors on whole slide images more difficult. In breast cancer, a correct diagnosis is needed for complete care in a limited amount of time, and effective detection can relieve the pathologist's workload and mitigate diagnostic subjectivity. Therefore, this research work investigates an improved semantic model based on pre-trained Xception and DeepLabv3+. The model has been trained on input images with ground-truth masks, with tuned parameters that significantly improve the segmentation of ultrasound breast images into the respective classes, that is, benign/malignant. The segmentation model delivered an accuracy greater than 99%, proving its effectiveness. The segmented images and histopathological breast images are passed to a 4-qubit quantum circuit with a six-layered architecture to detect breast malignancy. The proposed framework achieved remarkable performance compared with currently published methodologies. HIGHLIGHTS: This research proposes a hybrid semantic model using pre-trained Xception and DeepLabv3+ for breast microscopic cancer classification into benign and malignant classes at 95% accuracy, and 99% accuracy for the detection of breast malignancy.
Article
Similar to other biometric systems such as fingerprint, face, and DNA recognition, iris classification could assist law enforcement agencies in identifying humans. Iris classification technology helps law-enforcement agencies recognize humans by matching their iris with iris datasets. However, iris classification is challenging in real environments due to the complex, variable texture of the human iris. Accordingly, this article presents an improved Oriented FAST and Rotated BRIEF (ORB) with a Bag-of-Words model to extract distinct and robust features from the iris image, followed by an ensemble multi-class SVM to classify the iris. The proposed methodology consists of four main steps: first, iris image normalization and enhancement; second, localizing the iris region; third, iris feature extraction; and finally, iris classification using an ensemble multi-class support vector machine. For preprocessing of input images, histogram equalization, a Gaussian mask, and median filters are applied. The proposed technique is tested on two benchmark databases, CASIA-v1 and the iris image database, and achieved higher accuracy than other existing techniques reported in the state of the art.
Chapter
Breast cancer (BC) is one of the leading malignancies among women globally; one in eight women is affected during her lifespan. The malignancy grows in the breast glandular epithelium. Outcomes for breast cancer patients would be enhanced by early detection and diagnosis. Despite vast medical evolution, breast cancer remains the second leading cause of mortality; hence, initial diagnosis plays a preeminent role in decreasing the mortality rate. Breast cancer detection procedures include computed tomography, mammography, X-rays, ultrasound, magnetic resonance imaging, thermography, and more. Deep learning techniques have been commonly used for medical image recognition for the past several years. The convolutional neural network (CNN) was developed for accurate image recognition and classification, including of breast cancer images, due to its automatic detection and disease classification. This chapter aims to provide an extensive analysis of breast cancer, abnormalities, diagnosis, treatments, and prevention strategies using image processing and CNN techniques. Additionally, descriptions of available datasets and critical statistical analysis based on CNNs and different modalities for further study and future challenges are also highlighted.
Article
Females are approximately half of the total population worldwide, and many of them are affected by breast cancer (BC). Computer-aided diagnosis (CAD) frameworks can help radiologists determine breast density (BD), which in turn helps in precise BC detection. This research detects BD automatically using mammogram images based on Internet of Medical Things (IoMT) supported devices. Two pretrained deep convolutional neural network models, DenseNet201 and ResNet50, were applied through a transfer learning approach. A total of 322 mammogram images containing 106 fatty, 112 dense, and 104 glandular cases were obtained from the Mammographic Image Analysis Society dataset. Pruning of irrelevant regions and enhancement of target regions are performed in preprocessing. An overall classification accuracy of 90.47% on the BD task is accomplished with the DenseNet201 model. Such a framework is beneficial for identifying BD more rapidly to assist radiologists and patients without delay. The study proposed and evaluated IoMT-based breast density detection from mammogram images; two pretrained CNN models, DenseNet201 and ResNet50, were applied through a transfer learning approach, with promising results achieved on a benchmark dataset.
Article
Facial emotion recognition (FER) is an emerging and significant research area in the pattern recognition domain. In daily life, the role of non-verbal communication is significant; its involvement in overall communication is around 55% to 93%. Facial emotion analysis is used efficiently in surveillance videos, expression analysis, gesture recognition, smart homes, computer games, depression treatment, patient monitoring, anxiety detection, lie detection, psychoanalysis, paralinguistic communication, operator fatigue detection, and robotics. In this paper, we present a detailed review of FER. The literature is collected from reputable research published during the current decade. This review covers conventional machine learning (ML) and various deep learning (DL) approaches. Further, publicly available FER datasets and evaluation metrics are discussed and compared with benchmark results. This paper provides a holistic review of FER using traditional ML and DL methods to highlight future gaps in this domain for new researchers. Finally, this review is a guidebook that is very helpful for young researchers in the FER area, providing a general understanding and basic knowledge of current state-of-the-art methods, and for experienced researchers looking for productive directions for future work.
Chapter
Cancer is one of the most dangerous diseases affecting human life. Diagnosing cancer cells at an early stage plays a valuable role in saving lives and enabling successful treatment. At an early stage, the spread of cancer through the body can be stopped by removing benign cells (early-stage cancer cells), whereas malignant tumors (later-stage cancer cells) spread aggressively, affect other parts of the body, and cannot be controlled. This paper presents a study of five different types of cancer, namely breast, brain, lung, liver, and skin, and of published techniques for detecting them, helping researchers understand current techniques and develop new structures that give better and more accurate results. It also covers segmentation methods (ACM, PSO, U-Net, watershed, etc.), cancer feature extraction, and feature reduction (PCA, LDA, SVD). Finally, it discusses cancer classification using machine learning and clustering (SVM, KNN, Bayesian, neuro-fuzzy, k-means, GANs, etc.) and deep learning (CNN, ResNet, VGG, etc.) techniques, as well as different evaluation methods.
Chapter
An appropriate software engineering (SE) process is crucial in developing a technological solution for managing insulin-dependent patients. The SE process supports the design and implementation of a high-quality system by engaging patients and other healthcare professionals to establish the exact specification for development. Insulin-dependent patients face a major challenge in determining the right insulin dose to inject based on glucose-level measurements and estimated carbohydrate from food intake. The result is often hypoglycaemia or hyperglycaemia, which may lead to coma and, in unfortunate cases, death. The objective of this research is to develop an adaptive model dedicated to adjusting insulin dose requirements in patients with type 1 diabetes. A mathematical model is formulated based on insulin sensitivity, basal insulin, bolus insulin, correction factor, carbohydrate counts, bolus-on-board equations, and continuous blood glucose levels. The model was evaluated using root-mean-square error (RMSE) and mean absolute error (MAE). The results showed correct insulin dose values within the range of 0.18–4.76 IU for the fifty-two (52) patients with T1DM. The developed model thus suggests a reduction in the incidence of hypoglycaemia and can be used to predict the next insulin dose.
Article
Full-text available
Background: In digital mammography, accurate breast profile segmentation of mammograms is considered a challenging task. The presence of the pectoral muscle may mislead cancer diagnosis due to its high similarity to breast tissue. Further challenges caused by the pectoral muscle's appearance in mammogram data include inaccurate estimation of density level and assessment of cancer cells. The discrete differentiation operator has proven able to eliminate the pectoral muscle before analysis. Methods: We propose a novel approach to remove the pectoral muscle from the mediolateral-oblique view of a mammogram using a discrete differentiation operator, which detects edge boundaries and approximates the gradient of the intensity function. Further refinement is achieved using a convex hull technique. The method is implemented on the MIAS dataset and on 20 contrast-enhanced digital mammographic images. Results: To assess the performance of the proposed method, visual inspection by a radiologist as well as calculations based on well-known metrics are reported. For the performance metrics, the pixels in the pectoral muscle region of the input scans serve as ground truth. Conclusions: Our approach tolerates an extensive variety of pectoral muscle geometries with a lower risk of bias in the breast profile than existing techniques.
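The core of the method above is a discrete differentiation operator that approximates the intensity gradient to locate the pectoral-muscle boundary. A minimal sketch with NumPy's finite-difference gradient, on a synthetic wedge standing in for the bright pectoral region of a mediolateral-oblique view (the paper's exact operator and the convex-hull refinement step are not reproduced here):

```python
import numpy as np

def gradient_magnitude(img):
    """Approximate |grad I| with central finite differences — a discrete
    differentiation operator used to locate edge boundaries such as the
    pectoral-muscle line in a mammogram."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

# Synthetic stand-in for a mediolateral-oblique view: a bright triangular
# "pectoral muscle" wedge in the top-left corner on a dark background.
img = np.zeros((64, 64))
for r in range(64):
    img[r, :max(0, 40 - r)] = 200.0

edges = gradient_magnitude(img)
# The strongest responses lie along the wedge's diagonal boundary,
# while the uniform interior produces no response.
boundary_strength = edges.max()
interior_strength = edges[5, 5]  # deep inside the uniform wedge
```

In the full pipeline, the detected boundary points would then be refined with a convex hull to produce the final pectoral-muscle mask.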
Article
Full-text available
Auscultation of the heart enables assessment of the cardiac valves. An electronic stethoscope is used to acquire heart murmurs, which are then classified as normal or abnormal. Heart sound segmentation applies the discrete wavelet transform to obtain individual components of the heart signal and to separate it into systole and diastole intervals. This research presents a novel scheme for a semi-automatic cardiac valve disorder diagnosis system. Features are extracted using the wavelet transform and spectral analysis of the input signals. The proposed classification scheme is a fusion of an adaptive neuro-fuzzy inference system (ANFIS) and an HMM. Both classifiers are trained on the extracted features to correctly identify normal and abnormal heart murmurs. Experimental results show that the proposed system provides promising classification accuracy with excellent specificity and sensitivity, while exhibiting fewer classification errors, fewer computations, and a lower-dimensional feature set, making it suitable as an intelligent system for the detection and classification of heart murmurs.
Article
Full-text available
A tumor can be found in any area of the brain and can be of any size, shape, and contrast; multiple tumors of different types may exist in a human brain at the same time. Accurate tumor area segmentation is considered the primary step in treating brain tumors. Deep learning comprises promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part of a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. We present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over other approaches in this area of research.
Article
Full-text available
Medical imaging plays an integral role in the identification, segmentation, and classification of brain tumors. The invention of MRI has opened new horizons for brain-related research, and researchers have recently shifted their focus towards applying digital image processing techniques to extract, analyze, and categorize brain tumors from MRI. Categorization of brain tumors is defined hierarchically, moving from major to minor categories. A plethora of work exists on classifying brain tumors into categories such as benign and malignant, but only a few works report multiclass classification of brain images in which each tumorous part of the image is tagged with both major and minor categories. Precise classification is difficult to achieve due to ambiguities in the images and the overlapping characteristics of different types of tumors. The current study provides a comprehensive review of recent research on multiclass classification of brain tumors using MRI. These multiclass classification studies are categorized into two major groups, XX and YY, each further divided into three sub-groups. A set of common parameters from the reviewed works is extracted and compared to highlight the merits and demerits of individual works. Based on our analysis, we provide a set of recommendations for researchers and professionals working on brain tumor classification.
Article
Full-text available
Currently, medical imaging equipment produces 3D results for better visualization and treatment, and the need for capable devices to render these 3D medical images is growing, since viewing human tissues or organs directly is a huge support to medical staff. Consequently, 3D reconstruction from 2D images has attracted several researchers. CT images are normally composed of bone, soft tissue, and background. This paper presents 3D segmentation of the leg bones in computed tomography (CT) images. The bone section is extracted from each 2D slice and placed in 3D space, and surface rendering is applied to the 2D bone slices. The data is visualized via multi-planar reformatting, surface rendering, and hardware-accelerated volume rendering. Finally, the paper presents an enhanced technique to reconstruct 3D bone segmentation in medical images that exhibits promising outcomes compared to techniques reported in the literature. The proposed technique involves contour extraction, image enhancement, segmentation, outlier reduction, and 3D modelling.
Article
Full-text available
With the advent of voluminous medical databases, healthcare analytics on big data has become a major research area. Healthcare analytics plays an important role in big data analysis by predicting valuable information through data mining and machine learning techniques. Such prediction helps physicians make the right decisions for successful diagnosis and prognosis of various diseases. In this paper, an evolution-based hybrid methodology is used to develop a healthcare analytic model exploiting data mining and machine learning algorithms: Support Vector Machine (SVM), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The proposed model may assist physicians in diagnosing various types of heart disease and identifying the associated risk factors with high accuracy. The developed model is evaluated against results reported in the literature for diagnosing heart disease, taking the Cleveland heart disease database as a case study. A great prospect of this research is diagnosing disease in less time from fewer factors or symptoms. The proposed healthcare analytic model can reduce the search space significantly while analyzing big data, so fewer computing resources are consumed.
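The GA-plus-SVM combination described above is typically a wrapper: a genetic algorithm searches over feature subsets, scoring each candidate by the cross-validated accuracy of an SVM trained on it. A compact sketch under stated assumptions follows; the dataset, population size, and GA operators (truncation selection, uniform crossover, bit-flip mutation) are illustrative choices, not the paper's published configuration, and a synthetic table stands in for the Cleveland data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

rng = np.random.default_rng(1)
# Synthetic stand-in for the Cleveland heart-disease table: 13 attributes,
# only a few of which are informative.
X, y = make_classification(n_samples=200, n_features=13, n_informative=4,
                           n_redundant=2, random_state=1)

def fitness(mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

# Minimal genetic algorithm over bit-mask individuals.
pop = rng.random((12, 13)) < 0.5
for _ in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]        # keep the best half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        child = np.where(rng.random(13) < 0.5, a, b)   # uniform crossover
        child ^= rng.random(13) < 0.05                 # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
best_score = fitness(best)
```

PSO can be slotted into the same wrapper by replacing the crossover/mutation loop with velocity-based updates over continuous masks thresholded at 0.5.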
Article
Full-text available
With advances in digital imaging and computing power, computationally intelligent technologies are in high demand for ophthalmological care and treatment. In the current research, Retina Image Analysis (RIA) is developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. RIA assists in the analysis of retinal images, and specialists are offered various options such as saving, processing, and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in selecting vessel segments, processing these vessels by calculating their diameter, standard deviation, and length, and displaying detected vessels on the retina. The Agile Unified Process is adopted as the development methodology. Retina Image Analysis may help optometrists better understand and analyze a patient's retina. The system is developed using MATLAB (R2011b), and promising results are attained that are comparable to the state of the art.
Article
Full-text available
Making deductions and predictions about the weather has been a challenge throughout mankind's history. Exact meteorological guidance helps foresee and handle problems well in time. Various machine learning techniques have been investigated in reported forecasting systems, and the current research treats climate as a major challenge for machine information mining and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting suited to the specifics of climate-anticipation frameworks. The study concentrates on data representing Saudi Arabian weather. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, average wind speed, high wind speed, and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. A trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE, and scatter index are the standard yardsticks adopted for measuring forecast accuracy. Individually, the MLP's forecasting results are better than the RBF's; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. The results also surpass those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
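One simple way to realize an MLP/RBF hybrid of the kind described is to train the two networks separately on the nine meteorological inputs and combine their class probabilities for the rainy/dry decision. The paper does not publish code or its exact fusion rule, so this is a sketch under stated assumptions: probability averaging is an illustrative fusion choice, an RBF-kernel SVM stands in for the RBF network, and the features are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(7)
# Synthetic stand-ins for the nine meteorological inputs (dew point,
# min/max/mean temperature, humidity, precipitation, wind speeds,
# cloudiness); label 1 = rainy, 0 = dry.
X = rng.normal(size=(300, 9))
y = (X[:, 0] + 0.5 * X[:, 4] + 0.1 * rng.normal(size=300) > 0).astype(int)

# Train the two models separately, then average their probabilities.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
rbf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
proba = (mlp.predict_proba(X) + rbf.predict_proba(X)) / 2
hybrid_acc = float(np.mean(proba.argmax(1) == y))
```

A proper evaluation would of course score the hybrid on held-out data with the correlation coefficient, RMSE, and scatter index used in the paper; the training-set accuracy above only illustrates the fusion mechanics.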
Article
Full-text available
Segmentation of objects from a noisy and complex image is still a challenging task. This article proposes a new method to detect and segment nuclei and determine whether they are malignant. The proposed method consists of three main stages: preprocessing, seed detection, and segmentation. Preprocessing prepares the image (determining the region of interest, removing noise, and enhancing the image) so that it meets the segmentation requirements. Seed detection applies candidate detection on the centroid transform to evaluate the centroid of each object, providing the seed points used in the segmentation stage, where the nuclei are segmented with the level set (LS) method. In this work, 58 H&E-stained breast cancer images from the UCSB Bio-Segmentation Benchmark dataset are evaluated. The proposed method shows high performance and accuracy in comparison to the techniques reported in the literature, and the experimental results agree with the ground truth images.
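The preprocess/seed-detect/segment pipeline above can be illustrated up to its seed-detection stage with a short sketch: threshold the enhanced image, then place one seed at the centroid of each candidate object. This simplifies the paper's centroid-transform step to connected-component centroids, and a synthetic two-blob image stands in for an H&E channel.

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for an H&E image channel: two bright "nuclei" blobs
# on a dark background (the real pipeline would first denoise/enhance).
img = np.zeros((50, 50))
img[10:18, 10:18] = 1.0
img[30:40, 28:38] = 1.0

# Preprocessing: threshold to a binary mask of candidate nuclei.
mask = img > 0.5

# Seed detection: one seed per connected component, placed at its
# centroid (a simplification of the centroid-transform step).
labels, n = ndimage.label(mask)
seeds = ndimage.center_of_mass(mask, labels, range(1, n + 1))
```

In the full method, each seed would then initialize a level-set contour that evolves to the nucleus boundary; that evolution step is omitted here.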
Article
Full-text available
Malaria parasitemia is the quantitative measurement of parasites in the blood to grade the degree of infection. Light microscopy is the most common method used to examine blood for parasitemia quantification. Visual quantification of malaria parasitemia is laborious, time-consuming, and subjective. Although automating the process is a good solution, the available techniques are unable to handle special cases such as anemia and hemoglobinopathies, in which RBC morphology deviates from normal. The main aim of this research is to examine microscopic images of stained thin blood smears using a variety of computer vision techniques, grading malaria parasitemia on factors independent of RBC morphology. The proposed methodology is based on an inductive approach: color segmentation of malaria parasites through an adaptive Gaussian mixture model (GMM) algorithm. RBC quantification accuracy is improved by splitting occluded RBCs using the distance transform and local maxima. Further, infected and non-infected RBCs are classified to grade parasitemia properly. Training and evaluation were carried out on an image dataset against ground truth data, determining the degree of infection with a sensitivity of 98% and a specificity of 97%. The accuracy and efficiency of the proposed automated scheme were proved experimentally, surpassing other state-of-the-art schemes. Because it handles factors independent of RBC morphology, this can be considered a low-cost solution for malaria parasitemia quantification in mass examinations.
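The GMM-based color segmentation at the heart of the method can be sketched as fitting a two-component mixture to pixel colors and labeling the minority component as stained parasite. The color coordinates, component count, and minority-weight heuristic below are illustrative assumptions, and synthetic pixel samples stand in for a Giemsa-stained smear.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic stand-in for stained-smear pixel colors in a 2-D color space:
# many background/RBC pixels vs. a minority of stained-parasite pixels.
background = rng.normal([150.0, 120.0], 5.0, size=(500, 2))
parasite = rng.normal([90.0, 160.0], 5.0, size=(60, 2))
pixels = np.vstack([background, parasite])

# Adaptive color segmentation: fit a 2-component GMM, then call the
# component with the smaller prior weight the parasite class.
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
parasite_comp = int(np.argmin(gmm.weights_))
labels = gmm.predict(pixels)
n_parasite_pixels = int(np.sum(labels == parasite_comp))
```

The subsequent steps of the paper — splitting occluded RBCs via the distance transform and local maxima, then grading parasitemia — would operate on the per-cell masks derived from this pixel labeling.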
Article
Full-text available
Purpose: Due to the large quantity of data recorded in energy-efficient buildings, understanding the behavior of the various underlying operations has become a complex and challenging task. This paper proposes a method to support analysis of energy systems and validates it using operational data from a cold water chiller. The method automatically detects various operation patterns in the energy system. Methods: k-means clustering is proposed to automatically identify the On (operational) cycles of a system operating with a duty cycle. The On-cycle data is subsequently transformed to symbolic representations using the symbolic aggregate approximation (SAX) method. Afterward, the symbols are converted to a bag-of-words representation (BoWR) for hierarchical clustering, and a gap statistic method is used to find the best number of clusters in the data. Finally, the operation patterns of the energy system are grouped together within each cluster. An adsorption chiller operating under real-life conditions supplies the reference data for validation. Results: The proposed method was compared with the dynamic time warping (DTW) method using cophenetic coefficients, and BoWR produced better results than DTW. The BoWR results were further investigated, using gap statistics to find the optimal number of clusters, and interesting patterns of each cluster are discussed in detail. Conclusion: The main goal of this work is to provide analysis algorithms that automatically find the various patterns in a building's energy system using as little configuration or field knowledge as possible. A bag-of-words representation method with hierarchical clustering has been proposed to assess the performance of a building energy system.
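The first step of the workflow — identifying On cycles with k-means — can be sketched on a one-dimensional power signal: cluster the samples with k=2, take the cluster with the higher center as "On", and count runs of consecutive On samples as cycles. The signal below is a synthetic stand-in for the chiller's monitoring data; the SAX and bag-of-words stages that follow in the paper are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Synthetic chiller power signal: alternating Off (~0.5 kW) and
# On (~8 kW) cycles.
off = lambda n: rng.normal(0.5, 0.1, n)
on = lambda n: rng.normal(8.0, 0.4, n)
power = np.concatenate([off(40), on(60), off(30), on(50), off(20)])

# k-means with k=2 on the 1-D signal separates On from Off samples
# automatically, with no hand-tuned threshold.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(power.reshape(-1, 1))
on_cluster = int(np.argmax(km.cluster_centers_))
is_on = km.labels_ == on_cluster

# Count On cycles as runs of consecutive On samples.
edges = np.diff(is_on.astype(int))
n_on_cycles = int(np.sum(edges == 1) + (1 if is_on[0] else 0))
```

Each detected On run would then be discretized with SAX and summarized as a bag of symbol words before hierarchical clustering of the cycles.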
Article
Full-text available
Purpose: Histopathology is the clinical standard for tissue diagnosis; however, it requires tissue processing, laboratory personnel and infrastructure, and a highly trained pathologist to diagnose the tissue. Optical microscopy can provide real-time diagnosis, which could be used to inform the management of breast cancer. The goal of this work is to obtain images of tissue morphology through fluorescence microscopy and vital fluorescent stains and to develop a strategy to segment and quantify breast tissue features in order to enable automated tissue diagnosis. Methods: We combined acriflavine staining, fluorescence microscopy, and a technique called sparse component analysis to segment nuclei and nucleoli, which are collectively referred to as acriflavine positive features (APFs). A series of variables, which included the density, area fraction, diameter, and spacing of APFs, were quantified from images taken from clinical core needle breast biopsies and used to create a multivariate classification model. The model was developed using a training data set and validated using an independent testing data set. Results: The top performing classification model included the density and area fraction of smaller APFs (those less than 7 µm in diameter, which likely correspond to stained nucleoli). When applied to the independent testing set composed of 25 biopsy panels, the model achieved a sensitivity of 82%, a specificity of 79%, and an overall accuracy of 80%. Conclusions: These results indicate that our quantitative microscopy toolbox is a potentially viable approach for detecting the presence of malignancy in clinical core needle breast biopsies.
Conference Paper
Full-text available
To achieve energy efficiency in buildings, a lot of raw data is recorded during their operation. This recorded raw data is used for analyzing the performance of buildings and their components, e.g. Heating, Ventilation and Air-Conditioning (HVAC) systems. To save time and energy, the data's resilience must be ensured by detecting and replacing outliers (i.e. data samples that are not plausible) before detailed analysis. This paper discusses the steps involved in detecting outliers in data obtained from an absorption chiller using its On/Off state information, and proposes a method for automatic detection of the On/Off and/or Missing Data status of the chiller. The technique uses two-layer k-means clustering to detect the On/Off as well as Missing Data states of the chiller. After automatic detection of the chiller's On/Off cycle, an outlier detection method is proposed that applies Z-score normalization based on the chiller's On/Off cycle state and clusters outliers with the expectation-maximization clustering algorithm. Moreover, the results of filling the missing values with regression and linear interpolation for short and long periods are elaborated. All proposed methods are applied to real building data and the results are discussed.
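The Z-score step described above can be sketched in a few lines: normalize the readings of one operating state, flag samples whose Z-score magnitude exceeds a threshold, and fill the resulting short gaps by linear interpolation. The threshold of 3 and the synthetic readings are illustrative assumptions; the paper additionally clusters the flagged outliers with expectation maximization, which is omitted here.

```python
import numpy as np

def zscore_outliers(x, threshold=3.0):
    """Flag samples whose Z-score magnitude exceeds the threshold."""
    z = (x - x.mean()) / x.std()
    return np.abs(z) > threshold

# Synthetic On-cycle chiller readings with two implausible spikes.
rng = np.random.default_rng(4)
readings = rng.normal(8.0, 0.3, 200)
readings[[50, 120]] = [25.0, -4.0]

mask = zscore_outliers(readings)
cleaned = readings.copy()
# Fill the short gaps by linear interpolation over the flagged samples.
idx = np.arange(len(readings))
cleaned[mask] = np.interp(idx[mask], idx[~mask], readings[~mask])
```

Computing Z-scores separately per On/Off state, as the paper does, matters because pooling the two regimes would inflate the standard deviation and hide genuine outliers.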
Article
Full-text available
This paper investigates the performance of a machine learning tool, the Naive Bayes classifier, with a new weighted approach to classifying breast cancer. Naive Bayes is one of the most effective classification algorithms, but in many decision-making systems ranking performance is a more interesting and desirable criterion than classification alone. To extend traditional Naive Bayes and improve its performance, a weighting concept is incorporated: domain-knowledge-based weight assignment is explored on the UCI machine learning repository breast cancer dataset. Breast cancer is considered the second leading cause of death in women today. The experiments show that the weighted Naive Bayes approach outperforms standard Naive Bayes.
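One common way to "weight" Naive Bayes is to scale each feature's log-likelihood contribution by a per-feature weight; the abstract does not specify the paper's exact scheme, so the implementation below is an assumed variant for illustration, run on synthetic two-feature data rather than the UCI table.

```python
import numpy as np

class WeightedGaussianNB:
    """Gaussian Naive Bayes where each feature's log-likelihood is scaled
    by a domain-knowledge weight (one common weighting scheme; the
    paper's exact formulation may differ)."""
    def fit(self, X, y, weights):
        self.w = np.asarray(weights, float)
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        self.var = np.array([X[y == c].var(0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log P(c|x) ∝ log P(c) + sum_i w_i * log N(x_i; mu_ci, var_ci)
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None])
        scores = np.log(self.prior)[None] + (self.w * ll).sum(-1)
        return self.classes[np.argmax(scores, axis=1)]

rng = np.random.default_rng(5)
# Synthetic two-class data: feature 0 is informative, feature 1 is noise.
X = np.vstack([rng.normal([0, 0], 1, (100, 2)),
               rng.normal([3, 0], 1, (100, 2))])
y = np.repeat([0, 1], 100)
clf = WeightedGaussianNB().fit(X, y, weights=[1.0, 0.1])  # downweight noise
acc = float(np.mean(clf.predict(X) == y))
```

Setting all weights to 1.0 recovers standard Gaussian Naive Bayes, which makes the weighted variant easy to compare against the baseline.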
Article
Full-text available
We present a workflow for data sanitation and analysis of operation data with the goal of increasing energy efficiency and reliability in the operation of building-related energy systems. The workflow makes use of machine learning algorithms and innovative visualizations. The environment, in which monitoring data for energy systems are created, requires low configuration effort for data analysis. Therefore the focus lies on methods that operate automatically and require little or no configuration. As a result a generic workflow is created that is applicable to various energy-related time series data; it starts with data accessibility, followed by automated detection of duty cycles where applicable. The detection of outliers in the data and the sanitation of gaps ensure that the data quality is sufficient for an analysis by domain experts, in our case the analysis of system energy efficiency. To prove the feasibility of the approach, the sanitation and analysis workflow is implemented and applied to the recorded data of a solar driven adsorption chiller.
Article
Full-text available
Application of personalized medicine requires integration of different data to determine each patient’s unique clinical constitution. The automated analysis of medical data is a growing field where different machine learning techniques are used to minimize the time-consuming task of manual analysis. The evaluation, and often training, of automated classifiers requires manually labelled data as ground truth. In many cases such labelling is not perfect, either because of the data being ambiguous even for a trained expert or because of mistakes. Here we investigated the interobserver variability of image data comprising fluorescently stained circulating tumor cells and its effect on the performance of two automated classifiers, a random forest and a support vector machine. We found that uncertainty in annotation between observers limited the performance of the automated classifiers, especially when it was included in the test set on which classifier performance was measured. The random forest classifier turned out to be resilient to uncertainty in the training data while the support vector machine’s performance is highly dependent on the amount of uncertainty in the training data. We finally introduced the consensus data set as a possible solution for evaluation of automated classifiers that minimizes the penalty of interobserver variability.
Article
Full-text available
A framework for automated detection and classification of cancer from microscopic biopsy images using clinically significant and biologically interpretable features is proposed and examined. The stages involved in the proposed methodology include enhancement of microscopic images, segmentation of background cells, feature extraction, and finally classification. An appropriate and efficient method is employed in each design step after a comparative analysis of commonly used methods in each category. For highlighting the details of tissue and structures, the contrast-limited adaptive histogram equalization approach is used. For the segmentation of background cells, the k-means segmentation algorithm is used because it performs better than other commonly used segmentation methods. In the feature extraction phase, various biologically interpretable and clinically significant shape- and morphology-based features are extracted from the segmented images, including gray-level texture features, color-based features, color gray-level texture features, Law's Texture Energy features, Tamura's features, and wavelet features. Finally, the K-nearest-neighbor method is used to classify images into normal and cancerous categories because it performs better than other commonly used methods for this application. The performance of the proposed framework is evaluated using well-known parameters for four fundamental tissues (connective, epithelial, muscular, and nervous) on 1000 randomly selected microscopic biopsy images.
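The final stage of the framework — K-nearest-neighbor classification of the extracted feature vectors into normal vs. cancerous — can be sketched with scikit-learn. Synthetic vectors stand in for the texture/color/Tamura/wavelet descriptors, and k=5 is an illustrative choice rather than the paper's tuned value.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
# Synthetic stand-ins for the extracted descriptors (gray-level texture,
# color, Tamura, wavelet features), one flattened vector per biopsy image.
normal = rng.normal(0.0, 1.0, (120, 20))
cancer = rng.normal(1.2, 1.0, (120, 20))
X = np.vstack([normal, cancer])
y = np.repeat([0, 1], 120)

# K-nearest-neighbor classification into normal vs. cancerous categories,
# scored with 5-fold cross-validation.
knn = KNeighborsClassifier(n_neighbors=5)
mean_acc = cross_val_score(knn, X, y, cv=5).mean()
```

In practice the feature scales differ wildly across descriptor families, so a standardization step before KNN (distance-based, hence scale-sensitive) is usually essential.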
Article
Full-text available
Health telematics is a growing field that is becoming a major improvement in patients' lives, especially for the elderly, disabled, and chronically ill. In recent years, improvements in information and communication technologies, along with the mobile Internet's anywhere, anytime connectivity, have played a key role in modern healthcare solutions. In this context, mobile health (m-Health) delivers healthcare services, overcoming geographical, temporal, and even organizational barriers. m-Health solutions address emerging problems in health services, including the increasing number of chronic diseases related to lifestyle, the high costs of existing national health services, the need to empower patients and families to self-care and handle their own healthcare, and the need to provide direct access to health services regardless of time and place. This paper presents a comprehensive review of the state of the art in m-Health services and applications. It surveys the most significant research work and presents a deep analysis of the top and novel m-Health services and applications proposed by industry. A discussion of the European Union and United States approaches to the m-Health paradigm, and of directives already published, is also included. Open and challenging issues in emerging m-Health solutions are proposed for further work. Copyright © 2015. Published by Elsevier Inc.
Article
Full-text available
Offline cursive script recognition and its associated issues remain fresh despite the last few decades of research. This paper presents an annotated comparison of proposed and recently published preprocessing techniques against reported work in offline cursive script recognition. Normally, in offline script analysis, the input is a paper image, a word, or a digit, and the desired output is ASCII text. This task involves several preprocessing steps, some of them quite hard, such as line removal from text, skew removal, reference line detection (lower/upper baselines), slant removal, scaling, noise elimination, contour smoothing, and skeletonization. Moreover, the subsequent stages of segmentation (if any) and recognition are highly dependent on these preprocessing techniques. This paper presents an analysis and annotated comparison of the latest preprocessing techniques proposed by the authors with those reported in the literature on the IAM/CEDAR benchmark databases. Finally, future work and persistent problems are highlighted.
Article
Full-text available
This paper presents a new, simple, and fast approach for character segmentation of unconstrained handwritten words. The proposed approach first seeks possible character boundaries based on an analysis of the characters' geometric features. However, due to inherent ambiguity and a lack of context, a few characters are over-segmented. To increase the efficiency of the proposed approach, an artificial neural network is trained with a significant number of valid segmentation points for cursive handwritten words. The trained neural network extracts incorrectly segmented points efficiently and at high speed. For fair comparison, the benchmark database CEDAR is used. The experimental results are promising from both complexity and accuracy points of view.
Article
Full-text available
Cloud computing is a powerful technology to perform massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Massive growth in the scale of data or big data generated through cloud computing has been observed. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The rise of big data in cloud computing is reviewed in this study. The definition, characteristics, and classification of big data along with some discussions on cloud computing are introduced. The relationship between big data and cloud computing, big data storage systems, and Hadoop technology are also discussed. Furthermore, research challenges are investigated, with focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Lastly, open research issues that require substantial research efforts are summarized.
Article
Full-text available
Medical images have made a great impact on medicine, diagnosis, and treatment, and the most important part of their processing is image segmentation. This paper describes the latest segmentation methods applied in medical image analysis. The advantages and disadvantages of each method are described, and each algorithm is examined with its application in Magnetic Resonance Imaging and Computed Tomography image analysis. Each algorithm is explained separately with its abilities and features for the analysis of grey-level images. To evaluate the segmentation results, some popular benchmark measurements are presented in the final section.
Article
Full-text available
Mobile Cloud Computing (MCC) is aimed at integrating mobile devices with cloud computing. It is one of the most important concepts that have emerged in the last few years. Mobile devices, in the traditional agent-client architecture of MCC, only utilize resources in the cloud to enhance their functionalities. However, modern mobile devices have many more resources than before. As a result, researchers have begun to consider the possibility of mobile devices themselves sharing resources. This is called the cooperation-based architecture of MCC. Resource discovery is one of the most important issues that need to be solved to achieve this goal. Most of the existing work on resource discovery has adopted a fixed choice of centralized or flooding strategies. Many improved versions of energy-efficient methods based on both strategies have been proposed by researchers due to the limited battery life of mobile devices. This paper proposes a novel adaptive method of resource discovery from a different point of view to distinguish it from existing work. The proposed method automatically transforms between centralized and flooding strategies to save energy according to different network environments. Theoretical models of both energy consumption and the quality of response information are presented in this paper. A heuristic algorithm was also designed to implement the new adaptive method of resource discovery. The results from simulations demonstrated the effectiveness of the strategy and the efficiency of the proposed heuristic method.
Article
Visual inspection for the quantification of malaria parasitaemia (MP) and classification of its life cycle stage is hard and time-consuming. Although automated techniques for the quantification of MP and its classification have been reported in the literature, they are either imperfect or cannot deal with special issues such as anemia and hemoglobinopathies caused by clumps of red blood cells (RBCs). The focus of the current work is to examine thin blood smear microscopic images stained with Giemsa using digital image processing techniques, grading MP on independent factors (RBC morphology) and classifying its life cycle stage. For the classification of the malaria parasite's life cycle, the k-nearest neighbor, Naïve Bayes, and multi-class support vector machine classifiers are employed, based on histograms of oriented gradients and local binary pattern features. The proposed methodology is based on an inductive technique that segments malaria parasites through adaptive machine learning techniques. The quantification accuracy of RBCs is enhanced, and RBC clumps are split by analyzing concavity regions for focal points. Further, infected and non-infected RBCs are classified to grade MP precisely. Training and testing of the proposed approach on a benchmark dataset with respect to ground truth data yield 96.75% MP sensitivity and 94.59% specificity. Additionally, the proposed approach addresses the process with independent factors (RBC morphology). Finally, it is an economical solution for MP grading in large-scale testing.
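Of the two feature families mentioned, local binary patterns are the simpler to sketch. The following is a minimal, non-interpolated 3 × 3 LBP in NumPy, producing the kind of histogram feature vector a k-nearest-neighbor or SVM classifier could consume; it is illustrative only, not the paper's implementation:

```python
import numpy as np

def lbp_3x3(image):
    """Basic 8-neighbour local binary pattern (radius 1, no interpolation)."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    center = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(int) << bit
    return codes

def lbp_histogram(image, bins=256):
    """Normalized histogram of LBP codes, usable as a texture feature vector."""
    codes = lbp_3x3(image)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Each interior pixel is encoded by which of its eight neighbours are at least as bright as it; the normalized code histogram is the texture descriptor.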
Article
Glaucoma is a neurodegenerative illness and is considered one of the most common causes of blindness. Nerve degeneration is an irreversible process, so diagnosing the illness at an early stage is an absolute requirement to avoid permanent loss of vision. Glaucoma is caused mainly by increased intraocular pressure; if it is not detected and treated early, it can result in blindness. Glaucoma does not always show obvious symptoms, so patients often seek treatment only when the disease is considerably advanced. Diagnosis of glaucoma often consists of reviewing the structural deterioration of the nerve in conjunction with an examination of visual function. This article presents a thorough illustration of glaucoma, its symptoms, and the people prone to this disease. Its essence is the different classification methods used and proposed by various researchers for the detection of glaucoma. The article also reviews several segmentation methodologies that are very useful for the identification, detection, and diagnosis of glaucoma. Research related to the findings and treatment is likewise evaluated.
Article
Identifying abnormality using breast mammography is a challenging task for radiologists due to its nature. A more consistent and precise imaging-based CAD system plays a vital role in the classification of doubtful breast masses. In the proposed CAD system, pre-processing is performed to suppress noise in the mammographic image. Segmentation then locates the tumor in mammograms using a cascade of Fuzzy C-Means (FCM) and the region-growing (RG) algorithm, called FCMRG. The feature extraction step identifies important and distinct elements using the Local Binary Pattern Gray-Level Co-occurrence Matrix (LBP-GLCM) and Local Phase Quantization (LPQ); hybrid features are obtained by combining these techniques. The mRMR algorithm is employed to choose suitable features from the individual and hybrid feature sets. The selected feature sets are analysed with various machine learning procedures to isolate malignant tumors from benign ones. The classifiers are evaluated on 109 and 72 images of the MIAS and DDSM databases respectively using 10-fold cross-validation. An enhanced classification accuracy of 98.2% is achieved for the MIAS dataset using hybrid features classified by a Decision Tree, whereas 95.8% accuracy is obtained for the DDSM dataset using a KNN classifier applied to LPQ features.
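The GLCM component of the hybrid features can be illustrated in a few lines. The sketch below builds a normalized co-occurrence matrix for one displacement and derives two textbook texture descriptors, contrast and energy; the paper's actual LBP-GLCM pipeline is more involved:

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel displacement."""
    img = np.asarray(image)
    h, w = img.shape
    mat = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            mat[img[y, x], img[y + dy, x + dx]] += 1  # count the (i, j) pair
    return mat / mat.sum()

def glcm_contrast(p):
    """Contrast: large when co-occurring gray levels differ strongly."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def glcm_energy(p):
    """Energy (angular second moment): large for uniform textures."""
    return float((p ** 2).sum())
```

On a small quantized image these descriptors form part of a feature vector that a classifier such as KNN or a Decision Tree can consume.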
Article
Background: Knee bone diseases are rare but can be highly destructive. Magnetic Resonance Imaging (MRI) is the main approach to identifying knee cancer and guiding its treatment. Normally, knee cancers are detected with the help of different MRI acquisition techniques, and image analysis strategies then assess these images. Discussion: Computer-based medical image analysis is attracting researchers' interest due to its advantages in speed and accuracy compared to traditional techniques. The focus of the current research is MRI-based medical image analysis for knee bone disease detection. Accordingly, several approaches to feature extraction and segmentation for knee bone cancer are analyzed and compared on a benchmark database. Conclusion: Finally, the current state of the art is investigated and future directions are proposed.
Article
Malaria parasitemia diagnosis and grading are hard and still far from perfect. Inaccurate diagnosis and grading have caused a tremendous death rate, particularly in young children worldwide. The current research thoroughly reviews automated malaria parasitemia diagnosis and grading in thin blood smear digital images through image analysis and computer vision based techniques. The state of the art reveals that currently proposed practices present partial or morphology-dependent solutions to the problem of computer vision based microscopy diagnosis of malaria parasitemia. Accordingly, a deep appraisal of current practices is investigated, compared, and analyzed on benchmark datasets. The open gaps are highlighted, and future directions are laid down for a completely automated microscopy diagnosis of malaria parasitemia based on factors that are not affected by other diseases. Moreover, a general computer vision framework to perform malaria parasitemia estimation and grading is constructed. Finally, remaining problems are highlighted and possible directions are suggested.
Article
Splitting rouleaux RBCs (chains of RBCs) into single RBCs and their further subdivision is a challenging area in computer-assisted diagnosis of blood, with applications in complete blood count, anemia, leukemia, and malaria tests. Several automated techniques are reported in the state of the art for this task, but they face either under- or over-splitting problems. The current research presents a novel approach to split rouleaux RBCs precisely, which are frequently observed in thin blood smear images. Accordingly, this research addresses the rouleaux splitting problem in a realistic, efficient, and automated way by considering the distance transform and local maxima of the rouleaux RBCs. Rouleaux RBCs are split by taking their local maxima as the centres of circles drawn by the mid-point circle algorithm. The resulting circles are further mapped onto the single RBCs in the rouleaux to preserve their original shape at high accuracy. The results of the proposed approach on a standard dataset are analyzed statistically against ground truth with visual inspection, achieving an average recall of 0.059, an average precision of 0.067, and an F-measure of 0.063.
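The circle-drawing step can be sketched concretely. Below is a standard mid-point (Bresenham) circle algorithm that rasterizes integer circle points around a given centre, such as a local maximum of the distance transform; it is a generic implementation, not the paper's code:

```python
def midpoint_circle(cx, cy, r):
    """Integer points of a circle of radius r about (cx, cy),
    via the mid-point (Bresenham) circle algorithm."""
    points = set()
    x, y, d = 0, r, 1 - r
    while x <= y:
        # reflect the first-octant point (x, y) into all eight octants
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((cx + px, cy + py))
        if d < 0:
            d += 2 * x + 3
        else:
            d += 2 * (x - y) + 5
            y -= 1
        x += 1
    return points
```

Drawing one such circle per detected local maximum, with radius taken from the distance-transform value there, yields the per-cell boundaries used to separate the chain.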
Article
Arabic writer identification and associated tasks are still fresh areas due to the huge variety of Arabic writers' styles. The current research presents a fusion of statistical features, extracted from fragments of Arabic handwriting samples, to identify the writer using a fuzzy ARTMAP classifier. Fuzzy ARTMAP is a supervised neural model especially suited to classification problems; it is faster to train and needs fewer training epochs to "learn" from input data for generalization. The extracted features are fed to fuzzy ARTMAP for training and testing. Fuzzy ARTMAP is employed for the first time along with a novel fusion of statistical features for Arabic writer identification. The entire IFN/ENIT database is used in experiments, such that 75% of handwritten Arabic words from 411 writers are employed for training and 25% for testing, selected at random. Several combinations of extracted features are tested using the fuzzy ARTMAP classifier, and finally one combination exhibited a promising accuracy of 94.724% for Arabic writer identification on the IFN/ENIT benchmark database.
Article
Building Your Next Big Thing with Google Cloud Platform shows you how to take advantage of the Google Cloud Platform technologies to build all kinds of cloud-hosted software and services for both public and private consumption. Whether you need a simple virtual server to run your legacy application or you need to architect a sophisticated high-traffic web application, Cloud Platform provides all the tools and products required to create innovative applications and a robust infrastructure to manage them. Google is known for the scalability, reliability, and efficiency of its various online products, from Google Search to Gmail. And, the results are impressive. Google Search, for example, returns results literally within fractions of a second. How is this possible? Google custom-builds both hardware and software, including servers, switches, networks, data centers, the operating system’s stack, application frameworks, applications, and APIs. Have you ever imagined what you could build if you were able to tap the same infrastructure that Google uses to create and manage its products? Now you can! Using this book as your compass, you can navigate your way through the Google Cloud Platform and turn your ideas into reality. The authors, both Google Developer Experts in Google Cloud Platform, systematically introduce various Cloud Platform products one at a time and discuss their strengths and scenarios where they are a suitable fit.
But rather than a manual-like "tell all" approach, the emphasis is on how to get things done so that you get up to speed with Google Cloud Platform as quickly as possible. You will learn how to use the following technologies, among others:
• Google Compute Engine
• Google App Engine
• Google Container Engine
• Google App Engine Managed VMs
• Google Cloud SQL
• Google Cloud Storage
• Google Cloud Datastore
• Google BigQuery
• Google Cloud Dataflow
• Google Cloud DNS
• Google Cloud Pub/Sub
• Google Cloud Endpoints
• Google Cloud Deployment Manager
• Author on Google Cloud Platform
• Google APIs and Translate API
Using real-world examples, the authors first walk you through the basics of cloud computing, cloud terminologies, and public cloud services. Then they dive right into Google Cloud Platform and how you can use it to tackle your challenges, build new products, analyze big data, and much more. Whether you're an independent developer, a startup, or a Fortune 500 company, you have never had easier access to world-class production, product development, and infrastructure tools. Google Cloud Platform is your ticket to leveraging your skills and knowledge into making reliable, scalable, and efficient products, just like how Google builds its own products.
Article
Early screening of suspicious masses or breast carcinomas in mammograms is expected to reduce the mortality rate among women. This rate can be decreased further by developing computer-aided diagnosis that reduces false assumptions in medical informatics. Our aim is to provide a robust tumor detection system for accurate classification of breast masses into normal, abnormal, benign, or malignant classes. The breast carcinomas are classified on the basis of the observed classes, which is highly dependent on the feature extraction process. In the proposed work, a novel classification algorithm is presented based on the combination of the top-hat transformation and the gray-level co-occurrence matrix with a back-propagation neural network. The aim of this study is to present a robust classification model for automated diagnosis of breast tumors with a reduction of false assumptions in medical informatics. The proposed method is verified on two datasets, MIAS and DDSM. It is observed that the proposed method decreases the false-positive rate, efficiently improving classification performance.
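The top-hat transformation at the core of the feature pipeline is the image minus its morphological opening, which isolates bright structures smaller than the structuring element. A naive, pure-NumPy white top-hat with a flat square element (illustrative only; real CAD systems use optimized morphology routines):

```python
import numpy as np

def _sliding(img, size, func):
    """Apply func (np.min or np.max) over a size x size window at each pixel."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = func(padded[y:y + size, x:x + size])
    return out

def white_tophat(img, size=3):
    """White top-hat: image minus its opening (erosion followed by dilation).
    Highlights bright details smaller than the structuring element."""
    img = np.asarray(img, dtype=float)
    eroded = _sliding(img, size, np.min)   # grayscale erosion
    opened = _sliding(eroded, size, np.max)  # grayscale dilation of the erosion
    return img - opened
```

A constant background maps to zero, while an isolated bright spot smaller than the element survives, which is why the transform enhances small mass candidates before texture features are computed.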
Article
Knee bone diseases are rare but can be highly destructive. Magnetic resonance imaging (MRI) is the main approach to identifying knee cancer and guiding its treatment. Normally, knee cancers are identified with the help of different MRI acquisition techniques, and image analysis strategies then interpret these images. Computer-based medical image analysis is attracting researchers' interest due to its advantages in speed and accuracy compared to traditional techniques. The focus of the current research is MRI-based medical image analysis for knee bone disease detection. Accordingly, several approaches to feature extraction and segmentation for knee bone cancer are analyzed and compared on a benchmark database. Finally, the current state of the art is investigated and future directions are proposed.
Article
This paper presents an ear biometric approach to classify humans. Accordingly, an improved local feature extraction technique based on ear region features is proposed: the ear image is segmented into certain regions, and an eigenvector is extracted from each region. The extracted features are normalized and fed to a trained neural network. To benchmark the results, the benchmark database from the University of Science and Technology Beijing (USTB) is employed, which has mutually exclusive sets for training and testing. Promising results are achieved that are comparable to the state of the art. However, a few region features exhibited low accuracy, which will be addressed in subsequent research.
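The region-wise eigenvector idea can be sketched as follows. This hypothetical version splits the image into a grid, takes the dominant eigenvector of each region's column covariance as the local feature, and L2-normalizes the concatenation; the paper's exact segmentation and normalization are not specified here:

```python
import numpy as np

def region_eigen_features(image, grid=(2, 2)):
    """Illustrative region-wise eigen-feature extraction: for each grid region,
    keep the eigenvector of the largest eigenvalue of its column covariance,
    then L2-normalize the concatenated feature vector."""
    img = np.asarray(image, dtype=float)
    feats = []
    for band in np.array_split(img, grid[0], axis=0):
        for region in np.array_split(band, grid[1], axis=1):
            cov = np.atleast_2d(np.cov(region, rowvar=False))
            vals, vecs = np.linalg.eigh(cov)
            feats.append(vecs[:, -1])  # eigenvector of the largest eigenvalue
    feat = np.concatenate(feats)
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat
```

The resulting fixed-length unit vector is the sort of input a small feed-forward neural network could be trained on.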
Article
Extraction of the breast border and simultaneous exclusion of the pectoral muscle are principal steps in diagnosing breast cancer from mammogram data. The objective of the proposed method is to classify the mediolateral oblique fragment of the pectoral muscle. The extraction of the breast region is performed using multilevel wavelet decomposition of mammogram images. Moreover, artifact suppression and pectoral muscle detection are carried out by morphological operators. Efficient extraction with higher accuracy is validated on a set of 322 digital images taken from the MIAS dataset.
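One level of a wavelet decomposition of the kind used here can be sketched with the Haar basis: averaging and differencing 2 × 2 pixel blocks yields an approximation sub-band plus three detail sub-bands, and repeating on the approximation gives the multilevel decomposition. A minimal sketch (averaging variant, even image dimensions assumed; the paper does not state which wavelet it uses):

```python
import numpy as np

def haar_decompose(img):
    """One level of a 2-D Haar wavelet transform: returns the approximation
    (LL) and the three detail sub-bands (LH, HL, HH), each half-size."""
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]  # top-left of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 4.0  # local average (approximation)
    lh = (a - b + c - d) / 4.0  # horizontal detail
    hl = (a + b - c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh
```

Applying `haar_decompose` repeatedly to the LL band gives a coarse, low-resolution view of the mammogram from which the breast region is easier to isolate.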
Chapter
This chapter provides an introduction to Google App Engine concepts. We’ll look at development methodology and how App Engine implements cloud computing concepts by providing both development and runtime services.
Article
A standard database is an essential requirement for comparing the performance of image analysis techniques. Hence, the main issue in dental image analysis is the lack of an available image database, which is provided in this paper. Periapical dental X-ray images, suitable for any analysis and approved by many dental experts, are collected. This type of dental radiograph imaging is common and inexpensive, and is normally used for dental disease diagnosis and abnormality detection. The database contains 120 various periapical X-ray images of the upper and lower jaws. This dental digital database is constructed to provide a source for researchers to use and compare image analysis techniques and to improve the performance of each technique.
Chapter
An analysis of the genes that make up the DRβ region will extend our information about the complexity of the HLA class II system, since the serologically detectable polymorphism is located mainly on the DRβ chain. Probably this analysis will be most informative if we look at the 5′ ends of the genes coding for the variable stretches of a DRβ chain [1].
Chapter
Google Compute Engine is an infrastructure service provided as part of the Google Cloud Platform. Compute Engine is made up of three major components: virtual machines, persistent disks, and networks. It is available at several Google datacenters worldwide and is provided exclusively on an on-demand basis. This means Compute Engine neither charges any upfront fees for deployment nor locks down the customer. At the same time, Compute Engine provides steep discounts for sustained use. These sustained-use discounts, without any upfront subscription fees, are an industry first; this pricing innovation shows that Google innovates outside of technology as well.
Chapter
Microsoft Office 365 is a subscription-based version of Microsoft Office 2013. Unlike any of the traditional Office suites such as Office 2010, Office 365 allows you to install Office applications on up to five different PCs and Macs, plus five mobile devices. With an Office 365 subscription, you can upgrade to the latest versions of Office for free.
Article
The image fusion process consolidates data and information from various images of the same scene into a single image. Each of the source images may represent a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to join the source images into a single compact image containing a more accurate description of the scene than any of the individual source images, with the best possible quality and without distortion or loss of data. The DCT algorithm is considered efficient in image fusion. The proposed scheme is performed in five steps: (1) the RGB colour input image is split into its R, G, and B channels for the source images; (2) the DCT algorithm is applied to each channel; (3) variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) corresponding blocks of the source images are compared based on their variance values, and the block with the maximum variance is selected as the block in the new image, a process repeated for all channels; (5) the inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and the channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects, such as blurring or blocking artifacts, that reduce the quality of the fused image. The proposed approach is evaluated using three measurements: the average of Q(abf), the standard deviation, and the peak signal-to-noise ratio. The experimental results of the proposed technique are good compared with older techniques. Microsc. Res. Tech., 2016. © 2016 Wiley Periodicals, Inc.
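Steps (2)-(4) above can be sketched for a single channel pair. The helper below builds the DCT-II basis matrix, compares the variance of DCT coefficients block by block, and keeps the higher-variance source block; since whole blocks are selected, copying the spatial block directly is equivalent to selecting its coefficients and applying the inverse DCT as in step (5). Function names and the tie-breaking rule are our own choices:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)  # DC row has a flat weight
    return c

def fuse_channel(ch_a, ch_b, block=8):
    """Fuse two equal-size channels block-wise: for each 8x8 block, keep the
    source block whose DCT coefficients have the larger variance."""
    C = dct_matrix(block)
    out = np.empty_like(ch_a, dtype=float)
    h, w = ch_a.shape  # assumed to be multiples of `block`
    for y in range(0, h, block):
        for x in range(0, w, block):
            ba = ch_a[y:y + block, x:x + block].astype(float)
            bb = ch_b[y:y + block, x:x + block].astype(float)
            va = (C @ ba @ C.T).var()  # variance of 2-D DCT coefficients
            vb = (C @ bb @ C.T).var()
            out[y:y + block, x:x + block] = ba if va >= vb else bb
    return out
```

Running `fuse_channel` once per colour channel and stacking the results reproduces the overall R/G/B fusion flow of the five-step scheme.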