Cloud-based decision support system for the detection and classification of malignant cells in breast cancer using breast cytology images

Article in Microscopy Research and Technique 82(10) · January 2019
  • ... Experiments were conducted on the IFN/ENIT database and an accuracy of 94.72% was reported. Given the complexities of Arabic cursive handwriting (styles, speed, touching, overlapping, and other inherent properties), a pre-processing strategy is needed alongside intelligent techniques such as neural networks [33][34][35][36][37], GA [38][39][40], and SVM [41][42][43] to enhance accuracy. However, detecting character boundaries in touching and overlapping consecutive characters makes this issue even more challenging. ...
    Preprint
    Online Arabic cursive character recognition is still a big challenge due to existing complexities, including Arabic cursive script styles, writing speed, writer mood, and so forth. Due to these unavoidable constraints, the accuracy of online Arabic character recognition is still low and leaves room for improvement. In this research, an enhanced method is proposed for detecting the desired critical points from vertical and horizontal direction-length features of handwriting strokes in online Arabic script. Each extracted stroke feature divides every isolated character into meaningful patterns known as tokens. A minimal feature set is extracted from these tokens for classification of characters using a multilayer perceptron with a back-propagation learning algorithm and a modified sigmoid activation function. In this work, two milestones are achieved: first, attaining a fixed number of tokens; second, minimizing the number of the most repetitive tokens. For the experiments, handwritten Arabic characters are selected from the OHASD benchmark dataset to test and evaluate the proposed method. The proposed method achieves an average accuracy of 98.6%, comparable with state-of-the-art character recognition techniques.
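A minimal sketch of the classifier stage described above, assuming token feature vectors X and one-hot labels y as NumPy arrays; the "modified sigmoid" is approximated here by a slope-scaled sigmoid, and the slope parameter beta plus all layer sizes are assumptions, not the authors' settings.

```python
import numpy as np

def modified_sigmoid(z, beta=2.0):
    # assumed "modified" activation: a slope-scaled sigmoid
    return 1.0 / (1.0 + np.exp(-beta * z))

def train_mlp(X, y, hidden=32, lr=0.1, epochs=200, beta=2.0):
    """X: (n_samples, n_features); y: one-hot (n_samples, n_classes)."""
    rng = np.random.default_rng(0)
    W1 = rng.normal(0.0, 0.1, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.1, (hidden, y.shape[1]))
    for _ in range(epochs):
        h = modified_sigmoid(X @ W1, beta)       # hidden activations
        out = modified_sigmoid(h @ W2, beta)     # output activations
        err = y - out                            # output error
        # back-propagate through the derivative of the scaled sigmoid
        d_out = err * beta * out * (1.0 - out)
        d_hid = (d_out @ W2.T) * beta * h * (1.0 - h)
        W2 += lr * h.T @ d_out
        W1 += lr * X.T @ d_hid
    return W1, W2
```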
  • ... In the area of machine learning (ML), the performance of a system depends on the extraction of the most discriminative features [41][42][43][44][45]. In medical imaging, mostly color, texture, and shape features are used for the classification process [46][47][48][49][50], but these techniques do not perform well when the processed data is very large. ...
    Article
    Cancer has been one of the leading causes of death over the last two decades. It is diagnosed as either malignant or benign, depending upon the severity of the infection and the current stage. The conventional methods require a detailed physical inspection by an expert dermatologist, which is time-consuming and imprecise. Therefore, several computer vision methods have been introduced lately, which are cost-effective and reasonably accurate. In this work, we propose a new automated approach for skin lesion detection and recognition using a deep convolutional neural network (DCNN). The proposed cascaded design incorporates three fundamental steps: (a) contrast enhancement through fast local Laplacian filtering (FlLpF) along with HSV color transformation; (b) lesion boundary extraction using a color CNN approach followed by an XOR operation; (c) in-depth feature extraction by applying transfer learning using the Inception V3 model, prior to feature fusion using a hamming distance (HD) approach. An entropy controlled feature selection method is also introduced for the selection of the most discriminant features. The proposed method is tested on the PH2 and ISIC 2017 datasets, whereas the recognition phase is validated on the PH2, ISBI 2016, and ISBI 2017 datasets. From the results, it is concluded that the proposed method outperforms several existing methods, attaining accuracies of 98.4% on the PH2 dataset, 95.1% on the ISBI 2016 dataset, and 94.8% on the ISBI 2017 dataset.
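The entropy controlled feature selection step lends itself to a short illustration. The sketch below ranks fused deep-feature dimensions by Shannon entropy and keeps the top ones; the histogram binning and the number of kept features are assumptions, not the paper's parameters.

```python
import numpy as np

def entropy_select(features, keep=500, bins=32):
    """features: (n_samples, n_dims) array of fused deep features."""
    scores = np.empty(features.shape[1])
    for j in range(features.shape[1]):
        hist, _ = np.histogram(features[:, j], bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        scores[j] = -(p * np.log2(p)).sum()     # Shannon entropy per dimension
    top = np.argsort(scores)[::-1][:keep]       # keep the most informative dims
    return features[:, top], top
```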
  • ... To test the performance of the designed model, different brain tumor data sets have been used by researchers (Abbas et al., 2018b; Saba et al., 2019; Saba, 2017; Jamal et al., 2017; Mughal, Muhammad, et al., 2017a; Mughal, Muhammad, Sharif, Rehman, & Saba, 2018; Mughal, Sharif, et al., 2017b; Muhsin, Rehman, Altameem, Saba, & Uddin, 2014; Rad, Rahim, Rehman, Altameem, & Saba, 2013; Waheed et al., 2016). Such models include encoder-decoder models (Castillo et al., 2017; Saba, 2018; Saba, Rehman, & Sulong, 2011a; Sharif et al., 2017), multiple networks to extract different levels of features (Sedlar, 2017), models using convolutional networks with some postprocessing (Shaikh et al., 2017), and hybrid architectures with varying numbers of layers (Mundher, Muhamad, Rehman, Saba, & Kausar, 2014; Neamah, Mohamad, Saba, & Rehman, 2014; Saba & Alqahtani, 2013). ...
    Article
    Automatic and precise segmentation and classification of tumor areas in medical images is still a challenging task in medical research. Most conventional neural network based models use fully connected or convolutional neural networks to perform segmentation and classification. In this research, we present deep learning models using long short term memory (LSTM) and convolutional neural networks (ConvNet) for accurate brain tumor delineation from benchmark medical images. The two different models, that is, the ConvNet and LSTM networks, are trained on the same data set and combined to form an ensemble to improve the results. We used the publicly available MICCAI BRATS 2015 brain cancer data set consisting of MRI images of four modalities: T1, T2, T1c, and FLAIR. To enhance the quality of the input images, multiple combinations of preprocessing methods such as noise removal, histogram equalization, and edge enhancement are formulated and the best performing combination is applied. To cope with the class imbalance problem, class weighting is used in the proposed models. The trained models are tested on a validation data set taken from the same image set, and the results obtained from each model are reported. The individual score (accuracy) of the ConvNet is found to be 75%, whereas the LSTM based network produced 80% and the ensemble fusion 82.29% accuracy. ConvNet, LSTMNet, and their fusion are employed to improve the tumor segmentation of human brain MRI images taken from the publicly available multimodal MICCAI BRATS 2015 training data set. The ensemble network provided the best results.
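As an illustration of the ensemble step, here is a minimal sketch that fuses per-voxel class probabilities from the two networks by weighted averaging; the fusion rule and weight are assumptions, since the abstract does not specify how the outputs are combined.

```python
import numpy as np

def ensemble_predict(p_convnet, p_lstm, w=0.5):
    """p_*: (n_voxels, n_classes) softmax outputs of the two models."""
    p = w * p_convnet + (1.0 - w) * p_lstm   # weighted average of posteriors
    return p.argmax(axis=1)                  # fused per-voxel class labels
```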
  • ... These segmentation points are divided into correct and incorrect categories and stored in a training file. The data is preprocessed prior to its use for BPN training, as the BPN requires uniform data [39][40][41][42][43]. ...
    Preprint
    The cursive nature of multilingual characters makes segmentation and recognition of Arabic, Persian, and Urdu scripts an attractive problem for researchers from academia and industry. However, despite several decades of research, multilingual character classification accuracy is still not up to the mark. This paper presents an automated approach for multilingual character segmentation and recognition. The proposed methodology explores characters based on their geometric features. However, due to uncertainty and the absence of dictionary support, a few characters are over-segmented. To improve the performance of the proposed methodology, a BPN is trained with a large number of segmentation points for cursive multilingual characters. The trained BPN filters out incorrectly segmented points effectively, rapidly improving character recognition accuracy. For fair comparison, only a benchmark dataset is utilized.
  • ... A. Khan, Akram, Sharif, Awais, et al., 2018; M. A. Khan, Akram, Sharif, Shahzad, et al., 2018; Nasir et al., 2018; Rehman, Abbas, Saba, Mahmood, & Kolivand, 2018; Rehman, Abbas, Saba, Rahman, et al., 2018; Saba et al., 2019; Yousaf et al., 2019). The majority of previous research has focused on the early detection of lung cancer using texture-based interpretation of chest CTs. ...
    Article
    The emergence of cloud infrastructure has the potential to provide significant benefits in a variety of areas in the medical imaging field. The driving force behind the extensive use of cloud infrastructure for medical image processing is the exponential increase in the size of computed tomography (CT) and magnetic resonance imaging (MRI) data. The size of a single CT/MRI image has increased manifold since the inception of these imaging techniques. This demands the introduction of effective and efficient frameworks for extracting relevant and suitable information (features) from these sizeable images. As early detection of lung cancer can significantly increase a patient's chances of survival, an effective and efficient nodule detection system can play a vital role. In this article, we propose a novel framework for lung nodule classification with a low false positive rate (FPR), high accuracy and sensitivity, and low computational cost, using a small set of features while preserving edge and texture information. The proposed framework comprises multiple phases that include image contrast enhancement, segmentation, and feature extraction, followed by the employment of these features for training and testing a selected classifier. Image preprocessing and feature selection are the primary steps, playing a vital role in achieving improved classification accuracy. We have empirically tested the efficacy of our technique on the well-known Lung Image Database Consortium dataset. The results prove that the technique is highly effective for reducing FPRs, with an impressive sensitivity rate of 97.45%. Keywords: computed tomography, feature selection, lungs segmentation, pulmonary nodules, wavelet features
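Since the keywords point to wavelet features, the following is a minimal sketch of wavelet-energy descriptors for a candidate nodule ROI using PyWavelets; the wavelet family, level count, and statistics are assumptions rather than the paper's exact feature set.

```python
import numpy as np
import pywt

def wavelet_features(roi, wavelet="db2", levels=2):
    """roi: 2-D grayscale array of a candidate nodule region."""
    feats, current = [], roi.astype(float)
    for _ in range(levels):
        cA, (cH, cV, cD) = pywt.dwt2(current, wavelet)
        for band in (cH, cV, cD):
            feats.append(np.mean(np.abs(band)))  # mean absolute energy
            feats.append(band.std())             # detail-band variability
        current = cA                             # descend into approximation
    feats.extend([current.mean(), current.std()])
    return np.asarray(feats)
```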
  • ... Different error measures are used to evaluate their scheme's performance, in which their schemes show better accuracy than existing ones. A computer based data mining approach for medical science is presented in [91], in which the authors use shape based features of tumor cells to diagnose patients. A classification accuracy of 98% is reported through ANN and Naive Bayesian classifiers, compared with physical methods for the detection of malignant cells. ...
    Thesis
    Full-text available
    The energy demand is increasing every day due to the huge energy consumption of residential appliances. As energy demand is considerably higher than energy generation, a shortage of energy results. Industrial and commercial areas also consume large amounts of energy; however, residential energy demand is more flexible compared to the other two. Many existing schemes have been developed to fulfill consumer demand. Nowadays, many techniques are presented for scheduling residential appliances to reduce the Peak to Average Ratio (PAR) and consumer delay time. However, they do not consider the total cost of electricity and the consumer's waiting time. This critical issue is addressed by Demand Side Management (DSM) in the Smart Grid (SG), where the Smart Home (SH) is considered to manage it with many solutions. A Home Energy Management System (HEMS) is able to use different kinds of meta-heuristic techniques for scheduling home appliances. In this thesis, we reduce the cost through load shifting techniques. To achieve this objective, we employed some features of the Jaya Algorithm (JA) on the Bat Algorithm (BA) to develop the Candidate Solution Updation Algorithm (CSUA). Simulations were conducted to compare the results of the existing BA and JA for a single smart home with 15 smart appliances. We used Time of Use (ToU) and Critical Peak Price (CPP) for electricity cost estimation. The results depict the successful shifting of load from higher-price time slots to lower-price time slots, which brings about a reduction in electricity bills. In this thesis, we also propose another meta-heuristic algorithm, the Runner Updation Optimization Algorithm (RUOA), to arrange the consumption behavior of residential appliances. We compared the results of our scheme with the meta-heuristic Strawberry Algorithm (SBA) and Firefly Algorithm (FA). CPP and Real Time Price (RTP) are the two electricity pricing schemes used in this thesis to calculate the cost of electricity. The main objective of this thesis is to reduce the electricity cost and PAR; however, consumer comfort is not fully satisfied. Afterward, once the system performs efficient optimization on the consumption side, we move to the utility side to support electricity generation with better and more accurate price prediction. In the electricity market, the electricity price has complicated features such as high volatility, non-linearity and non-stationarity that make it very difficult to predict the price accurately. However, it is necessary for markets and companies to predict the electricity price accurately. In this thesis, we enhanced the forecasting accuracy through a combined approach of the Kernel Extreme Learning Machine (KELM) and the Autoregressive Moving Average (ARMA), along with the unique and enhanced features of both models. A wavelet transform is applied to the price series to decompose them; afterward, a test is performed on the decomposed series to provide the stationary series to the ARMA model and the non-stationary series to the KELM model. In the end, the series are tuned with our combined approach for enhanced price prediction. The performance of our enhanced combined method is evaluated on the electricity price dataset of the New South Wales (NSW), Australia market. The simulation results show that the combined method gives more accurate predictions than the individual methods.
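The citation context at the start of this entry attributes to [91] a classifier stage in which shape-based features of tumor cells are fed to an ANN and a Naive Bayes classifier. A minimal scikit-learn sketch of such a stage follows; the split ratio, hidden-layer size, and feature interface are assumptions.

```python
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

def classify_cells(shape_features, labels):
    """shape_features: (n_cells, n_features); labels: 0=benign, 1=malignant."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        shape_features, labels, test_size=0.3, random_state=0)
    ann = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000)
    nb = GaussianNB()
    ann.fit(X_tr, y_tr)
    nb.fit(X_tr, y_tr)
    # report held-out accuracy of both classifiers
    return ann.score(X_te, y_te), nb.score(X_te, y_te)
```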
  • ... The authors of [78] proposed a relevant data selection methodology for energy consumption prediction. In [79], the authors used two classifiers to train the data: ANN and Naive Bayesian classifiers are employed in a cloud based system. ...
    Thesis
    Full-text available
    Electricity is a basic demand of consumers, and with the passage of time the demand for electricity is increasing day by day. The smart grid (SG) has evolved to satisfy consumer demand. To shift electricity load from peak hours to off-peak hours, consumers need to control their appliances through a home energy management system (HEMS). The HEMS schedules the appliances according to customers' needs. Energy management using demand-side management (DSM) techniques plays an important role in the SG domain. Smart meters (SM) and energy management controllers (EMC) are important components of the SG. Intelligent energy optimization techniques play a vital role in reducing the electricity bill via scheduling home appliances. Through appliance scheduling, the consumer gets a feasible cost for consumed electricity. DSM provides the facility for consumers to schedule their appliances for the reduction of power prices and rebates in peak loads. The HEMS is allowed to remotely shut down appliances in emergency conditions through direct load control programs. Meta-heuristic algorithms have been used for optimizing user energy consumption in an efficient way. Electricity load forecasting plays a vital role in improving the use of energy by helping customers make decisions efficiently. Accurate load prediction is a challenging task because of randomness and noise disturbance. In this thesis, efficient algorithms are proposed to control the load in residential units. Our proposed schemes are used to minimize the user comfort delay time. Customers' waiting time is inversely proportional to the total cost and peak to average ratio (PAR). The aim of the current research is to manage the power of residential units in an optimized way and to predict the exact load. Simulation results show minimum user waiting time; however, the total cost is compromised due to the high load demand. In the end, our proposed schemes show better results through simulations. In this thesis, we propose new schemes to lower the electricity price, PAR and user discomfort on the electricity consumption side. The proposed schemes performed better than existing benchmark schemes. The proposed schemes use a real-time price (RTP) signal for calculating the electricity cost and PAR. Simulation results also show that the proposed algorithms meet the objectives of DSM. For prediction, the proposed scheme performs better than benchmark schemes and predicts the exact electricity load.
  • ... Giemsa basically stains nuclei, giving a blue tone to chromatin and staining RBCs in pink (Abbas et al., 2015). In several fields, it has been recorded that manual microscopy is not a dependable screening method when carried out by nonexperts (Rad, Rahim, Rehman, Altameem, & Saba, 2013; Saba et al., 2019; Soleimanizadeh, Mohamad, Saba, & Rehman, 2015; Ullah et al., 2019). ...
    Article
    Malaria is a serious worldwide disease, caused by the bite of a female Anopheles mosquito. The parasite passes through a complex life cycle in which it grows and reproduces in the human body. The detection and recognition of Plasmodium species are made possible and efficient through a staining process (Giemsa). The staining process slightly colorizes the red blood cells (RBCs) but highlights Plasmodium parasites, white blood cells and artifacts. Giemsa stains nuclei and chromatin in blue tones and RBCs in pink. It has been reported in numerous studies that manual microscopy is not a trustworthy screening technique when performed by nonexperts. Malaria parasites host in RBCs when they enter the bloodstream. This paper presents segmentation of the Plasmodium parasite from thin blood smears based on region growing and a dynamic convolution based filtering algorithm. After segmentation, malaria parasites are classified into four Plasmodium species: Plasmodium falciparum, Plasmodium ovale, Plasmodium vivax, and Plasmodium malariae. Random forest and K-nearest neighbor classifiers are used for classification based on local binary pattern and hue saturation value features. The sensitivity for malaria parasitemia (MP) is 96.75% in training and testing of the proposed approach, while the specificity is 94.59%. Besides this, a comparison of the two features is added to the proposed work, giving a sensitivity of 83.60% and a specificity of 94.90% through the random forest classifier based on the local binary pattern feature. The Plasmodium parasite is segmented from the thin blood smear through region growing and dynamic convolution based filtering algorithms. For classification, local binary pattern and hue saturation value features are extracted from the Plasmodium parasite and fed to the random forest and K-nearest neighbor classifiers.
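A minimal sketch of the feature and classification stage described above: LBP texture plus HSV color statistics from a segmented parasite patch, fed to a random forest (a K-NN could be swapped in the same way). Patch handling and all hyperparameters are assumptions.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def parasite_features(patch_rgb):
    """patch_rgb: (H, W, 3) float image of a segmented parasite."""
    gray = patch_rgb.mean(axis=2)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hsv = rgb2hsv(patch_rgb)
    hsv_stats = np.r_[hsv.mean(axis=(0, 1)), hsv.std(axis=(0, 1))]
    return np.r_[lbp_hist, hsv_stats]

# Usage: X = np.stack([parasite_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(X, species_labels)
```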
  • Article
    The number of patients diagnosed with melanoma is rising drastically, and the disease contributes many deaths annually among young people. Approximately 192,310 new cases of skin cancer were diagnosed in 2019, which shows the importance of automated systems for the diagnosis process. Accordingly, this article presents an automated method for skin lesion detection and recognition using pixel-based seed segmented image fusion and multilevel feature reduction. The proposed method involves four key steps: (a) a mean-based function is implemented and fed as input to top-hat and bottom-hat filters, which are later fused for contrast stretching; (b) seed region growing and graph-cut method based lesion segmentation, with both segmented lesions fused through pixel-based fusion; (c) multilevel features such as histogram of oriented gradients (HOG), speeded up robust features (SURF), and color are extracted and simply concatenated; and (d) finally, variance precise entropy based feature reduction and classification through an SVM with a cubic kernel function. Two different experiments are performed to evaluate this method. The segmentation performance is evaluated on PH2, ISBI 2016, and ISIC 2017 with accuracies of 95.86, 94.79, and 94.92%, respectively. The classification performance is evaluated on the PH2 and ISBI 2016 datasets with accuracies of 98.20 and 95.42%, respectively. The results of the proposed automated system are outstanding compared to the current techniques reported in the state of the art, which demonstrates the validity of the proposed method. The current research proposes a new automated system for skin lesion detection and recognition using pixel-based seed segmented image fusion and multilevel feature reduction. Finally, using an SVM with a cubic kernel function, skin lesions are classified.
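The final classification step (SVM with a cubic kernel) maps directly onto a degree-3 polynomial kernel. Below is a minimal sketch using HOG plus simple color statistics as the feature vector; SURF is omitted (it is not available in scikit-image), and the HOG parameters are assumptions.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from sklearn.svm import SVC

def lesion_vector(img_rgb):
    """img_rgb: (H, W, 3) lesion image; all inputs resized to one shape."""
    h = hog(rgb2gray(img_rgb), orientations=9,
            pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    color = np.r_[img_rgb.mean(axis=(0, 1)), img_rgb.std(axis=(0, 1))]
    return np.r_[h, color]      # simple concatenation of feature groups

svm = SVC(kernel="poly", degree=3)   # cubic kernel function
# svm.fit(np.stack([lesion_vector(i) for i in images]), labels)
```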
  • Article
    Lung cancer is the most common cause of cancer related death globally. Currently, lung nodule detection and classification are performed by radiologist-assisted computer-aided diagnosis systems. However, emerging artificial intelligence techniques such as neural networks, support vector machines, and HMMs have improved the detection and classification of cancer in any part of the human body. Such automated methods and their possible combinations could be used to assist radiologists in the early detection of lung nodules, which could reduce treatment cost and death rate. The literature reveals that classification based on the voting of classifiers exhibits better performance in the detection and classification process. Accordingly, this article presents an automated approach for lung nodule detection and classification that consists of multiple steps, including lesion enhancement, segmentation, and feature extraction from each candidate lesion. Moreover, multiple classifiers (logistic regression, multilayer perceptron, and voted perceptron) are tested for lung nodule classification using a k-fold cross-validation process. The proposed approach is evaluated on the publicly available Lung Image Database Consortium benchmark data set. Based on the performance evaluation, it is observed that the proposed method performed better than the state of the art and achieved an overall accuracy rate of 100%. This research presents a lung nodule detection and classification framework; multiple classifiers (logistic regression, multilayer perceptron and voted perceptron) tested with a k-fold cross-validation process on the LIDC dataset show promising accuracy results.
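A minimal sketch of the classifier-voting scheme named above, using scikit-learn; a plain Perceptron stands in for the voted perceptron (which has no direct scikit-learn equivalent), and the fold count is an assumption.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

voter = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("mlp", MLPClassifier(max_iter=1000)),
                ("per", Perceptron())],
    voting="hard")   # majority vote over predicted class labels

# scores = cross_val_score(voter, X, y, cv=10)   # k-fold cross-validation
```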
  • Article
    Visual inspection for the quantification of malaria parasitaemia (MP) and classification of life cycle stage is hard and time-consuming. Automated techniques for the quantification of MP and its classification have been reported in the literature; however, the reported techniques are either imperfect or cannot deal with special cases, such as anemia and hemoglobinopathies, caused by clumps of red blood cells (RBCs). The focus of the current work is to examine thin blood smear microscopic images stained with Giemsa using digital image processing techniques, grading MP on independent factors (RBC morphology) and classifying its life cycle stage. For the classification of the malaria parasite's life cycle, the k-nearest neighbor, Naive Bayes and multi-class support vector machine are employed, based on histograms of oriented gradients and local binary pattern features. The proposed methodology is based on an inductive technique, segmenting malaria parasites through adaptive machine learning techniques. The quantification accuracy of RBCs is enhanced, and RBC clumps are split by analyzing concavity regions for focal points. Further, classification of infected and non-infected RBCs is performed to grade MP precisely. Training and testing of the proposed approach on a benchmark dataset with respect to ground truth data yield 96.75% MP sensitivity and 94.59% specificity. Additionally, the proposed approach addresses the process with independent factors (RBC morphology). Finally, it is an economical solution for MP grading in mass testing.
  • Article
    Glaucoma is a neurodegenerative illness and is considered one of the most common causes of visual impairment. Nerve degeneration is an irreversible process, so diagnosis of the illness at an early stage is an absolute requirement to avoid permanent loss of vision. Glaucoma arises mainly because of increased intraocular pressure; if it is not detected and treated early, it can result in visual impairment. There are not always evident symptoms of glaucoma; thus, patients often seek treatment only when the severity of the disease has advanced significantly. Diagnosis of glaucoma often comprises review of the structural deterioration of the nerve in conjunction with examination of visual function. This article presents a thorough account of glaucoma, its symptoms, and the people potentially prone to this disease. The essence of this article is the different classification methods used and proposed by various researchers for the identification of glaucoma. The article also reviews several segmentation methodologies that are highly useful for the identification, detection, and diagnosis of glaucoma. The research related to the findings and the treatment is likewise evaluated in this article.
  • Article
    Identifying abnormalities in breast mammography is a challenging task for radiologists due to its nature. A more consistent and precise imaging based CAD system plays a vital role in the classification of doubtful breast masses. In the proposed CAD system, pre-processing is performed to suppress noise in the mammographic image. Segmentation then locates the tumor in the mammogram using a cascade of the Fuzzy C-Means (FCM) and region-growing (RG) algorithms, called FCMRG. The feature extraction step involves identification of important and distinct elements using the Local Binary Pattern Gray-Level Co-occurrence Matrix (LBP-GLCM) and Local Phase Quantization (LPQ). Hybrid features are obtained from these techniques. The mRMR algorithm is employed to choose suitable features from the individual and hybrid feature sets. The nominated feature sets are analysed through various machine learning procedures to isolate malignant tumors from benign ones. The classifiers are probed on 109 and 72 images of the MIAS and DDSM databases respectively, using the k-fold (10-fold) cross-validation method. An enhanced classification accuracy of 98.2% is achieved for the MIAS dataset using hybrid features classified by a Decision Tree, whereas 95.8% accuracy is obtained for the DDSM dataset using a KNN classifier applied to LPQ features.
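One ingredient of the LBP-GLCM features above is easy to sketch: gray-level co-occurrence texture descriptors from a mammogram ROI with scikit-image. The distances, angles, and property list are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_u8):
    """roi_u8: 2-D uint8 region of interest from a mammogram."""
    glcm = graycomatrix(roi_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    # one value per (distance, angle) pair for each property
    return np.r_[tuple(graycoprops(glcm, p).ravel() for p in props)]
```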
  • Article
    Background: Knee bone diseases are rare but can be highly destructive. Magnetic Resonance Imaging (MRI) is the main approach to identifying knee cancer and guiding its treatment. Normally, knee cancers are detected with the help of different MRI analysis techniques, and image analysis strategies subsequently assess these images. Discussion: Computer-based medical image analysis is attracting researchers' interest due to its advantages in speed and accuracy compared to traditional techniques. The focus of the current research is MRI-based medical image analysis for knee bone disease detection. Accordingly, several approaches to feature extraction and segmentation for knee bone cancer are analyzed and compared on a benchmark database. Conclusion: Finally, the current state of the art is investigated and future directions are proposed.
  • Article
    Full-text available
    Background: In digital mammography, finding an accurate breast profile segmentation of a woman's mammogram is considered a challenging task. The presence of the pectoral muscle may mislead the diagnosis of cancer due to its high-level similarity to the breast body. In addition, other challenges arising from the manifestation of the pectoral muscle in the mammogram data include inaccurate estimation of the density level and assessment of the cancer cells. The discrete differentiation operator has been proven to eliminate the pectoral muscle before the analysis processing. Methods: We propose a novel approach to remove the pectoral muscle from the mediolateral-oblique view of a mammogram using a discrete differentiation operator. This is used to detect the edge boundaries and to approximate the gradient of the intensity function. Further refinement is achieved using a convex hull technique. The method is implemented on the dataset provided by MIAS and on 20 contrast enhanced digital mammographic images. Results: To assess the performance of the proposed method, visual inspection by radiologists as well as calculations based on well-known metrics are employed. For the calculation of the performance metrics, the given pixels in the pectoral muscle region of the input scans are taken as ground truth. Conclusions: Our approach tolerates an extensive variety of pectoral muscle geometries with a lower risk of bias in the breast profile than existing techniques.
  • Article
    Malaria parasitemia diagnosis and grading are hard and still far from perfect. Inaccurate diagnosis and grading have caused a tremendous death rate, particularly among young children worldwide. The current research deeply reviews automated malaria parasitemia diagnosis and grading in thin blood smear digital images through image analysis and computer vision based techniques. The state of the art reveals that currently proposed practices present partial or morphology dependent solutions to the problem of computer vision based microscopy diagnosis of malaria parasitemia. Accordingly, a deep appraisal of current practices is investigated, compared, and analyzed on benchmark datasets. The open gaps are highlighted, and future directions are laid down for a completely automated microscopy diagnosis of malaria parasitemia based on factors that are not affected by other diseases. Moreover, a general computer vision framework to perform malaria parasitemia estimation/grading is constructed along universal directions. Finally, remaining problems are highlighted and possible directions are suggested. Research Highlights: The current research presents microscopic malaria parasitemia diagnosis and grading in thin blood smear digital images through image analysis and computer vision based techniques. The open gaps are highlighted, and future directions for a completely automated microscopy diagnosis of malaria parasitemia are mentioned.
  • Article
    Splitting rouleaux RBCs from single RBCs and their further subdivision is a challenging area in computer-assisted diagnosis of blood. This operation is applied in complete blood count, anemia, leukemia, and malaria tests. Several automated techniques have been reported in the state of the art for this task but face either under- or over-splitting problems. The current research presents a novel approach to precisely split rouleaux RBCs (chains of RBCs), which are frequently observed in thin blood smear images. Accordingly, this research addresses the rouleaux splitting problem in a realistic, efficient and automated way by considering the distance transform and local maxima of the rouleaux RBCs. Rouleaux RBCs are split by taking their local maxima as the centres of circles drawn by the mid-point circle algorithm. The resulting circles are further mapped onto the single RBCs in the rouleaux to preserve their original shape. The results of the proposed approach on a standard data set are presented and analyzed statistically; an average recall of 0.059, an average precision of 0.067 and an F-measure of 0.063 are achieved against ground truth with visual inspection. Rouleaux RBCs are split by taking their local maxima as the centres of circles drawn with the mid-point circle algorithm. The resulting circles are further mapped onto the single RBCs in the rouleaux at high accuracy, while preserving the original shape.
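The two cues the abstract names, the distance transform and its local maxima, can be sketched as follows; the minimum peak separation is an assumption, and drawing the mid-point circles at the returned centres is left out.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.feature import peak_local_max

def rouleaux_centres(binary_mask, min_sep=8):
    """binary_mask: boolean array, True inside the rouleaux region."""
    dist = distance_transform_edt(binary_mask)       # depth inside the mask
    centres = peak_local_max(dist, min_distance=min_sep,
                             labels=binary_mask.astype(int))
    radii = dist[tuple(centres.T)]                   # estimated cell radii
    return centres, radii                            # circle centres + radii
```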
  • Article
    Arabic writer identification and associated tasks are still fresh due to the huge variety of Arabic writers' styles. The current research presents a fusion of statistical features, extracted from fragments of Arabic handwriting samples, to identify the writer using a fuzzy ARTMAP classifier. Fuzzy ARTMAP is a supervised neural model, especially suited to classification problems. It is fast to train and needs fewer training epochs to "learn" from the input data for generalization. The extracted features are fed to fuzzy ARTMAP for training and testing. Fuzzy ARTMAP is employed for the first time along with a novel fusion of statistical features for Arabic writer identification. The entire IFN/ENIT database is used in the experiments, such that 75% of the handwritten Arabic words from 411 writers are employed for training and 25% for testing the system, selected at random. Several combinations of extracted features are tested using the fuzzy ARTMAP classifier, and finally one combination exhibits a promising accuracy of 94.724% for Arabic writer identification on the IFN/ENIT benchmark database.
  • Article
    Full-text available
    Auscultation of the heart enables identification of the cardiac valves. An electronic stethoscope is used for the acquisition of heart murmurs, which are then classified as normal or abnormal. The heart sound segmentation process involves a discrete wavelet transform to obtain the individual components of the heart signal and to separate it into systole and diastole intervals. This research presents a novel scheme for developing a semi-automatic cardiac valve disorder diagnosis system. Accordingly, features are extracted using the wavelet transform and spectral analysis of the input signals. The proposed classification scheme is a fusion of the adaptive neuro-fuzzy inference system (ANFIS) and HMM. Both classifiers are trained using the extracted features to correctly identify normal and abnormal heart murmurs. Experimental results show that the proposed system furnishes promising classification accuracy with excellent specificity and sensitivity. Moreover, the proposed system has fewer classification errors, fewer computations, and a lower-dimensional feature set for building an intelligent system for the detection and classification of heart murmurs.
  • Article
    Full-text available
    A tumor can be found in any area of the brain and can be of any size, shape, and contrast. Multiple tumors of different types may exist in a human brain at the same time. Accurate tumor area segmentation is considered a primary step in the treatment of brain tumors. Deep learning is a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part inside a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over the other approaches in this area of research.
  • Article
    Building Your Next Big Thing with Google Cloud Platform shows you how to take advantage of the Google Cloud Platform technologies to build all kinds of cloud-hosted software and services for both public and private consumption. Whether you need a simple virtual server to run your legacy application or you need to architect a sophisticated high-traffic web application, Cloud Platform provides all the tools and products required to create innovative applications and a robust infrastructure to manage them. Google is known for the scalability, reliability, and efficiency of its various online products, from Google Search to Gmail. And the results are impressive. Google Search, for example, returns results literally within fractions of a second. How is this possible? Google custom-builds both hardware and software, including servers, switches, networks, data centers, the operating system's stack, application frameworks, applications, and APIs. Have you ever imagined what you could build if you were able to tap the same infrastructure that Google uses to create and manage its products? Now you can! Using this book as your compass, you can navigate your way through the Google Cloud Platform and turn your ideas into reality. The authors, both Google Developer Experts in Google Cloud Platform, systematically introduce various Cloud Platform products one at a time and discuss their strengths and scenarios where they are a suitable fit. But rather than a manual-like "tell all" approach, the emphasis is on how to Get Things Done so that you get up to speed with Google Cloud Platform as quickly as possible. You will learn how to use the following technologies, among others: • Google Compute Engine • Google App Engine • Google Container Engine • Google App Engine Managed VMs • Google Cloud SQL • Google Cloud Storage • Google Cloud Datastore • Google BigQuery • Google Cloud Dataflow • Google Cloud DNS • Google Cloud Pub/Sub • Google Cloud Endpoints • Google Cloud Deployment Manager • Author on Google Cloud Platform • Google APIs and Translate API Using real-world examples, the authors first walk you through the basics of cloud computing, cloud terminologies and public cloud services. Then they dive right into Google Cloud Platform and how you can use it to tackle your challenges, build new products, analyze big data, and much more. Whether you're an independent developer, startup, or Fortune 500 company, you have never had easier access to world-class production, product development, and infrastructure tools. Google Cloud Platform is your ticket to leveraging your skills and knowledge into making reliable, scalable, and efficient products, just like how Google builds its own products.
  • Article
    Early screening of suspicious masses or breast carcinomas in mammograms is expected to reduce the mortality rate among women. This rate can be decreased further by the development of computer-aided diagnosis with a reduction of false suppositions in medical informatics. Our aim is to provide a robust tumor detection system for accurate classification of breast masses as normal, abnormal, benign, or malignant. The breast carcinomas are classified on the basis of observed classes, which is highly dependent on the feature extraction process. In the proposed work, a novel classification algorithm is presented based on the combination of the top-hat transformation and the gray level co-occurrence matrix with a back propagation neural network. The aim of this study is to present a robust classification model for automated diagnosis of breast tumors with a reduction of false assumptions in medical informatics. The proposed method is verified on two datasets, MIAS and DDSM. It is observed that the rate of false positives is decreased by the proposed method, efficiently improving the classification performance.
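The enhancement stage (top-hat transformation) has a standard morphological form; a minimal sketch with scikit-image follows, where the structuring-element radius is an assumption.

```python
from skimage.morphology import disk, white_tophat

def enhance_masses(mammogram_u8, radius=15):
    """Keep bright structures smaller than the structuring element."""
    return white_tophat(mammogram_u8, footprint=disk(radius))
```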
  • Article
    Full-text available
    Medical imaging plays an integral role in the identification, segmentation, and classification of brain tumors. The invention of MRI has opened new horizons for brain-related research. Recently, researchers have shifted their focus toward applying digital image processing techniques to extract, analyze, and categorize brain tumors from MRI. Categorization of brain tumors is defined in a hierarchical way, moving from major to minor ones. A plethora of work can be seen in the literature on the classification of brain tumors into categories such as benign and malignant. However, only a few works have been reported on the multiclass classification of brain images, where each part of the image containing a tumor is tagged with major and minor categories. Precise classification is difficult to achieve due to ambiguities in the images and the overlapping characteristics of different types of tumors. In the current study, a comprehensive review of recent research on multiclass classification of brain tumors using MRI is provided. These multiclass classification studies are categorized into two major groups: XX and YY, and each group is further divided into three sub-groups. A set of common parameters from the reviewed works is extracted and compared to highlight the merits and demerits of individual works. Based on our analysis, we provide a set of recommendations for researchers and professionals working in the area of brain tumor classification.
  • Article
    Knee bone diseases are rare but can be highly destructive. Magnetic resonance imaging (MRI) is the main approach to identifying knee cancer and guiding its treatment. Normally, knee cancers are pointed out with the help of different MRI analysis techniques, and image analysis strategies subsequently interpret these images. Computer-based medical image analysis is attracting researchers' interest due to its advantages in speed and accuracy compared to traditional techniques. The focus of the current research is MRI-based medical image analysis for knee bone disease detection. Accordingly, several approaches to feature extraction and segmentation for knee bone cancer are analyzed and compared on a benchmark database. Finally, the current state of the art is investigated and future directions are proposed.
  • Article
    This paper presents an ear biometric approach to classifying humans. An improved local feature extraction technique based on ear region features is proposed. Accordingly, the ear image is segmented into certain regions to extract an eigenvector from each region. The extracted features are normalized and fed to a trained neural network. To benchmark the results, a benchmark database from the University of Science and Technology Beijing (USTB) is employed that has mutually exclusive sets for training and testing. Promising results are achieved that are comparable with the state of the art. However, a few region features exhibited low accuracy, which will be addressed in subsequent research.
  • Article
    Full-text available
    Currently, in medical imaging, equipment produces 3D results for better visualization and treatment. Likewise, the need for capable devices for the representation of these 3D medical images is growing, as it is valuable to view human tissues or organs directly and is an immense support to medical staff. Consequently, 3D image reconstruction from 2D images has attracted several researchers. CT images are normally composed of bone, soft tissue and background. This paper presents 3D segmentation of leg bones in computed tomography (CT) images. The bone section is extracted from each 2D slice and placed in 3D space. Surface rendering is applied to the 2D bone slices. The data is visualized via multi-planar reformatting, surface rendering, and hardware-accelerated volume rendering. Finally, the paper presents an enhanced technique to reproduce 3D bone segmentation in medical images that exhibits promising outcomes compared to the techniques reported in the literature. The proposed technique involves contour extraction, image enhancement, segmentation, outlier reduction and 3D modelling.
  • Article
    Full-text available
    With the advent of voluminous medical databases, healthcare analytics in big data has become a major research area. Healthcare analytics plays an important role in big data analysis by predicting valuable information through data mining and machine learning techniques. This prediction helps physicians make the right decisions for successful diagnosis and prognosis of various diseases. In this paper, an evolution-based hybrid methodology is used to develop a healthcare analytic model exploiting the data mining and machine learning algorithms Support Vector Machine (SVM), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The proposed model may assist physicians in diagnosing various types of heart disease and in identifying the associated risk factors with high accuracy. The developed model is evaluated against the results reported by literature algorithms for diagnosing heart disease, taking the case study of the Cleveland heart disease database. A great prospect of conducting this research is to diagnose any disease in less time with a smaller number of factors or symptoms. The proposed healthcare analytic model is capable of reducing the search space significantly while analyzing big data; therefore, fewer computing resources will be consumed.
  • Article
    Extraction of the breast border and simultaneous exclusion of the pectoral muscle are principal steps in diagnosing breast cancer based on mammogram data. The objective of the proposed method is to classify the mediolateral oblique fragment of the pectoral muscle. The extraction of the breast region is performed using multilevel wavelet decomposition of the mammogram images. Moreover, artifact suppression and pectoral muscle detection are carried out by a morphological operator. Efficient extraction with higher accuracy is validated on a set of 322 digital images taken from the MIAS dataset.
  • Article
    Full-text available
    With the increasing advancement of digital imaging and computing power, computationally intelligent technologies are in high demand for use in ophthalmology care and treatment. In the current research, Retina Image Analysis (RIA) is developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of retinal images, and specialists are served with various options such as saving, processing and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation and length, and displaying the detected vessels on the retina. The Agile Unified Process is adopted as the methodology in developing this research. To conclude, Retina Image Analysis might help the optometrist gain a better understanding in analyzing the patient's retina. Finally, the Retina Image Analysis procedure is developed using MATLAB (R2011b). Promising results are attained that are comparable with the state of the art.
  • Chapter
    This chapter provides an introduction to Google App Engine concepts. We’ll look at development methodology and how App Engine implements cloud computing concepts by providing both development and runtime services.
  • Article
    Full-text available
    Making deductions and predictions about the weather has been a challenge throughout mankind's history. Accurate meteorological predictions help to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. The current research treats weather forecasting as a major challenge for machine information mining and deduction. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting tailored to the peculiarities of weather forecasting frameworks. The study concentrates on data representing Saudi Arabian weather forecasting. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons to represent rainy and dry weather. Moreover, a trial and error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for forecast accuracy measurement. Individually, MLP forecasting results are better than RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
  • Article
    Segmentation of objects from a noisy and complex image is still a challenging task that needs to be addressed. This article proposes a new method to detect and segment nuclei to determine whether they are malignant or not: the region of interest is determined, noise is removed, the image is enhanced, candidate detection is employed on the centroid transform to evaluate the centroid of each object, and the level set (LS) method is applied to segment the nuclei. The proposed method consists of three main stages: preprocessing, seed detection, and segmentation. The preprocessing stage involves the preparation of the image conditions to ensure that they meet the segmentation requirements. Seed detection finds the seed point to be used in the segmentation stage, which refers to the process of segmenting the nuclei using the LS method. In this research work, 58 H&E breast cancer images from the UCSB Bio-Segmentation Benchmark dataset are evaluated. The proposed method reveals high performance and accuracy in comparison to the techniques reported in the literature. The experimental results are also in harmony with the ground truth images.
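As a stand-in for the level set stage, the sketch below uses scikit-image's morphological Chan-Vese level set after simple smoothing; the paper's exact LS formulation, seeding, and iteration count may differ.

```python
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import morphological_chan_vese

def segment_nuclei(image_rgb):
    gray = rgb2gray(image_rgb)
    smoothed = gaussian(gray, sigma=2)             # preprocessing: denoise
    mask = morphological_chan_vese(smoothed, 150)  # evolve the level set
    return mask                                    # binary nuclei mask
```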
  • Article
    Malaria parasitemia is the quantitative measurement of the parasites in the blood to grade the degree of infection. Light microscopy is the most well-known method used to examine the blood for parasitemia quantification. The visual quantification of malaria parasitemia is laborious, time-consuming and subjective. Although automating the process is a good solution, the available techniques are unable to evaluate special cases such as anemia and hemoglobinopathies, due to deviation from normal RBC morphology. The main aim of this research is to examine microscopic images of stained thin blood smears using a variety of computer vision techniques, grading malaria parasitemia on independent factors (RBC morphology). The proposed methodology is based on an inductive approach: color segmentation of malaria parasites through an adaptive Gaussian mixture model (GMM) algorithm. The quantification accuracy of RBCs is improved by splitting occluded RBCs with the distance transform and local maxima. Further, classification of infected and non-infected RBCs is performed to properly grade parasitemia. Training and evaluation have been carried out on an image dataset with respect to ground truth data, determining the degree of infection with a sensitivity of 98% and a specificity of 97%. The accuracy and efficiency of the proposed scheme as an automatic method were proved experimentally, surpassing other state-of-the-art schemes. In addition, this research addresses the process with independent factors (RBC morphology). Eventually, this can be considered a low-cost solution for malaria parasitemia quantification in massive examinations.
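A minimal sketch of GMM-based color segmentation in the spirit of the adaptive GMM step above; the component count is an assumption, and mapping mixture components to parasite versus background classes is left to the surrounding pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(image_rgb, n_components=3):
    """image_rgb: (H, W, 3) float array of a stained thin smear."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3)
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    labels = gmm.fit_predict(pixels)   # one Gaussian per dominant color mode
    return labels.reshape(h, w)        # per-pixel component label map
```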
  • Article
    Full-text available
    Purpose: Due to the large quantity of data recorded in energy efficient buildings, understanding the behavior of the various underlying operations has become a complex and challenging task. This paper proposes a method to support the analysis of energy systems and validates it using operational data from a cold water chiller. The method automatically detects various operation patterns in the energy system. Methods: The use of k-means clustering is proposed to automatically identify the On (operational) cycles of a system operating with a duty cycle. The data is subsequently transformed into symbolic representations using the symbolic aggregate approximation method. Afterward, the symbols are converted to a bag of words representation (BoWR) for hierarchical clustering. A gap statistics method is used to find the best number of clusters in the data. Finally, operation patterns of the energy system are grouped together in each cluster. An adsorption chiller, operating under real life conditions, supplies the reference data for validation. Results: The proposed method has been compared with the dynamic time warping (DTW) method using cophenetic coefficients, and it is shown that BoWR produces better results than DTW. The results of BoWR are further investigated, and gap statistics are used to find the optimal number of clusters. At the end, interesting patterns of each cluster are discussed in detail. Conclusion: The main goal of this research work is to provide analysis algorithms that automatically find the various patterns in the energy system of a building using as little configuration or field knowledge as possible. A bag of words representation method with hierarchical clustering has been proposed to assess the performance of a building energy system.
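The On-cycle detection step can be sketched with a two-cluster k-means over the consumption signal; treating the higher-mean cluster as On is an assumption consistent with the description above.

```python
import numpy as np
from sklearn.cluster import KMeans

def on_off_states(power, random_state=0):
    """power: 1-D array of consumption samples from the chiller."""
    km = KMeans(n_clusters=2, n_init=10, random_state=random_state)
    labels = km.fit_predict(power.reshape(-1, 1))
    on_cluster = np.argmax(km.cluster_centers_.ravel())  # higher mean = On
    return labels == on_cluster                          # boolean On mask
```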
  • Article
    A standard database is an essential requirement for comparing the performance of image analysis techniques. Hence, a main issue in dental image analysis is the lack of an available image database, which is provided in this paper. Periapical dental X-ray images that are suitable for any analysis and approved by many dental experts have been collected. This type of dental radiograph imaging is common and inexpensive, and is normally used for dental disease diagnosis and abnormality detection. The database contains 120 various periapical X-ray images from the top to the bottom jaw. This dental digital database is constructed to provide a source for researchers to use and compare image analysis techniques and to improve or manipulate the performance of each technique.
  • Chapter
    An analysis of the genes that make up the DRβ region will extend our information about the complexity of the HLA class II system, since the serologically detectable polymorphism is located mainly on the DRβ chain. Probably this analysis will be most informative if we look at the 5′ ends of the genes coding for the variable stretches of a DRβ chain [1].
  • Article
    Purpose: Histopathology is the clinical standard for tissue diagnosis; however, it requires tissue processing, laboratory personnel and infrastructure, and a highly trained pathologist to diagnose the tissue. Optical microscopy can provide real-time diagnosis, which could be used to inform the management of breast cancer. The goal of this work is to obtain images of tissue morphology through fluorescence microscopy and vital fluorescent stains and to develop a strategy to segment and quantify breast tissue features in order to enable automated tissue diagnosis. Methods: We combined acriflavine staining, fluorescence microscopy, and a technique called sparse component analysis to segment nuclei and nucleoli, which are collectively referred to as acriflavine positive features (APFs). A series of variables, which included the density, area fraction, diameter, and spacing of APFs, were quantified from images taken from clinical core needle breast biopsies and used to create a multivariate classification model. The model was developed using a training data set and validated using an independent testing data set. Results: The top performing classification model included the density and area fraction of smaller APFs (those less than 7 µm in diameter, which likely correspond to stained nucleoli). When applied to the independent testing set composed of 25 biopsy panels, the model achieved a sensitivity of 82 %, a specificity of 79 %, and an overall accuracy of 80 %. Conclusions: These results indicate that our quantitative microscopy toolbox is a potentially viable approach for detecting the presence of malignancy in clinical core needle breast biopsies.
  • Chapter
    Google Compute Engine is an infrastructure service provided as part of the Google Cloud Platform. Compute Engine is made up of three major components: virtual machines, persistent disks, and networks. It is available at several Google datacenters worldwide and is provided exclusively on an on-demand basis. This means Compute Engine neither charges any upfront fees for deployment nor locks down the customer. At the same time, Compute Engine provides steep discounts for sustained use. These sustained-use discounts, without any upfront subscription fees, are an industry first; this pricing innovation shows that Google innovates outside of technology as well.
  • Chapter
    Microsoft Office 365 is a subscription-based version of Microsoft Office 2013. Unlike any of the traditional Office suites such as Office 2010, Office 365 allows you to install Office applications on up to five different PCs and Macs, plus five mobile devices. With an Office 365 subscription, you can upgrade to the latest versions of Office for free.
  • Article
    Image fusion consolidates information from multiple images of the same scene into a single image. Each source image may represent only a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to combine the source images into a single compact image that describes the scene more accurately than any individual source image. In addition, the fused image retains the best possible quality, without distorted appearance or loss of data; the DCT is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) the RGB color input image is split into its R, G, and B channels for each source image; (2) the DCT is applied to each channel; (3) variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) corresponding blocks of the source images' R channels are compared by variance, and the block with the maximum variance is selected as the block of the new image, with this process repeated for all channels; and (5) the inverse DCT is applied to each fused channel to convert coefficient values back to pixel values, and the channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects such as blurring or blocking artifacts, which degrade the quality of the fused image. The approach is evaluated using three measures: the average Q(abf), standard deviation, and peak signal-to-noise ratio. The experimental results of the proposed technique are good compared with older techniques. Microsc. Res. Tech., 2016. © 2016 Wiley Periodicals, Inc.
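    The five enumerated steps map naturally onto code. The sketch below implements the variance-based block selection for a pair of RGB source images; the use of SciPy's DCT routines and the toy random inputs are assumptions for illustration.
```python
# Sketch of the five-step variance-based DCT fusion with 8x8 blocks.
import numpy as np
from scipy.fft import dctn, idctn

def fuse_channel(a, b, bs=8):
    """Fuse two single-channel images block-by-block by DCT variance."""
    h, w = (a.shape[0] // bs) * bs, (a.shape[1] // bs) * bs
    out = np.empty((h, w))
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            ca = dctn(a[i:i+bs, j:j+bs], norm="ortho")    # step 2: DCT
            cb = dctn(b[i:i+bs, j:j+bs], norm="ortho")
            # steps 3-4: keep the block whose coefficients vary most
            c = ca if ca.var() > cb.var() else cb
            out[i:i+bs, j:j+bs] = idctn(c, norm="ortho")  # step 5: inverse
    return out

def fuse_rgb(img1, img2):
    """Step 1: split into R, G, B; fuse each channel; recombine."""
    return np.stack([fuse_channel(img1[..., k], img2[..., k])
                     for k in range(3)], axis=-1)

# toy inputs standing in for two source images of the same scene
rng = np.random.default_rng(2)
fused = fuse_rgb(rng.random((64, 64, 3)), rng.random((64, 64, 3)))
print(fused.shape)  # (64, 64, 3)
```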
  • Conference Paper
    Full-text available
    To achieve energy efficiency in buildings, a large amount of raw data is recorded during their operation. This recorded raw data is then used to analyze the performance of buildings and their components, e.g., Heating, Ventilation and Air-Conditioning (HVAC) systems. To save time and energy, the resilience of the data must be ensured by detecting and replacing outliers (i.e., data samples that are not plausible) before detailed analysis. This paper discusses the steps involved in detecting outliers in data obtained from an absorption chiller using its On/Off state information. It also proposes a method for automatically detecting the On/Off and/or Missing Data status of the chiller, using two-layer k-means clustering. After automatic detection of the chiller's On/Off cycles, an outlier detection method is proposed that applies Z-score normalization based on the On/Off cycle state and clusters outliers with the Expectation Maximization algorithm. Moreover, the results of filling the missing values with regression and linear interpolation for short and long periods are elaborated. All proposed methods are applied to real building data and the results are discussed.
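    A minimal sketch of the Z-score stage, assuming a 3-sigma threshold and a toy On-cycle signal (both assumptions, not the paper's exact setup), might look as follows.
```python
# Z-score outlier flagging inside a detected On cycle.
import numpy as np

def zscore_outliers(values, threshold=3.0):
    """Flag samples whose |z-score| exceeds the threshold."""
    z = (values - values.mean()) / (values.std() + 1e-12)
    return np.abs(z) > threshold

rng = np.random.default_rng(3)
on_cycle = rng.normal(7.0, 0.3, size=500)   # chiller power while On, kW
on_cycle[[50, 300]] = [15.0, -2.0]          # two implausible samples
mask = zscore_outliers(on_cycle)
print(f"outliers found at indices: {np.flatnonzero(mask)}")
# the gaps left by removed outliers could then be filled, e.g. by
# interpolation or regression as the paper proposes
```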
  • Article
    Full-text available
    This paper investigates the performance of a machine learning tool, the Naive Bayes classifier, with a new weighted approach to classifying breast cancer. Naive Bayes is one of the most effective classification algorithms. In many decision-making systems, ranking performance is a more interesting and desirable property than classification alone; therefore, to extend traditional Naive Bayes and improve its performance, a weighting concept is incorporated. Domain-knowledge-based weight assignment is explored on the breast cancer dataset from the UCI machine learning repository, breast cancer being considered the second leading cause of death in women today. The experiments show that the weighted Naive Bayes approach outperforms standard Naive Bayes.
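    One common way to realize such an attribute-weighted Naive Bayes, sketched below, scales each feature's log-likelihood by its weight, i.e. the posterior is proportional to P(c) · Π_i P(x_i|c)^{w_i}. The Gaussian likelihoods, synthetic data, and weight values are illustrative assumptions, not the paper's exact formulation.
```python
# Sketch of a feature-weighted Gaussian naive Bayes.
import numpy as np
from scipy.stats import norm

def fit(X, y):
    classes = np.unique(y)
    prior = np.array([(y == c).mean() for c in classes])
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    sd = np.array([X[y == c].std(axis=0) + 1e-9 for c in classes])
    return classes, prior, mu, sd

def predict(X, model, w):
    classes, prior, mu, sd = model
    # weighted sum of per-feature Gaussian log-likelihoods per class
    ll = np.stack([np.log(prior[k]) +
                   (w * norm.logpdf(X, mu[k], sd[k])).sum(axis=1)
                   for k in range(len(classes))], axis=1)
    return classes[ll.argmax(axis=1)]

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(1.5, 1, (100, 3))])
y = np.r_[np.zeros(100), np.ones(100)]
w = np.array([2.0, 1.0, 0.5])   # hypothetical domain-expert weights
model = fit(X, y)
print(f"training accuracy: {(predict(X, model, w) == y).mean():.2f}")
```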
  • Article
    Offline clinical guidelines are typically designed to integrate a clinical knowledge base, patient data, and an inference engine to generate case-specific advice, and they remain popular among healthcare professionals for updating and supporting clinical practice. Although their current format and development process have several limitations, these could be improved with artificial intelligence approaches such as expert systems and decision support systems. This paper first presents an up-to-date critical review of existing clinical expert systems, namely AAPHelp, MYCIN, EMYCIN, PIP, GLIF, and PROforma. An analysis is then performed to evaluate these fundamental clinical expert systems. Finally, the paper presents the proposed research and development of a clinical expert system to help healthcare professionals with treatment.
  • Article
    Full-text available
    We present a workflow for the sanitation and analysis of operation data, with the goal of increasing energy efficiency and reliability in the operation of building-related energy systems. The workflow makes use of machine learning algorithms and innovative visualizations. The environment in which monitoring data for energy systems are created requires low configuration effort for data analysis; the focus therefore lies on methods that operate automatically and require little or no configuration. The result is a generic workflow applicable to various energy-related time series data: it starts with data accessibility, followed by automated detection of duty cycles where applicable. Detecting outliers in the data and sanitizing gaps ensure that the data quality is sufficient for analysis by domain experts, in our case an analysis of system energy efficiency. To prove the feasibility of the approach, the sanitation and analysis workflow is implemented and applied to the recorded data of a solar-driven adsorption chiller.
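    The gap-sanitation step could, for example, distinguish short gaps (filled by linear interpolation) from long gaps left for a different treatment. The sketch below, with an assumed three-sample cutoff, illustrates the idea.
```python
# Fill short gaps by linear interpolation; leave long gaps untouched.
import numpy as np
import pandas as pd

def fill_short_gaps(series, max_gap=3):
    """Interpolate gaps of at most `max_gap` samples; longer gaps are
    left as NaN for a separate (e.g. regression-based) treatment."""
    na = series.isna()
    run_id = (na != na.shift()).cumsum()          # label consecutive runs
    run_len = na.groupby(run_id).transform("sum") # length of each NaN run
    short = na & (run_len <= max_gap)
    out = series.interpolate(method="linear", limit_area="inside")
    out[na & ~short] = np.nan                     # restore the long gaps
    return out

s = pd.Series([5.1, 5.2, np.nan, np.nan, 5.5, 5.4,
               np.nan, np.nan, np.nan, np.nan, np.nan, 5.0])
print(fill_short_gaps(s))  # 2-sample gap filled, 5-sample gap kept NaN
```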
  • Article
    Full-text available
    The application of personalized medicine requires the integration of different data to determine each patient's unique clinical constitution. The automated analysis of medical data is a growing field in which different machine learning techniques are used to minimize the time-consuming task of manual analysis. The evaluation, and often the training, of automated classifiers requires manually labelled data as ground truth. In many cases such labelling is not perfect, either because the data are ambiguous even for a trained expert or because of mistakes. Here we investigated the interobserver variability of image data comprising fluorescently stained circulating tumor cells and its effect on the performance of two automated classifiers, a random forest and a support vector machine. We found that uncertainty in annotation between observers limited the performance of the automated classifiers, especially when it was included in the test set on which classifier performance was measured. The random forest classifier turned out to be resilient to uncertainty in the training data, while the support vector machine's performance was highly dependent on the amount of uncertainty in the training data. We finally introduce the consensus data set as a possible solution for evaluating automated classifiers that minimizes the penalty of interobserver variability.
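    The effect described above is easy to reproduce in miniature: the sketch below injects label noise into a synthetic training set and compares the two classifier families. The data set, the 15% noise rate, and the default hyperparameters are assumptions, not the study's setup.
```python
# Compare a random forest and an SVM under noisy training labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(5)
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.15        # 15% disagreeing annotations
noisy[flip] = 1 - noisy[flip]

for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC())]:
    acc = clf.fit(X_tr, noisy).score(X_te, y_te)
    print(f"{name}: test accuracy with noisy training labels = {acc:.2f}")
```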
  • Article
    Full-text available
    A framework for automated detection and classification of cancer from microscopic biopsy images using clinically significant and biologically interpretable features is proposed and examined. The stages of the proposed methodology include enhancement of the microscopic images, segmentation of background cells, feature extraction, and finally classification. An appropriate and efficient method is employed at each design step of the framework after a comparative analysis of the commonly used methods in each category. To highlight the details of tissue and structures, the contrast-limited adaptive histogram equalization approach is used. For the segmentation of background cells, the k-means segmentation algorithm is used because it performs better than other commonly used segmentation methods. In the feature extraction phase, various biologically interpretable and clinically significant shape- and morphology-based features are extracted from the segmented images, including gray-level texture features, color-based features, color gray-level texture features, Law's Texture Energy features, Tamura's features, and wavelet features. Finally, the K-nearest neighbor method is used to classify images into normal and cancerous categories because it performs better than other commonly used methods for this application. The performance of the framework is evaluated using well-known parameters on 1000 randomly selected microscopic biopsy images of the four fundamental tissues (connective, epithelial, muscular, and nervous).
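    The core stages of such a pipeline (CLAHE enhancement, k-means segmentation, KNN classification) can be sketched as below. The synthetic stand-in image and the toy region-fraction features are assumptions and stand in for the paper's much richer feature set.
```python
# Sketch of the pipeline stages: CLAHE -> k-means segmentation -> KNN.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
img = (rng.random((128, 128)) * 255).astype(np.uint8)  # stand-in biopsy image

# 1) contrast-limited adaptive histogram equalization
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# 2) k-means segmentation of pixel intensities into 3 regions
labels = KMeans(n_clusters=3, n_init=10, random_state=0) \
    .fit_predict(enhanced.reshape(-1, 1)).reshape(img.shape)

# 3) toy per-image features (region area fractions) and 4) KNN classification
features = [np.bincount(labels.ravel(), minlength=3) / labels.size]
# with real data, train_X/train_y would hold features of labelled images;
# the two training rows below are fabricated for illustration only
train_X = [[0.3, 0.4, 0.3], [0.6, 0.2, 0.2]]
train_y = ["normal", "cancerous"]
knn = KNeighborsClassifier(n_neighbors=1).fit(train_X, train_y)
print(knn.predict(features))
```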
  • Article
    Full-text available
    Health telematics is a growing field that is becoming a major improvement to patients' lives, especially for the elderly, disabled, and chronically ill. In recent years, improvements in information and communication technologies, along with mobile Internet offering anywhere, anytime connectivity, have played a key role in modern healthcare solutions. In this context, mobile health (m-Health) delivers healthcare services, overcoming geographical, temporal, and even organizational barriers. M-Health solutions address emerging problems in health services, including the increasing number of chronic diseases related to lifestyle, the high costs of existing national health services, the need to empower patients and families to self-care and manage their own healthcare, and the need to provide direct access to health services regardless of time and place. This paper then presents a comprehensive review of the state of the art in m-Health services and applications. It surveys the most significant research work and presents a deep analysis of the top and novel m-Health services and applications proposed by industry. A discussion of the European Union and United States approaches to the m-Health paradigm, and of directives already published, is also included. Open and challenging issues in emerging m-Health solutions are proposed for further work. Copyright © 2015. Published by Elsevier Inc.
  • Article
    As mobile computing has developed over the past decades, a new model, namely mobile cloud computing, has emerged from the marriage of powerful yet affordable mobile devices and cloud computing. In this paper we survey existing mobile cloud computing applications, as well as speculate on future-generation mobile cloud computing applications. We provide insights into the enabling technologies and the challenges that lie ahead in moving from mobile computing to mobile cloud computing for building the next generation of mobile cloud applications. For each of the challenges, we provide a survey of existing solutions, identify research gaps, and suggest future research areas.
  • Article
    Full-text available
    Offline cursive script recognition and its associated issues remain open despite the last few decades of research. This paper presents an annotated comparison of proposed and recently published preprocessing techniques with work reported in offline cursive script recognition. Normally, in offline script analysis, the input is a paper image, a word, or a digit, and the desired output is ASCII text. This task involves several preprocessing steps, some of them quite hard, such as line removal from text, skew removal, reference line detection (lower/upper baselines), slant removal, scaling, noise elimination, contour smoothing, and skeletonization. Moreover, the subsequent stages of segmentation (if any) and recognition are highly dependent on these preprocessing techniques. This paper presents an analysis and annotated comparison of the latest preprocessing techniques proposed by the authors with those reported in the literature on the IAM/CEDAR benchmark databases. Finally, future work and persistent problems are highlighted.
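    As an example of one such preprocessing step, the sketch below performs skew estimation with a projection-profile search: a binary text image is rotated over candidate angles, and the angle whose row projection varies most (i.e. sharpest text rows) is kept. The angle range and step are assumptions, and this is only one of several published skew-removal techniques.
```python
# Projection-profile skew estimation and correction.
import numpy as np
from scipy.ndimage import rotate

def estimate_correction(binary_img, angles=np.arange(-10, 10.5, 0.5)):
    """Return the rotation angle (degrees) that best aligns text rows."""
    def score(angle):
        rotated = rotate(binary_img, angle, reshape=False, order=0)
        return np.var(rotated.sum(axis=1))   # sharp rows -> high variance
    return max(angles, key=score)

# toy "page": three horizontal text lines, skewed by 3 degrees
img = np.zeros((100, 100))
img[20:22, 10:90] = img[50:52, 10:90] = img[80:82, 10:90] = 1
skewed = rotate(img, 3, reshape=False, order=0)

correction = estimate_correction(skewed)
deskewed = rotate(skewed, correction, reshape=False, order=0)
print(f"estimated correction angle: {correction:.1f} degrees")  # ~ -3.0
```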
  • Article
    Full-text available
    This paper presents a new, simple, and fast approach for character segmentation of unconstrained handwritten words. The proposed approach first seeks possible character boundaries based on an analysis of the characters' geometric features. However, due to inherent ambiguity and a lack of context, a few characters are over-segmented. To increase the efficiency of the approach, an Artificial Neural Network is trained with a significant number of valid segmentation points for cursive handwritten words. The trained neural network extracts incorrectly segmented points efficiently and at high speed. For fair comparison, the benchmark CEDAR database is used. The experimental results are promising from both complexity and accuracy points of view.
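    A sketch of such a validation stage might look as follows: an MLP is trained on geometric features of candidate segmentation points and rejects over-segmented ones. The three features and the synthetic data below are assumptions standing in for CEDAR-derived training examples.
```python
# MLP that separates valid from over-segmented candidate cut points.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
# hypothetical per-candidate features: [stroke density at the cut column,
# distance to the previous cut, vertical-profile minimum]
valid   = rng.normal([0.1, 18, 0.05], [0.05, 4, 0.03], size=(300, 3))
invalid = rng.normal([0.4,  6, 0.30], [0.10, 3, 0.10], size=(300, 3))
X = np.vstack([valid, invalid])
y = np.r_[np.ones(300), np.zeros(300)]      # 1 = keep cut, 0 = reject

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print(f"training accuracy: {mlp.score(X, y):.2f}")
```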
  • Article
    Full-text available
    Cloud computing is a powerful technology for performing massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Massive growth in the scale of data, or big data, generated through cloud computing has been observed. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. This study reviews the rise of big data in cloud computing. The definition, characteristics, and classification of big data are introduced, along with some discussion of cloud computing. The relationship between big data and cloud computing, big data storage systems, and Hadoop technology is also discussed. Furthermore, research challenges are investigated, with a focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Lastly, open research issues that require substantial research effort are summarized.
  • Article
    Full-text available
    Medical images have made a great impact on medicine, diagnosis, and treatment, and the most important part of image processing is image segmentation. This paper describes the latest segmentation methods applied in medical image analysis. The advantages and disadvantages of each method are described, and each algorithm is examined with its application in Magnetic Resonance Imaging and Computed Tomography image analysis. Each algorithm is explained separately, with its abilities and features for the analysis of grey-level images. To evaluate segmentation results, some popular benchmark measurements are presented in the final section.
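    Two popular benchmark measurements of the kind referred to above are the Dice coefficient and the Jaccard index; a minimal sketch for binary segmentation masks is given below.
```python
# Dice and Jaccard overlap measures for binary segmentation masks.
import numpy as np

def dice(a, b):
    """Dice similarity: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index: |A∩B| / |A∪B|."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

gt   = np.zeros((64, 64), bool); gt[16:48, 16:48] = True    # ground truth
pred = np.zeros((64, 64), bool); pred[20:52, 16:48] = True  # prediction
print(f"Dice = {dice(gt, pred):.3f}, Jaccard = {jaccard(gt, pred):.3f}")
```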
  • Conference Paper
    This paper describes two approaches to scaling in cloud environments - semi-automatic (also called "by request") and automatic (also called "on demand") - and explains why the latter is preferable. Semi-automatic scaling is illustrated with an example of Amazon Elastic Compute Cloud, whereas automatic scaling is illustrated with an example of Amazon Web Services Elastic Beanstalk.
  • Article
    Personalized medicine is a modern healthcare approach where information on each person's unique clinical constitution is exploited to realize early disease intervention based on more informed medical decisions. The application of diagnostic tools in combination with measurement evaluation that can be performed in a reliable and automated fashion plays a key role in this context. As the progression of various cancer diseases and the effectiveness of their treatments are related to a varying number of tumor cells that circulate in blood, the determination of their extremely low numbers by liquid biopsy is a decisive prognostic marker. To detect and enumerate circulating tumor cells (CTCs) in a reliable and automated fashion, we apply methods from machine learning using a naive Bayesian classifier (NBC) based on a probabilistic generative mixture model. Cells are collected with a functionalized medical wire and are stained for fluorescence microscopy so that their color signature can be used for classification through the construction of Red-Green-Blue (RGB) color histograms. Exploiting the information in the fluorescence signature of CTCs with the NBC not only allows going beyond previous approaches but also provides a method of unsupervised learning, which is required for unlabeled training data. A quantitative comparison with a state-of-the-art support vector machine, which requires labeled data, demonstrates the competitiveness of the NBC method. © 2014 International Society for Advancement of Cytometry.
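    The color-signature idea can be sketched as follows: each cell patch is summarized by a concatenated RGB histogram, and an unlabeled two-component Gaussian mixture is fitted, in the spirit of the generative, unsupervised approach described above. The synthetic patches and histogram size are assumptions, not the paper's data or exact model.
```python
# RGB-histogram signatures clustered by an unsupervised mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

def rgb_histogram(patch, bins=8):
    """Concatenated per-channel histogram as a cell's color signature."""
    return np.concatenate([
        np.histogram(patch[..., k], bins=bins, range=(0, 1))[0]
        for k in range(3)]).astype(float)

rng = np.random.default_rng(8)
# synthetic patches: stained "CTCs" greener, background cells redder
ctc = [np.clip(rng.normal([0.2, 0.7, 0.2], 0.1, (16, 16, 3)), 0, 1)
       for _ in range(40)]
bkg = [np.clip(rng.normal([0.7, 0.2, 0.2], 0.1, (16, 16, 3)), 0, 1)
       for _ in range(40)]
X = np.array([rgb_histogram(p) for p in ctc + bkg])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)  # no labels
print(gmm.predict(X[:5]), gmm.predict(X[-5:]))  # the two groups separate
```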
  • Article
    Full-text available
    Mobile Cloud Computing (MCC) is aimed at integrating mobile devices with cloud computing. It is one of the most important concepts that have emerged in the last few years. Mobile devices, in the traditional agent-client architecture of MCC, only utilize resources in the cloud to enhance their functionalities. However, modern mobile devices have many more resources than before. As a result, researchers have begun to consider the possibility of mobile devices themselves sharing resources. This is called the cooperation-based architecture of MCC. Resource discovery is one of the most important issues that need to be solved to achieve this goal. Most of the existing work on resource discovery has adopted a fixed choice of centralized or flooding strategies. Many improved versions of energy-efficient methods based on both strategies have been proposed by researchers due to the limited battery life of mobile devices. This paper proposes a novel adaptive method of resource discovery from a different point of view to distinguish it from existing work. The proposed method automatically transforms between centralized and flooding strategies to save energy according to different network environments. Theoretical models of both energy consumption and the quality of response information are presented in this paper. A heuristic algorithm was also designed to implement the new adaptive method of resource discovery. The results from simulations demonstrated the effectiveness of the strategy and the efficiency of the proposed heuristic method.
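    To make the adaptive idea concrete, the toy sketch below compares invented placeholder cost models for the two strategies and picks the cheaper one per network environment. These formulas are illustrative assumptions only, not the paper's theoretical models or heuristic algorithm.
```python
# Toy strategy selection between centralized and flooding discovery.
def centralized_cost(n_devices, hops_to_server, tx_cost=1.0):
    # placeholder: every device sends its query to the central index and back
    return 2 * n_devices * hops_to_server * tx_cost

def flooding_cost(n_devices, avg_degree, ttl, tx_cost=1.0):
    # placeholder: each forwarding round multiplies messages by avg. degree
    return n_devices * sum(avg_degree ** k for k in range(1, ttl + 1)) * tx_cost

def choose_strategy(n_devices, hops_to_server, avg_degree, ttl):
    c = centralized_cost(n_devices, hops_to_server)
    f = flooding_cost(n_devices, avg_degree, ttl)
    return ("centralized" if c <= f else "flooding"), c, f

for env in [(50, 2, 4, 3), (50, 12, 2, 1)]:
    strategy, c, f = choose_strategy(*env)
    print(f"env={env}: centralized={c:.0f}, flooding={f:.0f} -> {strategy}")
```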