Chapter

2022 Book BigDataIntelligenceForSmartApp

Abstract

In digital healthcare systems, patients face significant problems identifying an optimal vacant slot for their appointment, as the number of patient requests directly correlates with slot availability. Existing smart healthcare systems allow the patient to reserve a particular time slot and obtain real-time healthcare information. However, most of these systems require sensitive information about patients, i.e., the desired destination of the healthcare provider. Moreover, existing systems rely on a centralized architecture, which makes them vulnerable to numerous intruder attacks and security breaches (particularly those involving service providers) and exposes the entire system to single-point failure. In this paper, a Ring Signature for permissioned Blockchain-based Private Information Retrieval scheme is proposed to improve privacy preservation in smart healthcare systems. The proposed scheme first utilizes an improved multi-transaction-mode consortium blockchain, constructed from varying numbers of requests to the healthcare providers, to maximize appointment offers with availability, transparency, and security. The proposed scheme also supports information retrieval from multiple domains. The simulation results verify that the proposed scheme delivers superior performance, ensuring maximal patient privacy with reduced computation and communication overheads.
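The chapter gives no code, but the private information retrieval (PIR) idea at its core can be illustrated independently of the blockchain and ring-signature machinery. Below is a minimal stdlib-only sketch of a classic two-server XOR-based PIR protocol; this is a generic textbook construction, not the authors' scheme, and the record values are arbitrary.

```python
import secrets

def pir_fetch(db, i):
    """Two-server XOR PIR: the client recovers db[i] while neither
    server alone learns which index was requested."""
    n = len(db)
    q1 = [secrets.randbelow(2) for _ in range(n)]  # random index subset
    q2 = list(q1)
    q2[i] ^= 1                                     # queries differ only at i

    def server(query):
        # Each server XORs together the records its query selects.
        ans = 0
        for bit, record in zip(query, db):
            if bit:
                ans ^= record
        return ans

    return server(q1) ^ server(q2)  # everything cancels except db[i]

records = [17, 42, 13, 99, 5]      # e.g. encoded appointment slots
slot = pir_fetch(records, 3)
```

Because each query is a uniformly random bit vector on its own, either server in isolation sees only noise; only the client, holding both answers, can cancel the masking terms.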

Chapter
Full-text available
Coronavirus disease 2019 (COVID-19) is a highly transmissible and pathogenic viral infection caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), which spread across the world from Wuhan, China. The current global COVID-19 pandemic has crossed national, progressive, philosophical, cultural, financial, and pedagogical limits. A healthcare system enabled by the Internet of Things (IoT) is useful for proper monitoring of COVID-19 patients through an interconnected system. Hospital admission rates are growing as a result of COVID-19, and patients interacting with healthy people within hospitals cause immense problems. Once COVID-19 has been identified, health precautions are urgently needed to prevent spread within hospitals. This chapter aims to investigate and design an IoT-based system to detect and control the COVID-19 pandemic inside the hospital environment. The proposed design is composed of COVID-19 patient detection with thermal images, an IoT solution for social distancing, and a chatbot COVID-19 consultant, which together function as a single IoT solution in the hospital environment. We also aimed to highlight IoT-enabled wearable technologies with promising outcomes and the potential to identify COVID-19 cases, limiting the spread by alerting government authorities.
Chapter
Full-text available
The dramatic growth of urbanization in modern cities calls for smart strategies to resolve crucial problems such as transportation, healthcare, energy, and civil infrastructure. The Internet of Things (IoT) is one of the most exciting enabling technologies for addressing smart city problems by creating a large global network of interconnected physical objects embedded with electronics, software, sensors, and network connectivity. Since the end of 2019, the world has been confronted with the challenge of the COVID-19 virus, which originated in Wuhan, China. Precautionary measures are expected to be needed worldwide to combat the COVID-19 pandemic until an effective vaccine is developed. The disease has already proven the value of modern smart healthcare, which plays a significant role in preventing COVID-19. The aim of this research is to develop a COVID-19 cluster tracking system and a face mask detection system for public safety. The proposed cluster tracking system was validated on 26 test subjects in an experimental scenario of a potential COVID-19 cluster. The accuracy of the proposed face mask detection system was 86.96%, observed at 0.9756 precision. According to the testing results, the proposed system showed promising outcomes for the prevention of the COVID-19 pandemic in a smart city.
Conference Paper
Full-text available
In this world of modern technologies and media, online news publications and portals are growing at high speed. Consequently, it has become almost impossible to fact-check news headlines manually, given the increase in the number of content writers, online media portals, and news portals. Fake headlines are mostly filled with bogus or misleading content; they attract readers by putting phony words or fraudulent content in the headlines to increase views and shares. These fake and misleading headlines create havoc in readers' lives and misguide them in many ways. We therefore took a step to help readers differentiate between fake and real news. We propose a model that can successfully detect whether a story is fake or accurate based solely on the news headline. We created a novel dataset in the Bengali language and reached our target using the Gaussian Naive Bayes algorithm. We evaluated other algorithms as well, but Gaussian Naive Bayes performed best in our model. The model uses text features based on TF-IDF and an Extra Tree Classifier for attribute selection. Using Gaussian Naive Bayes, we obtained 87% accuracy, the best result among all algorithms we evaluated in this model.
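As a rough illustration of the pipeline this abstract describes (TF-IDF features fed to Gaussian Naive Bayes), here is a self-contained stdlib-only sketch. The toy English headlines, the variance floor, and the omission of the Extra Tree attribute-selection step are all simplifying assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors for a list of tokenised documents."""
    vocab = sorted({w for d in docs for w in d})
    df = Counter(w for d in docs for w in set(d))
    n = len(docs)
    return vocab, [[Counter(d)[w] / len(d) * math.log((1 + n) / (1 + df[w]))
                    for w in vocab] for d in docs]

class GaussianNB:
    """Per-class, per-feature Gaussian likelihoods with class priors."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.stats = {}
        for c in self.classes:
            rows = [x for x, lab in zip(X, y) if lab == c]
            means = [sum(col) / len(rows) for col in zip(*rows)]
            vars_ = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                     for col, m in zip(zip(*rows), means)]  # variance floor
            self.stats[c] = (means, vars_, len(rows) / len(X))
        return self

    def predict(self, x):
        def loglik(c):
            means, vars_, prior = self.stats[c]
            ll = math.log(prior)
            for v, m, s2 in zip(x, means, vars_):
                ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
            return ll
        return max(self.classes, key=loglik)

docs = ["shocking miracle cure revealed".split(),
        "you wont believe this miracle".split(),
        "parliament passes annual budget".split(),
        "minister announces budget plan".split()]
labels = [1, 1, 0, 0]                      # 1 = fake headline, 0 = real
_, X = tfidf_vectors(docs)
model = GaussianNB().fit(X, labels)
```

On this tiny corpus the class-specific vocabulary dominates, so the fitted model separates the two headline types cleanly.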
Article
Full-text available
Distributed Data Mining (DDM) has been proposed as a means to analyze distributed data, discovering patterns and making predictions based on multiple distributed data sources. However, DDM faces several problems in terms of autonomy, privacy, performance, and implementation. DDM requires homogeneity of environment, control, administration, and classification algorithm(s), and such requirements are too strict and inflexible for many applications. In this paper, we propose combining a Multi-Agent System (MAS) with DDM (MAS-DDM). MAS is a mechanism for creating goal-oriented autonomous agents within shared environments, with communication and coordination facilities. We show that MAS-DDM is both desirable and beneficial. In MAS-DDM, agents communicate their beliefs (calculated classifications) while keeping private, non-sharable data covered, and other agents decide whether to use those beliefs when classifying instances and adjusting their prior assumptions about each class of data. In MAS-DDM, we develop and use a modified Naive Bayesian algorithm because (1) Naive Bayes has been shown to be the most widely used algorithm for dealing with uncertain data, and (2) we wish to show that even if all agents in MAS-DDM use the same algorithm, MAS-DDM performs better than DDM approaches with non-communicating processes. Point (2) provides evidence that the exchange of information between agents significantly increases the accuracy of the classification task.
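The belief-exchange idea can be sketched in a few lines: each agent shares only its class-probability beliefs, never its records. The agent structure and the simple averaging rule below are illustrative assumptions, not the paper's modified Naive Bayes protocol.

```python
from collections import Counter, defaultdict

class Agent:
    """Holds private training labels; exposes only class beliefs."""
    def __init__(self, local_labels):
        self.counts = Counter(local_labels)   # raw records never leave the agent

    def beliefs(self):
        n = sum(self.counts.values())
        return {c: k / n for c, k in self.counts.items()}

def pooled_beliefs(agents):
    """Average the agents' shared beliefs into a common class prior."""
    pooled = defaultdict(float)
    for agent in agents:
        for cls, p in agent.beliefs().items():
            pooled[cls] += p / len(agents)
    return dict(pooled)

a1 = Agent(["spam"] * 3 + ["ham"] * 1)   # locally skewed towards spam
a2 = Agent(["spam"] * 1 + ["ham"] * 3)   # locally skewed towards ham
prior = pooled_beliefs([a1, a2])         # balanced after the exchange
```

Each agent's local prior is biased by its own data; pooling the exchanged beliefs corrects the bias without ever moving the private records.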
Chapter
Full-text available
At the moment, the best and only way to reduce the spread of coronavirus disease 2019 is to limit close contact with others: when social distance is respected, infection is less probable. Since this is a new lifestyle for everyone, it is hard to keep one's distance at all times; people forget to keep distance or do not take the situation seriously. That is why, in this paper, we propose a smart surveillance solution. Our test prototype enforces social distancing by detecting persons, calculating the distances between them, and generating loud vocal alerts. The smart surveillance prototype is based on a Raspberry Pi and a Pi Camera. We then conduct a comparison study of pretrained object detection models. SSD-MobileNet gives the most satisfying result on the Raspberry Pi with its limited computing resources. Although implementing a CNN-based model on the Raspberry Pi is challenging, we reach a rate of 1.1 FPS for the real-time object detection and distance analysis system.
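The distance-checking step of such a prototype reduces to a pairwise test over detected person positions. A minimal sketch follows; the ground-plane coordinates in metres and the 2 m threshold are assumptions, and the detection itself (SSD-MobileNet on camera frames in the paper) is out of scope here.

```python
import math
import itertools

def distancing_alerts(persons, min_dist=2.0):
    """Return index pairs of detected persons closer than min_dist metres."""
    return [(i, j)
            for (i, p), (j, q) in itertools.combinations(enumerate(persons), 2)
            if math.dist(p, q) < min_dist]

# Ground-plane coordinates (metres) for three detected persons.
alerts = distancing_alerts([(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)])
```

Each returned pair would trigger one vocal alert in the prototype.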
Article
Full-text available
Vehicles nowadays are equipped with a vast number of sensors that collect data about the vehicle and its environment. This, combined with the automotive industry's acceleration towards interconnected and autonomous cars, means that security, specifically the ability to detect compromised nodes and to collect and preserve evidence of attacks or malicious activities, emerges as a priority for successfully deploying the Internet of Vehicles ecosystem. To date, digital forensics efforts have been concerned with in-vehicle forensics. In this paper, we present the challenges of integrating digital forensics into an IoV ecosystem, and we introduce the Attack Attribution and Forensics Readiness Tool of the nIoVe system, an integrated holistic cybersecurity solution for the IoV.
Article
Full-text available
Background: As public health strategists and policymakers explore different approaches to lessen the devastating effects of the novel coronavirus disease (COVID-19), blockchain technology has emerged as a resource that can be utilized in numerous ways. Many blockchain technologies have been proposed or implemented during the COVID-19 pandemic; however, to the best of our knowledge, no comprehensive reviews have been conducted to uncover and summarise the main features of these technologies. Objective: This study aims to explore proposed or implemented blockchain technologies used to mitigate COVID-19 challenges, as reported in the literature. Methods: We conducted a scoping review in line with the guidelines of the PRISMA Extension for Scoping Reviews (PRISMA-ScR). To identify relevant studies, we searched 11 bibliographic databases (e.g., EMBASE and MEDLINE) and conducted backward and forward reference list checking of the included studies and relevant reviews. Study selection and data extraction were conducted by 2 reviewers independently. Data extracted from the included studies were narratively summarised and described. Results: 19 of 225 retrieved studies met the eligibility criteria of this review. The included studies reported 10 use cases of blockchain to mitigate COVID-19 challenges; the most prominent use cases were contact tracing and immunity passports. While blockchain technology was developed in 10 studies, its use was only proposed in the remaining 9 studies. Public blockchain was the most commonly utilized type in the included studies. Altogether, 8 different consensus mechanisms were used in the included studies. Of the 10 studies that identified the platform used, 9 used Ethereum to run the blockchain. Solidity was the most prominent programming language used in developing blockchain technology in the included studies. The transaction cost was reported in only 4 of the included studies and varied between USD 10⁻¹⁰ and USD 5. Expected latency and expected scalability were not identified in the included studies. Conclusion: Blockchain technologies are expected to play an integral role in the fight against the COVID-19 pandemic. Many possible applications of blockchain were found in this review; however, most of them are not mature enough to reveal their expected impact on the fight against COVID-19. We encourage governments, health authorities, and policymakers to consider all blockchain applications suggested in the current review to combat COVID-19 challenges. There is a pressing need to empirically examine how effective blockchain technologies are in mitigating COVID-19 challenges. Further studies are required to assess the performance of blockchain technologies in the fight against COVID-19 in terms of transaction cost, scalability, and/or latency when using different consensus algorithms, platforms, and access types.
Article
Full-text available
Numerous works focus on the data privacy issue of the Internet of Things (IoT) when training a supervised Machine Learning (ML) classifier. Most existing solutions assume that the classifier's training data can be obtained securely from different IoT data providers. The primary concern is data privacy when training a K-Nearest Neighbour (K-NN) classifier with IoT data from various entities. This paper proposes secure K-NN, which provides privacy-preserving K-NN training over IoT data. It employs blockchain technology with a partially homomorphic cryptosystem (PHC) known as Paillier in order to protect the data privacy of all participants (i.e., IoT data analyst C and IoT data provider P). When C analyzes the IoT data of P, privacy issues arise for both participants, ordinarily requiring a trusted third party. To protect each candidate's privacy and remove the dependency on a third party, we assemble secure building blocks in secure K-NN based on blockchain technology. Firstly, a protected data-sharing platform is developed among the various P, where encrypted IoT data is registered on a shared ledger. Secondly, the secure polynomial operation (SPO), secure biasing operation (SBO), and secure comparison (SC) are designed using the homomorphic property of Paillier. We show that secure K-NN does not need any trusted third party at interaction time, and rigorous security analysis demonstrates that secure K-NN protects sensitive data privacy for each P and C. Secure K-NN achieved 97.84%, 82.33%, and 76.33% precision on the BCWD, HDD, and DD datasets respectively. The performance of secure K-NN closely matches that of general K-NN and outperforms all previous state-of-the-art methods.
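The homomorphic property the secure building blocks rely on is that, in Paillier, multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A toy sketch with deliberately small primes follows; it is insecure and for illustration only, and is a generic textbook Paillier, not the paper's protocol.

```python
import math
import random

def paillier_keygen(p=1789, q=2003):
    """Toy Paillier key pair from two small primes (illustration only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, with L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                    # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 21), encrypt(pub, 34)
# Additive homomorphism: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
total = decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2))
```

This is the property that lets distance-like quantities be accumulated over encrypted IoT data without revealing the individual values.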
Article
Full-text available
Objective: New malware has increased significantly in recent years, becoming more dangerous for many applications, so researchers are focusing on solutions that defend against new malware trends and variants, especially zero-day malware attacks. The prime goal of our proposition is to reach a high security level by defending against malware attacks effectively using advanced techniques. Methods: In this paper, we propose an Intelligent Cybersecurity Framework specialized in malware attacks, organized in a layered architecture. After receiving an unknown malware sample, the Framework Core layer uses a malware visualization technique to process the sample. We then classify malware samples into their families using the K-Nearest Neighbour, Decision Tree, and Random Forest algorithms. Classification results are given in the last layer and, based on a Malware Behavior Database, we are able to warn users with a detailed report on the malicious behavior of the given malware family. The proposed Intelligent Cybersecurity Framework is implemented behind an easy-to-use graphical user interface. Results: Comparing machine learning classifiers, the Random Forest algorithm gives the best results in the classification task, with a precision of 97.6%. Conclusion: Nevertheless, we take the results of the other classifiers into account for greater reliability. The obtained results are both efficient and fast, meeting the general requirements of cybersecurity frameworks.
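The family-classification step with K-Nearest Neighbour can be sketched generically. The toy 2-D feature vectors and k=3 below are assumptions, standing in for the visualization-derived features the framework actually extracts.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k training samples nearest to the query.
    train: list of (feature_vector, family_label) pairs."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 2-D feature vectors standing in for malware-image features.
samples = [((0.0, 0.0), "FamilyA"), ((0.0, 1.0), "FamilyA"),
           ((1.0, 0.0), "FamilyA"), ((5.0, 5.0), "FamilyB"),
           ((5.0, 6.0), "FamilyB")]
family = knn_predict(samples, (0.5, 0.5))
```

The predicted family would then index the Malware Behavior Database to produce the user-facing report.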
Article
Full-text available
The advent of the World Wide Web and the rapid adoption of social media platforms (such as Facebook and Twitter) paved the way for information dissemination on a scale never before witnessed in human history. With the current usage of social media platforms, consumers are creating and sharing more information than ever before, some of it misleading and with no relevance to reality. Automated classification of a text article as misinformation or disinformation is a challenging task; even a domain expert has to explore multiple aspects before giving a verdict on an article's truthfulness. In this work, we propose a machine learning ensemble approach for the automated classification of news articles. Our study explores different textual properties that can be used to distinguish fake content from real. Using those properties, we train a combination of different machine learning algorithms with various ensemble methods and evaluate their performance on 4 real-world datasets. Experimental evaluation confirms the superior performance of our proposed ensemble learner approach in comparison to individual learners.
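A hard-voting ensemble of the kind evaluated here can be sketched in a few lines. The three keyword heuristics below are placeholders for the trained models the paper combines; a real ensemble would vote over fitted classifiers.

```python
from collections import Counter

def ensemble_predict(classifiers, article):
    """Hard voting: each learner casts one vote; majority label wins."""
    votes = Counter(clf(article) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Placeholder keyword heuristics standing in for trained models.
learners = [
    lambda a: "fake" if "miracle" in a else "real",
    lambda a: "fake" if "shocking" in a else "real",
    lambda a: "fake" if "secret" in a else "real",
]
verdict = ensemble_predict(learners, "shocking miracle diet")
```

With independent, better-than-chance learners, majority voting tends to beat each individual learner, which is the effect the paper's evaluation confirms.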
Article
Full-text available
The vehicular Internet of Things (IoT) comprises enabling technologies for a large number of important applications including collaborative autonomous driving and advanced transportation systems. Due to the mobility of vehicles, strict application requirements, and limited communication resources, the conventional centralized control fails to provide sufficient quality of service for connected vehicles, so a decentralized approach is required in the vicinity to satisfy the requirements of delay-sensitive and mission-critical applications. A decentralized system is also more resistant to the single point of failure problem and malicious attacks. Blockchain technology has been attracting great interest due to its capability of achieving a decentralized, transparent, and tamper-resistant system. There are many studies focusing on the use of blockchain in managing data and transactions in vehicular environments. However, the application of blockchain in vehicular environments also faces some technical challenges. In this paper, we first explain the fundamentals of blockchain and vehicular IoT. Then, we conduct a literature review on the existing research efforts of the blockchain for vehicular IoT by discussing the research problems and technical issues. After that, we point out some future research issues considering the characteristics of both blockchain and vehicular IoT.
Article
Full-text available
Owing to the limited resources of sensor nodes, we propose an efficient hybrid routing scheme using a dynamic cluster-based static routing protocol (DCBSRP), leveraging the ad hoc on-demand distance vector (AODV) routing protocol and the low-energy adaptive clustering hierarchy (LEACH) protocol. In the proposed scheme, cluster head (CH) nodes are formed dynamically for a fixed interval, whereas static routing is applied within the designated clusters by utilizing the AODV routing protocol. The static routing condition of the proposed scheme requires all connected nodes of the cluster, for a defined interval of time (T), to share their collected information through a specific CH node. Once the time interval T is completed, all ordinary nodes connected to that CH are released and are free to advertise their CH candidateship within the network. The node receiving the maximum number of route replies (RREPs) is then selected as the next CH node in the vicinity of the deployed sensor nodes. However, under the DCBSRP protocol, a recently selected CH node does not advertise its candidateship for five consecutive cycles and acts as an ordinary node. The simulation results show significant improvement in the lifetime and participation of ordinary nodes in the network until its end stage. In the proposed scheme, the participation of ordinary nodes in the network is 95.9%, which not only balances load between participating nodes but also improves the network lifetime relative to field-proven schemes. Moreover, the simulation results show that the scheme outperforms rival schemes in terms of communication cost, end-to-end delay, throughput, packet loss ratio, and energy consumption.
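The CH-selection rule described above (maximum RREP count, with recently serving CHs sitting out a five-cycle cooldown) can be sketched directly; the node names and counts are illustrative.

```python
def select_cluster_head(rrep_counts, cooling_down):
    """The node with the most route replies (RREPs) becomes the next CH;
    nodes still within their five-cycle cooldown act as ordinary nodes."""
    eligible = {node: c for node, c in rrep_counts.items()
                if node not in cooling_down}
    return max(eligible, key=eligible.get)

# n2 has the most RREPs but just served as CH, so n1 is chosen instead.
head = select_cluster_head({"n1": 7, "n2": 9, "n3": 4}, cooling_down={"n2"})
```

The cooldown is what spreads the CH role (and its energy cost) across the cluster instead of repeatedly draining the best-connected node.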
Article
Full-text available
With recent innovative developments in the Internet of Things, the usage of wearable devices in body area networks has become smarter and has reached a new level in terms of connectivity and diagnosis. Energy consumption, latency, and network coverage are some of the research issues arising in IoT-based body area networks. To address the latency issue, networks can adopt fog architectures that perform computation, data analysis, and storage close to the users. To improve the battery life of sensor nodes, intelligent proactive routing algorithms for body-fog-cloud area networks are needed. In this research, a novel algorithm, a modified WORN-DEAR algorithm for BAN-IoT networks, is proposed to achieve energy-efficient routing and scheduling on the principle of deep learning based adaptive distance-energy features. The work is simulated in the Cooja-Contiki network simulator and implemented on different testbeds with an ESP8266 WiFi SoC interface. The final results were compared with the existing WORN-DEAR algorithm, and LSTM achieved a higher accuracy of 98% compared to other machine learning algorithms such as logistic regression, naïve Bayes, SVM, and KNN.
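A next-hop choice driven by combined distance-energy features can be sketched as a simple weighted score. The weight alpha and the inputs normalised to [0, 1] are assumptions; in the paper these features are adaptive and learned (LSTM-based) rather than a fixed linear mix.

```python
def next_hop(neighbours, alpha=0.6):
    """Score each neighbour by residual energy minus weighted distance;
    both quantities are assumed normalised to [0, 1]."""
    return max(neighbours,
               key=lambda n: alpha * n["energy"] - (1 - alpha) * n["distance"])["id"]

# A high-energy but distant node loses to a moderate, very close one.
hop = next_hop([{"id": "a", "energy": 0.9, "distance": 0.9},
                {"id": "b", "energy": 0.5, "distance": 0.1}])
```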
Article
Full-text available
Resource-limited networks have various applications in our daily life. However, a challenging issue associated with these networks is a uniform load-balancing strategy to prolong their lifespan. In the literature, various schemes try to improve the scalability and reliability of such networks, but the majority of these approaches assume homogeneous networks. Moreover, most techniques use distance, residual energy, and hop count values to balance the energy consumption of participating nodes and prolong the network lifetime. Therefore, an energy-efficient load-balancing scheme for heterogeneous wireless sensor networks (WSNs) needs to be developed. In this paper, an energy gauge node (EGN) based communication infrastructure is presented to develop a uniform load-balancing strategy for resource-limited networks. EGN measures the residual energy of the participating nodes Ci ∈ Network. Moreover, EGN nodes advertise hop selection information in the network, which ordinary nodes use to update their routing tables and to unicast their collected data to the destination. EGN nodes use a built-in configuration to categorize their neighboring nodes into powerful, normal, and critical energy categories. EGN uses the strength of packet reply (SPR) and round-trip time (RTT) values to measure a neighboring node's residual energy (Er), and the node(s) with maximum Er values are advertised as reliable paths for communication. Furthermore, EGN transmits a route request (RREQ) in the network and receives a route reply (RREP) from every node residing in its close proximity, which is used to compute the Er values of the neighboring node(s). If the Er value of a neighboring node is less than the defined category threshold, the node is advertised as unavailable for communication as a relaying node.
The simulation results show that our proposed scheme surpasses existing schemes in terms of lifespan of individual nodes, throughput, packet loss ratio (PLR), latency, communication costs, and computation costs. Moreover, our proposed scheme prolongs the lifespan of the WSN, as well as of individual nodes, compared with existing schemes in the operational environment.
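The EGN categorisation step can be sketched as threshold tests over the measured residual energy Er. The 0.7/0.3 thresholds are illustrative assumptions, since the built-in configuration values are not given in the abstract.

```python
def categorise(er, high=0.7, low=0.3):
    """Map a node's residual energy Er (fraction of full charge) to the
    EGN categories; critical nodes are not advertised as relays."""
    if er >= high:
        return "powerful"
    if er >= low:
        return "normal"
    return "critical"

def reliable_relay(neighbour_er):
    """Advertise the neighbour with maximum Er, excluding critical nodes."""
    ok = {n: er for n, er in neighbour_er.items()
          if categorise(er) != "critical"}
    return max(ok, key=ok.get) if ok else None

relay = reliable_relay({"n1": 0.82, "n2": 0.45, "n3": 0.12})
```

In the actual scheme Er is not read directly but inferred from SPR and RTT measurements; here it is passed in as a ready value.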
Article
Full-text available
Data exchange has increased rapidly in recent years with the growing use of mobile networks. Information (text, image, audio, and video) shared over unsecured mobile network channels is liable to attack and theft. Encryption techniques are the most suitable methods to protect information from hackers. The Hill cipher is a symmetric technique with a simple structure and fast computation, but weak security, because sender and receiver must use and share the same private key over a non-secure channel. Therefore, a novel hybrid encryption approach combining an elliptic curve cryptosystem with the Hill cipher (ECCHC) is proposed in this paper to convert the Hill cipher from a symmetric (private key) technique to an asymmetric (public key) one, increase its security and efficiency, and resist hackers. There is then no need to share the secret key between sender and receiver, as both can generate it from their private and public keys. The proposed approach also contributes the ability to encrypt every character in the 128-character ASCII table by using its ASCII value directly, without needing to assign a separate numerical value to each character. The main advantages of the proposed method are its computational simplicity, security efficiency, and faster computation.
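The Hill-cipher half of ECCHC, operating directly on ASCII values mod 128 as the abstract describes, can be sketched as follows. The fixed 2x2 key is an assumption for illustration; in ECCHC the key material is derived from the ECC key exchange.

```python
def hill_encrypt(text, key):
    """2x2 Hill cipher over ASCII values mod 128; odd-length input is
    padded with a NUL character."""
    if len(text) % 2:
        text += "\x00"
    (a, b), (c, d) = key
    out = []
    for i in range(0, len(text), 2):
        x, y = ord(text[i]), ord(text[i + 1])
        out.append(chr((a * x + b * y) % 128))
        out.append(chr((c * x + d * y) % 128))
    return "".join(out)

def hill_decrypt(cipher, key):
    """Invert the key matrix mod 128; the determinant must be odd
    (coprime with 128) for the inverse to exist."""
    (a, b), (c, d) = key
    det_inv = pow(a * d - b * c, -1, 128)
    inv_key = ((det_inv * d % 128, -det_inv * b % 128),
               (-det_inv * c % 128, det_inv * a % 128))
    return hill_encrypt(cipher, inv_key).rstrip("\x00")  # drop NUL padding

key = ((3, 3), (2, 5))   # determinant 9 (odd), so invertible mod 128
```

Working mod 128 is what lets every ASCII character be encrypted via its code point directly, as the paper highlights.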
Article
Full-text available
The complex and interdependent nature of smart cities raises significant political, technical, and socioeconomic challenges for designers, integrators, and organisations involved in administering these new entities. An increasing number of studies focus on the security, privacy, and risks within smart cities, highlighting the threats relating to information security and the challenges for smart city infrastructure in the management and processing of personal data. This study analyses many of these challenges, offers a valuable synthesis of the relevant key literature, and develops a smart city interaction framework. The study is organised around a number of key themes within smart cities research: privacy and security of mobile devices and services; smart city infrastructure, power systems, healthcare, frameworks, algorithms and protocols to improve security and privacy; operational threats for smart cities; use and adoption of smart services by citizens; use of blockchain; and use of social media. This comprehensive review provides a useful perspective on many of the key issues and offers key directions for future studies. The findings of this study can provide an informative research framework and reference point for academics and practitioners.
Article
Full-text available
The explosive growth in fake news and its erosion to democracy, justice, and public trust has increased the demand for fake news detection and intervention. This survey reviews and evaluates methods that can detect fake news from four perspectives: (1) the false knowledge it carries, (2) its writing style, (3) its propagation patterns, and (4) the credibility of its source. The survey also highlights some potential research tasks based on the review. In particular, we identify and detail related fundamental theories across various disciplines to encourage interdisciplinary research on fake news. We hope this survey can facilitate collaborative efforts among experts in computer and information sciences, social sciences, political science, and journalism to research fake news, where such efforts can lead to fake news detection that is not only efficient but more importantly, explainable.
Article
Full-text available
The heavy reliance on digital technology by individuals and organizations has reshaped the traditional economy into a digital economy. In response, cybercriminals' attention has shifted dramatically from showing off skills and conducting individual attacks to highly sophisticated attacks with financial gain as the goal. This inevitably poses a challenge to the cybersecurity community as it strives to find solutions that preserve the confidentiality, availability, and integrity of individual users' and corporations' private data and services. Cybercriminals mainly deploy malware to achieve their goals, which could be in the form of ransomware, botnets, etc. The use of encryption, packing, and polymorphism techniques makes it harder to detect malware files, especially when these are created in great numbers every day. In this paper, a novel framework named Malware Spectrogram Image Classification (MSIC) is proposed. It employs spectrogram images in conjunction with a convolutional neural network to classify a malware file into its corresponding family and to differentiate it from a benign file. Further, this research shares with the research community two privately collected labeled malicious and benign datasets. The evaluation of MSIC showed an effectiveness of 91.6% F-measure and 92.8% accuracy in classifying malware files into their corresponding families, compared with, respectively, 90.6% and 92.3% for the grayscale image classification approach. Likewise, in classifying files as malicious or benign, MSIC scored 96% F-measure and accuracy, compared with 95.5% for the grayscale solution. MSIC also required less computational time for converting and resizing the files than the grayscale framework.
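For contrast with the spectrogram approach, the grayscale baseline mentioned above simply reinterprets a file's raw bytes as pixels. A minimal sketch follows; the fixed row width is an arbitrary assumption, as real systems typically scale it with file size.

```python
def bytes_to_grayscale(data, width=8):
    """Reinterpret raw bytes as a 2-D grayscale matrix, one pixel per
    byte (0-255), padding the final row with zeros."""
    rows = [list(data[i:i + width]) for i in range(0, len(data), width)]
    if rows and len(rows[-1]) < width:
        rows[-1] += [0] * (width - len(rows[-1]))
    return rows

# Ten synthetic bytes become a 3x4 grayscale image.
image = bytes_to_grayscale(bytes(range(10)), width=4)
```

MSIC replaces this byte-to-pixel mapping with a spectrogram transform before the CNN, which is where its reported accuracy gain comes from.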
Conference Paper
Full-text available
Malware attacks are becoming increasingly destructive. Hackers target all types of devices, from large systems to the smallest ones. Research communities in the cybersecurity field are working hard to defend against malware attacks as well as any other malicious activity. In fact, the primary goal is to defend against cyberattacks as fast as possible to avoid catastrophic damage. In this paper, we propose a new cybersecurity architecture specialized in defending against malware attacks. The proposal brings together four layers based on malware behavior. In addition, we build a malware classifier using a malware visualization technique, GIST descriptor features, and the K-Nearest Neighbour algorithm. The classifier is able to assign each input malware image to its corresponding family, with the family distribution divided by malware behavior. To attain a speedy malware classifier, we use the univariate feature selection technique to reduce the GIST features, succeeding in going from 320 to only 50 features in less time, with a very close accuracy of 97.67%.
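Univariate feature selection of the kind used to shrink the GIST descriptor scores each feature column independently and keeps the top k. The sketch below uses per-feature variance as the score; the abstract does not name its scoring function, so this choice is an assumption.

```python
def top_k_by_variance(X, k):
    """Keep the k feature columns with the highest variance (a simple
    univariate score); returns the kept indices and the reduced matrix."""
    n = len(X)
    cols = list(zip(*X))

    def variance(col):
        mean = sum(col) / n
        return sum((v - mean) ** 2 for v in col) / n

    keep = sorted(sorted(range(len(cols)),
                         key=lambda j: variance(cols[j]),
                         reverse=True)[:k])
    return keep, [[row[j] for j in keep] for row in X]

# Column 0 is constant (zero variance) and gets dropped.
kept, reduced = top_k_by_variance([[0, 5, 1], [0, 9, 1], [0, 1, 9]], k=2)
```

Going from 320 to 50 GIST features in the paper is exactly this kind of column-wise pruning, scaled up.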
Article
Full-text available
The jamming attack is one of the most common threats to wireless networks: a high-power signal is sent into the network to corrupt legitimate packets. To address the jamming attack problem, the Particle Swarm Optimization (PSO) algorithm is used to describe and simulate the behavior of a large group of entities with similar characteristics or attributes as they progress towards an optimal group, or swarm. In this study, an enhanced version of PSO, called the Improved PSO algorithm, is proposed to enhance the detection of jamming attack sources over randomized mobile networks. The simulation results show that the Improved PSO algorithm is faster than other algorithms at obtaining the location in the given mobile network at which the coverage area is minimal and hence central. The Improved PSO was also applied to a mobile network and evaluated with two experiments. In the first experiment, the Improved PSO was compared with PSO, GWO, and MFO; the obtained results show that the Improved PSO is the best among these algorithms at obtaining the location of a jamming attack. In the second experiment, the Improved PSO was compared with PSO in a mobile network environment; the results show that the Improved PSO is better than PSO at obtaining the location in the mobile network where the coverage area is minimal and hence central.
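A minimal standard PSO (not the paper's Improved PSO) locating a simulated jamming source illustrates the underlying idea. The inertia and acceleration constants, the fixed seed, and the quadratic signal-mismatch fitness are conventional assumptions.

```python
import random

def pso(fitness, dim=2, particles=20, iters=100, lo=-10.0, hi=10.0):
    """Minimal particle swarm: particles track personal and global bests."""
    rng = random.Random(7)                      # fixed seed for repeatability
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Hypothetical jammer at (3, -2); the fitness is the squared mismatch
# between a particle's position and the inferred signal-strength peak.
estimate = pso(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 2.0) ** 2)
```

The swarm converges on the fitness minimum, i.e. the estimated jammer location; the paper's improvement targets how quickly this happens in a mobile setting.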
Article
Full-text available
Wireless Sensor Networks (WSNs) are vulnerable to various security threats. One of the most common is the jamming attack, where the attacker uses same-frequency signals to jam network transmission. In this paper, an edge node scheme is proposed to address the issue of jamming attacks in WSNs. Three edge nodes with different transmission frequencies in the same bandwidth are used in the WSN deployment area. The different transmission frequencies and the Round Trip Time (RTT) of the transmitted signal make it possible to identify the jammed channel in the WSN transmission medium. If an attacker jams one of the transmission channels, the other two edge nodes verify the medium's serviceability by transmitting information from the same deployed WSN. Furthermore, the RTT of the adjacent channel is also disturbed from its defined interval, due to high-frequency interference in the adjacent channels, which indicates a jamming attack on the network. The simulation results were quite consistent during analysis when jamming the frequency channel of each edge node in a step-wise process. The detection rate of jamming attacks was about 94% for our proposed model, far better than existing schemes. Moreover, statistical analyses were undertaken against field-proven schemes and were found to be quite convincing, with an average improvement of 6% over existing schemes.
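The RTT-deviation test at the heart of the scheme can be sketched as follows; the baseline RTT values and the 20% tolerance are illustrative assumptions, not the paper's calibrated intervals.

```python
def jammed_channels(measured_rtt, expected_rtt, tolerance=0.2):
    """Flag channels whose RTT deviates from its expected interval by
    more than `tolerance` (as a fraction of the expected value)."""
    return [ch for ch, rtt in measured_rtt.items()
            if abs(rtt - expected_rtt[ch]) > tolerance * expected_rtt[ch]]

# Three edge-node channels; f2's RTT has blown out, suggesting jamming.
flags = jammed_channels({"f1": 0.050, "f2": 0.310, "f3": 0.052},
                        {"f1": 0.050, "f2": 0.050, "f3": 0.050})
```

Because the three edge nodes sit on different frequencies, the unflagged channels remain available to verify serviceability and carry traffic.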
Article
Full-text available
Nowadays, the increasing number of patients, accompanied by the emergence of new symptoms and diseases, makes health monitoring and assessment a complicated task for medical staff and hospitals. Indeed, processing the big, heterogeneous data collected by biomedical sensors, along with the need for patient classification and disease diagnosis, has become a major challenge for several health-based sensing applications. The combination of remote sensing devices and big data technologies has thus proven to be an efficient, low-cost solution for healthcare applications. In this paper, we propose a robust big data analytics platform for real-time patient monitoring and decision making to help both hospitals and medical staff. The proposed platform relies on big data technologies and data analysis techniques and consists of four layers: real-time patient monitoring; real-time decision and data storage; patient classification and disease diagnosis; and data retrieval and visualization. To evaluate its performance, we implemented the platform on the Hadoop ecosystem and applied the proposed algorithms to real health data. The obtained results show the effectiveness of our platform at efficiently performing patient classification and disease diagnosis in healthcare applications.
Article
Full-text available
With the widespread adoption of Internet of Things (IoT) technologies in building smart cities, connected devices often offload computation tasks to nearby edge locations (base stations) to reduce overall computation and network delay. However, serving an ever-increasing number of end devices at these traditional edge locations is becoming impossible, so they fail to deliver the agreed quality of service to all requesting devices. The backend cloud data center is available to serve these requests but incurs additional communication delay, making it unsuitable for delay-sensitive applications. Furthermore, the underlying network is inherently ad hoc, which makes it prone to malicious nodes that degrade its overall performance. In this work, we propose a secure fog computing paradigm in which roadside units (RSUs) offload tasks to nearby fog vehicles based on repute scores maintained on a distributed blockchain ledger. The experimental results demonstrate a significant performance gain in terms of queuing time, end-to-end delay, and task completion rate compared with a baseline queuing-based task offloading scheme.
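The repute-based selection step can be sketched as below, assuming the repute scores have already been read from the distributed ledger. The field names and the minimum-repute cutoff are illustrative, not taken from the paper:

```python
def select_fog_vehicle(vehicles, min_repute=0.6):
    """Pick an offload target for an RSU: filter out low-repute (potentially
    malicious) vehicles, then prefer the highest repute, breaking ties by
    the shortest task queue to keep queuing time down."""
    trusted = [v for v in vehicles if v["repute"] >= min_repute]
    if not trusted:
        return None  # no trustworthy fog vehicle in range; fall back to cloud
    return min(trusted, key=lambda v: (-v["repute"], v["queue_len"]))

candidates = [
    {"id": "v1", "repute": 0.9, "queue_len": 3},
    {"id": "v2", "repute": 0.9, "queue_len": 1},
    {"id": "v3", "repute": 0.4, "queue_len": 0},  # reputed malicious, skipped
]
best = select_fog_vehicle(candidates)  # -> the "v2" record
```

Returning `None` when nothing trustworthy is in range models the paper's fallback situation where the request would incur the extra cloud communication delay.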
Article
Full-text available
k-means clustering, which partitions data records into clusters such that the records in the same cluster are close to each other, has many important applications, such as image segmentation and gene detection. While k-means clustering has been well studied in a significant body of work, most existing schemes are not designed for peer-to-peer (P2P) networks, which impose several efficiency and security challenges on clustering over distributed data. In this paper, we propose a novel privacy-preserving k-means clustering scheme over distributed data in P2P networks that achieves local synchronization and privacy protection. Specifically, we design a secure aggregation protocol and a secure division protocol based on homomorphic encryption to compute clusters securely without revealing the data of any individual peer. Moreover, we propose a novel message encoding method to improve the performance of our aggregation protocol. We formally prove that the proposed scheme is secure under the semi-honest model and demonstrate its performance.
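The secure aggregation idea can be conveyed with a small sketch. The paper builds its protocol on homomorphic encryption; the sketch below substitutes additive secret sharing, a simpler primitive with the same essential property that only the sum is revealed, never an individual peer's value:

```python
import random

Q = 2**61 - 1  # public modulus agreed by all peers

def share(value, n_peers, rng):
    """Split a value into n additive shares mod Q. Any n-1 shares look
    uniformly random, so no coalition short of all peers learns the value."""
    shares = [rng.randrange(Q) for _ in range(n_peers - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def secure_sum(values, rng=None):
    """Simulate the aggregation round: each peer shares its local value
    (e.g., a partial cluster-centroid sum), each peer j adds up the j-th
    shares it receives, and only the combined total is reconstructed."""
    rng = rng or random.Random(0)
    n = len(values)
    all_shares = [share(v, n, rng) for v in values]
    partials = [sum(all_shares[i][j] for i in range(n)) % Q for j in range(n)]
    return sum(partials) % Q
```

In the k-means setting, each peer's `value` would be its local sum of points assigned to a cluster, and the reconstructed total feeds the centroid update; the paper's homomorphic-encryption and secure-division protocols add the machinery this toy omits.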
Article
Full-text available
Wireless sensor networks (WSNs) are infrastructure-free organizations of various operational devices. Owing to their attractive characteristics, these networks are used in many different applications. For WSNs it is necessary to collect real-time and precise data, as critical decisions in different application scenarios are based on these readings. Authentication of the operational devices in WSNs is a challenging issue for the research community, as these networks are dynamic and self-organizing in nature. Moreover, because of the constraint-oriented nature of these devices, a generalized lightweight authentication scheme needs to be developed. In this paper, a lightweight anonymous authentication technique is presented to resolve the black-hole attack issue in WSNs. In this scheme, the Medium Access Control (MAC) address is used to register every node in the WSN with its nearest cluster head (CH) or base station module(s). The registration is performed in an offline phase to ensure the authenticity of both legitimate nodes and base stations in an operational network. The proposed technique resolves the black-hole attack issue because an intruder node would need to be registered with both the gateway and its neighbouring nodes, which is not possible. Moreover, a hybrid data encryption scheme based on the elliptic curve integrated encryption scheme (ECIES) and the elliptic curve Diffie-Hellman problem (ECDHP) is used to improve the authenticity, confidentiality, and integrity of the collected data. Simulation results show the exceptional performance of the proposed scheme against field-proven techniques in terms of minimum possible end-to-end delay and communication cost, and maximum average packet delivery ratio and throughput in the presence of malicious node(s).
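The offline register-then-verify flow can be sketched as follows. The paper's scheme rests on ECIES/ECDHP; this illustration substitutes a shared-key HMAC purely to show how an unregistered (e.g., black-hole) node is rejected, with hypothetical names throughout:

```python
import hashlib
import hmac
import secrets

class ClusterHead:
    """Minimal sketch of the offline registration idea: a node is accepted
    only if its MAC address was registered with the cluster head beforehand."""

    def __init__(self):
        self.registry = {}  # MAC address -> shared key, filled offline

    def register(self, mac):
        # Performed in the offline phase, before network deployment.
        key = secrets.token_bytes(16)
        self.registry[mac] = key
        return key

    def verify(self, mac, message, tag):
        key = self.registry.get(mac)
        if key is None:          # unregistered node: reject its traffic
            return False
        expected = hmac.new(key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

def node_tag(key, message):
    """What a legitimate sensor node attaches to each reading."""
    return hmac.new(key, message, hashlib.sha256).digest()
```

An intruder cannot produce a valid tag for any registered MAC without the offline-provisioned key, which is the property the black-hole defence relies on; the elliptic-curve machinery in the paper additionally provides confidentiality and anonymity that this sketch does not attempt.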
Article
Full-text available
A Multiple Input Multiple Output (MIMO) system has several input and output antennas for data transmission. Channel Estimation (CE) is required in MIMO to achieve effective signal transmission over varying numbers of antennas in mobile networks; with CE over the MIMO link, noiseless data transmission can be performed. In this paper, a Multi-layer Neural Network (MNN) is used for CE, and the resulting system, named MNN-MIMO-CE, is proposed for blind channel equalization. MNN-MIMO-CE uses a Feed-forward Artificial Neural Network (FANN) with back-propagation trained by the Levenberg-Marquardt (LM) algorithm, and it has two phases, MNN training and MNN testing, which provide CE for different combinations of antennas. The performance of the MNN-MIMO-CE method is evaluated in comparison with an existing method [25] through simulations, using BER as the performance measure.
Chapter
Weather-related event prediction has always been a fascinating problem for scientists because of its importance in different sectors of life. This chapter uses machine learning algorithms to predict events such as rainfall, thunderstorms, and fog in a large metropolitan city. The study focuses in particular on long-term event prediction, which is currently missing from the state of the art. Different machine learning algorithms, mainly Random Forest, Gradient Boosting Classifier, and Logistic Regression, were used to learn the model, trained on five years of meteorological data. Several algorithms achieved accuracy above 90%, among which Random Forest outperformed the others with the highest accuracy.
Article
Mobile devices and applications are prone to different kinds of cyber threats and attacks that affect their users' privacy. Therefore, there is a critical need to understand the characteristics of all cyber threats in order to prevent their risks. However, most cyber threat classifications are limited and based on only one or two criteria, and current frameworks do not present an exhaustive list of cyber threats to mobile devices and applications. For these reasons, this study proposes an exhaustive framework for classifying cyber security threats to mobile devices and applications, covering most cyber threat classes and principles. The main purpose of our framework is to systematically identify cyber security threats, show their potential impacts, draw mobile users' attention to those threats, and enable them to take protective action as appropriate.
Chapter
For a few decades, mega-cities have been facing huge challenges, among which the prevention of crime seems the most demanding: conventional practices cannot keep the increasing crime rate in dense urban populations under control or keep citizens safe. This work aims to develop a framework for the autonomous surveillance of public places, with vision-based handheld-arms detection in near real time. The system scans all objects that come in front of the camera; when any type of weapon comes into view of the lens, it raises an alert, locks onto that object and the person holding it, and identifies the person using facial recognition. If the alert is not responded to within a few minutes, the system automatically notifies a third person or agency about the incident. It can also manually highlight any object in a frame to track its movement for security purposes. Machine and deep learning techniques were used to train models for object detection and facial recognition. The model achieved an accuracy of 97.33% in object detection and 90% in facial recognition.
Chapter
Blockchain technology, launched in 2008 by Satoshi Nakamoto, faces a set of attacks such as money laundering, DDoS attacks, and other illicit activity. In this paper, we compare a Support Vector Machine algorithm with Random Forest and Logistic Regression for detecting illicit transactions in the Bitcoin network. The proposed approaches use the Elliptic dataset, which contains 203,769 nodes and 234,355 edges and labels the data with three classes: illicit, licit, or unknown. In this study, we treat the unknown class as licit. The accuracy reaches 98.851% using Random Forest; however, the recall does not exceed 45%, and the precision reaches 65.90% using Random Forest.
Chapter
In the era of the fourth industrial revolution, there is massive production of data from different sources such as connected devices, social media, and search engines. Without adapting traditional marketing tools to the digital era, the new client's unstructured shopping process cannot easily be understood. Digital marketing introduced how to apply marketing strategies on digital technologies while always promoting value for e-customers. Nevertheless, the data produced by this process cannot be processed manually. Analyzing the data to predict the future, based on the patterns detected in it, is essential in digital marketing. Different data mining and machine learning tools can be applied to historical records to learn more about a client's behavior or to predict future outcomes.
Article
The purpose of unconditional text generation is to train a model on real sentences and then generate novel sentences of the same quality and diversity as the training data. However, when different metrics are used to compare methods of unconditional text generation, contradictory conclusions are drawn; the difficulty is that both the diversity and the quality of the samples must be considered simultaneously when models are evaluated. To solve this problem, a novel metric of distributional discrepancy (DD) is designed to evaluate generators based on the discrepancy between the generated sentences and the real training sentences. The DD cannot be computed directly, because the distribution of real sentences is unavailable; we therefore propose a method for estimating the DD by training a neural-network-based text classifier. For comparison, three existing metrics, bi-lingual evaluation understudy (BLEU) versus self-BLEU, language model score versus reverse language model score, and Frechet embedding distance, along with the proposed DD, are used to evaluate two popular generative models, long short-term memory and generative pretrained transformer 2, on both synthetic and real data. Experimental results show that DD is significantly better than the three existing metrics at ranking these generative models.
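One common way to turn a trained real-versus-generated classifier into a discrepancy estimate is through its accuracy on a balanced test set, since the total-variation distance between the two distributions is bounded below by 2*accuracy - 1. The paper's exact estimator may differ; this is an illustrative sketch over the classifier's output scores:

```python
def estimated_dd(real_scores, gen_scores, threshold=0.5):
    """Estimate a distributional discrepancy from a trained classifier's
    scores, where each score is the predicted probability that a sentence
    is real. With a balanced test set, the total-variation distance is
    bounded below by 2*accuracy - 1; a generator that perfectly fools the
    classifier yields accuracy 0.5 and hence an estimated DD of 0."""
    correct = sum(1 for s in real_scores if s >= threshold) \
            + sum(1 for s in gen_scores if s < threshold)
    acc = correct / (len(real_scores) + len(gen_scores))
    return max(0.0, 2.0 * acc - 1.0)

# Example: the classifier misclassifies one generated sentence (score 0.9),
# so 7 of 8 sentences are classified correctly.
dd = estimated_dd([0.9, 0.8, 0.7, 0.6], [0.1, 0.2, 0.3, 0.9])  # 0.75
```

Ranking generators by this estimate captures quality and diversity jointly: a low-quality generator is easy to classify, but so is a low-diversity one that keeps repeating a few sentences absent from the real distribution.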
Book
This book presents the proceedings of the 2020 International Conference on Integrated Science in Digital Age, which was jointly supported by the Institute of Certified Specialists (Russia) and Springer, and was held on May 1–3, 2020. The conference provided an international forum for researchers and practitioners to present and discuss the latest innovations, trends, results, experiences and concerns in the various areas of integrated science in the digital age. The main goal of the conference was to efficiently disseminate original findings in the natural and social sciences, covering topics such as blockchain & cryptocurrency; computer law & security; digital accounting & auditing; digital business & finance; digital economics; digital education; digital engineering; machine learning; smart cities in the digital age; health policy & management; and information management.
Chapter
Machine Learning (ML) based approaches are becoming increasingly common for securing critical Cyber Physical Systems (CPS), such as the electric power grid and water treatment plants. A CPS is a combination of physical processes (e.g., water, electricity) and computing elements (e.g., computers, communication networks). ML techniques are a class of algorithms that learn the mathematical relationships of a system from data. Applications of ML for securing CPS are commonly carried out on data from a real system; however, there are significant challenges in using ML algorithms as-is for security purposes. In this chapter, two case studies based on empirical applications of ML for CPS security are presented: the first is based on generating process invariants using ML, and the second on system modeling to detect and isolate attacks. Furthermore, several challenges are pointed out and a few recommendations are provided.
Chapter
As a result of the digital transformation led by different production sectors, organizations have been flooded with vast, varied, and complex data, called Big Data. To manage their data assets, adopting the Big Data value chain (BDVC) is suitable for value realization as well as data-intensive decision-making. The manipulation and exploitation of these data gave rise to the concept of Big Data monetization. The marriage of the BDVC and data monetization remains a distinct and recent approach; this combination allows an organization's processes to become entirely data-driven and expands the scope of its exchanges. Moreover, advanced deployment models such as cloud computing enable this combination to become more resilient by providing large amounts of computational and networking capability, and allow data to be shared between different BDVCs to build an expandable ecosystem. In this contribution, we review the literature on the association between the BDVC, data monetization, and cloud computing. We then propose a Big Data monetization-driven value chain model that relies on cloud computing and analytical capabilities, allowing both data and insight to be monetized as a service under a collaborative concept.
Article
Social media channels, such as Facebook, Twitter, and Instagram, have altered our world forever. People are now more connected than ever and reveal a sort of digital persona. Although social media certainly has several remarkable features, the demerits are undeniable as well: recent studies have indicated a correlation between heavy usage of social media sites and increased depression. The present study aims to exploit machine learning techniques to detect probably depressed Twitter users based on both their network behavior and their tweets. For this purpose, we trained and tested classifiers to distinguish whether a user is depressed or not, using features extracted from the user's activities in the network and tweets. The results showed that the more features are used, the higher the accuracy and F-measure scores in detecting depressed users. This method is a data-driven, predictive approach for the early detection of depression or other mental illnesses. The study's main contribution is the exploration of the features and their impact on detecting the depression level.
Article
Recent years have witnessed the widespread adoption of machine learning (ML) and deep learning (DL) techniques owing to their superior performance in a variety of healthcare applications, ranging from the prediction of cardiac arrest from one-dimensional heart signals to computer-aided diagnosis (CADx) using multi-dimensional medical images. Notwithstanding this impressive performance, there are still lingering doubts about the robustness of ML/DL in healthcare settings, which are traditionally considered quite challenging because of the myriad security and privacy issues involved, especially in light of recent results showing that ML/DL are vulnerable to adversarial attacks. In this paper, we present an overview of various healthcare application areas that leverage such techniques from a security and privacy point of view and present the associated challenges. In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications. Finally, we provide insight into current research challenges and promising directions for future research.
Chapter
This chapter studies the amount of radio spectrum required to support timely and reliable vehicular communication via vehicular ad hoc networks (VANETs). The study focuses on both DSRC/WAVE and the European standard ITS-G5, which are based on the recently approved IEEE 802.11p specification using a simplified version of CSMA/CA as the MAC protocol, as well as an STDMA MAC recently proposed by the European Telecommunications Standards Institute (ETSI). The chapter further carries out a feasibility analysis of the radio spectrum requirement for timely and reliable vehicle-to-vehicle (V2V) communication, in which the synchronized STDMA MAC is compared with the CSMA/CA MAC protocol on which 802.11p is based. Message Reception Failure (MRF) probability is used as the performance metric to investigate and ascertain the minimum spectrum requirement for efficient, timely, and reliable V2V communication. Simulation results show that, even with the same 10 MHz channel bandwidth allocation, STDMA MACs outperform CSMA/CA-based MACs, because STDMA provides structured shared-medium access and prevents the negative impact of unhealthy contention for shared channel access. The results further show that up to 40 MHz of channel bandwidth over the 5.9 GHz band would be required to guarantee optimal reliability of safety packet exchange in vehicular networks, as opposed to the 10 MHz allocated in the US.
Article
Depression is one of the most common mental health problems worldwide. It is usually diagnosed by clinicians based on mental-status questionnaires and patients' self-reports. Not only do these methods depend highly on the patient's current mood, but people who experience mental illness are also often reluctant to seek help. Social networks have become a popular platform for people to express their feelings and thoughts with friends and family, and with the substantial amount of data in social networks there is an opportunity to design novel frameworks to identify those at risk of depression. Moreover, such frameworks can provide clinicians and hospitals with deeper insights into depressive behavioral patterns, thereby improving the diagnostic process. In this paper, we propose a big data analytics framework to detect depression among users of social networks. In addition to syntactic and semantic features, it focuses on pragmatic features toward modeling the intention of users; user intention represents the true motivation behind social network behaviors. Moreover, since the behavior of a user's friends in the network is believed to influence the user, the framework also models the influence of friends on the user's mental state. We evaluate the performance of the proposed framework on a massive real dataset obtained from Facebook and show that it outperforms existing methods for diagnosing user-level depression in social networks.
Article
A cyber attack launched on a critical infrastructure (CI), such as a power grid or a water treatment plant, can lead to anomalous behavior, and several methods exist to detect such behavior. This paper reports on a study comparing two methods of detecting anomalies in CI. The first, referred to as design-centric, generates invariants from the design of a CI; the second, referred to as data-centric, generates invariants from data collected from an operational CI. The key question that motivated the study is: how do design-centric and data-centric methods compare in the effectiveness of the generated invariants at detecting process anomalies? The data-centric approach used Association Rule Mining to generate invariants from operational data. These invariants, and their performance in detecting anomalies, were compared against those generated by a design-centric approach reported in the literature. The entire study was conducted on an operational, scaled-down version of a water treatment plant.
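The data-centric invariant-mining step can be sketched with a minimal Association Rule Mining routine over discretized process states. The support and confidence thresholds and the state labels are illustrative; a real plant would have many sensors and actuators per record:

```python
from itertools import combinations

def mine_invariants(records, min_support=0.9, min_conf=1.0):
    """Mine one-antecedent rules (A -> B) from discretized process states.
    A rule that holds with near-perfect confidence on normal operational
    data becomes an invariant; a run-time violation signals an anomaly."""
    n = len(records)
    items = set().union(*records)
    # Support counts for all single items and item pairs.
    count = {frozenset(c): 0 for k in (1, 2) for c in combinations(sorted(items), k)}
    for r in records:
        for c in count:
            if c <= r:
                count[c] += 1
    rules = []
    for pair in (c for c in count if len(c) == 2):
        if count[pair] / n < min_support:
            continue
        for a in pair:
            b = next(iter(pair - {a}))
            conf = count[pair] / count[frozenset({a})]
            if conf >= min_conf:
                rules.append((a, b, conf))  # rule: a -> b with confidence conf
    return rules

# Hypothetical states: pump P1 running always coincides with valve V1 open.
normal = [frozenset({"P1_on", "V1_open"})] * 10
invariants = mine_invariants(normal)
```

Checking a live state against the mined rules is then a subset test: a record containing `P1_on` but not `V1_open` violates the invariant and is flagged, which is the data-centric counterpart of the hand-derived design invariants the paper compares against.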