Journal of Ambient Intelligence and Humanized Computing

Published by Springer Nature
Online ISSN: 1868-5145
Print ISSN: 1868-5137
Recent publications
  • Yasar Majib
  • Mahmoud Barhamgi
  • Behzad Momahed Heravi
  • [...]
  • Charith Perera
Detecting anomalies as they happen is vital in environments such as buildings and homes to identify potential cyber-attacks. This paper discusses various mechanisms for detecting anomalies as soon as they occur and sheds light on crucial considerations when building machine learning models. We constructed and gathered data from multiple self-built (DIY) IoT devices with different in-situ sensors and found effective ways to detect point, contextual, and combined anomalies. We also discuss several challenges, and potential solutions, in dealing with sensing devices that produce data at different sampling rates and how such data must be pre-processed for machine learning models. The paper also examines the pros and cons of extracting sub-datasets based on environmental conditions.
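The sampling-rate mismatch mentioned above is typically handled by resampling all sensor streams onto a common time grid before building feature vectors. A minimal sketch with pandas, using invented readings and rates (the paper's actual sensors and intervals are not specified here):

```python
# Sketch: aligning two sensors sampled at different rates (hypothetical data).
import pandas as pd

temp = pd.Series([21.0, 21.4, 21.9],
                 index=pd.date_range("2024-01-01", periods=3, freq="60s"))
co2 = pd.Series([400, 410, 405, 420, 415, 430],
                index=pd.date_range("2024-01-01", periods=6, freq="30s"))

# Resample both streams to a common 60-second grid, averaging within bins and
# forward-filling any gaps, so each row becomes one model-ready feature vector.
frame = pd.concat({"temp": temp, "co2": co2}, axis=1)
aligned = frame.resample("60s").mean().ffill()
print(aligned.shape)  # (3, 2)
```

The same pattern extends to any number of streams; the choice of bin width and aggregation (mean, last, max) is itself a modeling decision.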
  • Ankita Srivastava
  • Arun Prakash
  • Rajeev Tripathi
Multi-hop data transmission in a Vehicular Ad Hoc Network (VANET) is mostly affected by vehicle mobility, intermittent connections, insufficient bandwidth, and multichannel switching. Geographic routing in a cognitive radio vehicular ad hoc network (CR-VANET) resolves bandwidth scarcity and connectivity issues simultaneously. The proposed QoS-aware stochastic relaxation approach (QASRA) is a geographic routing protocol that additionally performs network exploration under poor connectivity and exploits already existing valid solutions while discovering routes in an urban CR-VANET. Candidate forwarders are prioritized by their closeness to the destination, their relative velocity with respect to the sender, and their street efficiency in terms of connectivity and delay. Transmission is done over minimally occupied cognitive or service channels. Different sets of experiments were performed to evaluate the effect of growing vehicular density, primary users (PUs), and CBR connection pairs in an urban scenario. Simulations on the NS-2.24 platform demonstrate that at higher velocities, between 20 and 60 km/h, the average packet delivery ratio (PDR) is 60% when the vehicle density is varied, 63.6% when the PU count is varied, and 69% when the number of CBR connection pairs is varied. The average end-to-end delay is 1.03 s when the vehicle density is varied, 0.734 s when the PU count is varied, and 0.756 s when the number of CBR connection pairs is varied. The average PU success ratio is 68.4% when the vehicle density is varied, 61.4% when the PU count is varied, and 64.4% when the number of CBR connection pairs is varied. The simulation analysis demonstrates that successful delivery for both secondary and primary users is achieved in minimum time compared with other traditional methods.
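The forwarder prioritization described above can be pictured as a weighted score over normalized metrics. A minimal sketch, with hypothetical weights and values (the paper's actual scoring function is not given here):

```python
# Sketch: rank candidate forwarders by closeness to destination, relative
# velocity to the sender, and street efficiency. All inputs are assumed to be
# normalized to [0, 1]; the weights are purely illustrative.
def forwarder_score(closeness, rel_velocity, street_eff, w=(0.4, 0.3, 0.3)):
    """Higher is better; a large relative velocity is penalized."""
    return w[0] * closeness + w[1] * (1.0 - rel_velocity) + w[2] * street_eff

candidates = {
    "v1": forwarder_score(0.9, 0.2, 0.7),
    "v2": forwarder_score(0.6, 0.1, 0.9),
    "v3": forwarder_score(0.8, 0.8, 0.4),
}
best = max(candidates, key=candidates.get)
print(best)  # v1
```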
  • M. Dhana Lakshmi Bhavani
  • R. Murugan
  • Tripti Goel
The visual quality of an outdoor scene during the winter season is mainly affected by haze or fog. Visibility remains poor even if the lens of the optical sensor system is adjusted, for example in automatic driver assistance, remote sensing, and video surveillance. Removing such haze from a single image is tricky due to the cloudy and murky atmosphere. This paper proposes a new methodology that removes the haze and gives a clear view in terms of both color and texture information. To dehaze an image, we introduce multi-scale image fusion on a single hazy image by extracting different scale images from a single scene. Multi-scale image fusion helps solve the dehazing problem using significant features at multiple scales. The two images derived from the original degraded image are a white-balanced version and a luminance-parameter-based version. Straightforward fusion of the derived images with their corresponding weight maps leads to unwanted enhancement in the results. To eliminate such effects, pyramid decomposition is applied to the weight maps and the input images, which enhances the contrast and also sharpens the hazy image. The proposed method effectively produces a dehazed image from a single hazy image. The experimental results reveal that the proposed algorithm efficiently generates a better visible image. The proposed method achieves better performance metrics such as peak signal-to-noise ratio (PSNR) and average gradient ratio (AGR), which are improved by 8.55 and 31.13% respectively, compared with the average of other state-of-the-art methods.
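The weight-map fusion step described above reduces, in its simplest form, to a per-pixel convex combination of the two derived images. A minimal NumPy sketch with random stand-in images and weight maps (the paper additionally applies pyramid decomposition, which is omitted here):

```python
# Sketch: naive per-pixel weighted fusion of two derived images; all arrays
# are invented stand-ins for the white-balanced and luminance-based inputs.
import numpy as np

rng = np.random.default_rng(0)
white_balanced = rng.random((4, 4))
luminance_img = rng.random((4, 4))

w1 = rng.random((4, 4))            # illustrative weight maps
w2 = rng.random((4, 4))
norm = w1 + w2 + 1e-8              # normalize so weights sum to 1 per pixel

fused = (w1 / norm) * white_balanced + (w2 / norm) * luminance_img
```

Because the weights are normalized, every fused pixel stays within the range of its two inputs; it is exactly this naive blend that produces halos, which motivates the paper's pyramid decomposition.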
  • Rong Gao
  • Yebao Ma
  • Dan A. Ralescu
Multilevel programming is widely applied to decentralized decision-making problems. In practice, indeterminacies arise in these problems due to volatile factors or emergencies. As a type of indeterminacy, uncertainty is introduced into multilevel programming. To solve multilevel programming problems with uncertain parameters, this paper constructs the uncertain expected-value multilevel programming model and the chance-constrained multilevel programming model. These models are then converted to their equivalent forms. Moreover, Stackelberg-Nash equilibrium solutions are obtained using a genetic algorithm. Finally, the models are applied to the omni-channel vehicle routing problem, and a numerical experiment is given. The experiment shows that the established models can optimize distribution efficiency by coordinating the interests of the decision-makers.
  • Swati Nigam
  • Rajiv Singh
  • Manoj Kumar Singh
  • Vivek Kumar Singh
Significant efforts have been made to monitor human activity, although it remains a challenging area of computer vision research. This paper introduces a framework to identify the most common types of video surveillance activities. The proposed framework consists of three consecutive modules: (i) human detection by background subtraction, (ii) extraction of uniform, rotation-invariant local binary pattern (LBP) features, and (iii) identification of human activities with a multiclass support vector machine (SVM) classifier. The framework provides a consistent view of human actions across multiple subjects seen from different views. In addition, uniform patterns provide better performance in discriminating human activities. The multiclass SVM classifier is configured and trained to achieve better efficiency by selecting appropriate features before they are integrated. Results on the Weizmann multi-view dataset, the CASIA dataset, and the IXMAS dataset confirm the high efficiency and robustness of the proposed framework.
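The uniform LBP features used above are built from per-pixel binary codes; a code is "uniform" when its circular bit pattern contains at most two 0/1 transitions. A minimal sketch for a single pixel with an invented 8-neighbourhood:

```python
# Sketch: basic 8-neighbour LBP code for one pixel plus a uniformity check.
def lbp_code(center, neighbours):
    # Bit i is set when neighbour i is at least as bright as the center.
    return sum((n >= center) << i for i, n in enumerate(neighbours))

def is_uniform(code):
    # Uniform patterns have at most two 0/1 transitions around the circle.
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

code = lbp_code(90, [80, 95, 100, 120, 85, 70, 60, 88])
print(code, is_uniform(code))  # 14 True
```

Histograms of such codes over image regions (with all non-uniform codes pooled into one bin) form the feature vectors fed to the SVM.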
With the remarkable development of information technology, information security has become a major concern in communication environments, where security must be enforced for the multimedia messages exchanged between the sender and the intended recipient. Digital multimedia steganography techniques have been developed to secure covert communication and data. This paper proposes an approach to image steganography using Least Significant Bit (LSB) substitution and the nature-inspired Harris Hawks Optimization (HHO) algorithm for efficient concealment of secret data inside a cover image, thus providing high confidentiality. The HHO-based data encoding operation uses the PSNR visual quality metric as an objective function, which is used to determine the ideal encoding vector for converting the secret message into its encoded form. The proposed approach performs better than other state-of-the-art methods in standard measures of visual quality while maintaining high embedding capacity. Comparisons with existing LSB and multi-directional PVD embedding methods demonstrate that the proposed method has a more optimized and higher embedding capacity while maintaining visual quality. The proposed approach also achieves high security against statistical StegExpose analysis, ALASKA2 deep-learning steganalysis, and image processing attacks.
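For readers unfamiliar with the baseline technique, plain LSB substitution (without the paper's HHO-optimized encoding) simply writes message bits into the least significant bit of successive pixel values. A minimal sketch on a flat list of invented pixel intensities:

```python
# Sketch: plain LSB substitution embedding and extraction (toy data).
def embed(pixels, bits):
    # Clear each pixel's LSB and set it to the next message bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, n):
    # Read back the first n least significant bits.
    return [p & 1 for p in pixels[:n]]

cover = [137, 200, 55, 18, 91, 244]
secret = [1, 0, 1, 1]
stego = embed(cover, secret)
print(extract(stego, 4))  # [1, 0, 1, 1]
```

Each pixel changes by at most 1, which is why LSB embedding is visually imperceptible; the HHO step in the paper additionally searches for an encoding vector that maximizes PSNR.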
The task of identifying malicious activities in logs and predicting threats is crucial nowadays in the industrial sector. In this paper, we focus on identifying past malicious activities and predicting future threats with a novel technique based on the combination of Marked Temporal Point Processes (MTPPs) and neural networks. Unlike the traditional formulation of temporal point processes, our method makes no prior assumptions on the functional form of the conditional intensity function or on the distribution of the events. Our approach is based on the adoption of neural networks, with the goal of learning arbitrary and unknown event distributions by taking advantage of deep learning theory. We conduct a series of experiments using industrial data from gas pipelines, showing that our framework can conveniently represent the information gathered from the logs and predict future menaces in an unsupervised way, as well as classify past ones. The results of the experimental evaluation, showing outstanding precision and recall, confirm the effectiveness of our approach.
  • Deepraj Chowdhury
  • Ajoy Dey
  • Ritam Garai
  • [...]
  • Waleed S. Alnumay
Triple Data Encryption Standard (3DES) is a symmetric encryption algorithm. The European Traffic Management System popularly uses 3DES for authentication and encryption. However, as per a draft published by NIST in 2018, 3DES is officially being retired and is not suggested for new applications. Several attacks have been mounted on 3DES, and the biggest threat to it is the meet-in-the-middle attack. Therefore, for long-term security, it is essential to enhance the security of such algorithms. This paper proposes a new cipher, DeCrypt, inspired by 3DES: an improved version of the 3DES algorithm that is secured against the meet-in-the-middle attack. As per the experiments performed, the DeCrypt cipher is 61% faster than 3DES, owing to reduced encryption and decryption time, and provides long-term and better security against Sweet32 attacks than other symmetric algorithms.
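3DES (and, per the abstract, DeCrypt) is built on the two-branch Feistel structure, whose appeal is that decryption is just the rounds run in reverse regardless of the round function. A minimal sketch with a toy round function (not the real DES F-function):

```python
# Sketch: a generic Feistel cipher skeleton; round_fn is a toy stand-in.
def round_fn(half, key):
    return (half * 31 + key) & 0xFFFF  # illustrative only, not DES

def feistel_encrypt(left, right, keys):
    for k in keys:
        left, right = right, left ^ round_fn(right, k)
    return left, right

def feistel_decrypt(left, right, keys):
    # Same structure with the round keys reversed undoes encryption exactly.
    for k in reversed(keys):
        left, right = right ^ round_fn(left, k), left
    return left, right

keys = [0x1A2B, 0x3C4D, 0x5E6F]
ct = feistel_encrypt(0x1234, 0xABCD, keys)
print(feistel_decrypt(*ct, keys) == (0x1234, 0xABCD))  # True
```

The round-trip property holds for any `round_fn`, which is exactly what lets Feistel designs use non-invertible round functions.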
[Figures: visualization of the proposed hybrid blockchain and federated learning system, in which each client trains a model on its own dataset, encrypts it, and sends it to the server for aggregation while checksums are recorded on the blockchain; diagram of the server/client operation before and after sending the model; model accuracy for different transfer-learning architectures; analysis of attack detection in the proposed system.]
Federated learning (FL) is becoming a practical solution for machine learning (ML) in industry, as it makes it possible to implement artificial intelligence (AI) systems while training their models on private data sets. However, it is not an ideal solution, as the model can be manipulated or even intercepted during its transmission between the server and the workers. In this paper, we propose a solution to secure the model transmitted between units in FL. The model is encrypted with the AES, DES, or RSA algorithms, and then a checksum is determined. This checksum, with a private key, is stored as a transaction on a blockchain. If the model is modified while being sent, the recipient can easily verify whether it is correct. The proposed solution has been described, tested, and compared to indicate its advantages and disadvantages. The experiments analyzed the communication time between participants, the accuracy of the machine learning models, and attack detection. For attack detection on the blockchain, we reached 81% thanks to the checksum mechanism.
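The checksum mechanism above amounts to hashing the serialized model before transmission and comparing hashes on receipt. A minimal sketch with stand-in integer weights (the paper's actual serialization, encryption, and blockchain storage are out of scope here):

```python
# Sketch: checksum-based integrity check for a transmitted model.
import hashlib

def checksum(weights):
    # Serialize the (hypothetical) quantized weights to bytes, then hash.
    blob = b"".join(w.to_bytes(8, "big", signed=True) for w in weights)
    return hashlib.sha256(blob).hexdigest()

weights = [3, -1, 42, 7]           # stand-in for model parameters
recorded = checksum(weights)        # value stored as a blockchain transaction

tampered = [3, -1, 42, 8]           # model manipulated in transit
print(checksum(weights) == recorded)   # True
print(checksum(tampered) == recorded)  # False
```

Any single-bit change to the weights yields a different SHA-256 digest, so a recipient comparing against the on-chain value detects the manipulation.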
Energy harvesting (EH) by power-constrained nodes in simultaneous wireless information and power transfer (SWIPT) systems has recently attracted considerable research interest. However, most studies have investigated SWIPT in static scenarios over conventional fading channels. Analyzing SWIPT over a channel suitable for mmWave communications is key to the success of 5G and beyond technologies. Another major challenge for high-frequency communication is the high fading loss, which leads to reduced cell size. In such a small-cell scenario, the impact of node mobility becomes significant and must be accounted for in an accurate analysis. Thus, in this paper we analyze the cooperative SWIPT system over fluctuating two-ray (FTR) fading channels, which can accurately characterize fluctuations in the GHz range. Additionally, we integrate the random waypoint (RWP) mobility model to characterize the random mobility patterns of both the user and the relay node. We introduce a random-variable parameter to quantify the impact of mobility-influenced terms such as link distance, signal-to-noise ratio (SNR), and path-loss exponent. We derive the cumulative distribution function and probability density function in the form of an easily tractable gamma function. Further, we derive the ergodic outage probability (EOP) of the SWIPT system for decode-and-forward (DF) and amplify-and-forward (AF) relaying protocols, considering 1D, 2D, and 3D topologies. The analytical results are compared with Monte Carlo simulations to validate the analysis.
The growing number of next-generation applications offers a relevant opportunity for healthcare services, generating an urgent need for architectures for systems integration. Moreover, the huge amount of stored event-related information can be explored by adopting a process-oriented perspective. This paper discusses an Ambient Assisted Living (AAL) healthcare architecture to manage hospital home-care services. The proposed solution relies on an event manager to integrate sources ranging from personal devices to web-based applications. Data are processed on a federated cloud platform offering computing infrastructure and storage resources to support scientific research. In a second step, a business process analysis of telehealth and telemedicine applications is considered. An initial study explored the business process flow to capture the main sequences of tasks, activities, and events. This step paves the way for integrating process mining techniques into compliance monitoring in an AAL architecture framework.
In this paper, a novel swarm intelligence optimization algorithm based on the sparrow search algorithm (SSA) is presented, namely an intensified sparrow search algorithm (ISSA). Specifically, the newly proposed neighbor search strategy both explores the entire feasible solution space as much as possible during the iterative process and effectively prevents the weakening of exploration ability in late iterations. In addition, a new foraging method called the saltation learning strategy is put forward to improve the search capability of the scrounger. First, the effectiveness of the ISSA is comprehensively evaluated on the CEC-BC-2017 competition functions. The simulation results show that the ISSA substantially improves the convergence accuracy of the basic SSA and also outperforms three SSA-based variants and six state-of-the-art optimization algorithms. Then, to further demonstrate its real-world application potential, the ISSA is successfully applied to two engineering design problems (the pressure vessel and welded beam designs). Finally, the proposed ISSA is employed to optimize the hyper-parameters of a long short-term memory (LSTM) network, which leads to a novel ISSA-LSTM model. The developed ISSA-LSTM model is applied to short-term load forecasting of the power system. The experimental results show that the mean absolute percentage error (MAPE), root mean square error (RMSE), and mean absolute error (MAE) values of the proposed ISSA-LSTM model are 1.2778%, 1.2171, and 0.9267 respectively, which are superior to several LSTM-based variants.
In the Internet of Things (IoT), the data sent by devices are sometimes unrelated, duplicated, or erroneous, which makes it difficult to perform the required tasks. Transmitted data therefore need to be filtered and selected to suit the nature of the problem at hand in order to achieve the highest possible level of security. Feature selection (FS) is the process of identifying the suitable characteristics of a dataset for use in a certain task. This study proposes a novel wrapper FS model that uses the emperor penguin colony (EPC) method to explore the search space and a K-nearest neighbor classifier to solve FS for IoT challenges. In the experiments, the proposed EPC model was applied to nine well-known IoT datasets to evaluate its performance. The results showed clear superiority over the multi-objective particle swarm optimization (MOPSO) and MOPSO-Lévy methods in terms of accuracy and FS size, achieving 98% classification accuracy. The results also provided a clear understanding of the effect of the EPC algorithm on various filter methods, including ReliefF, correlation, information gain, and symmetrical uncertainty.
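The wrapper idea above (as opposed to filter methods) scores each candidate feature subset by actually training and evaluating the classifier on it. A minimal sketch using hand-rolled 1-NN with leave-one-out accuracy on a tiny invented dataset, where feature 0 separates the classes and feature 1 is noise:

```python
# Sketch: wrapper-style evaluation of feature subsets with 1-NN (toy data).
data = [([1.0, 9.0, 0.2], 0), ([1.2, 1.0, 0.1], 0),
        ([3.0, 8.5, 0.9], 1), ([3.1, 1.2, 0.8], 1)]

def accuracy(subset):
    """Leave-one-out 1-NN accuracy using only the features in `subset`."""
    hits = 0
    for i, (x, y) in enumerate(data):
        nearest = min((sum((x[f] - z[f]) ** 2 for f in subset), t)
                      for j, (z, t) in enumerate(data) if j != i)
        hits += nearest[1] == y
    return hits / len(data)

print(accuracy([0]), accuracy([1]))  # 1.0 0.0
```

An optimizer such as EPC would search over subsets, calling `accuracy` as its fitness function and trading accuracy off against subset size.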
Integrating component and final assembly production plans is critical to optimizing the global supply chain production system. This research extends the distributed assembly permutation flowshop scheduling problem to consider unrelated assembly machines and sequence-dependent setup times. A mixed-integer linear programming (MILP) model and a novel metaheuristic algorithm, called the Reinforcement Learning Iterated Greedy (RLIG) algorithm, are proposed to minimize the makespan of this problem. The RLIG algorithm applies a multi-seed hill-climbing strategy and an ε-greedy selection strategy that can exploit and explore the existing solutions to find the best solutions for the addressed problem. The computational results, based on extensive benchmark instances, show that the proposed RLIG algorithm is better than the MILP model at solving tiny-size problems. In solving the small- and large-size test instances, RLIG significantly outperforms the traditional iterated greedy algorithm. The main contribution of this work is to provide a highly effective and efficient approach to solving this novel scheduling problem.
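The ε-greedy selection named above chooses between exploiting the best-known seed solution and exploring a random one. A minimal sketch with invented makespans (RLIG's actual seeds and schedules are not reproduced here):

```python
# Sketch: e-greedy choice over seed solutions; makespans are hypothetical.
import random

def epsilon_greedy(seeds, makespan, eps, rng):
    """With probability eps explore a random seed, else exploit the best."""
    if rng.random() < eps:
        return rng.choice(seeds)          # explore
    return min(seeds, key=makespan)       # exploit the lowest makespan

rng = random.Random(42)
makespans = {"s1": 118, "s2": 104, "s3": 131}
picks = [epsilon_greedy(list(makespans), makespans.get, 0.2, rng)
         for _ in range(1000)]
print(picks.count("s2") > 700)  # exploitation dominates -> True
```

With eps = 0.2 the best seed is chosen roughly 87% of the time (80% exploitation plus its share of random exploration), which is the exploit/explore balance the abstract refers to.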
In the optimistic era of the internet, connected devices have the capability to communicate and share information with each other. The implementation of the Internet of Things (IoT) is not possible until the security-related issues of managing a huge amount of data with reduced latency have been resolved. In contrast to traditional cryptographic techniques, trust establishment schemes among sensor nodes are found to be secure, reliable, and easily manageable. Therefore, in this paper we propose a novel hybrid trust estimation approach that calculates the trust value of devices both at the device layer (Short-Term Trust) and at the edge layer (Long-Term Trust), depending on their resource capabilities. Short-Term Trust (STT) uses a Markov model and considers only the current trust state for the evaluation of the trust value, whereas Long-Term Trust (LTT) uses voluminous historical data for trust value prediction. Further, LTT and STT are referred to alternately after every periodic interval, leading to the hybrid trust model. In a healthcare simulation, the proposed work gained 17%, 10%, and 11% increases in the level of trustworthiness compared with the available state-of-the-art schemes, viz. ConTrust, BTEM, and Entropy, respectively. In addition, on average, the simulation results provide a 7% higher detection rate and a 36% lower false-positive rate compared with the BTEM and Entropy trust models presented in the literature. Moreover, the proposed scheme incurs only 0.69% computational overhead, which is suitable for resource-constrained IoT devices.
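The Markov property of the short-term trust above means the next trust state depends only on the current one. A minimal sketch with an invented three-state transition matrix (the paper's actual states and probabilities are not given here):

```python
# Sketch: one Markov step over trust states; P is purely illustrative.
STATES = ["untrusted", "neutral", "trusted"]
P = [[0.7, 0.3, 0.0],    # P[i][j]: prob. of moving from state i to state j
     [0.2, 0.5, 0.3],
     [0.0, 0.3, 0.7]]

def step(dist):
    """Advance the trust-state distribution by one interaction."""
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

dist = [0.0, 1.0, 0.0]   # device currently 'neutral'
dist = step(dist)
print(dist)              # [0.2, 0.5, 0.3]
```

Note that only the current distribution enters the update, exactly the "current trust state only" property attributed to STT; LTT, by contrast, would aggregate the full interaction history.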
The triggering threshold is one of the most important methods for early warning of landslide disasters. The traditional method is to conduct a simple comparative analysis of the data collected at each monitoring point, which cannot take full advantage of the information the data provide. To overcome this limitation and improve warning accuracy, a rainfall detector, a global navigation satellite system, and a deep displacement sensor are used to detect the external factors and internal states that cause landslides. On this basis, a new data fusion technique based on generalized evidence theory is proposed in this paper. First, the system collects information from the different sensors and converts it into a landslide probability. Considering that a landslide is a gradual process, a new method is used to convert the landslide probability into a basic probability assignment for intuitionistic fuzzy sets. Then, a fuzzy divergence measure is used to calculate the uncertainty of each piece of evidence, which expresses its relative importance. Next, the final weight of each sensor is applied to adjust the mass function and obtain its reliability. Finally, the system makes a decision according to fusion results based on generalized evidence theory. Three types of multi-sensor data were used to test the performance of the proposed algorithm. Compared with four other fusion methods, the proposed method performs better, increasing the basic probability assignment from 0.73 to 0.92, and it remains effective when dealing with highly conflicting data. The simulation results show that the proposed method reduces uncertainty and yields a more comprehensive and integrated decision for the early landslide warning system.
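Evidence-theoretic fusion of the kind described above is classically done with Dempster's rule of combination. A minimal sketch over a two-hypothesis frame {L (landslide), N (no landslide), LN (either)}, with invented sensor masses (the paper's generalized evidence theory and weighting scheme are richer than this):

```python
# Sketch: Dempster's rule of combination for two mass functions (toy values).
def combine(m1, m2):
    frame = {"L": {"L"}, "N": {"N"}, "LN": {"L", "N"}}
    fused, conflict = {}, 0.0
    for a, s1 in frame.items():
        for b, s2 in frame.items():
            inter = s1 & s2
            w = m1[a] * m2[b]
            if not inter:
                conflict += w            # mass assigned to the empty set
            else:
                key = "LN" if inter == {"L", "N"} else inter.pop()
                fused[key] = fused.get(key, 0.0) + w
    # Renormalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

m1 = {"L": 0.6, "N": 0.1, "LN": 0.3}   # e.g. rainfall detector
m2 = {"L": 0.7, "N": 0.1, "LN": 0.2}   # e.g. deep displacement sensor
fused = combine(m1, m2)
print(round(fused["L"], 3))  # 0.862
```

Agreement between sensors concentrates mass on the landslide hypothesis (0.6 and 0.7 individually, about 0.86 fused), mirroring the abstract's reported rise in basic probability assignment after fusion.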
The widespread occurrences of airborne outbreaks (e.g., COVID-19) and pollution (e.g., PM2.5) have urged people in the affected regions to protect themselves by wearing face masks. In certain areas, wearing masks amidst such health-endangering times is even enforced by law. While most people wear masks to guard themselves against airborne substances, some exploit such excuses and use face masks to conceal their identity for criminal purposes such as shoplifting, robbery, drug transport, and assault. While automatic face recognition models have been proposed, most of these models aim to identify clear, unobstructed faces for authentication purposes and cannot effectively handle cases where masks cover most facial areas. To mitigate this problem, this paper proposes a deep-learning-based feature-fusion framework, FIREC, that combines additional demographic-estimated features such as age, gender, and race into the underlying facial representation to compensate for the information lost to mask obstruction. Given an image of a masked face, our system recommends a ranked list of potential identities of the person behind the mask. Empirical results show that the best configuration of the proposed framework can recognize bare faces and masked faces with accuracies of 99.34% and 97.65% in terms of Hit@10, respectively. The proposed framework could greatly benefit high-recall facial identity recognition applications such as identifying potential suspects from CCTV or passers-by's cameras, especially during crises when people commonly cover their faces with protective masks.
Lung cancer has the highest mortality rate among all cancers. Early detection of lung cancer may improve survival rates. The two categories of pulmonary nodules are highly similar visually, so distinguishing them is a challenging task for radiologists. The main purpose of this work is to use a convolutional neural network to perform binary classification of pulmonary nodules in CT images. This paper proposes a new multi-scale (64 × 64, 32 × 32, and 16 × 16) convolutional neural network architecture for benign and malignant nodule classification. In addition, a transfer learning method is used to initialize the weights of the multi-scale architecture. Experimental results on the LIDC-IDRI dataset demonstrate that the proposed method achieved an accuracy of 93.88%, sensitivity of 93.36%, and specificity of 93.26% on nodule malignancy classification. The proposed method also outperforms other state-of-the-art methods explicitly designed for malignancy classification of pulmonary nodules.
The classification of brain tumors is significantly important for diagnosing and treating brain tumors in IoT healthcare systems. In this work, we propose a robust classification model for brain tumors employing deep learning techniques. In the proposed method, an improved multi-level convolutional neural network (MCNN) is used to classify the meningioma, glioma, and pituitary types of brain tumors. Brain magnetic resonance image data are utilized to test the MCNN model. The MCNN classification results were improved using data augmentation and transfer learning methods. In addition, hold-out validation and performance evaluation metrics were employed for the proposed MCNN model. The experimental results show that the proposed model outperforms state-of-the-art techniques, achieving 99.89% classification accuracy. Given these results, we recommend the proposed approach for identifying brain cancer in IoT healthcare systems.
In the world of computing, real investigations can be successfully replaced with virtual ones by applying modern tools to determine and represent the visual field of a heavy-duty-vehicle driver in a virtual environment. The demands a vehicle must satisfy during the design phase include comfort, visibility, easy manoeuvrability, and aesthetics. One very important demand, from the standpoint of the safety of all traffic participants and the reliability of all systems on the vehicle, is good visibility around the vehicle, and its investigation is the aim of this paper. The purpose of this paper is to analyze, in virtual reality, an everyday situation of a truck driver at an intersection, as well as the causes that lead to traffic accidents. The main aim is to determine, using the RAMSIS software, whether a truck driver can see vulnerable traffic participants depending on their mutual position. The main finding of this virtual reality study is that in some situations the truck driver cannot see vulnerable traffic participants. The originality of this study lies in investigating whether a truck driver can see an electric scooter rider; the idea for such research came from everyday situations, because electric scooters are increasingly present on the streets.
The Active-Matrix Organic Light-Emitting Diode (AMOLED) technology has become the mainstream display technology in recent years. However, displaying high-brightness content generates considerable power consumption on AMOLED displays. To address this problem, an exposure correction mechanism is needed to remove high-brightness ambient light from the image. This work proposes a Power-Constrained Exposure Correction (PCEC) network based on adversarial learning. The PCEC network utilizes a U-Net-based generator with self-regularized high-exposure attention and adopts a global-local discriminator architecture for adversarial learning. To reduce the intensity of high-exposure regions while constraining the AMOLED display's power consumption, we include a power-constraint algorithm in the generator. Experimental results on three different datasets show that the proposed method can effectively correct high-exposure regions and achieve average power-saving rates of 22.69% in high-quality mode and 68.71% in high-efficiency mode. The proposed method achieves an average inference time of 82.4 milliseconds when run on a mobile device. Furthermore, it enhances the saturation and contrast of the image and provides better visual quality than existing power-constrained over-exposure correction methods.
Usage of the public restrooms provided by the Indian government under the "Clean India Mission" is limited due to negligence in manual supervision. This paper proposes an Internet of Things-based smart sanitation application that autonomously monitors geographically distributed restrooms through soft and hard sensors. Owing to the substantial increase in the number of sensing devices in recent years, the application aims to re-utilize data from hard sensors co-located near the restroom locations. A private Sensing as a Service (SaS) paradigm is proposed on the fog node that provides sensor data as a service at the network edge to reduce new sensor deployments. The data from re-utilized sensors are vendor-specific and therefore have heterogeneous protocols and file formats unknown to the application vendor. A soft-hard fusion framework is proposed to handle the heterogeneity of re-utilized data and perform time-series fusion of hard-sensor data (vendor-specific or application-specific) with uncertain soft-sensor data at the fog node, yielding complete and accurate information about each toilet. The proposed framework takes approximately 0.145 s to resolve heterogeneity, handle soft uncertainty, and perform fusion with low resource consumption. Moreover, it shows good system and network performance, with increased classification accuracy in predicting the cleaning requirement of every toilet.
Graph neural networks (GNNs) have achieved great success in processing non-Euclidean geometric spatial data structures. However, the irregular memory access of aggregation and the power-law distribution of real-world graphs challenge the existing memory hierarchies and caching policies of CPUs and GPUs. Meanwhile, the growing number of GNN algorithms places higher demands on the flexibility of the hardware architecture. In this work, we design a dynamically reconfigurable GNN accelerator (named DRGN) supporting multiple GNN algorithms. Specifically, we first propose a vertex reordering algorithm and an adjacency-matrix compression algorithm to improve graph data locality. Furthermore, to improve bandwidth utilization and the reuse rate of node features, we propose a dedicated prefetcher that significantly improves the hit rate. Finally, we propose a scheduling mechanism that assigns tasks to PE units to address workload imbalance. The effectiveness of the proposed DRGN accelerator was evaluated using three GNN algorithms: PageRank, GCN, and GraphSage. Compared to the execution time of these three algorithms on a CPU, DRGN achieves speedups of 231× for PageRank, 150× for GCN, and 39× for GraphSage. Compared with state-of-the-art GNN accelerators, DRGN achieves higher energy efficiency despite being implemented in a relatively lower-end process.
The advent of Software-Defined Networks (SDNs) has moved the control plane off the switches into a component separate from the data plane. Failure of a single controller deployed in the network disrupts its proper functioning; therefore, multiple controllers must be placed, and the assignment of switches to controllers must be planned ahead. Customizing the multiple controller placement problem as a star assignment faces two challenges: the significant increase in worst-case delay after reassigning switches to the remaining active controllers, and the size of the network search space. This search space can be significantly reduced by using standard array decision variables. In this paper, we present an optimal array model for the star capacity-aware delay-based next controller placement problem (SCDNCPP). The model minimizes the maximum, over all switches, of the sum of the worst-case delay from a switch to the nearest first controller with enough capacity and the worst-case delay from the same switch to the nearest second controller with enough capacity. In addition, we formulate the problem as a Mixed Integer Programming (MIP) model for multiple controller failures and solve it with the CPLEX optimizer, but the model's execution time is significantly longer. We therefore use a population-based simulated annealing algorithm to converge rapidly toward the optimal solution and reduce time complexity. The simulation results are obtained on real Internet Topology Zoo topologies. According to the simulation results, in the case of two controller failures, the proposed approach improves delay over the CNCP (Capacitated Next Controller Placement) and RCCPP (Resilient Capacity-aware Controller Placement Problem) approaches by 1.73 ms and 2.34 ms on the Palmetto topology and by 6 ms and 2.81 ms on the Deltacom topology, respectively.
The improvement rate grows significantly with topology size. Additionally, the results show that the heuristic algorithm's execution time is much better than that of the mixed integer programming formulation, on average by 1.85 s on the Palmetto topology and 1.98 s on the Deltacom topology, respectively.
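The population-based simulated annealing idea can be sketched generically: several candidate placements anneal in parallel, each accepting worsening moves with Boltzmann probability, and the best solution seen is retained. The toy sketch below (the delay matrix, cost, and neighbor functions are illustrative, not the paper's SCDNCPP model) minimizes the worst-case switch-to-controller delay for a 2-controller placement:

```python
import math
import random

def anneal(cost, neighbor, init_pop, t0=1.0, cooling=0.95, steps=200):
    """Population-based simulated annealing: each member follows its own
    annealing trajectory; the best solution across the population is kept."""
    pop = list(init_pop)
    best = min(pop, key=cost)
    t = t0
    for _ in range(steps):
        for i, s in enumerate(pop):
            cand = neighbor(s)
            delta = cost(cand) - cost(s)
            # accept improving moves, or worsening moves with Boltzmann probability
            if delta <= 0 or random.random() < math.exp(-delta / t):
                pop[i] = cand
                if cost(cand) < cost(best):
                    best = cand
        t *= cooling
    return best

# toy instance: choose 2 controller sites out of 5 nodes,
# minimizing the worst switch-to-nearest-controller delay
delay = [[0, 2, 9, 4, 7], [2, 0, 6, 3, 8], [9, 6, 0, 5, 1],
         [4, 3, 5, 0, 6], [7, 8, 1, 6, 0]]

def cost(sites):
    return max(min(delay[sw][c] for c in sites) for sw in range(5))

def neighbor(sites):
    s = set(sites)
    s.discard(random.choice(list(s)))   # swap one site for a random one
    s.add(random.randrange(5))
    while len(s) < 2:
        s.add(random.randrange(5))
    return tuple(sorted(s))

random.seed(0)
best = anneal(cost, neighbor, [(0, 1), (2, 3), (1, 4)])
```

A real SCDNCPP cost would also check controller capacity and add the delay to the second-nearest feasible controller, per the model above.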
Corporate governance and social sustainability were conceptualized in western countries, and their practices have spread across the globe. Environmental, Social, and Governance (ESG) scores can measure the sustainability performance of firms, and the social measures of sustainability are currently gaining greater importance. Hence, to improve the level of corporate governance, we link it to performance in social sustainability. This study considers about 1820 firms globally that are listed in Thomson Reuters' ESG scores. Through this study, we empirically examine the relationship between social factors and corporate governance, which in turn can influence the overall ESG score of an organization. The insights from the study also indicate how efficiently organizations are handling their corporate governance. These empirical findings can support theories explaining the rationale for the impact of social sustainability on corporate governance.
In 2020, the word "pandemic" became common currency. A pandemic is a disease that spreads over a wide geographical region. The massive outbreak of the coronavirus, popularly known as COVID-19, halted normal life worldwide. On 11 March 2020, the World Health Organization (WHO) declared the COVID-19 outbreak a pandemic. The outbreak pattern differs widely across the globe based on the findings discovered so far; however, fever is a common and easily detectable symptom of COVID-19 and the new COVID strains. Since the virus outbreak, thermal scanning with infrared thermometers has been performed in most public places to detect infected persons. Tracking each person's body temperature this way is time-consuming. Besides, close contact can spread the virus between infected persons and the individual performing the screening. In this research, we propose a device architecture capable of automatically detecting the coronavirus or a new COVID strain from thermal images; the proposed architecture comprises a smart mask equipped with a thermal imaging system, which reduces human interaction. The thermal camera technology is integrated with the smart mask and powered by the Internet of Things (IoT) to proactively monitor the screening procedure and obtain data from real-time findings. The proposed system is also fitted with facial recognition technology and can therefore display personal information. It automatically measures the temperature of each person who comes into close contact with infected persons, or of people in public spaces such as markets or offices. The new design is very useful in healthcare and could offer a solution for curbing the spread of the coronavirus. The presented work has a key focus on the integration of advanced algorithms for the predictive analytics of parameters required for in-depth evaluations.
The proposed work and its results are effective and performance-aware for predictive analytics. The manuscript and associated research integrate IoT and Internet of Everything (IoE) based analytics with sensor technologies and real-time data, so that the overall predictions are more accurate and better integrated with the health sector. Supplementary information: The online version contains supplementary material available at 10.1007/s12652-022-04395-7.
Figures: the comprehensive view of the whole framework; overview of the framework for contrastive learning; multimodal feature fusion in the Findings generation module.
Automated radiology report generation can not only lighten the workload of clinicians but also improve the efficiency of disease diagnosis. However, it is a challenging task to generate semantically coherent radiology reports that are also highly consistent with medical images. To meet the challenge, we propose a Multimodal Recursive model with Contrastive Learning (MRCL). The proposed MRCL method incorporates both visual and semantic features to generate “Impression” and “Findings” of radiology reports through a recursive network, in which a contrastive pre-training method is proposed to improve the expressiveness of both visual and textual representations. Extensive experiments and analyses prove the efficacy of the proposed MRCL, which can not only generate semantically coherent radiology reports but also outperform state-of-the-art methods.
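The paper's contrastive pre-training objective is not published in this abstract, but a standard choice for aligning paired image/report embeddings, of the kind described, is the symmetric InfoNCE loss: matching pairs sit on the diagonal of a similarity matrix and are pushed above all mismatched pairs. A minimal NumPy sketch (names and temperature are ours):

```python
import numpy as np

def info_nce(img_emb, txt_emb, tau=0.1):
    """Symmetric InfoNCE contrastive loss over a batch of paired
    image/report embeddings (rows are L2-normalized first)."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / tau            # pairwise cosine similarities
    labels = np.arange(len(img))          # matching pairs lie on the diagonal

    def xent(l):
        # cross-entropy of each row against its diagonal (matching) entry
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l)
        p /= p.sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2
```

In a full pipeline this loss would pre-train the visual and textual encoders before the recursive generation network is trained.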
In recent years, advanced threats and zero-day attacks have been increasing significantly, while traditional network intrusion detection systems based on feature filtering or well-known signatures have drawbacks. Accordingly, there is a need for security solutions suitable for the IoT environment. A network intrusion detection system (NIDS) examines network traffic and alerts system administrators to security breaches. In this paper, fusion-based anomaly detection using a modified isolation forest (M-iForest) for the Internet of Things (IoT) is proposed. The proposed NIDS has been evaluated on three benchmark datasets (UNSW-NB15, NSL-KDD and KDDCUP 99) in terms of F-score, accuracy and detection rate. Results show that the suggested approach reduces training time by 28.80% on UNSW-NB15 and achieves 97.2% accuracy and a 97.4% detection rate. Moreover, M-iForest outperforms other NIDS techniques selected from state-of-the-art research in the literature.
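The paper's modification to isolation forest is not reproduced here, but the underlying principle, that anomalies isolate in fewer random splits than normal points, can be sketched for one-dimensional data (tree depth, subsample size, and the toy data are all illustrative):

```python
import math
import random

def itree(X, depth=0, max_depth=8):
    """Grow a random isolation tree: each node splits at a random value;
    anomalous points end up in shallow leaves."""
    if depth >= max_depth or len(X) <= 1:
        return len(X)                         # leaf stores its sample count
    lo, hi = min(X), max(X)
    if lo == hi:
        return len(X)
    split = random.uniform(lo, hi)
    left = [x for x in X if x < split]
    right = [x for x in X if x >= split]
    return (split, itree(left, depth + 1, max_depth),
            itree(right, depth + 1, max_depth))

def path_len(tree, x, depth=0):
    if not isinstance(tree, tuple):           # leaf: adjust for unresolved points
        n = tree
        return depth + (2 * (math.log(n - 1) + 0.5772156649) - 2 * (n - 1) / n
                        if n > 1 else 0)
    split, left, right = tree
    return path_len(left if x < split else right, x, depth + 1)

def anomaly_score(forest, x, n):
    """Standard isolation-forest score in (0, 1): higher means more anomalous."""
    e = sum(path_len(t, x) for t in forest) / len(forest)
    c = 2 * (math.log(n - 1) + 0.5772156649) - 2 * (n - 1) / n
    return 2 ** (-e / c)

random.seed(1)
data = [random.gauss(0, 1) for _ in range(256)]
forest = [itree(random.sample(data, 64)) for _ in range(50)]
```

A far-out value such as 8.0 takes a short average path and scores higher than a typical value near 0.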
Figures: structure of the system; structure of the NAE model; extraction scheme; time consumption of NAE, the hybrid model and several machine learning methods; accuracy and average loss of NAE on KDDTest+ during the training phase.
In recent years, deep learning techniques have been widely applied to network intrusion detection. However, current detection methods suffer from low detection rates, rendering them ineffective against unknown attacks. Thus, in this article, we propose a hybrid detection system based on deep learning techniques. First, to comprehensively capture the pattern of normal network traffic for anomaly detection, a nonsymmetric autoencoder (NAE) is designed. The NAE extracts the latent features of network traffic with two different convolutional neural networks, and multiple linear layers in the NAE's decoder reconstruct the input. Besides, a latent-feature extraction scheme using the NAE encoder is proposed, and a deep neural network (DNN) is trained on these latent features to perform detection. In addition, to strengthen system stability, a hybrid scheme is proposed that combines the detection results of the NAE and the DNN into a comprehensive decision. Experiments are performed on the NSL-KDD, N-BaIoT and BoT-IoT datasets to evaluate the proposed hybrid model. The evaluation uses classification indexes such as accuracy, precision, recall and F1-score. The results show that our proposed hybrid model achieves the best detection of abnormal traffic compared with several state-of-the-art intrusion detection methods.
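The NAE's reconstruction-based detection principle can be illustrated with a much simpler stand-in: a closed-form linear "autoencoder" (projection onto the top principal direction) whose reconstruction error flags traffic that deviates from the learned normal pattern. This is a sketch of the principle only, not the NAE architecture; the synthetic 2-D "traffic" features and the 99th-percentile threshold are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# "normal" traffic: 2-D features lying near a line; anomalies sit off it
X = rng.normal(size=(500, 1)) @ np.array([[1.0, 0.5]]) \
    + 0.05 * rng.normal(size=(500, 2))

# closed-form linear autoencoder: encode/decode via the top principal direction
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
pc = Vt[0]

def recon_error(x):
    z = (x - mu) @ pc                 # encode to 1-D latent
    return float(np.sum((mu + z * pc - x) ** 2))   # decode and compare

# threshold at the 99th percentile of training-set reconstruction error
threshold = np.quantile([recon_error(x) for x in X], 0.99)

def is_anomaly(x):
    return recon_error(x) > threshold
```

The NAE plays the same role with a far richer nonlinear encoder, and the paper's hybrid scheme additionally fuses this decision with a DNN trained on the latent features.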
Automated skin lesion classification in dermoscopy images remains challenging due to the existence of artefacts and intrinsic cutaneous features, diversity of lesion morphology, insufficiency of training data, and class imbalance problem. To address these challenges, we propose a new densely connected convolutional network termed AttDenseNet-121, which is obtained by integrating the convolutional block attention module (CBAM) into DenseNet-121. CBAM is a simple yet efficient attention module, which is further improved by adding a novel pooling layer (multiscale-pooling) to effectively enhance its attention capability. The optimized CBAM strengthens the representation power of DenseNet-121 by emphasizing meaningful features and suppressing unnecessary ones, thus significantly enhancing the classification performance. Besides, to handle the imbalanced dataset, we employ an improved focal loss function rather than the traditional cross-entropy loss function to train AttDenseNet-121. The improved focal loss is calculated from a different perspective compared with the original focal loss. It makes the distribution of positive samples and negative samples in each batch the same as that in the original data, successfully mitigating the negative influence of class imbalance on this multi-class classification task. We conduct extensive experiments on the public benchmark dataset (HAM10000) and the results indicate that our proposed method achieves superior performance compared with that of the baselines and state-of-the-art algorithms.
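The original focal loss that the paper's variant builds on down-weights well-classified examples by a factor (1 - p_t)^γ, so training focuses on hard, minority-class samples. A minimal NumPy version for multi-class probabilities (the paper's modified distribution-matching variant is not reproduced; the optional per-class `alpha` weights are a common extension):

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=None):
    """Multi-class focal loss: down-weights easy examples by (1 - p_t)^gamma.
    probs: (N, C) predicted class probabilities; labels: (N,) true class ids."""
    p_t = probs[np.arange(len(labels)), labels]   # probability of true class
    w = (1.0 - p_t) ** gamma                      # modulating factor
    if alpha is not None:                         # optional per-class weighting
        w = w * np.asarray(alpha)[labels]
    return float(np.mean(-w * np.log(p_t + 1e-12)))
```

With γ = 0 this reduces to plain cross-entropy; larger γ shrinks the contribution of confidently correct predictions.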
E-health systems based on the Internet of Things are real-time diagnosis and monitoring systems that require optimized solutions. Ensuring quality of service and security for the large-scale sensor-network data produced by e-health-aware applications is a great challenge. To provide an effective real-time solution, this paper introduces a novel generalized chaotic function expressed as a discrete mapping, with a new approach for enhancing structural complexity. As the control parameters vary over a wide range of values, the structure exhibits several new one-dimensional discrete-time maps, including known ones, namely the Wavelet and Gaussian maps. Operand values exhibiting excellent chaotic dynamic properties were chosen to produce highly random and secure keys. Simulations were conducted for different encryption algorithms applied to the NIH chest X-ray dataset to demonstrate the new map's efficiency in terms of speed and accuracy. The results show excellent encryption time and security performance, reaching 0.0001 s and an entropy value of 7.9999947.
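One of the known maps the generalized structure reduces to, the Gaussian (mouse) map x_{k+1} = exp(-α·x_k²) + β, can drive a simple stream cipher: the chaotic orbit is quantized into a byte keystream and XOR-ed with the data. The parameters, quantization, and XOR construction below are illustrative, not the paper's scheme:

```python
import math

def gauss_map_keystream(n, alpha=6.2, beta=-0.5, x0=0.3):
    """Byte keystream from the Gaussian map x_{k+1} = exp(-alpha*x^2) + beta.
    (alpha, beta, x0) act as the secret key."""
    x, out = x0, []
    for _ in range(n):
        x = math.exp(-alpha * x * x) + beta
        out.append(int((x + 1.0) * 127.5) & 0xFF)   # quantize [-1, 1] to a byte
    return bytes(out)

def xor_cipher(data: bytes, key_params) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts
    ks = gauss_map_keystream(len(data), *key_params)
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"chest x-ray frame"
enc = xor_cipher(msg, (6.2, -0.5, 0.3))
dec = xor_cipher(enc, (6.2, -0.5, 0.3))
```

A production design would use a cryptographically vetted construction; this sketch only shows how a chaotic orbit supplies key material.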
Artificial neural networks (ANNs) are finding increasing use as tools to model and solve problems in almost every discipline today. The successful implementation of ANNs in software, particularly in deep learning and machine learning, has sparked interest in designing hardware architectures custom-made for ANNs. Several categories of ANNs exist. The two-layer bidirectional associative memory (BAM) is a particular class of hetero-associative memory network that is extremely efficient and performs well at storing and retrieving pattern pairs. The memristor is a novel hardware element well suited to modelling neural synapses because it exhibits tunable resistance. In this work, in order to create a device that can perform Braille-Latin conversion, we have implemented a circuit realization of a BAM neural network. The implemented hardware BAM uses a memristor crossbar array for modelling neural synapses and a neuron circuit comprising an I-to-V converter (resistor), voltage comparator, D flip-flop, and inverter. The implemented hardware BAM was tested initially using 2 × 2 and 3 × 3 patterns. After verifying its ability to store and retrieve simple pattern pairs, it was trained for a pattern-recognition application, namely mapping Braille alphabets to their Latin counterparts and vice versa. The performance of the implemented BAM network is robust even under noise: the application recognizes input patterns with 100% accuracy in either direction when tested with up to 30% noise.
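The software analogue of the hardware BAM is straightforward: Hebbian outer-product weights and iterated sign-threshold recall between the two layers (the crossbar performs the matrix-vector products in analog). The bipolar codes below are hypothetical stand-ins for Braille/Latin patterns, not the paper's encodings:

```python
import numpy as np

def train_bam(pairs):
    """Hebbian weights for a bidirectional associative memory:
    W = sum over stored pairs of outer(x, y), with bipolar +/-1 vectors."""
    return sum(np.outer(x, y) for x, y in pairs)

def recall(W, x=None, y=None, iters=5):
    """Recall a pair from either side by iterating sign-thresholded
    forward (x -> y) and backward (y -> x) passes until stable."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(iters):
        if x is not None:
            y = sign(x @ W)
        x = sign(W @ y)
    return x, y

# two hypothetical pattern pairs (orthogonal x-codes for clean recall)
a = (np.array([1, -1, 1, -1]), np.array([1, 1, -1]))
b = (np.array([1, 1, -1, -1]), np.array([-1, 1, 1]))
W = train_bam([a, b])
```

Presenting a noisy version of a stored x-pattern still recovers the associated y-pattern, mirroring the 30% noise robustness reported for the hardware.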
Fractional-order chaotic systems are a hot research topic due to their applicability across science and engineering. For instance, they have been pointed out as a potential solution for security in smart cities and Internet of Things networks through data encryption using random number generators. However, for security applications, one of the main challenges is the physical realization of fractional-order chaotic systems on digital platforms. To overcome this, several works have proposed FPGA technology for implementing complex systems with high computational performance. Nevertheless, FPGA-based implementations of fractional-order chaotic systems require a suitable trade-off between speed performance and hardware cost, especially in embedded approaches that regularly incorporate a system-on-a-chip microcontroller. In this paper, a comparative analysis between embedded and non-embedded FPGA-based designs for implementing a three-dimensional fractional-order chaotic system and a chaos-based true random number generator (TRNG) is reported. First, we compute a semi-analytical solution of the fractional-order chaotic system using the Adomian decomposition method; the obtained chaotic time series serves as a random source for the TRNG. Then, the configuration, resource usage, speed performance, and power consumption of the implementations are tested on a Xilinx Zynq-7000 XC7Z020 system-on-a-chip and an xQuP01v0 FPGA-based processor. The results reveal that while the non-embedded approach showed better efficiency between cost and performance, the embedded method on the xQuP01v0 FPGA-based processor presents an attractive lower cost and power consumption option compared with commercial processors.
The signature is an important, typical means of identification in banking, security controls, certificates, and contracts. Moreover, signatures play an intensive role in legal matters, which warrants study as related applications develop. From this standpoint, this study proposes a new method for identifying offline signatures under different uncertainties, such as varying experimental conditions and environmental noise, using deep learning approaches. To this end, a comprehensive right-to-left signature dataset based on relevant standards was collected from 85 participants at various time intervals under different experimental conditions. A deep neural network based on transfer learning is designed to extract features from raw data in a hierarchical manner. One benefit of the proposed method is that it is independent of handedness and applies to both right-handed and left-handed people. The proposed method is examined not just on the collected dataset but also on a variety of other datasets. The proposed network achieves 99% accuracy for author signature classification and withstands a wide range of SNRs; for instance, the classification accuracy remains above 90% at 15 dB. The findings show that the proposed network can learn features hierarchically from raw signature data and achieve greater accuracy than other methods. Because of its superior performance, the proposed model can assist signature experts in a variety of applications, including forgery and criminal detection.
Traditional recommender systems (RS) assume users' taste to be static (taste remains the same over time) and reactive (a change in taste cannot be predicted and is observed only after it occurs). Further, traditional RS restrict the recommendation process to candidate item generation. This work explores two phases of RS, i.e., candidate generation as well as candidate ranking. We propose an RS from a multi-objective (short-term prediction, long-term prediction, diversity, and popularity bias) perspective that was previously overlooked. The sequential and non-sequential behavior of users is exploited to predict future behavioral trajectories, considering both short-term and long-term prediction using recurrent neural networks and a nearest-neighbors approach. Further, a novel candidate ranking method is introduced to prevent users from being entangled in recommended items. On multiple datasets, the largest being MovieLens (ML) 1M, our model shows excellent results, achieving a hit rate and short-term prediction success of 58% and 71% respectively on ML 1M. Further, it implicitly handles two important parameters, i.e., diversity and item popularity, with success rates of 59.22% and 34.28% respectively.
Wireless body area networks (WBANs) are becoming a popular and convenient mechanism for IoT-based health monitoring applications. Maintaining the energy efficiency of the nodes in WBANs without degrading network performance is one of the crucial factors for the success of this paradigm. Obtaining routes for data packets should be a dynamic decision depending on network conditions. Consequently, in this paper, a novel cost-based routing protocol ZITA has been proposed that addresses primary issues of WBAN routing, such as timeliness, link quality, temperature control, and energy efficiency while finding the next hop for data packets. Zipf’s law is applied for relay selection to ensure the distribution of forwarding load among the potential relays. ZITA controls the transmission power level adaptively in order to cope with the time-varying channel conditions following multi-hop architecture. The protocol is simulated and the results show that the protocol gives better performance in terms of data received by the sink, heat dissipation of the wearable as well as implantable sensor nodes, and load sharing among relay nodes.
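Zipf's-law relay selection can be sketched as rank-weighted random choice: rank r among the cost-ranked candidates receives weight 1/r^s, so the best relay is favored but not exhausted, spreading the forwarding load. The cost model, field names, and exponent below are illustrative, not ZITA's actual cost function:

```python
import random

def zipf_pick(relays, s=1.0):
    """Pick the next-hop relay with Zipf-distributed probability over the
    cost-ranked candidates: rank r gets weight 1/r^s."""
    ranked = sorted(relays, key=lambda r: r["cost"])
    weights = [1.0 / (i + 1) ** s for i in range(len(ranked))]
    return random.choices(ranked, weights=weights, k=1)[0]

# hypothetical candidate relays with a combined routing cost (lower is better)
relays = [{"id": "n1", "cost": 0.2},
          {"id": "n2", "cost": 0.5},
          {"id": "n3", "cost": 0.9}]

random.seed(0)
counts = {"n1": 0, "n2": 0, "n3": 0}
for _ in range(3000):
    counts[zipf_pick(relays)["id"]] += 1
```

Over many packets the load follows the Zipf proportions 1 : 1/2 : 1/3 rather than concentrating entirely on the cheapest relay.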
Human action can be recognized through a single modality. However, the information obtained from a single modality is limited because it captures only one type of physical attribute. Therefore, it is attractive to improve action recognition accuracy through the fusion of two complementary modalities: surface electromyography (sEMG) and skeletal data. In this paper, we propose a general framework for fusing sEMG signals and skeletal data. First, vectors of locally aggregated descriptors (VLAD) are extracted from sEMG sequences and skeletal sequences, respectively. Second, the features obtained from sEMG and skeletal data are mapped through different weighted kernels using multiple kernel learning. Finally, classification results are obtained from the multiple kernel learning model. A dataset of 18 types of human actions was collected via Kinect v2 and the Thalmic Myo armband to verify our ideas. The experimental results show that the accuracy of human action recognition is improved by combining skeletal data with sEMG signals.
A recent challenge for users in the multimedia area is retrieving a relevant object or unique image from a huge data collection. Before the emergence of content-based retrieval, media was accessed through text, by merging the media with text or content during semantic classification. Content-based retrieval made the media retrieval process easier by attaching attributes to the media in the database using multi-dimensional feature vectors termed descriptors. Identifying these features is a major challenge; to overcome it, this paper focuses on a deep learning technique named Modified Visual Geometry Group-16 (VGG-16), and its results are compared with existing feature extraction techniques such as the conventional histogram of oriented gradients (HOG), local binary patterns (LBP) and convolutional neural network (CNN) methods. In this scheme, video-frame image retrieval is performed by indexing all video files in the database to make the system more efficient. The system thus produces the top matches for a given query, outperforming the existing techniques in accuracy, precision, recall and F1 score for optimized video-frame retrieval.
Figures: numerical solution of example 1 for $\varepsilon = 2^{-6}$ and $N = 2^6$ with fitting factor (W.F) and without fitting factor (W.O.F); log-log plots of the errors $E_{\varepsilon}^{N}$ of example 1 and $E_{p\varepsilon}^{N}$ of examples 2 and 3 for different values of N with $\varepsilon = 10^{-5}$.
The main aim of this paper is to present a novel exponentially fitted finite difference method for a class of second-order singularly perturbed boundary value problems in ordinary differential equations with a simple turning point. The solution of such a problem exhibits twin boundary layers when the perturbation parameter ε is small, tending to zero. The method is most suitable for ε ≤ 10⁻⁵ and is obtained by partitioning the domain into two subdomains. Taylor series with non-symmetric difference approximations to the first derivative are used to derive new three-term finite difference schemes valid over each of the two subdomains. Non-uniformity in the solution is resolved by introducing suitable exponential fitting factors into the derived schemes using the asymptotic theory of singular perturbations. At the turning point, the reduced equation is approximated using the central difference analogue of the second-order derivative. The Thomas algorithm is implemented on the Code::Blocks IDE for Fortran 90 to solve the resulting tridiagonal system of equations. The stability and convergence of the method are analysed. The method's efficiency is illustrated by solving three standard problems for ε ≤ 10⁻⁵ and presenting the results in tabular/graphical form. A new formula is introduced to quantify how much one method outperforms another. The comparisons show that the method produces highly accurate, uniformly convergent results with a linear rate for all values of the mesh size h ≫ ε.
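The tridiagonal systems produced by the three-term schemes are solved with the Thomas algorithm, an O(n) forward-elimination/back-substitution sweep. A compact Python version of this standard algorithm (the paper's implementation is in Fortran 90):

```python
def thomas(a, b, c, d):
    """Thomas algorithm: O(n) solve of a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The sweep is stable for the diagonally dominant systems produced by fitted finite difference schemes of this type.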
Heart disease is one of the leading causes of death in today's world, and wearable technology is gaining a lot of attention. This research focuses on developing a wearable biomedical prototype to predict the presence of heart disease. The findings will be especially helpful in countries where the doctor-to-patient ratio is alarmingly low, as wearable technology can monitor patients' parameters anywhere, without being restricted to hospital environments. The objective is to predict the possibility of heart disease using machine learning algorithms. Electrocardiogram (ECG) patterns are obtained from the ECG sensor embedded in the wearable prototype, and variations in the patterns are monitored. The heart rate is computed from the ECG patterns using the R-to-R interval method. The Cleveland dataset is used, which has 13 attributes including ECG-related attributes such as resting ECG results, exercise-induced ST-segment depression relative to rest, and the slope of the peak exercise ST segment. The proposed system with the random forest algorithm predicts with an efficiency of 88%. For testing the prototype, human subjects were not involved; instead, static (real) data were used, and the results are sent to the app to trigger any necessary action. This prototype, developed as a proof of concept, can assist elderly people as an assistive device.
A manifold is considered the explicit form of data, so the smoothness of a manifold is related to data dimensionality. Data becomes sparse in high-dimensional space and hardly affords sufficient information; thus, extracting a smooth manifold from data in high-dimensional space is a challenge. To address this issue, we propose a deep model with three hidden layers for smooth manifold extraction. Our approach originates from the theory of optimal mass transportation. Because high-dimensional data resides around a low-dimensional manifold, we can reconstruct a lower-dimensional manifold in high-dimensional space. To guarantee the quality of the reconstructed manifold, a sampling condition is used so that the reconstructed discrete surface converges to the original surface. Meanwhile, the loss function derived from Brenier's theorem minimizes the error between the original and reconstructed data distributions. In addition, to improve the model's generalization ability, neurons in the hidden layers are turned off probabilistically during training only. Experimental results show our method outperforms state-of-the-art methods in smooth manifold extraction. We find that, for a deep model, probabilistically turning off neurons contributes more to the smoothness of manifold extraction than simply stacking hidden layers, and it also mitigates over-fitting to a certain extent. Our findings further suggest that, in high-dimensional space, manifold extraction with a deep-architecture model is superior to the state-of-the-art methods.
This paper explores the issue of COVID-19 detection from X-ray images. X-ray images, in general, suffer from low quality and low resolution. That is why the detection of different diseases from X-ray images requires sophisticated algorithms. First of all, machine learning (ML) is adopted on the features extracted manually from the X-ray images. Twelve classifiers are compared for this task. Simulation results reveal the superiority of Gaussian process (GP) and random forest (RF) classifiers. To extend the feasibility of this study, we have modified the feature extraction strategy to give deep features. Four pre-trained models, namely ResNet50, ResNet101, Inception-v3 and InceptionResnet-v2 are adopted in this study. Simulation results prove that InceptionResnet-v2 and ResNet101 with GP classifier achieve the best performance. Moreover, transfer learning (TL) is also introduced in this paper to enhance the COVID-19 detection process. The selected classification hierarchy is also compared with a convolutional neural network (CNN) model built from scratch to prove its quality of classification. Simulation results prove that deep features and TL methods provide the best performance that reached 100% for accuracy.
This paper investigates a single-server double-orbit retrial queueing model with customer discouragement in a fuzzy environment. Arriving customers may be categorized into two classes, namely ordinary customers and premium customers. Double orbits, (i) an ordinary orbit and (ii) a premium orbit, are provided for the different classes of customers, along with the realistic features of balking and feedback behavior of ordinary customers. On arrival, if customers of either type find the server busy and intend to join the system, they are forced to move to their respective orbits. It is assumed that high-paying (low-paying) customers are accommodated in the premium (ordinary) orbit. Further, based on Zadeh's extension principle, the α-cut approach is used to construct a set of parametric nonlinear programs for the performance indices, which are then solved using concepts of calculus. Various performance indices, including the mean number of customers and the mean waiting time in the system, are established in both the crisp and fuzzy environments. Illustrative examples are explained by taking triangular fuzzy numbers for the input parameters.
Continuous monitoring of air pollutants in public spaces is indispensable for ensuring human wellbeing. Sensor-network technology enables real-time monitoring and control of physical environments from distant places using assemblages of low-cost sensor nodes. The nodes can observe physical variations, such as changes in air temperature and humidity and the presence of various gases in their surroundings, and transfer their readings to a gateway using cooperative routing schemes. The recorded environmental data can provide a solid basis for accurately identifying dominant pollutant regions, the elements involved, and their triggers. In this paper, a cloud-assisted mesh sensor network solution (CMSNS) is proposed for real-time air pollution data acquisition in public regions. The sensory readings are dispatched to a cloud server for live monitoring, processing, and sustained storage, and are made available in an Android application for visualization. A real-time experiment was conducted to substantiate the developed solution.
Among the promising research areas of the current technological era, an exciting one is the Internet of Things (IoT), which aims to build a network of Internet-capable devices to facilitate a smart world. A large pool of devices is embedded across all possible geographical sites to gather the data that enable this intelligent world. The data collected from this massive pool of devices will be enormous in both size and diversity. Given the battery and energy constraints of these devices, an IoT network's efficiency depends on the total number of intra- and inter-network communications among its components, such as the base station and the data collection nodes. To decrease this number, aggregating data from multiple nodes and transmitting the aggregate as a single data packet is a possible solution. Data aggregation has proven to be an efficient technique for increasing efficiency and keeping data fresh in an IoT framework; aggregating data efficiently minimizes latency and increases the throughput of the network as a whole. This paper proposes a new data aggregation mechanism, the beta-dominating-set-centered cluster-based data aggregation mechanism (βDSC²DAM) for the Internet of Things, which improves on the classical cluster-based data aggregation mechanism. The proposed mechanism is compared with the classical cluster-based mechanism and evaluated on data aggregation time, average latency, mean end-to-end delay of arrived packets, and maximum end-to-end delay of arrived packets in the IoT network. The algorithms are also compared through asymptotic time complexity analysis. The results reveal that βDSC²DAM outperforms the classical cluster-based aggregation mechanism both in time complexity and on the listed parameters.
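The dominating-set idea behind such cluster-head selection can be illustrated with the classical greedy approximation: repeatedly pick the node that covers the most still-uncovered nodes. This is a generic textbook sketch, not the paper's β-dominating-set construction, whose exact definition is not given in the abstract.

```python
def greedy_dominating_set(adj):
    """Greedy approximation of a minimum dominating set.
    adj maps each node to its set of neighbours; the chosen nodes
    could serve as aggregation cluster heads."""
    uncovered = set(adj)
    dominators = []
    while uncovered:
        # Pick the node whose closed neighbourhood covers the most
        # still-uncovered nodes.
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dominators.append(best)
        uncovered -= {best} | adj[best]
    return dominators

# Toy 6-node topology: node 2 neighbours most of the network.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4},
    3: {2, 4}, 4: {2, 3, 5}, 5: {4},
}
heads = greedy_dominating_set(adj)

covered = set()
for h in heads:
    covered |= {h} | adj[h]
```

Every node is then either a cluster head or adjacent to one, so each sensor can hand its data to a head within one hop for aggregation before the combined packet travels to the base station.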
The automotive industry is expanding its efforts to develop new techniques for increasing the level of intelligent driving and to create autonomous cars capable of driving more intelligently. Thus, companies in this sector are turning to the development of autonomous cars and, more specifically, to developing software along with more capable artificial intelligence algorithms. However, for these systems to be trusted, they must be developed very carefully, using techniques that increase the level of recognition and consequently improve safety. One of the most important components in this respect for road users is the correct interpretation of traffic signs. This paper presents a deep learning model based on convolutional neural networks and image processing that can be used to improve the autonomous recognition of traffic signs. The results focus on difficult cases such as images with lighting problems, blurry traffic signs, hidden traffic signs, and small images. Hence, real cases are used in this study to identify the existing problems and achieve good performance in traffic sign recognition. The proposed neural architecture, based on three phases of convolutions, shows a validation accuracy of 99.3% during training, while a comparison carried out with the ResNet-50 model obtained an accuracy of 88.5%. This type of application requires high validation accuracy, as the results of our model demonstrate.
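The "three phases of convolutions" pattern can be sketched in plain NumPy. This toy forward pass (convolution, ReLU, 2×2 max pooling, repeated three times on a random 32×32 stand-in for a sign image) only illustrates the structure; the authors' actual architecture, filter counts, and training procedure are not specified in the abstract.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def conv_phase(img, kernel):
    """One 'phase': convolution, ReLU activation, then 2x2 max pooling."""
    fmap = np.maximum(conv2d(img, kernel), 0.0)
    h2, w2 = fmap.shape[0] // 2, fmap.shape[1] // 2
    return fmap[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))   # stand-in for a 32x32 traffic-sign image
k = rng.standard_normal((3, 3))     # a single random 3x3 filter
for _ in range(3):                  # three convolution phases
    x = conv_phase(x, k)            # 32 -> 15 -> 6 -> 2 spatial size
```

In a real model each phase would hold many learned filters and be followed by dense classification layers, typically built in a deep learning framework rather than hand-rolled loops.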
Overall system diagram. ⊕ represents the average pooling operation, and ♢ is a binary decision unit that takes as input the sentence representation and the document representation (h_i, d) to compute the abstract features described in Eq. (8); finally, a label is output based on the computed probability
Schematic of a PRNN block. The blocks labeled Bi-GRU^p are the forward and backward GRUs shown as a single block. Internally, the hidden representation h̄_i is computed by Eq. (1c)
Recurrent Neural Networks (RNNs) and their variants, such as Gated Recurrent Units (GRUs), have been the de facto method in Natural Language Processing (NLP) for solving a range of problems, including extractive text summarization. However, for sequential data with multiple temporal dependencies, such as human text, using a single RNN over the whole sequence might prove inadequate. Transformer models that use multi-headed attention have shown that human text contains multiple dependencies, and supporting networks such as attention layers are usually needed to augment RNNs to capture them. In this work, we propose a novel combination of RNNs, called Parallel RNNs (PRNNs), in which small, narrow RNN units work on a sequence in parallel and independently of each other for the task of extractive text summarization. These PRNNs, without the need for any attention layers, capture the various dependencies present in the sentence and document sequences. Our model achieved a 10% gain in ROUGE-2 score over a single-RNN model on the popular CNN/DailyMail dataset. The boost in performance indicates that such an ensemble arrangement of RNNs outperforms standard single RNNs, suggesting that the constituent units of the PRNN learn different input-sequence dependencies; hence, the sequence is represented better by the combined representation from the constituent RNNs.
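The core PRNN arrangement, several narrow GRUs reading the same sequence independently and contributing to one combined representation, can be sketched as an untrained forward pass. This is a simplified unidirectional illustration with standard GRU update equations; the paper's blocks are bidirectional (Bi-GRU), and all weights here are random rather than learned.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRU:
    """Minimal single-layer GRU: hidden size d_h, input size d_x."""
    def __init__(self, d_x, d_h, rng):
        s = 1.0 / np.sqrt(d_h)
        self.W = rng.uniform(-s, s, (3, d_h, d_x))  # input weights: z, r, h~
        self.U = rng.uniform(-s, s, (3, d_h, d_h))  # recurrent weights
        self.b = np.zeros((3, d_h))

    def run(self, xs):
        """Return the final hidden state after reading the sequence xs."""
        h = np.zeros(self.b.shape[1])
        for x in xs:
            z = sigmoid(self.W[0] @ x + self.U[0] @ h + self.b[0])       # update gate
            r = sigmoid(self.W[1] @ x + self.U[1] @ h + self.b[1])       # reset gate
            h_tilde = np.tanh(self.W[2] @ x + self.U[2] @ (r * h) + self.b[2])
            h = (1.0 - z) * h + z * h_tilde
        return h

def parallel_rnn(xs, units):
    """Run several narrow GRUs independently over the same sequence and
    concatenate their final states into one combined representation."""
    return np.concatenate([g.run(xs) for g in units])

rng = np.random.default_rng(1)
seq = rng.standard_normal((10, 16))          # 10 tokens, 16-dim embeddings
units = [GRU(16, 8, rng) for _ in range(4)]  # 4 narrow GRUs, width 8 each
rep = parallel_rnn(seq, units)               # 32-dim combined representation
```

Because the units share no parameters and see the same input, each is free to specialize on a different dependency; the concatenation plays the role the combined representation serves in the paper.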