Manual detection of brain and tumor tissues is time-consuming and depends on the operator's condition due to the great complexity of brain tissue. Traditional approaches also require experts to study the images, rendering them ineffective in the experts' absence. Automated approaches for precise tumor examination are therefore highly beneficial. Magnetic resonance imaging (MRI), one of the most widely used modalities in this field, has attracted considerable interest for diagnosing brain cancers in recent years owing to its great capability of revealing the internal structures of the human body. The present study uses an automated method to detect tumorous cases from brain MRI. After preprocessing, the images are fed into a convolutional neural network (CNN) optimized by a metaheuristic algorithm to provide higher accuracy; specifically, the proposed CNN is optimized by an improved version of the political optimizer. The results are then compared with other reported methods to demonstrate its advantage over those methodologies.
Recently, developing automated video surveillance systems (VSSs) has become crucial to ensure the security and safety of the population, especially during events involving large crowds, such as sporting events. While artificial intelligence (AI) smooths the path for computers to think like humans, machine learning (ML) and deep learning (DL) pave the way further by adding training and learning components. DL algorithms require data labeling and high-performance computers to effectively analyze and understand surveillance data recorded from fixed or mobile cameras installed in indoor or outdoor environments. However, they might not perform as expected, may take much time in training, or may not have enough input data to generalize well. To that end, deep transfer learning (DTL) and deep domain adaptation (DDA) have recently been proposed as promising solutions to alleviate these issues. Typically, they can (i) ease the training process, (ii) improve the generalizability of ML and DL models, and (iii) overcome data scarcity problems by transferring knowledge from one domain to another or from one task to another. Despite the increasing number of articles proposing DTL- and DDA-based VSSs, a thorough review that summarizes and critiques the state of the art is still missing. To fill this gap, this paper introduces, to the best of the authors' knowledge, the first overview of existing DTL- and DDA-based video surveillance systems to (i) shed light on their benefits, (ii) discuss their challenges, and (iii) highlight their future perspectives.
Building occupancy information can aid energy conservation while simultaneously maintaining the end-user comfort level. Energy conservation is essential since energy resources are scarce and human dependency on appliances is increasing rapidly. Because intrusive sensors (i.e., cameras and microphones) can raise privacy concerns, this paper presents an innovative non-intrusive occupancy detection approach using environmental sensor data (e.g., temperature, humidity, carbon dioxide (CO2), and light sensors). The proposed scheme transforms multivariate time-series data into images to better encode and extract relevant features. The image transformation method is based on data normalization and matrix conversion. Precisely, by representing a time series in 2D space, an encoding kernel can move in two directions, whereas it can move in only one direction when applied to a 1D signal. Moreover, machine learning (ML) and deep learning (DL) techniques are utilized to classify occupancy patterns. Several simulations are used to evaluate the approach; mainly, we investigated pre-trained and custom convolutional neural network (CNN) models. The latter attained an accuracy of 99.00%. Additionally, pixel data are extracted from the generated images and subjected to traditional ML methods. Throughout the numerous comparison settings, it was observed that the latter strategy provided the optimal balance of 99.42% accuracy and minimal training time across the occupancy datasets.
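The abstract does not specify the exact matrix conversion, but the described normalization-and-reshape encoding can be sketched as follows; the padding strategy and row width here are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def series_to_image(series, width):
    """Min-max normalise a 1D sensor series and fold it into a 2D
    matrix so a 2D convolution kernel can slide in both directions."""
    series = np.asarray(series, dtype=float)
    lo, hi = series.min(), series.max()
    norm = (series - lo) / (hi - lo) if hi > lo else np.zeros_like(series)
    # Zero-pad to a multiple of `width`, then reshape into image rows.
    pad = (-len(norm)) % width
    norm = np.pad(norm, (0, pad))
    return norm.reshape(-1, width)

# A short temperature trace folded into a 2x3 "image":
img = series_to_image([20.1, 20.4, 21.0, 22.3, 22.1, 21.7], width=3)
```

Each sensor channel of a multivariate series could be encoded this way and stacked as image channels before feeding the CNN.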
Non-intrusive load monitoring (NILM) techniques are central to achieving energy sustainability goals through the identification of operating appliances in the residential and industrial sectors, potentially leading to increased rates of energy savings. NILM has received significant attention in the last decade, reflected by the number of contributions and systematic reviews published yearly. In this regard, the current paper provides a meta-analysis summarising existing NILM reviews to identify widely acknowledged findings concerning NILM scholarship in general and neural NILM algorithms in particular. In addition, this paper emphasizes federated neural NILM, which is receiving increasing attention due to its ability to preserve end-users' privacy. Typically, by combining several locally trained models, federated learning has excellent potential to train NILM models locally without communicating sensitive data to cloud servers. Thus, the second part of the current paper provides a summary of recent federated NILM frameworks with a focus on the main contributions of each framework and the achieved performance. Furthermore, we identify the non-availability of proper toolkits enabling easy experimentation with federated neural NILM as a primary barrier in the field. Thus, we extend existing toolkits with a federated component, which we make publicly available, and conduct experiments on the REFIT energy dataset considering four different scenarios.
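The aggregation of locally trained models mentioned above is commonly done with federated averaging (FedAvg); a minimal sketch follows, assuming models exchange only weight arrays, never raw energy readings. The client data and layer shapes are invented for illustration.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: average each layer's weights across clients, weighted by
    each client's number of local training samples. Raw household data
    never leaves the client; only weights are communicated."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[l] * (n / total) for w, n in zip(client_weights, client_sizes))
        for l in range(n_layers)
    ]

# Two hypothetical households with tiny one-layer models:
w_a = [np.array([1.0, 2.0])]   # trained on 100 samples
w_b = [np.array([3.0, 4.0])]   # trained on 300 samples
global_w = fed_avg([w_a, w_b], client_sizes=[100, 300])
```

The global model is then redistributed to clients for the next local training round.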
Intersections form a significant part of an urban area and are the nuclei of congestion. In this regard, traffic management at un-signalized intersections is a considerable challenge because the unorganized passage of vehicles may lead to accidents, traffic jams, or even deadlocks. This can also increase the average waiting time for vehicles. In this research, a context-aware mechanism (CATMI) is proposed to calculate the priority of vehicles for passing the intersection. To this end, multi-attribute decision-making is utilized, which yields a formula based on the effectiveness of the contributing contextual elements. Based on the priority, a vehicle is either granted or denied permission to cross the intersection. In this scheme, traffic management is accomplished such that deadlocks and starvation are prevented. The simulation results of the CATMI mechanism are compared with the results of previous traffic control systems. The results indicate that at intersections with various input rates, CATMI reduces the delay in most scenarios.
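A common form of multi-attribute decision-making is a weighted-sum score; the sketch below illustrates the idea of ranking vehicles by contextual attributes. The attribute names and weights are invented for illustration and are not CATMI's actual formula.

```python
def crossing_priority(context, weights):
    """Weighted-sum multi-attribute score: the vehicle with the
    highest score is granted passage first."""
    return sum(weights[k] * context[k] for k in weights)

# Illustrative weights over normalised contextual attributes:
weights = {"emergency": 0.5, "waiting_time": 0.3, "queue_length": 0.2}

ambulance = {"emergency": 1.0, "waiting_time": 0.2, "queue_length": 0.1}
sedan     = {"emergency": 0.0, "waiting_time": 0.6, "queue_length": 0.4}

# The emergency vehicle outranks the sedan despite its shorter wait.
winner = max([ambulance, sedan], key=lambda v: crossing_priority(v, weights))
```

Bounding the waiting-time term from below as it grows is one way such a scheme prevents starvation: every waiting vehicle's score eventually dominates.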
Hyperspectral Image (HSI) classification methods that use Deep Learning (DL) have proven to be effective in recent years. In particular, Convolutional Neural Networks (CNNs) have demonstrated extremely powerful performance in such tasks. However, the lack of training samples is one of the main contributors to low classification performance. Traditional CNN-based techniques under-utilize the inter-band correlations of HSI because they primarily use 2D-CNNs for feature extraction. In contrast, 3D-CNNs extract both spectral and spatial information using the same operation. While this overcomes the limitation of 2D-CNNs, it may lead to insufficient extraction of features. To overcome this issue, we propose an HSI classification approach named Tri-CNN, which is based on a multi-scale 3D-CNN and three-branch feature fusion. We first extract HSI features using 3D-CNNs at various scales. The three different features are then flattened and concatenated. To obtain the classification results, the fused features traverse a number of fully connected layers and finally a softmax layer. Experiments are conducted on three datasets: the Pavia University (PU), Salinas scene (SA), and GulfPort (GP) datasets. Classification results indicate that our proposed methodology shows remarkable performance in terms of the Overall Accuracy (OA), Average Accuracy (AA), and Kappa metrics when compared against existing methods.
The growth of IoT, edge computing, and mobile Artificial Intelligence (AI) is helping urban authorities exploit the wealth of information collected by Connected and Autonomous Vehicles (CAVs) to drive the development of transformative intelligent transport applications for addressing smart city challenges. A critical challenge is timely and efficient road infrastructure maintenance. This paper proposes an intelligent hierarchical framework for road infrastructure maintenance that exploits the latest developments in 6G communication technologies, deep learning techniques, and mobile edge AI training approaches. The proposed framework abides by the stringent requirements of training efficient machine learning applications for CAVs, and is able to exploit the vast numbers of CAVs forecast to be present on future road networks. At the core of our framework is a novel Convolutional Neural Network (CNN) model which fuses imagery and sensory data to perform pothole detection. Experiments show the proposed model can achieve state-of-the-art performance in comparison to existing approaches while being simple, cost-effective, and computationally efficient to deploy. The proposed system can form part of a federated learning framework for facilitating large-scale real-time road surface condition monitoring and can support adaptive resource allocation for road infrastructure maintenance.
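The abstract does not detail how the imagery and sensory streams are fused; one common pattern, sketched below, is late fusion by concatenating the image embedding with the sensor feature vector before the classification head. The feature dimensions are illustrative assumptions.

```python
import numpy as np

def fuse_features(image_feat, sensor_feat):
    """Late fusion by concatenation: flatten the CNN's image embedding
    and append the normalised sensory feature vector, producing one
    vector for the downstream classifier."""
    return np.concatenate([image_feat.ravel(), sensor_feat.ravel()])

img_embedding = np.random.rand(128)   # hypothetical CNN image embedding
accel_window  = np.random.rand(16)    # hypothetical vibration/IMU features
fused = fuse_features(img_embedding, accel_window)
```

Late fusion keeps each modality's feature extractor independent, which is convenient when sensor streams arrive at different rates on a vehicle.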
To analyze day trading dynamics for Nifty Index futures and options contracts, a detailed study is conducted to understand the quantum of volume traded and how it affects the underlying volatility. Day trades are about 30% and 46% of the total trades for futures and options contracts, respectively, signifying high volatility. Volume traded by individuals dominates that of other categories for both intraday and non-day trades. This study estimates the volatility-volume dynamics. Volatility is assessed by a minimum-variance unbiased estimator. This method, independent of the drift and opening jumps, provides least-variance estimates for greater accuracy. Volume is segmented into number of trades and average trade size. To understand the effect of volume, trade size, and inventory on volatility, we use the logit regression function. For non-day Nifty Index futures contracts, low volumes are traded as opposed to high volumes for day trades, suggesting high speculative activity. For options contracts, the volume-volatility estimates, although significant, are weak compared to futures contracts.
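Drift-independent range-based volatility estimators of the kind described are built from open/high/low/close prices; the Rogers-Satchell estimator below is a standard drift-independent example (the paper's estimator additionally handles opening jumps, as in the Yang-Zhang family, which this sketch omits). The price bars are invented for illustration.

```python
import math

def rogers_satchell_var(ohlc_bars):
    """Rogers-Satchell daily variance estimate: drift-independent,
    computed from (open, high, low, close) tuples and averaged over days."""
    acc = 0.0
    for o, h, l, c in ohlc_bars:
        acc += (math.log(h / c) * math.log(h / o)
                + math.log(l / c) * math.log(l / o))
    return acc / len(ohlc_bars)

# Two hypothetical daily bars:
bars = [(100.0, 104.0, 99.0, 102.0),
        (102.0, 103.0, 98.0, 100.0)]
daily_var = rogers_satchell_var(bars)
```

Because each term pairs the high (or low) against both open and close, any constant drift cancels, which is what makes the estimate independent of the trend.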
Exploring the potential of natural extracts for pharmaceutical applications in the treatment of different diseases is an emerging field of medical research, owing to the tremendous advantages that they can offer. These include compound sustainability due to their natural origin and virtually unlimited availability. In addition, they contribute to promoting the countries in which they are extracted and manufactured. For this reason, wild active compounds derived from plants are attracting increasing interest due to their beneficial properties. Among them, Avicennia marina has recently been recognized as a potential source of natural substances with therapeutic activities for anti-cancer treatment. A. marina supplies different chemical compounds, including cyclic triterpenoids, flavonoids, iridoids, naphthoquinones, polyphenols, polysaccharides, and steroids, most of them exhibiting potent antitumor activity. In vivo and in vitro studies on different models of solid tumors demonstrated its dose-dependent activity. Moreover, the possibility of formulating the molecules extracted from A. marina in nanoparticles has allowed researchers to improve the therapeutic outcome of treatments by exploiting improved selectivity toward cancer cells, thus reducing the side effects due to nonspecific spreading.
Producing fuel from renewable energy resources is a pressing societal need. The purpose of this research work is to develop an integrated approach for glycerol valorization and biodiesel production. Employing a range of methodologies widely used in the industry, technical analyses and assessments of the process's applicability in real-world situations are also made. The integrated process plant is simulated using Aspen Plus®. Several sensitivity analyses are carried out to characterize the process, improve efficiency, and maximize hydrogen recovery from the reforming section. The integrated process results are compared with several existing standalone biodiesel production processes. Additionally, the results are verified against theoretical studies on glycerol valorization. The outcomes of the process plant simulation are coherent with current industrial standards for the two processes. The results show that the glycerol produced (stream 7) is 60.72 kmol/h, which translates to a mass flow rate of 7272.74 kg/h. The hydrogen produced is 488.76 kmol/h, which translates to a mass flow rate of 985.3 kg/h. The total yield of hydrogen produced is around 13%. The biodiesel yield is 92.5%. This reflects a realistic recovery that would be attained if the process were implemented, in contrast to theoretical studies.
Major Depressive Disorder (MDD) is a neurohormonal disorder that causes persistent negative thoughts, mood, and feelings, often accompanied by suicidal ideation (SI). Current clinical diagnostic approaches are based solely on psychiatric interview questionnaires. Thus, a computational intelligence tool for the automated detection of MDD with and without suicidal ideation is presented in this study. Since MDD is proven to affect the cardiovascular and respiratory systems, the aim of the study is to automatically identify the disorder severity in MDD patients using corresponding multi-modal physiological signals, including electrocardiogram (ECG), finger photoplethysmography (PPG), and respiratory (RSP) signals. Data from 88 subjects were used in this study, of which 25 were MDD patients without SI (MDDSI−), 18 were MDD patients with SI (MDDSI+), and 45 were normal subjects. Multi-modal physiological signals were acquired from each subject, including ECG, RSP, and PPG signals, and then pre-processed. Discrete wavelet transform (DWT) was applied to the signals, which were decomposed up to six levels, and eleven nonlinear features were then extracted. The features were ranked according to the analysis of variance test, and Marginal Fisher Analysis was employed to reduce the feature set, after which the reduced features were ranked again to select the most discriminatory features. A support vector machine with radial basis function kernel (SVM-RBF) as well as a k-nearest neighbor (KNN) classifier were used to classify the significant features. The performance of the classifiers was evaluated in a 10-fold cross-validation scheme. The best performance achieved for the classification of MDDSI+ patients was up to 85.2%, using selected features from the obtained multi-modal signals with SVM-RBF, while it was up to 96.6% for the detection of MDD patients against healthy subjects.
This work is a step toward the utilization of automated tools in diagnostics and monitoring of MDD patients in a personalized and wearable healthcare system.
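The six-level wavelet decomposition described above can be illustrated with a hand-rolled Haar DWT (the study does not name its mother wavelet, so Haar is an assumption here; libraries such as PyWavelets offer many alternatives). Nonlinear features would then be computed per sub-band.

```python
import numpy as np

def haar_dwt(signal, levels=6):
    """Multi-level Haar DWT: at each level, split the signal into a
    low-pass approximation and a high-pass detail band; recurse on the
    approximation. Returns [d1, d2, ..., dN, aN]."""
    coeffs = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        if len(a) < 2:
            break
        pairs = a[: len(a) // 2 * 2].reshape(-1, 2)
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # low-pass
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # high-pass
        coeffs.append(detail)
        a = approx
    coeffs.append(a)
    return coeffs

# A 256-sample synthetic signal decomposed to six levels:
bands = haar_dwt(np.sin(np.linspace(0, 8 * np.pi, 256)), levels=6)
```

Each of the seven bands (six details plus the final approximation) is a candidate sub-band for extracting the nonlinear features mentioned in the study.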
Cloud computing forms the backbone of the era of automation and the Internet of Things (IoT). It offers computing and storage-based services with consumption-based pricing. Large-scale datacenters are used to provide these services and consume enormous amounts of electricity, contributing a large portion of the environmental carbon footprint. Through virtual machine (VM) consolidation, datacenter energy consumption can be reduced via efficient resource management. A VM selection policy is used to choose the VM that needs migration. In this research, we propose PbV mSp, a priority-based VM selection policy for VM consolidation. PbV mSp is implemented in CloudSim and evaluated against well-known VM selection policies such as gpa, gpammt, mimt, mums, and mxu. The results show that the proposed PbV mSp selection policy outperforms the existing policies in terms of energy consumption and other metrics.
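The general shape of a priority-based VM selection policy can be sketched as below; the scoring attributes and weights are illustrative stand-ins, not the paper's actual PbV mSp formula.

```python
def select_vm(vms, weights=(0.5, 0.3, 0.2)):
    """Pick the migration candidate with the highest priority score,
    here favouring high CPU and RAM pressure and low migration cost.
    (Illustrative scoring, not the published PbV mSp formula.)"""
    w_cpu, w_ram, w_cost = weights

    def priority(vm):
        return w_cpu * vm["cpu"] + w_ram * vm["ram"] - w_cost * vm["mig_cost"]

    return max(vms, key=priority)

# Two hypothetical VMs on an over-utilised host:
vms = [
    {"id": "vm1", "cpu": 0.9, "ram": 0.4, "mig_cost": 0.8},
    {"id": "vm2", "cpu": 0.7, "ram": 0.6, "mig_cost": 0.1},
]
chosen = select_vm(vms)
```

In a CloudSim-style simulation this selection runs per over-utilised host each scheduling interval, and the chosen VM is handed to the placement algorithm.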
Non-intrusive Load Monitoring (NILM) is becoming paramount in both the industrial and residential sectors to achieve efficient energy consumption. Thus, research on this matter has flourished in recent years, with deep neural networks gaining the highest interest from the research community, commonly referred to as neural NILM. As a predominant practice, neural NILM models follow a centralised learning scheme where the energy data is assumed to be available in a central node for training. However, this practice and the enormous amount of data required by these algorithms raise privacy and security concerns on the consumer's side, since energy data can reveal in-home activities and occupancy records if intercepted. Federated Learning (FL), also referred to as collaborative learning, is seen as a viable solution to address these issues. Nonetheless, its application in neural NILM is still in its infancy and many challenges are yet to be addressed. The current paper presents an overview of neural NILM models following both centralised and federated learning paradigms. Furthermore, it identifies the main challenges with regard to both learning paradigms, along with potential future research directions for more robust, secure, and privacy-preserving models in the neural NILM field.