SN Computer Science

Published by Springer Nature
Online ISSN: 2661-8907
Print ISSN: 2662-995X
Recent publications
With the rapid growth of energy consumption and the acceleration of industrialization and urbanization, automobile and industrial exhaust emissions have caused incredible harm to nature and to people's health. Controlling and preventing air pollution has therefore become necessary to protect the environment and human lives. In addition, air pollution prediction can offer reliable information by forecasting the future concentration of pollutants in the air. These days, tackling exceptional ecological issues and taking action to prevent and reduce air contamination has become an essential and challenging task. Machine learning is an efficient approach in the field of environmental modelling that can reliably forecast air pollution in advance. Accordingly, the present study analyses and reviews air pollution forecasting using different learning techniques and then suggests possible directions for future work.
 
Modern communication systems are shifting to the millimetre waveband because of its wider bandwidth and the interference that crowds the lower frequency spectrum, and millimetre wave applications demand high data rates for information exchange. In this context, dielectric resonator antennas (DRAs) are preferred, as they are more efficient and exhibit lower losses than microstrip patch antennas. A DRA relies on the radiating mode of a dielectric resonator (DR) made purely of dielectric; owing to the absence of surface waves and conductor losses, its efficiency can exceed 90% even in the millimetre wave band. Here, a wideband circularly polarised substrate-integrated dielectric resonator antenna (SIDRA) fed by aperture coupling is proposed for millimetre wave applications and is simulated using HFSS software. The proposed SIDRA has two cylindrical dielectric resonators, an inner cylindrical DR and an outer ring DR, together with a substrate-integrated waveguide (SIW) cavity. To generate circularly polarised fields, two rectangular slots of different lengths form a cross-slot that feeds the DRA at its bottom. The fundamental HEM11δ mode of the inner DR and the higher-order HEM12δ+1 mode of the overall DR are excited simultaneously at their respective frequencies, and these two degenerate modes provide wide impedance and axial-ratio bandwidths. The surrounding SIW cavity improves the directivity of the antenna compared with the isolated one, and a maximum gain of 8 dBic is achieved.
 
A stroke is a medical condition in which poor blood flow to the brain interrupts the supply of oxygen and nutrients to brain cells, causing cell death. To detect a brain stroke, a CT scan or MRI is generally used to identify the affected region. Stroke mainly occurs in people aged 25-90 years. Analysis of risk factors by stroke type shows that hypertension remains the most common risk factor for both ischemic and hemorrhagic stroke; this study reveals hypertension as the most common risk factor, followed by smoking and diabetes. In this paper, we present a method for detecting brain stroke using image processing tools. The inputs to the system are MRI images, which are preprocessed with several techniques and analysed to conclude whether a brain stroke is present. After the stroke is predicted, feature extraction takes place. Using electroencephalography (EEG) together with CT-scan/MRI, we examine whether the stroke is hemorrhagic or ischemic.
 
Low-light images captured in a non-uniform illumination environment are sometimes degraded by scene depth, artificial lights, and the performance of the sensor used to capture the image. This degradation results in severe loss of object information in the image, which makes salient object detection more challenging due to the low contrast and the influence of artificial light. However, existing salient object detection models are developed under the assumption that images are captured with sufficient brightness, which is impractical in real-world scenarios, and modern digital cameras are still limited in capturing high-dynamic-range images in low-light conditions. This paper provides an effective way to capture evidentiary colour detail in extreme low-light environments using image processing techniques.
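The abstract does not name its specific enhancement techniques, so the following is only a hedged illustration of one common low-light baseline: CLAHE applied to the luminance channel. The filename and parameters are assumptions.

```python
import cv2

# Minimal sketch: CLAHE contrast enhancement on the L channel of LAB,
# a common baseline for low-light images. Filename and parameters are
# illustrative assumptions, not the paper's pipeline.
img = cv2.imread("low_light.jpg")                      # BGR input
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)             # enhance luminance only
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("enhanced.jpg", enhanced)
```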
 
  • M. S. Guru Prasad
  • H. N. Naveen Kumar
  • K. Raju
  • [...]
  • S. Chandrappa
Segmentation is the process of dividing an image into multiple parts, each called a segment. The main objective of segmenting an image is to convert its representation into a format that is useful for analysing image features and properties. Retinal image segmentation is an essential stage in retinal disease analysis and identification, and it helps ophthalmologists detect glaucoma. Glaucoma is one of the leading causes of permanent vision loss, and early detection is most important to prevent further progression of vision loss. The vertical cup-to-disc ratio is an important clinical parameter for glaucoma detection; therefore, accurate segmentation of the optic disc from retinal images is of great significance. This work presents three categories of segmentation algorithms for extracting the optic disc region from retinal fundus images: thresholding-based, clustering-based, and region-based techniques. The proposed methods were evaluated on the DRIONS-DB dataset containing 110 images and the HRF dataset containing 45 images. The performance metric, boundary localization error, is calculated by comparing each proposed method with the ground-truth values. The results show that the methods are of low complexity and work efficiently on all images.
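As a hedged sketch of the thresholding-based category (not the authors' exact method), the snippet below isolates a bright optic-disc candidate with Otsu thresholding on the red channel, where the disc is typically brightest; the filename and preprocessing are assumptions.

```python
import cv2
import numpy as np

# Sketch of thresholding-based optic disc segmentation, assuming the
# disc is the brightest region in the red channel of a fundus image.
fundus = cv2.imread("fundus.png")
red = fundus[:, :, 2]                                  # red channel (BGR order)
red = cv2.GaussianBlur(red, (15, 15), 0)               # suppress vessel texture
_, mask = cv2.threshold(red, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Keep the largest connected component as the disc candidate
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # skip background label 0
disc = np.where(labels == largest, 255, 0).astype(np.uint8)
cv2.imwrite("disc_mask.png", disc)
```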
 
  • Ioannis G. Tsoulos
  • Chrysostomos Stylios
  • Vlasis Charalampous
A feature construction method that incorporates a grammar-guided procedure is presented here to predict the monthly mortality rate of the COVID-19 pandemic. Three distinct use cases were obtained from publicly available data, and three corresponding datasets were created for that purpose. The proposed method is based on constructing artificial features from the original ones. After the artificial features are generated, the original dataset is modified based on these features and a machine learning model, such as an artificial neural network, is applied to the modified data. The comparative experiments conducted make it clear that feature construction has an advantage over other machine learning methods for predicting pandemic quantities.
 
Figures: the sentiment analysis model for the Twitter data; flow chart of the classification process; graphical comparison of the existing F-WOA-HDNN with the proposed SGDOA-SGNN model; the improvised sentiment analysis model over a diverse classifier; overall performance of the proposed method with other classifiers
  • K. P. Vidyashree
  • A. B. Rajendra
Sentiment analysis is an effective technique for mining opinions from unstructured text data such as product and movie reviews. It is used to gather consumer feedback, brand reviews, marketing analyses, and responses to political campaigns. In natural language processing, sentiment analysis of Twitter data is considered a comparatively new line of study these days. The dataset is gathered using the Twitter API and the Twitter package. Analysing Twitter data is an automatic text-analysis process for determining public opinion on a specified topic. Here, an improvised sentiment analysis model is proposed to identify the polarity of tweets as positive, neutral, or negative. In this paper, the stochastic gradient descent (SGD) algorithm is combined with a stochastic gradient neural network (SGNN) to classify the sentiment of tweets provided by Twitter users, and the proposed stochastic gradient descent optimization algorithm based on a stochastic gradient neural network (SGDOA-SGNN) provides better performance than the existing Forest-Whale Optimization Algorithm based deep neural network (F-WOA-DNN) model.
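A hedged stand-in for the SGD-trained classifier described above: a linear model optimized with stochastic gradient descent over TF-IDF features. The example tweets and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Hedged sketch: an SGD-trained linear classifier over TF-IDF features
# as a simple stand-in for the paper's SGDOA-SGNN; the tweets and
# labels below are illustrative, not the authors' dataset.
tweets = ["loved the new update", "worst service ever", "it is okay"]
labels = ["positive", "negative", "neutral"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      SGDClassifier(loss="log_loss", max_iter=1000))
model.fit(tweets, labels)
print(model.predict(["the update is terrible"]))
```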
 
A classification scheme for Graph-Like Modeling Languages (GLML) is presented in this paper. The novelty of this classifier lies in its application to a meta-model for GLML that deviates from the simple graph model and underlies a large number of GLML. The main goal of using this classification scheme is to support the reuse of layout algorithms for GLML. GLML are used directly or indirectly for the development of software by model-based software engineering techniques. In other domains, graph-like models are artifacts (e.g., circuit diagrams, energy flow diagrams) that serve as input for downstream specialized applications (simulators, optimizers). The concrete syntax of a language for creating, editing, and understanding models is highly important for the development of modeling tools. Layout methods for the used languages have to be implemented to achieve software tools with good usability. Developing layout algorithms is a complex topic that is covered by the specialized field of Graph Drawing. However, there is no existing procedure to determine which layout algorithm can be used for a GLML. Matching layout algorithms to GLML can be achieved by applying the presented classification scheme.
 
Nowadays, visually impaired people across the globe face many problems in their daily activities. We identified their problems with object detection and related analysis and tried to solve them efficiently. They struggle to identify objects at both short and long viewing distances and to verify whether an identified object is correct. For this purpose, we propose a new mechanism for easier object identification using the You Only Look Once (YOLO) algorithm, with automatically synchronized feedback delivered through natural language processing using Text-To-Speech (TTS), and a proposed application that describes the detected object's behaviour. This research work is useful and accessible to people across the globe who actually need it.
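A hedged sketch of the detection-to-speech idea, assuming the ultralytics YOLO package and the pyttsx3 TTS engine; the abstract names YOLO and TTS but not these libraries, the model file, or the input image.

```python
import pyttsx3
from ultralytics import YOLO

# Hedged sketch: detect objects with a pretrained YOLO model and speak
# each detected class name. Library choices, the model file, and the
# input image are assumptions, not the paper's implementation.
model = YOLO("yolov8n.pt")                   # small pretrained detector
engine = pyttsx3.init()

results = model("street_scene.jpg")          # illustrative input image
for r in results:
    for box in r.boxes:
        name = model.names[int(box.cls)]     # class label of the detection
        engine.say(f"I see a {name}")        # speak each detected object
engine.runAndWait()
```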
 
Figures: cyber attacks increased 50% year over year; comparing methods of text classification
Humans have benefited greatly from technology, which has helped raise standards of living and enabled important discoveries, but its use also carries many hazards. Digital video, distributed through mobile smartphone applications such as WhatsApp and YouTube as well as web-based multimedia platforms, is becoming increasingly important, and global security issues are arising with it. These difficulties can cause significant problems, especially where multimedia is a crucial factor in criminal decision-making, as in child pornography and movie piracy cases. Consequently, copyright protection and video authentication are required to strengthen the reliability of digital video in daily life. A tampered film may contain the evidence needed in a legal dispute to convict someone of a violation or to clear an innocent party of wrongdoing. It is therefore crucial to develop reliable forensic techniques that strengthen justice administration systems and enable them to reach just verdicts. This article discusses several forensic analysis fields, including network forensics, audio forensics, and video forensics. In this study, algorithms such as Random Forest, Multilayer Perceptron (MLP), and Convolutional Recurrent Neural Networks (CRNN) are used to implement different types of forensic analysis. Image fusion is also used, since it can provide more information than a single image and extract features from the original images. The study concludes that the random forest provides the best results for network forensic analysis, with an accuracy of 98.02 percent. Much work has been done in recent years on video source authentication; through an analysis of current methods and machine learning strategies in this field, the study aims to provide a thorough summary of that work.
 
Figures: flowchart of the conducted experiment for all four scenarios; reinforcement learning to enhance resource scheduling and load balancing
Cloud computing provides various services to the end-user by processing a large number of tasks over the Internet. The end-user submits these tasks to the cloud for execution, and the cloud processes and executes them on its Virtual Machines (VMs) using resource scheduling algorithms and load-balancing mechanisms. Cloud performance is directly proportional to how the resources are scheduled and how the load is managed: with proper resource scheduling and load balancing, performance is enhanced and more tasks can be executed, while poor resource scheduling and load imbalance hamper it. It is therefore essential for the cloud to schedule its resources and manage its load appropriately to provide proper Quality of Service (QoS) without any infractions of the Service Level Agreements (SLAs). With static resource scheduling, managing the resources and balancing the load become challenging while executing tasks, especially when the cloud system has been given no intelligence; without intelligence, keeping a smooth flow of task execution irrespective of the task load is complex. The main objective of this research paper is to study and compare the behaviour of resource scheduling algorithms by executing tasks of different loads under different scenarios and circumstances. The paper is broadly divided into three phases: the first is a simulation experiment conducted in the WorkflowSim environment, where tasks are processed and executed on VMs in four different scenarios and circumstances; the second is a detailed empirical analysis of the results obtained from that experiment using a linear regression model and R² analysis; the last proposes reinforcement learning (RL) to provide intelligence and improve the resource scheduling and load-balancing mechanisms in the cloud computing environment.
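A hedged sketch of the second phase, fitting a linear model to experimental measurements and reporting R²; the task counts and makespans below are invented stand-ins for the WorkflowSim results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hedged sketch: fit a linear model to (task load, makespan) pairs and
# report R^2. The numbers are made-up stand-ins for the measurements.
task_load = np.array([[100], [200], [400], [800], [1600]])   # tasks submitted
makespan = np.array([42.0, 81.5, 160.3, 330.9, 655.2])       # seconds

reg = LinearRegression().fit(task_load, makespan)
pred = reg.predict(task_load)
print(f"slope={reg.coef_[0]:.3f}  intercept={reg.intercept_:.2f}  "
      f"R^2={r2_score(makespan, pred):.4f}")
```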
 
Figures: basic steps in crop disease detection; comparison of ANN; comparison of accuracy; comparison of precision
By 2050, the world population is projected to exceed nine billion, necessitating a 70% increase in agricultural output to meet demand. Land, water, and other resources are running out as the population grows, making it difficult to maintain the demand-supply cycle. Crop yields are also declining because farmers are often unaware of emerging crop diseases. Given that food is the most basic human requirement, future research should focus on revitalizing the agricultural sector. Farming can be made more productive by applying the right artificial intelligence technologies and datasets, and agronomics can benefit greatly from artificial intelligence; a better strategy is needed so that we can farm more effectively and be as productive as possible. The objective of this paper is to experimentally analyse the machine learning algorithms and methods already in use and to identify the most effective approach for each agricultural sector. We also present the challenges farmers face when using traditional farming methods and show how artificial intelligence is revolutionizing agriculture by replacing them.
 
In the contemporary era, cloud computing has emerged as an eminent technology that offers on-demand services anytime and anywhere over the internet. The cloud environment allows organizations to scale their applications based on demand. The traditional monolithic approach to application development has begun to face various bottlenecks and challenges, which has prompted a shift to a new paradigm for developing cloud-based applications, the micro-service architecture, which is gaining popularity due to its decoupled, independent services. Micro-service architecture is intended to overcome the limited scalability of monolithic architecture. In this paper, a multitenant booking application is designed and developed using both monolithic and micro-service architectures as a case study. The application is deployed as Docker container images on the Google cloud platform. Various factors, such as performance, scalability, load balancing, reliability, resource utilization, and infrastructure cost, are compared. JMeter is used as the load-generation and performance-testing tool, and performance analysis in terms of response time is done for the multitenant booking application. The results indicate that independent scaling of micro-services leads to effective utilization of resources, unlike the monolithic approach.
 
Breast cancer is the second most common cause of death among women, and an early diagnosis is vital for reducing the fatality rate in the fight against it. Thermography can be suggested as a safe, non-invasive, non-contact supplementary method to diagnose breast cancer and may be the most promising method for breast self-examination as envisioned by the World Health Organization (WHO). Moreover, thermography can be combined with artificial intelligence and automated diagnostic methods towards a diagnosis with a negligible number of false positive or false negative results. In the current study, a novel intelligent integrated diagnosis system is proposed, using IR thermal images with Convolutional Neural Networks and Bayesian Networks to achieve good diagnostic accuracy from a relatively small dataset of images and data. We demonstrate the juxtaposition of transfer learning models such as ResNet50 with the proposed combination of BNs and artificial neural network methods such as CNNs, which provides a state-of-the-art expert system with explainability. The novelties of our methodology include: (i) the construction of a diagnostic tool with high accuracy from a small number of training images; (ii) the features extracted from the images are found to be the appropriate ones, leading to very good diagnosis; (iii) our expert model exhibits interpretability, i.e., a physician can understand which factors/features play critical roles in the diagnosis. Across the four implemented approaches, the most successful models showed an accuracy from approximately 91% to 93%, with precision from 91% to 95%, sensitivity from 91% to 92%, and specificity from 91% to 97%. In conclusion, we have achieved accurate diagnosis with understandability using the novel integrated approach.
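A hedged sketch of the ResNet50 transfer-learning baseline mentioned above: a frozen ImageNet backbone with a small binary head for thermal images. The input size and head layers are assumptions, not the study's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hedged sketch: frozen ImageNet ResNet50 with a small binary head for
# thermal breast images. Input size and head layers are assumptions.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # keep pretrained features fixed

model = models.Sequential([
    base,
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.summary()
```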
 
We study the costs and benefits of different quantum approaches to finding approximate solutions of constrained combinatorial optimization problems with a focus on the maximum independent set. Using the Lagrange multiplier approach, we analyze the dependence of the output on graph density and circuit depth. The Quantum Alternating Operator Ansatz approach is then analyzed, and we examine the dependence on different choices of initial states. This approach, although powerful, is expensive in terms of quantum resources. We also introduce a new algorithm, the dynamic quantum variational ansatz (DQVA), that dynamically adapts to ensure the maximum utilization of a fixed allocation of quantum resources. Our analysis and the new proposed algorithm can also be generalized to other related constrained combinatorial optimization problems.
 
Figures: cloud computing process; risks and challenges of the cloud computing process (CCP); load-balancing system; classification of load-balancing techniques; qualitative and quantitative metrics for the LB techniques
Cloud computing is essential to today's Web-based knowledge transfer, and the pandemic scenario has turned the real world into a virtual one. Cloud computing plays a major role in storing and computing data for day-to-day activities using remote computing infrastructure. The primary concern in cloud computing is distributing information technology (IT) resources efficiently so that user requests are served in a short duration. Load-balancing (LB) techniques distribute the system's load among its various nodes to maximize resource usage and user satisfaction: they identify heavily loaded and lightly loaded IT resources and balance the tasks among the clusters. Load balancing ensures that each node in the network shortens response times, utilizes resources optimally, and boosts performance. To improve the performance metrics in cloud computing (CC), various categories of LB techniques have been developed. This survey evaluates the different categories of LB techniques, covering general LB, nature-inspired LB, and hybrid LB, and tabulates the qualitative and quantitative metrics for these techniques.
 
Artificial neural networks (ANNs) are now widely recognized as a powerful tool for many decision-modelling problems. Methods in this area, such as the multi-layer perceptron, backpropagation, and feed-forward neural networks, are widely used to solve complex problems. ANN algorithms are also used in medical diagnosis to recognize diseases, analyse MRI images, and diagnose heart disease, cancer, and other conditions. Heart failure is considered one of the riskiest human diseases worldwide. In this paper, the MLP-SMOTE model is proposed to predict heart failure using TensorFlow. The experimental results demonstrate that the system predicts heart illness with about 91.55% accuracy using neural networks.
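A hedged sketch of an MLP-plus-SMOTE pipeline; synthetic data stands in for the heart-failure records, and the layer sizes are assumptions (the paper uses TensorFlow, while this sketch uses scikit-learn equivalents for brevity).

```python
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# Hedged sketch: oversample the minority class with SMOTE, then train an
# MLP. Synthetic data stands in for the heart-failure records.
X, y = make_classification(n_samples=1000, weights=[0.85, 0.15],
                           random_state=0)          # imbalanced classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance training set
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(X_bal, y_bal)
print(f"test accuracy: {clf.score(X_te, y_te):.4f}")
```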
 
Figures: driving value in the supply chain through blockchain [1]; blockchain in supply chain management [2]
A distribution network is a mechanism that links a company and its suppliers to create and distribute a product to the end customer. This network is made up of numerous activities, people, entities, knowledge, and assets, and it represents the steps taken to get a good or service from its inception to the customer. Blockchain allows all parties in a supply chain to access the same data, potentially reducing communication and data-transfer issues: less time needs to be spent on data confirmation, so more time can be spent on improving the quality of goods and services, cutting prices, or both. This article takes a broad look at how blockchain might assist in managing supply chains, and also discusses how a crypto supply network outperforms a conventional supply chain.
 
The main focus here is to generate a model that visualizes human activities in applications that protect human life. Machine learning techniques are used in these applications to classify signals collected by various types of sensors. Indeed, this sector frequently requires dealing with high-dimensional, multimodal streams of data characterized by large variability. Activity recognition is a method of identifying a person's activities based on observations of that individual and his or her surroundings; recognition can be performed using data from many sources, such as ambient or body-worn sensors. The actions are grouped into a dataset under six categories: sitting, standing, walking, climbing up, climbing down, and lying (Bulbul in Mach Learn Comput Sci, 2018). We offer a study of a method for identifying activities, such as walking up stairs or standing, using data from a gyroscope and accelerometer. The analysis is informed by a depiction of the data, and the differences in error rates across different classification systems are investigated.
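A hedged sketch of the usual pipeline for such studies: per-window statistics over tri-axial accelerometer and gyroscope streams feeding a classifier. The window length, features, and random signals are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hedged sketch: per-window mean/std features over 6 sensor channels
# (3 accelerometer + 3 gyroscope) feeding a classifier. The random
# signals and labels below are placeholders for real recordings.
rng = np.random.default_rng(0)
signal = rng.normal(size=(6, 5000))            # 6 channels of samples
labels = rng.integers(0, 6, size=50)           # one activity label per window

def window_features(sig, win=100):
    windows = sig.reshape(sig.shape[0], -1, win)        # (channels, n_win, win)
    feats = np.concatenate([windows.mean(axis=2),       # per-channel mean
                            windows.std(axis=2)])       # per-channel std
    return feats.T                                      # (n_win, 2*channels)

X = window_features(signal)
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.2f}")
```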
 
Figures: Evidential COVID-Net architecture; sample images from the COVID-CT dataset; ROC curves for fully and partially supervised training; performance analysis of testing and training
At present, the entire world has suffered greatly from the spike of COVID disease. Even though medical technology is highly developed, this has been a very great challenge all over the world: despite rapid developments in the medical field, they have not been sufficient to diagnose the symptoms of COVID at an early stage. Since the disease spread worldwide, it has affected human livelihoods. Computed tomography (CT) images have given radiologists the data necessary to detect COVID cases. Therefore, this paper addresses classification techniques that diagnose the symptoms of this virus using belief functions supported by convolutional neural networks. The method first extracts features and then correlates them with belief maps to decide the classification. This research provides more accurate classification than earlier work: compared with traditional deep learning methods, the proposed procedure is more efficient, achieving desirable results of 0.87 accuracy, an F1 of 0.88, and an AUC of 0.95.
 
Figure: sample images from the DIBaS dataset
Water can truly be called the 'elixir of life': clean water is an imperative requirement for life to thrive. Yet in vast regions of the developing and underdeveloped world, waterborne diseases wreak havoc. As per the World Health Organization, about 3.6 million people around the world die due to waterborne diseases, of whom about 2.2 million are children. Waterborne diseases are ailments caused by the consumption of contaminated water containing pathogens such as harmful bacteria, viruses, and protozoa. Poor sanitation and spillage of sewage into drinking water sources cause these harmful pathogens to contaminate water. Government organizations as well as NGOs have been trying their best to improve the quality of drinking water, but clean drinking water remains a distant dream for a large majority of the world's population. Optimal solutions based on bleeding-edge innovations in deep learning and machine learning can be used to combat this global menace effectively. In this paper, we present a retrospective survey of deep learning and machine learning approaches for the early and rapid detection of bacterial pathogens in water.
 
Space research arouses everyone's curiosity and presents hidden challenges to researchers; the majority of the research is grounded in the laws of physics but implemented through scientific and technological approaches. Space applications are being converted to fulfil the common man's needs on priority development platforms. In this paper, we give an overview of space exploration and applications from the beginnings to current trends, viewed through fundamental physics and artificial intelligence. These scientific and technological methods offer researchers better ways to view how the universe works and expands, along with space exploration itself.
 
Stress may be identified by examining changes in a person's physiological reactions. Due to their usefulness and non-intrusive form, wearable devices have gained popularity in recent years. Sensors make continuous, real-time data gathering possible, which is useful for tracking one's own stress levels. Numerous studies have shown that emotional stress has an impact on heart rate variability (HRV). By collecting multimodal information from the wearable sensor, our framework is able to accurately classify users' HRV-based stress levels using explainable machine learning (XML). ML algorithms are sometimes referred to as black boxes; XML is a form of ML designed to explain its objectives, decision-making, and reasoning to end users. End users may include data scientists, regulatory bodies, domain experts, executive board members, and managers who utilize machine learning with or without understanding it, or anybody whose choices are affected by an ML model. The purpose of this work is to construct an XML-enabled, uniquely adaptable system for detecting stress in individuals. The results show promising qualitative and quantitative visual representations that may give the physician more detailed knowledge from the outcomes offered by the learnt XAI models, improving comprehension and decision-making.
 
Requirement change management is a challenging issue in software development. One of the main objectives of the Intent-Defined Adaptive Software program is to verify the satisfaction of requirement changes during software development. In this paper, we develop an ontology-based method to detect inconsistencies in Systems Modeling Language (SysML) models with Object Constraint Language (OCL) constraints as a first step of requirement change management. Specifically, we map the SysML/OCL models to Web Ontology Language (OWL), so that the consistency of the corresponding ontology can be checked by OWL reasoners automatically. We propose a set of mapping rules to interpret the components of SysML state machine diagrams, along with OCL constraints, to OWL. Toward this objective, we demonstrate three consistency reasoning tasks over a state machine diagram using OWL reasoners. In each case, the result of reasoning is accompanied by an explanation of the logic behind the decision.
 
Lung cancer is the most frequent cancer globally. New technologies have recently piqued the interest of the healthcare world due to their ability to automate tasks or provide additional information to medical personnel. After lung cancer has been diagnosed, it is compared with nearby areas on CT scans in order to determine how far the disease has spread. This work aimed to identify characteristics of tumorous lungs on CT scans using new machine learning technologies. Although a 3D ResNet architecture could learn opacity and cancer (AUC = 0.61), it was not better than chance; as a result, only emphysema was learned well, attaining an AUC of 0.79. A transfer learning approach was then added to the network to improve results. Finally, a self-supervised transfer learning approach and training without prior knowledge were contrasted. The transfer learning method produced comparable results in the multi-task approach for emphysema (AUC = 0.78 versus 0.60 without pre-training) and opacities (AUC = 0.61). Used as intended, the classification can anticipate future health complications that may occur if the cancer has spread to other parts of the body.
 
Figures: binarized neural network; system framework; test accuracy; comparison of accuracy
A novel approach using TensorFlow is deployed in which a Binarized Neural Network (BNN) is trained with binary weights and activations, both at training time and at runtime through the forward pass. The parameter gradients are calculated using the binary weights and activations at training time. In the forward pass, the BNN replaces almost all computational operations with bit-level operations to enhance power efficiency. To substantiate the performance of the BNN, the MNIST dataset was used in Keras/TensorFlow, which showed error reduction and an improvement in accuracy of 4%.
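A hedged sketch of the core BNN mechanism: binarizing weights and activations in the forward pass while letting gradients pass through a straight-through estimator. The layer sizes are illustrative, not the paper's architecture.

```python
import tensorflow as tf

# Hedged sketch: binarize weights/activations in the forward pass while
# gradients flow through unchanged (straight-through estimator).
@tf.custom_gradient
def binarize(x):
    def grad(dy):
        # pass gradients straight through, clipped to |x| <= 1
        return dy * tf.cast(tf.abs(x) <= 1.0, dy.dtype)
    return tf.sign(x), grad

class BinaryDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform", trainable=True)

    def call(self, x):
        return tf.matmul(binarize(x), binarize(self.w))  # binary matmul

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),       # MNIST-shaped input
    BinaryDense(256), tf.keras.layers.BatchNormalization(),
    BinaryDense(10),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
```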
 
Cardiovascular disease (heart disease) is one of the chronic diseases prevailing across the world, and predicting it efficiently and in a timely manner is difficult. The majority of existing work on predicting heart disease focuses on machine learning techniques, but these have failed to attain high accuracy. Recent developments in deep learning techniques have had a significant impact on data analytics, so the work proposed here combines a convolutional neural network with a long short-term memory (LSTM) network to achieve higher accuracy than traditional machine learning approaches. The hybrid CNN and LSTM method was applied to the heart disease dataset to classify records as normal or abnormal. The hybrid system showed an accuracy of 89%, validated using the k-fold cross-validation technique. To establish the efficiency of the proposed method, it is compared with various machine learning algorithms such as SVM, Naïve Bayes, and Decision Tree; the results show that the proposed algorithm performs better than the existing machine learning models.
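A hedged sketch of a hybrid CNN+LSTM binary classifier over clinical feature sequences; the input shape (13 features treated as a length-13 sequence) and layer sizes are assumptions, not the paper's exact model.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hedged sketch: Conv1D feature extraction followed by LSTM aggregation
# for binary heart-disease classification. Shapes and sizes are assumed.
model = tf.keras.Sequential([
    layers.Conv1D(32, kernel_size=3, activation="relu", padding="same",
                  input_shape=(13, 1)),      # 13 clinical features as a sequence
    layers.MaxPooling1D(2),
    layers.LSTM(32),                         # temporal aggregation
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # normal vs. abnormal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```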
 
Code generation is a key technique for model-driven engineering (MDE) approaches of software construction. Code generation enables the synthesis of applications in executable programming languages from high-level specifications in UML or in a domain-specific language. Specialised code generation languages and tools have been defined; however, the task of manually constructing a code generator remains a substantial undertaking, requiring a high degree of expertise in both the source and target languages, and in the code generation language. In this paper, we apply novel symbolic machine learning techniques for learning tree-to-tree mappings of software syntax trees, to automate the development of code generators from source–target example pairs. We evaluate the approach on several code generation tasks, and compare the approach to other code generator construction approaches. The results show that the approach can effectively automate the synthesis of code generators from examples, with relatively small manual effort required compared to existing code generation construction approaches. We also identified that it can be adapted to learn software abstraction and translation algorithms. The paper demonstrates that a symbolic machine learning approach can be applied to assist in the development of code generators and other tools manipulating software syntax trees.
 
The use of technology in agriculture has become imperative: to meet the expanding population's need for food, agricultural productivity must rise, and computer vision technology has been used to overcome the difficulties of manual yield estimation. This article presents an efficient mango fruit yield estimation system based on colour-based pixel classification, supported by a new benchmark mango tree dataset. The dataset was collected temporally under varying illumination conditions, distances, and times over five months, from the blossoming phase to the ripening phase of the fruit; the repository accounts for 21,000 images of mango trees. The proposed work first preprocesses the RGB image by converting it into grayscale, HSV, and YCbCr colour models; each layer of each colour model is extracted separately and enhanced with techniques such as Gaussian blur and histogram equalization to study the features of the mango images, and the colour layer exhibiting the most dominant features is selected for the next level of processing. A two-stage algorithm using colour features is then proposed to classify the pixels of the mango fruit region. Finally, after fruit-pixel classification, mango fruits are detected using the Hough transform circle-fitting technique. The proposed method could count up to 80% of the mango fruits present in an image. This work offers specialized support for the visual identification and yield estimation of mango fruits, and also of other fruits in the environment.
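A hedged sketch of the final detection step, Hough circle fitting on a fruit-pixel mask; the filename, parameters, and radius bounds are assumed, and a real pipeline would first run the two-stage pixel classifier described above.

```python
import cv2

# Hedged sketch: Hough circle fitting over a binary fruit-pixel mask to
# count roughly circular fruits. Parameters are illustrative assumptions.
mask = cv2.imread("fruit_mask.png", cv2.IMREAD_GRAYSCALE)
mask = cv2.GaussianBlur(mask, (9, 9), 2)      # smooth before Hough voting

circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=100, param2=30, minRadius=10, maxRadius=60)
count = 0 if circles is None else circles.shape[1]
print(f"estimated fruit count: {count}")
```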
 
Figures: structure of the proposed system; scatter plot suggesting the relationship is not linear; OLS regression results; Q-Q plot of residuals (normally distributed residuals cluster along a 45° line)
The World Health Organization (WHO) reports that in 2018, 422 million people throughout the globe were living with diabetes, making it one of the most widespread chronic life-threatening conditions. Early diagnosis is often favoured for clinically relevant findings due to the comparatively long asymptomatic period associated with diabetes: it is estimated that around 50% of people with diabetes go undiagnosed because of the length of time it takes for symptoms to appear. Appropriate evaluation of both common and less common signs and symptoms, which may present at various times between the onset of the illness and diagnosis, is essential for early detection of diabetes. Researchers have relied heavily on data-mining-based classification algorithms for illness risk prediction models. To estimate a person's risk of developing diabetes, data are required on people who have recently developed diabetes or who are at high risk of developing it. A dataset of 768 instances, created by the National Institute of Diabetes and Digestive and Kidney Diseases, was obtained via Kaggle; this set of examples was narrowed down from a bigger database using a variety of criteria, and all the patients are indigenous Pima women at least 21 years old. We performed statistical analysis on the dataset using the Naïve Bayes, Logistic Regression, and Random Forest algorithms, and found that Random Forest provided the best accuracy for this dataset when evaluated using both tenfold cross-validation and the percentage-split method. The goal is to diagnose a patient and then forecast whether they have diabetes based on those results.
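A hedged sketch of the winning setup, Random Forest with tenfold cross-validation on the Pima diabetes data; the CSV filename and 'Outcome' column follow the common Kaggle layout, which is an assumption here.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hedged sketch: Random Forest with tenfold cross-validation on the Pima
# diabetes CSV. Filename and column name follow the common Kaggle layout.
df = pd.read_csv("diabetes.csv")
X, y = df.drop(columns="Outcome"), df["Outcome"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```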
 
Figures: block representation of a Reverse Dictionary; survey overview; general framework of an Information Retrieval System based RD; association matrix
In view of the limitation of forward dictionaries in attending to the needs specific to language producers, an alternate resource in the form of a 'Reverse Dictionary' needs to be built. A Reverse Dictionary aims to lexicalize the concept in the user's mind by taking as input a natural language description of the concept and returning words that semantically correspond to the description. A critical survey of existing Reverse Dictionary work is presented in this paper. We conclude that this problem has been addressed through five categories of approaches: Information Retrieval-based, graph-based, mental dictionary-based, Vector Space Model-based, and Neural Language Model-based. We identify and highlight that the works reported so far do not account for human perceptions in the user input. However, since a natural language is a system of perceptions and a Reverse Dictionary deals with natural language input, handling perception-based information in the user input is important for capturing the user's intent. To address this research gap, we consider the concept of Precisiated Natural Language (PNL) based on Zadeh's paradigm of the Computational Theory of Perceptions, and propose incorporating it into the traditional Information Retrieval (IR) architecture when building a Reverse Dictionary. To gain insights for the same, we report an experimental analysis of the IR-based Wordster Reverse Dictionary.
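A hedged sketch of the Information Retrieval-based approach: index dictionary glosses with TF-IDF and return the headwords whose definitions are closest to the user's description. The tiny gloss set is illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hedged sketch: a TF-IDF index over dictionary glosses, queried with a
# natural language description. The gloss set is invented for illustration.
glosses = {
    "ephemeral": "lasting for a very short time",
    "gregarious": "fond of company; sociable",
    "lexicon": "the vocabulary of a person or language",
}
words = list(glosses)
vec = TfidfVectorizer().fit(glosses.values())
index = vec.transform(glosses.values())

query = vec.transform(["something that only lasts a short time"])
scores = cosine_similarity(query, index)[0]
best = sorted(zip(words, scores), key=lambda p: -p[1])
print(best[0])                                  # expected: ('ephemeral', ...)
```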
 
Figures: network scenario; proposed method; detection time comparison; energy consumed; power consumed
Wireless sensor networks are set up in isolated locations and are open to numerous threats. Using the crucial data extracted from a captured node, an adversary can deploy a significant number of replicated (clone) nodes in the network, and the network may suffer from these clone nodes; this attack is known as the node replication or clone node attack. Much prior work in this field identifies duplicated nodes using a random key, code, or piece of location information. This study describes a technique for clone node discovery that makes use of cuckoo filters. Owing to the cuckoo filter's simplicity and its efficient insertion and deletion compared to other filters, the suggested method gives better detection time, power consumption, and detection accuracy. The proposed method is simulated and the results are compared.
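A hedged, minimal cuckoo filter (two candidate buckets per item, per Fan et al.'s design) illustrating membership tests over node identifiers; the parameters are illustrative and this is not the paper's implementation.

```python
import hashlib

# Hedged sketch of a cuckoo filter: each item has two candidate buckets
# found via partial-key cuckoo hashing; inserts evict on overflow.
class CuckooFilter:
    def __init__(self, n_buckets=64, bucket_size=4, max_kicks=100):
        self.buckets = [[] for _ in range(n_buckets)]
        self.n, self.size, self.kicks = n_buckets, bucket_size, max_kicks

    def _fp(self, item):                       # short fingerprint of the item
        return hashlib.sha1(item.encode()).digest()[:2]

    def _i1(self, item):
        return hash(item) % self.n

    def _i2(self, i1, fp):                     # partial-key cuckoo hashing
        return (i1 ^ int.from_bytes(fp, "big")) % self.n

    def insert(self, item):
        fp, i1 = self._fp(item), self._i1(item)
        for i in (i1, self._i2(i1, fp)):
            if len(self.buckets[i]) < self.size:
                self.buckets[i].append(fp)
                return True
        i = i1
        for _ in range(self.kicks):            # evict and relocate on overflow
            fp, self.buckets[i][0] = self.buckets[i][0], fp
            i = self._i2(i, fp)
            if len(self.buckets[i]) < self.size:
                self.buckets[i].append(fp)
                return True
        return False                           # filter considered full

    def contains(self, item):
        fp, i1 = self._fp(item), self._i1(item)
        return fp in self.buckets[i1] or fp in self.buckets[self._i2(i1, fp)]

cf = CuckooFilter()
cf.insert("node-17")
print(cf.contains("node-17"), cf.contains("node-99"))   # True, (likely) False
```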
 
Figures: proposed methodology; comparison of de-noised brain MRI
MRI is one of the major modalities assisting medical experts in diagnosing and treating brain diseases such as cancer, epilepsy, and stroke. Poor light illumination of MRI scanners, electronic interference, and radio-frequency emissions cause noise in MR images, and handling such noise in MR brain images is a challenging task. To overcome this issue, this paper develops an enhanced model that can handle three different types of noise: salt-and-pepper noise, speckle noise, and Gaussian noise. The proposed enhanced intuitionistic fuzzy adaptive filter (IFAF) removes noise from MR brain images by categorizing pixels into membership and non-membership grades, and it preserves the edges and detailed information of the images by adopting contrast-enhancement histogram equalization. Conventional denoising models face uncertainty in distinguishing normal pixels from noise-affected ones. The performance of IFAF is compared across different noise types, and the results show that IFAF outperforms other conventional noise-filtering models.
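For context, a hedged sketch of two conventional baselines such a filter is typically compared against, median filtering for salt-and-pepper noise and Gaussian filtering for Gaussian noise; the filename and kernel sizes are assumed.

```python
import cv2

# Hedged sketch of conventional denoising baselines; a real evaluation
# would compare each result against a clean reference image.
noisy = cv2.imread("brain_mri.png", cv2.IMREAD_GRAYSCALE)
median = cv2.medianBlur(noisy, 3)              # good for salt-and-pepper noise
gauss = cv2.GaussianBlur(noisy, (5, 5), 0)     # good for Gaussian noise
print(f"PSNR noisy vs. median-filtered: {cv2.PSNR(noisy, median):.2f} dB")
```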
 
A robotic parallel manipulator is implemented using embedded systems integrated with a set of sensors. More than one type of sensor is used together with control input data from a human limb. Initially, a dataset is collected to map limb positions to equivalent actuations at the manipulator, and then, using an appropriate machine learning algorithm, the control data value for the continuous position of the actuator is generated. Substantial work was done on mapping the position of the limb to the actuator position by creating a three-dimensional model; using a conventional 3D conversion on the boundary values of the input and output, matched with a number of intermediate values, a proper training dataset for a machine learning algorithm can be created. The position of the manipulator is monitored by an IoT system and a set of sensors installed at the end. In addition, the system transmits the position data of the actuator, and this information can be viewed remotely from any device connected to the internet.
 
Detecting skin cancer at an early stage is of great importance. Skin cancer is now recognized as one of the most dangerous forms of cancer found in humans, and detecting melanoma in its early stages can help cure it. The skin has two parts: an inner layer called the dermis and an outer layer called the epidermis, which contains melanocytes. These produce the pigment melanin; when exposed to heat or sunlight, melanin darkens the skin. This article presents a method for detecting melanoma skin cancer using image processing tools. The input to the system is an image of a skin lesion, which is analysed using image processing techniques to infer the presence of skin cancer. After cancer is predicted, feature extraction is performed, and using the SVM algorithm, the extracted feature parameters are used to classify the images into non-melanoma and melanoma cancer lesions.
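A hedged sketch of the final classification step, an SVM over extracted lesion features (for example asymmetry, border irregularity, and colour variance); the feature values are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hedged sketch: SVM over lesion features. The feature columns and
# values below are made up for illustration, not extracted from images.
X = np.array([[0.8, 0.7, 0.9], [0.2, 0.1, 0.3], [0.7, 0.8, 0.8],
              [0.1, 0.2, 0.2]])               # rows: lesions, cols: features
y = np.array([1, 0, 1, 0])                    # 1 = melanoma, 0 = non-melanoma

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[0.75, 0.65, 0.85]]))      # likely classified as melanoma
```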
 
With the rapid development of science and technology-driven economic growth in our society, applications of artificial intelligence have become far more widespread in meeting society's needs, and AI products have a highly visible impact on our daily work and lifestyle. In e-commerce and e-Governance, AI technology adoption has also matured and achieved the desired outcomes; AI has become a significant force in developing e-commerce and e-Governance operations effectively. This paper describes the functional outcomes of e-commerce and e-Governance systems in various fields and the prospects of AI technology there. It analyses the present application of AI technology in e-commerce and e-Governance, focusing on cyber-security strategies.
 
Data generation and collection is a continuous process throughout the world, and handling the various challenges that arise from the many categories of data collected through various sources is difficult. Any data analytics work begins with preprocessing, the single biggest challenge, which takes much time for data separation and categorization. Once this step is completed, practitioners can process the data with several tools and move on to the next steps of centralization, indexing, and so on. Many scholars are putting effort into getting quick responses from information, making the data analytics task easier. Previous work focused on several challenging issues and provided solutions for preprocessing and data management. We now propose a big data processing framework, the Smart Query Processor (SQP), which addresses challenges in query processing and processes about 500 GB of data. This paper describes a novel approach using hybrid algorithms that obtained results up to 5x faster than existing approaches; compared with previously published work, it achieved an accuracy of up to 95-96%. In the future, the work will be extended to process several TB of data on highly configured workstations available in the labs.
 
Finding a good compromise between intensification and diversification mechanisms is a very challenging task when solving multi-objective optimization problems (MOPs). In this paper, we propose an Ant Colony Optimization (ACO) algorithm coupled with a multi-objective local search procedure and evolved into a multi-directional framework. The developed MD-HACO algorithm optimizes the overall quality of the Pareto set approximation using different configurations of the hybrid approach by means of different directional vectors. During the construction process, ants optimize different search directions in the objective space, trying to approximate small parts of the Pareto front. Afterward, a local search phase is applied to each sub-direction to push the search toward the extreme Pareto-optimal solutions with respect to the weight vector under consideration. A multi-directional set holding the non-dominated solutions from all directional archives is maintained. MD-HACO is tested on widely used multi-objective multi-dimensional knapsack problem (MOMKP) instances and compared with well-known state-of-the-art algorithms. Experiments highlight that the multi-directional paradigm and the hybrid schema can lead to interesting results on the MOMKP and ensure a good balance between convergence and diversity.
 
The paper presents formal algebra-algorithmic models for parallel program design and auto-tuning aimed to achieve the highest degree of computation performance. Parallel programs are modeled in terms of high-level schemes represented in Glushkov’s system of algorithmic algebra and a concept of a transition system implemented in the term rewriting technique. The optimization techniques (the methods of asynchronous loop computations and choosing the optimal strategy of indexed data structures traversal) are proposed for auto-tuning. The evaluation of the performance impact of corresponding code transformations is given. The models of parallel program execution and auto-tuning are developed. As the structure and logic of tuned programs can be changed, an approach to verification of correctness of optimization transformations of parallel code performed by an auto-tuner is developed. In some partial cases, this validation can be automated and implemented by checking defined source code characteristics using rewriting rules framework, so correctness verification is reduced to checking equivalence by result property. The developed methods are demonstrated on the examples of optimization of parallel Brownian motion simulation and weather forecasting programs. The multiprocessor speedup for the Brownian motion simulation program was close to the theoretical limit, and the improvement of the overall execution time for the meteorological forecasting program exceeded 13%. The developed formal models and software methods are effective means of achieving the main goal of increasing the multiprocessor speedup of parallel programs as well as of verification of the correctness of the applied code transformations.
 
Example of an open-domain dialogue between a human user and a personalized dialogue agent. For this example, the persona information is presented in the form of several textual statements
Overview of DLVGen. The dashed lines and solid lines represent the connections present only during training and inference, respectively. The bidirectional connections indicate the computation of KL divergence. ⊕ refers to concatenation
Diagram depicting the GPT-2 decoder during inference. After the latent variables are concatenated and fed into a linear layer W_LV, the resultant combined latent variable is added to the positional encoding and token embedding at every decoding step. x_0 and x_T represent the first and last token embeddings of the dialogue context; y_0 and y_M represent the first and last tokens of the generated response
Personalized dialogue agents are capable of generating responses consistent with a specific persona. Typically, personalized dialogue agents generate responses based on both the dialogue history and a representation of the agent’s desired persona. As it is impractical to obtain the persona representations for every interlocutor in real-world implementations, recent works have explored the possibility of generating personalized dialogue by finetuning the agent with dialogue examples corresponding to a given persona instead. However, in real-world implementations, a sufficient number of corresponding dialogue examples are also rarely available. Hence, in this paper, we introduce the Dual Latent Variable Generator (DLVGen), a variational personalized dialogue agent capable of generating personalized dialogue without any persona information or any corresponding dialogue examples. Unlike previous works, DLVGen models the latent distribution over potential dialogue response intents as well as the latent distribution over the agent’s potential persona. During inference, latent variables are sampled from both distributions and fed to the decoder. Extensive experiments on the popular ConvAI2 personalized dialogue corpus show that DLVGen is capable of generating natural, persona consistent responses. Additionally, we also introduce a variance regularization and response selection approach which further improved overall response quality.
 
Transthoracic Doppler echocardiography (TTDE) data are acquired as a video, assembled into an image, each vertical slice of which is a greyscale histogram of the Doppler blood-flow velocities at that timepoint. Many sources of noise are layered onto the Doppler signal, so there is no internal reference to inform machine learning regarding the true information content. In this typical recording of 18 heartbeats, the data recorded for the first 10 beats represent physiologically realistic flow patterns, while the 11th through 16th beats display corrupted data due to movement of the transducer relative to the vessel being monitored. Electrocardiogram and respiratory recordings underlie the TTDE signal and assist in indexing the heart beat and identifying when predictable physiological phenomena such as breathing have occluded the TTDE data. Image from Bartlett et al. [1]
A typical pressure–volume “loop” (PV-loop) dataset. PV-loops are created by measuring paired values of pressure and volume in the left ventricle at 1000 Hz. The “loop” shape seen in PV-loop data can be understood in terms of the properties of a heart beat. Starting from the lower left, the low-pressure filling, followed by a near-fixed-volume increase in pressure, followed by a fixed pressure decrease in volume, and then a relaxation to baseline pressure to fill again, completes a single beat of the heart. Measured PV values over 46 heart beats are colored temporally in the figure on a rainbow gradient from Red (initial beat) to Indigo (last beat). PV-loops are not identical beat-to-beat due to real physiological differences in the beat-to-beat filling and contraction of the heart. Image from Bartlett et al. [1]
Umbilical artery Doppler flow sonograms from a normally developing pregnancy (top) and from a pregnancy developing intrauterine growth restriction (bottom). The average systolic to diastolic ratio—which is the current Doppler standard for predicting IUGR—in the top image is approximately 5.1, and in the lower image approximately 5.3 (essentially they are indistinguishable), yet preliminary data demonstrate that ML can differentiate between these and other similar UADF images with over 90% accuracy
Accuracy (left y-axis) and loss (right y-axis) of the DNN with the training data (tan circles and green plus, respectively) and validation data (blue squares and black x, respectively) by epoch. As expected, the DNN on training data eventually becomes 100% accurate with a steady decrease in loss, due to memorization. Validation accuracy largely levels off, while validation loss reaches a minimum, and then climbs for the remainder of the 2000 epochs (data beyond 660 epochs not shown). Each early stopping rule application (described in the text and Table 2) is indicated at the epoch where the stopping rule was triggered. The best performance is around epoch 100 for generalization error, and the Patience3 procedure was the closest to that ideal in this scenario. Training the DNN beyond the invasively determined information ceiling at 97% (horizontal brown dashed line) should be impossible without overfitting by learning training-data-specific features. Assuming zero information loss in the indirect, non-invasive data, our information-ceiling method would trigger stopping at approximately 120 epochs. Image from Bartlett et al. [1]
The sonographic images distinguish between IUGR fetuses vs. control using the Xception DNN architecture (ML analysis of Doppler Images), while the systolic/diastolic (S/D) ratio alone, or the S/D data and the clinical data, both have predictive performance that is markedly reduced compared to the image analysis. These data indicate that the Doppler alone contains, and ML can effectively extract, predictive information not previously available in routine clinical work
Early stopping is an extremely common tool to minimize overfitting, which would otherwise be a cause of poor generalization of the model to novel data. However, early stopping is a heuristic that, while effective, primarily relies on ad hoc parameters and metrics. Optimizing when to stop remains a challenge. In this paper, we suggest that for some biomedical applications, a natural dichotomy of invasive/non-invasive measurements, or more generally proximal vs distal measurements of a biological system can be exploited to provide objective advice on early stopping. We discuss the conditions where invasive measurements of a biological process should provide better predictions than non-invasive measurements, or at best offer parity. Hence, if data from an invasive measurement are available locally, or from the literature, that information can be leveraged to know with high certainty whether a model of non-invasive data is overfitted. We present paired invasive/non-invasive cardiac and coronary artery measurements from two mouse strains, one of which spontaneously develops type 2 diabetes, posed as a classification problem. Examination of the various stopping rules shows that generalization is reduced with more training epochs and commonly applied stopping rules give widely different generalization error estimates. The use of an empirically derived training ceiling is demonstrated to be helpful as added information to leverage early stopping in order to reduce overfitting.
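For concreteness, here is a minimal sketch of how two such stopping rules could be wired into a Keras training loop. The Patience3 rule mirrors a standard patience criterion, while the ceiling callback is a hypothetical rendering of the information-ceiling idea with an illustrative 97% threshold; neither is the authors' exact implementation.

```python
# Two stopping rules: classic patience on validation loss, and a
# hypothetical "information ceiling" rule that halts once validation
# accuracy reaches a ceiling established from invasive (proximal) data.
import tensorflow as tf

class InformationCeilingStopping(tf.keras.callbacks.Callback):
    def __init__(self, ceiling=0.97):  # 0.97 is illustrative
        super().__init__()
        self.ceiling = ceiling  # accuracy achievable per invasive data

    def on_epoch_end(self, epoch, logs=None):
        acc = (logs or {}).get("val_accuracy", 0.0)
        # Training past the ceiling can only memorize noise, so stop.
        if acc >= self.ceiling:
            print(f"Ceiling {self.ceiling:.2f} reached at epoch {epoch}")
            self.model.stop_training = True

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                     restore_best_weights=True),  # "Patience3"
    InformationCeilingStopping(ceiling=0.97),
]
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=2000, callbacks=callbacks)
```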
 
This is an informative article on data analytics. It covers various aspects of data analytics, including its definition, the terminology used in the field, the types of data, and various methods of data collection. The paper applies the concept of the correlation coefficient to evaluate student performance in internal as well as external examinations. I apply the linear correlation coefficient technique to identify a highly correlated feature, student attendance, from the student dataset and to assess its impact on student performance in both internal and external assessment. Student performance is treated as the dependent variable and attendance as the independent variable. Using the correlation coefficient methodology, I compute the value of r (the correlation coefficient), which lies in the range −1 to +1, to indicate whether the relationship is strong or weak.
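A minimal sketch of the underlying computation, using NumPy on made-up attendance and marks data (not the paper's dataset):

```python
# Pearson's r between attendance and exam marks; arrays are illustrative.
import numpy as np

attendance = np.array([62, 75, 80, 85, 90, 95])   # percent attendance
marks      = np.array([48, 55, 60, 68, 72, 81])   # exam score

r = np.corrcoef(attendance, marks)[0, 1]
print(f"Pearson r = {r:.3f}")  # r near +1 suggests a strong positive link
```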
 
For agriculture to be sustainable, it is essential to monitor plant health and look for diseases. Manually monitoring plant diseases is quite challenging, so diseases must be identified effectively to improve a plant's lifetime. Several diseases cause a plant's leaves to die, and in some cases farmers struggle to determine the type of leaf disease and its future symptoms. The proposed plant leaf disease detection scheme uses enhanced deep learning techniques to find the causes of leaf disease and offer treatment suggestions. The proposed work relies on TensorFlow to identify illnesses in pictures of plant leaves: a convolutional neural network is trained to automatically diagnose disease using the TensorFlow Object Detection API. To guide treatment, the proposed work also identifies the causes and symptoms of the illness. Advanced deep learning models based on particular convolutional neural network topologies were created to recognize plant diseases from photos of healthy or diseased plants' leaves. In comparison to existing models, the proposed model offers a 95% accuracy level for detecting diseased leaves.
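The following is a minimal sketch of the kind of TensorFlow/Keras CNN classifier described; the layer sizes, input shape, and four-class setup are illustrative assumptions rather than the authors' architecture.

```python
# A small CNN for leaf-disease classification from RGB leaf images.
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 4  # hypothetical: healthy + 3 disease types

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```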
 
This work is motivated by a real-world problem of coordinating B2B pickup-delivery operations to shopping malls involving multiple non-collaborative logistics service providers (LSPs) in a congested city where space is scarce. This problem can be categorized as a vehicle routing problem with pickup and delivery, time windows and location congestion with multiple LSPs (ML-VRPLC for short), and we propose a scalable, decentralized, coordinated planning approach via iterative best response. We formulate the problem as a strategic game where each LSP is a self-interested agent but is willing to participate in coordinated planning as long as there are sufficient incentives. Through an iterative best-response procedure, agents adjust their schedules until no further improvement can be made to the resulting joint schedule. We seek the joint schedule that maximizes the minimum gain achieved by any one LSP, as LSPs are interested in how much benefit they can gain rather than in achieving system optimality. We compare our approach to a centralized planning approach, and our experimental results show that our approach is more scalable and achieves on average 10% more gain within an operationally realistic time limit.
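The iterative best-response procedure can be sketched as follows; best_response() and gain() are hypothetical stand-ins for each LSP's routing solver and incentive computation, and the returned minimum gain reflects the max-min objective described above.

```python
# Each LSP agent re-optimizes its own schedule against the others' fixed
# schedules until no agent can improve; then report the minimum gain.
def coordinate(agents, schedules, best_response, gain, max_rounds=100):
    for _ in range(max_rounds):
        improved = False
        for a in agents:
            others = {b: schedules[b] for b in agents if b != a}
            candidate = best_response(a, others)        # a's best reply
            if gain(a, candidate, others) > gain(a, schedules[a], others):
                schedules[a] = candidate
                improved = True
        if not improved:                                # fixed point reached
            break
    min_gain = min(gain(a, schedules[a],
                        {b: schedules[b] for b in agents if b != a})
                   for a in agents)
    return schedules, min_gain
```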
 
Deep reinforcement learning agents have achieved unprecedented results when learning to generalize from unstructured data. However, the “black-box” nature of trained DRL agents makes it difficult to ensure that they adhere to various requirements posed by engineers. In this work, we put forth a novel technique for enhancing the reinforcement learning training loop, and specifically its reward function, in a way that allows engineers to directly inject their expert knowledge into the training process. This allows us to make the trained agent adhere to multiple constraints of interest. Moreover, using scenario-based modeling techniques, our method allows users to formulate the defined constraints using advanced, well-established behavioral modeling methods. Combining such modeling methods with machine learning tools produces agents that are both high performing and more likely to adhere to prescribed constraints. Furthermore, the resulting agents are more transparent and hence more maintainable. We demonstrate our technique by evaluating it on a case study from the domain of internet congestion control, and present promising results.
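As a rough illustration of injecting constraints into the reward function, the sketch below penalizes violations of engineer-specified predicates inside a Gymnasium environment wrapper; the paper's scenario-based modeling machinery is considerably richer than this simple penalty term.

```python
# Subtract a penalty from the environment reward for every violated
# engineer-specified constraint, so training discourages violations.
import gymnasium as gym

class ConstraintRewardWrapper(gym.Wrapper):
    def __init__(self, env, constraints, penalty=1.0):
        super().__init__(env)
        self.constraints = constraints  # list of (obs, action) predicates
        self.penalty = penalty

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        violations = sum(c(obs, action) for c in self.constraints)
        reward -= self.penalty * violations
        return obs, reward, terminated, truncated, info
```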
 
The automated toll collection system is a relatively recent technology that can collect tolls more efficiently and quickly, and it is an excellent alternative to long waits at manual toll plazas. A fully automated toll collection system based on RFID technology was built around a Raspberry Pi in order to reduce wasted time and fuel. The city's registration office issues each vehicle an RFID card carrying a unique identifier that can be read using radio waves. When a vehicle with such an ID approaches a toll plaza, the RFID card reader attached to the plaza reads the card and sends the vehicle's unique ID to the Raspberry Pi. The processor then deducts an established sum of money from the prepaid card. If the card ID is valid and the balance is sufficient, the processor commands a servo motor to open the gate, allowing the vehicle to pass. If the card is not genuine or the balance is insufficient, the driver is directed to the manual toll lane, and a notification is sent to the registered mobile number.
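A minimal sketch of the gate-decision logic follows; the reader, servo, notification, and balance-store interfaces are hypothetical stubs standing in for the RFID reader and GPIO-driven servo on the Raspberry Pi.

```python
# Decide whether to open the gate for a scanned RFID card.
TOLL = 50  # fixed toll amount, illustrative

def handle_vehicle(card_id, balances, open_gate, notify):
    if card_id not in balances:            # card not genuine
        notify(card_id, "Invalid card: please use the manual toll lane.")
        return False
    if balances[card_id] < TOLL:           # insufficient balance
        notify(card_id, "Low balance: please use the manual toll lane.")
        return False
    balances[card_id] -= TOLL              # deduct the fixed toll
    open_gate()                            # servo opens the barrier
    notify(card_id, f"Toll of {TOLL} deducted; balance {balances[card_id]}.")
    return True
```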
 
Visible light communication (VLC) systems offer relatively higher security than traditional radio frequency (RF) channels due to line-of-sight (LOS) propagation, yet they are still susceptible to eavesdropping. The proposed solution builds on existing work on hyperchaos-based security measures to increase physical-layer security against eavesdroppers. A fourth-order Henon map is used to scramble the constellation diagrams of the transmitted signals. The scrambler permutes the system's constellation symbols using a key, and the same key de-scrambles the received data at the receiver. The presented modulation scheme takes advantage of the map's higher dimensionality to isolate the data transmission to a single dimension, allowing for better scrambling and synchronization. A sliding mode controller is used at the receiver in a master-slave configuration for projective synchronization of the two Henon maps, which helps de-scramble the received data. The data are recoverable only by users who know the synchronization key, providing security against eavesdroppers. The proposed VLC system is compared against various existing approaches on several metrics, achieving an improved bit error rate and lower information leakage for a variety of modulation schemes at an acceptable signal-to-noise ratio.
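As a rough illustration of key-driven constellation scrambling, the sketch below iterates a 4D generalized Henon map (the Baier–Klein form, used here only as a plausible stand-in for the paper's fourth-order map) and permutes symbol order by the chaotic sequence; the parameter values are illustrative and the sliding-mode synchronization machinery is simplified away.

```python
# Chaotic permutation of constellation symbol order; the key is the
# map's 4 initial conditions, shared by transmitter and receiver.
import numpy as np

def henon4d(key, n, a=1.76, b=0.1):
    x = np.array(key, dtype=float)          # key = 4 initial conditions
    out = np.empty(n)
    for i in range(n):
        x = np.array([a - x[2] ** 2 - b * x[3], x[0], x[1], x[2]])
        out[i] = x[0]
    return out

def scramble(symbols, key):
    perm = np.argsort(henon4d(key, len(symbols)))
    return symbols[perm], perm

def descramble(scrambled, key):
    perm = np.argsort(henon4d(key, len(scrambled)))
    out = np.empty_like(scrambled)
    out[perm] = scrambled                   # invert the permutation
    return out

syms = np.arange(16)                        # e.g. 16-QAM symbol indices
tx, _ = scramble(syms, key=[0.1, 0.2, 0.3, 0.4])
rx = descramble(tx, key=[0.1, 0.2, 0.3, 0.4])
assert np.array_equal(rx, syms)
```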
 
The design of microstrip antennas for 6G applications is the main emphasis of this work. In the area of mobile communication, the rate of scientific and technological advancement has never slowed. 6G was created to handle the enormous quantity of data traffic brought on by the increase in wirelessly linked devices, and it provides customers with massive capacity, low latency, high bit rates, a variety of brand-new services, and vertical applications. Due to this, the new THz-band frequency range is in focus. This study designs an antenna that operates at 250 GHz: the ground plane and patch are made of copper, and the substrate is a polyimide film with εr = 3.4. The antenna's performance was examined and improvements were made. The simulation results show that the proposed single-element 6G antenna achieves a gain of 6.41 dBi and a bandwidth of 8.25 GHz, from 246.50 GHz to 254.75 GHz at the −10 dB level.
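As a sanity check on the stated design point, the sketch below applies textbook transmission-line equations for a rectangular patch at 250 GHz with εr = 3.4; the substrate height is an assumed value, since it is not given above, and these first-pass dimensions are not the authors' final geometry.

```python
# First-pass rectangular patch dimensions from the transmission-line model.
import math

c  = 3e8        # speed of light, m/s
f  = 250e9      # resonant frequency, Hz
er = 3.4        # polyimide relative permittivity
h  = 50e-6      # assumed substrate height: 50 um polyimide film

W = c / (2 * f) * math.sqrt(2 / (er + 1))                  # patch width
e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / \
     ((e_eff - 0.258) * (W / h + 0.8))                     # fringing extension
L = c / (2 * f * math.sqrt(e_eff)) - 2 * dL                # patch length

print(f"W = {W*1e6:.1f} um, L = {L*1e6:.1f} um, e_eff = {e_eff:.2f}")
# Roughly W ~ 404 um and L ~ 300 um under these assumptions.
```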
 
Flowchart of the proposed methodology
Evaluation of students’ feedback is essential in education as it helps instructors check the effectiveness of their teaching. The feedback collected at the end of the semester comprises questionnaires and open-ended questions, and it is very difficult to manually analyze the comments students give in response to open-ended questions. This paper proposes a method to extract opinions from students’ feedback that will help improve the teaching–learning process. It deals with different aspects of teaching, such as punctuality, the pace of teaching, and subject knowledge, and it incorporates a hybrid approach that combines lexicon and machine learning approaches. In the lexicon approach, various linguistic features, such as negation, context shifters, and modifiers, have been considered as these change or add to the orientation of a sentence. The SentiWordNet dictionary has been used to assign scores to the words in each sentence, and based on the score, the sentence has been classified as positive, negative, or neutral. After assigning the orientation, the dataset has been resampled using various resampling techniques (ENN, TL, OSS, NCR, SMOTE, ADASYN, Borderline-SMOTE, SMOTE-ENN, SMOTE-Tomek, etc.) to balance the class distribution of each aspect. Then, machine learning algorithms (SVM, MNB, LR, RFC, DTC, and KNN) have been applied to the dataset. Experimental results indicate that the proposed approach outperforms other baseline methods in the context of automated sentiment scoring and has achieved 98.7% aggregate accuracy using the RFC algorithm on the students’ feedback dataset.
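The resampling-plus-classifier stage might look like the following sketch, using SMOTE and a random forest on synthetic stand-in features; real features would come from the SentiWordNet-based scoring described above.

```python
# Balance an imbalanced 3-class sentiment dataset, then fit a random forest.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in features: imagine rows are comments and columns are lexicon
# scores; the class imbalance mimics positive/neutral/negative skew.
X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y)
X_res, y_res = SMOTE().fit_resample(X_train, y_train)   # balance classes
clf = RandomForestClassifier(n_estimators=200).fit(X_res, y_res)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```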
 
The structure of compressed video stream steganography using proposed TDSFO-based DCNN
The structure of DCNN
Experimental outcomes of DCT quantization
Experimental outcomes of TDSFO-DCNN
Information security from intruders has been a concern since ancient times, and steganography is used to maintain the secrecy of information. Video steganography is the transmission of a secret message hidden within an ordinary video stream; it has become popular due to its massive capacity to accommodate higher payloads. In raw-domain video steganography the embedded frames must resist video compression, so in compressed-domain video steganography the compression parameters are a better choice for data embedding. In this paper, an efficient Tasmanian Devil Sail Fish Optimization (TDSFO) algorithm is presented for compressed video streams. Video steganography consists of two main phases: an embedding phase and an extraction phase. In the embedding phase, input video acquired from a database is subjected to key-frame extraction. Motion estimation on the key frames extracts the motion vectors of macroblocks, and the optimal selection of macroblocks is carried out using a DCNN. The proposed TDSFO algorithm, devised through the integration of Tasmanian Devil Optimization (TDO) and the Sail Fish Optimizer (SFO), is used to train the DCNN. A secret image is embedded within the motion vectors using the 5-Embed approach; the bit-stream is then embedded and motion compensation is carried out. After embedding, Discrete Cosine Transform (DCT) quantization and entropy coding are performed, and the compressed bit-stream is generated by entropy coding. At the extraction phase, the compressed bit-stream is decoded; embedded motion-vector extraction is carried out using the 5-Embed approach, followed by bit-stream extraction to recover the secret message bits. The input video is recovered effectively at the extraction phase. The proposed technique is implemented in Python. Its performance is analysed using the Correlation Coefficient (CC) and Peak Signal-to-Noise Ratio (PSNR) metrics and compared with existing methods to reveal its effectiveness.
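For illustration only, the sketch below hides bits in motion-vector components via plain LSB substitution; this is a generic placeholder for motion-vector embedding, not the paper's 5-Embed approach or its TDSFO-selected macroblocks.

```python
# Embed/extract secret bits in the LSBs of integer (dx, dy) motion vectors.
import numpy as np

def embed(mvs, bits):
    mvs = mvs.copy().ravel()
    for i, b in enumerate(bits):            # overwrite each component LSB
        mvs[i] = (mvs[i] & ~1) | b
    return mvs.reshape(-1, 2)

def extract(mvs, n_bits):
    return [int(v) & 1 for v in mvs.ravel()[:n_bits]]

vectors = np.array([[3, -2], [0, 5], [-4, 1]])   # toy motion vectors
secret = [1, 0, 1, 1]
stego = embed(vectors, secret)
assert extract(stego, 4) == secret
```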
 
Top-cited authors
Iqbal H. Sarker
  • Edith Cowan University
Md. Milon Islam
  • University of Waterloo
Muhammad Lawan Jibril
  • Federal University, Kashere
Sani Sharif Usman
  • Federal University, Kashere
Safial Islam Ayon
  • Green University of Bangladesh