The article presents a case study on the analysis of demographic sequences through modern machine learning (ML) techniques. The studied data contain demographic and socioeconomic events, where the events are presented as sequences of statuses. The demographers involved are interested in applications of advanced ML techniques and in interpretable patterns for their needs. We show how Shapley value-based explanations can be obtained for such sequential data with a powerful ML approach, namely gradient boosting over decision trees. This helps to identify the critical and influential events for a particular individual life-course sequence and to explain the model's predictions.
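As an illustration of how such explanations can be computed, the following sketch trains a gradient boosting classifier on a toy encoding of life-course sequences (hypothetical: status counts, not the authors' actual features) and computes exact Shapley values for one individual by brute-force enumeration of feature coalitions, with "absent" features replaced by background means:

```python
import itertools
from math import factorial

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical encoding (not the authors' actual features): each life-course
# sequence is summarized as counts of 4 event types.
X = rng.integers(0, 5, size=(300, 4)).astype(float)
# Synthetic outcome driven mostly by event types 0 and 2.
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(0, 0.5, 300) > 6).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
background = X.mean(axis=0)            # reference values for "absent" features

def value(instance, subset):
    """Model output with only the features in `subset` taken from `instance`."""
    z = background.copy()
    z[list(subset)] = instance[list(subset)]
    return model.predict_proba(z[None, :])[0, 1]

def exact_shapley(instance):
    """Brute-force Shapley values over all feature coalitions (n is small)."""
    n = len(instance)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(instance, S + (i,)) - value(instance, S))
    return phi

phi = exact_shapley(X[0])
# Efficiency property: attributions sum to f(x) minus f(background).
print(phi, phi.sum())
```

In practice, tree-specific algorithms (e.g., the `shap` library's TreeExplainer) compute these values efficiently for boosted ensembles; the exponential enumeration above only makes the Shapley weighting explicit.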
Optimal motion planning involves obstacle avoidance, and path planning is the key to success in optimal motion planning. Due to their computational demands, most path planning algorithms cannot be employed for real-time applications. Model-based reinforcement learning approaches for path planning have achieved notable success in the recent past. Yet, most such approaches produce non-deterministic output due to inherent randomness. In this paper, we investigate existing reinforcement learning-based approaches for path planning and propose such an approach for path planning in 3D environments. The first is a deterministic tree-based approach, while the other two are based on Q-learning and approximate policy gradient, respectively. We tested these approaches on two different simulators, each of which contains a set of random obstacles that can be changed or moved dynamically. After analysing the results and computation times, we concluded that the deterministic tree search approach provides highly stable results; however, its computation time is considerably higher than that of the other two approaches. Finally, comparative results are provided in terms of accuracy and computation time.
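As a minimal illustration of the Q-learning branch (a toy 2D grid rather than the paper's 3D simulators; all parameters are our own), the following sketch learns a collision-free path around static obstacles with a tabular update:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 5x5 grid: start (0,0), goal (4,4), two obstacle cells; actions: URDL.
GOAL, OBSTACLES = (4, 4), {(2, 2), (2, 3)}
ACTIONS = [(-1, 0), (0, 1), (1, 0), (0, -1)]
Q = np.zeros((5, 5, 4))                 # tabular action-value function

def step(s, a):
    """One environment transition: blocked moves keep the agent in place."""
    r, c = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
    if not (0 <= r < 5 and 0 <= c < 5) or (r, c) in OBSTACLES:
        return s, -1.0, False           # collision / wall: penalty, stay
    if (r, c) == GOAL:
        return (r, c), 10.0, True
    return (r, c), -0.1, False          # small step cost encourages short paths

alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(3000):                   # training episodes
    s = (0, 0)
    for _ in range(100):
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s][a])
        s = s2
        if done:
            break

# Greedy rollout with the learned policy.
s, path = (0, 0), [(0, 0)]
for _ in range(20):
    s, _, done = step(s, int(np.argmax(Q[s])))
    path.append(s)
    if done:
        break
print(path)
```

The same update rule extends to 3D by enlarging the state and action spaces, although the table then grows quickly, which is one motivation for the approximate policy gradient variant.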
Today's datasets are usually very large, with many features, and analyzing them is a tedious task. In particular, when performing classification, selecting the attributes that are salient for the process is challenging. It is more difficult when the target class attribute has many labels, and hence many researchers have introduced methods to select features for classification on multi-class attributes. The process becomes even more tedious when the attribute values are imbalanced, for which researchers have contributed many methods. However, there is insufficient research on handling extreme imbalance and feature selection together, and this paper aims to bridge that gap. Here, Particle Swarm Optimization (PSO), an efficient evolutionary algorithm, is used to handle imbalanced datasets, and the feature selection process is enhanced with the required functionalities. First, Multi-objective Particle Swarm Optimization is used to transform the imbalanced datasets into balanced ones, and then another version of Multi-objective Particle Swarm Optimization is used to select the significant features. The proposed methodology is applied to eight multi-class extremely imbalanced datasets, and the experimental results are better than those of other existing methods in terms of classification accuracy, G-mean, and F-measure. The results, validated using the Friedman test, also confirm that the proposed methodology effectively balances the datasets with fewer features than other methods.
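To make the feature selection step concrete, here is a single-objective binary PSO sketch (a simplification of the paper's multi-objective variant, on synthetic data rather than the eight benchmark datasets): each particle is a bit mask over features, and the fitness trades cross-validated accuracy against the number of selected features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)

def fitness(mask):
    """Cross-validated accuracy minus a small penalty per selected feature."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=500),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.sum()

n_particles, n_feat = 12, 10
pos = rng.integers(0, 2, (n_particles, n_feat)).astype(float)   # bit masks
vel = rng.normal(0, 0.1, (n_particles, n_feat))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(15):                     # PSO iterations
    r1, r2 = rng.random((2, n_particles, n_feat))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Sigmoid transfer function turns velocities into bit-flip probabilities.
    pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print(gbest, pbest_fit.max())           # best mask and its fitness
```

A multi-objective version would replace the scalar fitness with Pareto dominance over (accuracy, feature count, balance measure), as the paper does.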
Atomic force microscopes (AFM) are recognized as one of the main identification instruments at the nanoscale. The microscope uses a very sharp needle (probe), which in the ideal case terminates in a single atom at the tip. It provides indirect information about the properties of the samples under analysis and plays an important role in the progress of research in various sciences, including nanotechnology, electronics, energy, and astronautics. When the topography of a surface must be resolved from tens of angstroms down to the atomic scale, AFM is a powerful tool. This article studies the non-contact AFM while its tip is excited by dual external harmonic forces. The van der Waals force is considered as the tip-sample interaction force, which makes the system nonlinear. To study the effects of the excitation amplitudes on the dynamic response, the frequency response equations are used, which are obtained by the Van der Pol averaging method.
In this article, we create a mathematical model of a robot inspector moving along an electrical transmission line (ETL). Here, we assume that the ETL is a taut string stretched between two fixed supports and subjected to an external force. We consider the external excitation as a boundary condition of our model, while the robot is considered as a moving point force (load). It should be noted that, in this work, we assume that the robot moves without acceleration, i.e., with constant velocity. The resulting mathematical model is solved using Duhamel's principle, and the results are illustrated.
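Under the stated assumptions, a minimal form of such a model (written in our notation; the authors' exact formulation may differ) is the wave equation for a taut string carrying a point load that moves at constant velocity:

```latex
% Transverse displacement w(x,t) of a string of linear density \rho and
% tension T, with the robot modeled as a point force F_0 moving at speed v:
\rho\, w_{tt}(x,t) = T\, w_{xx}(x,t) + F_0\,\delta(x - vt), \qquad 0 < x < L,
% with the external excitation entering through a boundary condition, e.g.
w(0,t) = g(t), \qquad w(L,t) = 0.
% Duhamel's principle then expresses the forced response as a convolution of
% the load with the impulse response of the homogeneous problem.
```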
Future generation vehicles equipped with modern technologies will impose unprecedented computational demand due to the wide adoption of compute-intensive services with stringent latency requirements. The computational capacity of next generation vehicular networks can be enhanced by incorporating the vehicular edge or fog computing paradigm. However, the growing popularity and massive adoption of novel services make the edge resources insufficient. A possible solution is to employ, alongside the edge computing resources, the onboard computation resources of vehicles in close vicinity that are not resource-constrained, enabling a task offloading service. In this paper, we investigate the problem of task offloading in a practical vehicular environment, considering the mobility of electric vehicles (EVs). We propose a novel offloading paradigm that enables EVs to offload their resource-hungry computational tasks either to a roadside unit (RSU) or to nearby mobile EVs that have no resource restrictions. We formulate a non-linear programming (NLP) problem to minimize energy consumption subject to the network resource constraints. Then, to solve the problem and tackle the high mobility of the EVs, we propose a deep reinforcement learning (DRL) based solution that enables task offloading by finding the best power level for communication, an optimal assisting EV for EV pairing, and the optimal amount of computation resources required to execute the task. The proposed solution minimizes the overall energy of the system, which is paramount for EVs, while meeting the requirements posed by the offloaded task. Finally, through simulation results, we demonstrate the performance of the proposed approach, which outperforms the baselines in terms of energy consumption per task.
Attractors in a multistable system have memory due to the inertial properties of the dynamical system. If the driving force in a nonautonomous system with coexisting periodic orbits is turned off for some time and then turned on again, the system either returns to the same attractor or goes to another coexisting attractor. The attractor memory is the maximum driving-off time after which the system comes back to the same attractor. The attractor memory depends on the phase of the driving force, i.e., on the time when the driving is turned off. The duration of relaxation oscillations when the system returns to the same attractor grows exponentially as the driving-off time is increased, saturating to the memory time. The length of the phase-space trajectory during the driving-off time (the memory distance) correlates with the system variable, but not with the attractor memory. The attractor memory concept is illustrated with the example of a multistable erbium-doped fiber laser with four coexisting periodic orbits.
The growth and development of scientific applications have demanded the creation of efficient resource management systems. Resource provisioning and scheduling are two core components of cloud resource management systems. Cloud resource scheduling is the most critical problem to solve efficiently due to the heterogeneity of resources, their inter-dependencies, and the unpredictability of load in the cloud environment. In this paper, we review the background of scheduling and state-of-the-art scheduling techniques in cloud computing. We first introduce the general background and the phases of scheduling. A comprehensive survey of the resource scheduling approaches proposed so far is then presented using a high-level taxonomy, which covers Virtual Machine (VM) placement, Quality of Service (QoS) parameters, heuristic methods, and other miscellaneous techniques for resource scheduling. This study also discusses scheduling in Infrastructure as a Service (IaaS) clouds, and a comparison based on important parameters is provided. The importance of meta-heuristic methods and artificial intelligence for resource scheduling in cloud computing is discussed thoroughly. The objective of this work is to help researchers understand the basic concepts related to scheduling and to facilitate the design of new scheduling methods by addressing open issues and studying the existing methodologies.
Epilepsy is a neurological disorder distinguished by sudden and unexpected seizures. To diagnose epilepsy, clinicians record the signals of brain electrical activity (electroencephalograms, EEG) and extract segments with seizures. This enables characterizing the seizure type and finding the onset zone, the brain area where the seizures originate. The procedure requires manual EEG deciphering, which is slow and calls for the assistance of machine learning (ML) algorithms. Traditionally, ML handles this issue in a supervised fashion, i.e., after training on representative data, it constructs a boundary in the feature space that separates the classes. As the number of features grows, this boundary becomes complex and less generalizable. The feature space of brain data is high-dimensional: a standard recording includes 30 signals and 50 frequencies, resulting in 1500 features, and using additional time-domain features may enlarge the feature space further. Thus, selecting appropriate features is a key part of successful classification. The selection procedure relies on either a data-driven mathematical approach (e.g., principal components, PCs) or expert domain knowledge of the data (explainable features, EFs). Here, we demonstrate the benefits of using EFs. For the EEG data of 30 epileptic patients, we trained a random forest algorithm using PCs and EFs. The feature importance analysis revealed that explainable features outperform principal components.
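The comparison can be sketched as follows (entirely synthetic data standing in for the EEG features; the informative feature indices and dimensions are hypothetical): a random forest is scored with cross-validation on the first principal components versus on a small set of explainable features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG features: 1500 spectral features per recording,
# with class information carried by a handful of "explainable" features.
n, d = 200, 1500
X = rng.normal(size=(n, d))
informative = [10, 250, 900]            # hypothetical seizure-related features
y = (X[:, informative].sum(axis=1) > 0).astype(int)

# Data-driven features: the first 20 principal components.
X_pc = PCA(n_components=20, random_state=0).fit_transform(X)
# Expert-driven features: the explainable features themselves.
X_ef = X[:, informative]

rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_pc = cross_val_score(rf, X_pc, y, cv=3).mean()
acc_ef = cross_val_score(rf, X_ef, y, cv=3).mean()
print(acc_pc, acc_ef)

# Feature importance analysis on the explainable features (sums to 1).
rf.fit(X_ef, y)
print(rf.feature_importances_)
```

Because the top principal components of isotropic data capture almost none of the variance of the few informative features, the PC-based classifier stays near chance level here, which mirrors the paper's conclusion in an idealized setting.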
We study the specific features of the organization of the functional brain networks of children with autism spectrum disorder (ASD) by analyzing, at the source level, the data obtained in an EEG experiment in the resting-state paradigm. We pay special attention to age-related changes in the characteristics of functional networks during the particularly important age period from early childhood to adolescence. The analyzed experimental groups consisted of 148 ASD children and 173 neurotypical children, who were considered the control group. In the theta band, we revealed an age-independent functional connectivity pattern, consisting of the brain areas responsible for emotions and consciousness, in which the strength of connections is higher in neurotypical children compared to ASD children. Moreover, we discovered lower network global clustering in the delta + theta band in ASD children. Thus, more segregated but more highly connected subnets are formed in the delta + theta band in neurotypical individuals compared to ASD individuals. This suggests increased control over emotions and stronger interaction between the emotional and conscious domains in neurotypical children. In the extended alpha band, we revealed an age-dependent functional connectivity pattern, demonstrating hyper-activation in the ASD group at ages below 6–7 years and hypo-activation at older ages. Finally, we discuss the development of effective approaches to autism therapy, which should be based on the normalization of aberrant functional connections.
One of today's inspiring issues is the 2D histogram-based multilevel threshold selection which is used for segmenting images into several regions. The image analysis warrants exploration of multiclass thresholding techniques using various entropy-based objective functions. In this context, the Shannon type of entropic function without inherent decision making capacity has been widely used for threshold selection in the last decade. Furthermore, a 2D histogram was constructed using local average intensity values resulting in loss of some edge information. To address these problems, this study proposes a new methodology using a novel practical decisive row-class entropy (PDRCE) based fitness function for multilevel thresholding. The PDRCE values are computed using the newly constructed 2D histogram-based on normal local variance. Further, an opposition flow directional algorithm (OFDA) is proposed to maximize the fitness function. The performance of the proposed technique is compared with five state-of-the-art 2D histogram-based entropic fitness functions. Moreover, the performance of OFDA is investigated through comparison with other global optimizers namely the genetic algorithm, particle swarm optimization and artificial bee colony. An image segmentation evaluation dataset (BSDS500) is used in this experiment. It is witnessed that the proposal is more efficient than state-of-the-art methods. Our fitness function would be useful for registration, segmentation, fusion, etc. INDEX TERMS Image processing, multilevel thresholding, entropy, computational intelligence, machine learning.
The paper deals with the elastostatic calibration of industrial robots. We compared three identification strategies, namely 6 DoF, 4-6 DoF after 3 DoF, and 4-6 DoF after 6 DoF. The comparison is based on conventional measurements of the robot position and on measurements of the robot arm positions. We present an analysis of model production techniques and of dataset filtering and fusion methods. All hypotheses were tested on real experimental data collected with an absolute measurement system. The results showed that the last joint parameter can hardly be estimated from real experimental data, and that the accuracy analysis can be performed separately for z-direction and in-plane deflections. The identified elastostatic model parameters allowed the deflections in the z-direction to be compensated to a deviation of 0.3 mm, i.e., 80% of the entire positioning error.
The issues of measurability and evaluation of work and various services, as well as the feasibility of orders in production systems, have been studied by many authors. Here we consider the development of these methods for organizing the management of manufacturing enterprises in the context of the transition to flexible production. Necessary and sufficient conditions for the feasibility of the work are formulated. Examples are considered of providing car rental at specified time intervals and of evaluating the feasibility of requests for equipment repair in a service center within the structure of a machine-building enterprise.
The paper presents a state-of-the-art approach to programming an industrial robotic manipulator via a mixed-reality holographic interface. In mixed reality, we implement basic functionality for robot programming: setting up the base and tool frames, setting the sequence of Cartesian poses and joint positions for robot motion, and simulating robot movements. The software developed is based on the Unity, ROS, and MoveIt frameworks. We compared cursor-based interaction (Microsoft HoloLens 1) and full hand tracking (Microsoft HoloLens 2 and Oculus Quest 2) for interactive robot programming. We tested our approach on the UR10e collaborative robot in a virtual scene and in a real environment. The full hand tracking human-robot interaction approach improves the setup of Cartesian and joint robot goals with respect to cursor-based interaction.
In this paper, we discuss a machine learning pipeline for the classification of EEG data. We propose a combination of synthetic data generation, a long short-term memory artificial neural network (LSTM), and fine-tuning to solve classification problems in experiments with implicit visual stimuli, such as the Necker cube with different levels of ambiguity. The developed approach increases the quality of classification of raw EEG data.
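The synthetic data generation step can be illustrated as follows (a generic jittering scheme on random arrays; the authors' actual generation procedure and EEG dimensions are not specified here, and the LSTM itself is omitted since it requires a deep learning framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_eeg(trials, n_copies=2, noise_sd=0.05, max_shift=10):
    """Generate synthetic EEG trials from real ones by jittering:
    additive Gaussian noise, per-trial amplitude scaling, and a circular
    time shift.  trials: array of shape (n_trials, n_channels, n_samples)."""
    out = [trials]
    for _ in range(n_copies):
        noisy = trials + rng.normal(0, noise_sd, trials.shape)
        scaled = noisy * rng.uniform(0.9, 1.1, (len(trials), 1, 1))
        shift = int(rng.integers(-max_shift, max_shift + 1))
        out.append(np.roll(scaled, shift, axis=-1))
    return np.concatenate(out, axis=0)

# Hypothetical raw EEG: 30 trials, 8 channels, 500 time samples.
X = rng.normal(size=(30, 8, 500))
X_aug = augment_eeg(X)
print(X_aug.shape)   # (90, 8, 500): originals plus two jittered copies
```

Enlarging the training set this way is a common remedy for the small trial counts typical of EEG experiments before fitting a recurrent model such as an LSTM.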
As a result of experimental studies, data were obtained at a temperature of 20 °C on the maximum radial loads for a cable with a carbon matrix and on the effect of copper conductors on the crack formation process under radial loading of a pusher cable. A finite-element model of a pusher cable is developed that takes into account the presence of the matrix, the copper conductors, and their insulation, which allows stress and strain fields to be constructed. The created finite-element model makes it possible to predict the maximum loads for composite pusher cables. The design of a stand for testing samples of cables with carbon matrices for radial compressive strength is described. Experimental data on the strength properties of two types of cables at a temperature of 20 °C are presented.