Article

3D Optical Machine Vision Sensors With Intelligent Data Management for Robotic Swarm Navigation Improvement


Abstract

Optimized communication within a robotic swarm, or robotic group (RG), operating in a tightly cluttered environment is crucial for efficient sector traversal and monitoring. The present work describes the main set of problems in multi-objective optimization within a non-stationary environment. An algorithm is presented for transferring data from a 3D optical sensor based on the principle of dynamic triangulation; it uses distributed, scalable big-data storage and artificial intelligence for automated 3D metrology. Two simulations are presented that optimize the fused database for better path planning, aiming to improve the navigation of a group of electric wheeled mobile robots in unknown cluttered terrain. The optical laser scanning sensor, combined with intelligent data management, permits more efficient dead reckoning of the RG.
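As a hedged illustration of the dynamic-triangulation principle named in the abstract (the function name and the sine-rule geometry below are illustrative assumptions, not the authors' published formulation), the distance to a laser-highlighted point can be computed from the baseline length and the two angles measured at its ends:

```python
import math

def triangulate_distance(baseline_m, angle_b_rad, angle_c_rad):
    """Perpendicular distance from the baseline to a laser-highlighted
    point, given the two angles measured at the emitter and receiver
    ends of the baseline (law of sines for the resulting triangle)."""
    s = angle_b_rad + angle_c_rad
    if not 0.0 < s < math.pi:
        raise ValueError("angles do not form a valid triangle")
    # The apex angle is pi - (B + C), and sin(pi - (B + C)) == sin(B + C).
    return baseline_m * math.sin(angle_b_rad) * math.sin(angle_c_rad) / math.sin(s)
```

For a 1 m baseline with both angles at 45 degrees, the point lies 0.5 m from the baseline.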


... 9,10 Furthermore, optical laser sensors have been explored for vision tasks in previous studies. [11][12][13][14] However, these sensors often come with a higher cost. With the development of deep-learning technology, monocular RGB images can also achieve pose recognition. ...
... In this section, by the fitness distribution experiment, it is verified that the fitness function equation (11) can transform the PM ratio estimation and target's pose detection problems into optimization problems. It is also confirmed that the proposed method can estimate 3D target pose by using stereo vision and one PM ratio unknown photo. ...
... These results prove that the problem of detecting the pose of a marine creature from a picture of unknown shooting status is transformed into an optimization problem. And through the pose estimation experimental results in this subsection, it is confirmed that (1) the proposed expanded photo-model-based method can estimate a target object's pose by using stereo vision and only one PM ratio unknown photo; (2) the fitness function equation (11) can transform the target pose and PM ratio estimation problem into an optimization problem that GA can solve. ...
Article
Full-text available
Model-based stereo vision pose estimation depends on the establishment of the model. The photo-model-based method simplifies the model-building process with just one photo. Programming languages do not predefine the shapes, colors, and patterns of objects. In the past, however, it was necessary to calculate a pixel per metric ratio, that is, the number of pixels per millimeter of the object, based on the photo’s shooting distance to generate a photo-model with the same size (length and width) as the actual object. It restricts the real application. The proposed method extends the traditional photo-modeling algorithm and relaxes the photo prerequisite for target pose determination. Various pixel per metric ratios will be assumed to generate 3D photo-models of different sizes. These models will then be employed in stereo vision image matching techniques to detect the pose of the target object. Since it is not a data-driven method, it does not require many pictures and pretraining time. This article applies the algorithm to the cleaning of seaports and aquaculture, aiming to locate dead or diseased marine life on the water surface before collection. Pose estimation experiments have been conducted to detect an object’s pose and a prepared photo’s pixel per metric ratio in real application scenarios. The results show that the expanded photo-model stereo vision method can estimate the pose of a target with one pixel per metric ratio unknown photo.
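The search over candidate pixel-per-metric ratios described above can be sketched as a brute-force stand-in for the paper's GA-based optimization (the function name and the toy fitness below are assumptions for illustration, not the authors' fitness equation):

```python
def best_pm_ratio(candidates, fitness):
    """Score each candidate pixel-per-metric (PM) ratio with a
    user-supplied fitness function and return the best (ratio, score)
    pair. A GA searches this space stochastically; exhaustive scoring
    is used here only to keep the sketch self-contained."""
    scored = [(ratio, fitness(ratio)) for ratio in candidates]
    return max(scored, key=lambda pair: pair[1])
```

For example, with a toy fitness peaking near a ratio of 3.2 px/mm, the best integer candidate in 1-5 is 3.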
... Although the details of the algorithms used are often kept confidential for intellectual-property reasons, the existence of such systems gives some indication of the hardware and software employed, which may be useful in this field of research. The requirements placed on such systems also indicate the expectations for CV-based solutions applied in sports [2] [3]. ...
... In the case of tennis, which is inherently very close to table tennis, input-data processing in the "Hawk-Eye" system consists of several stages: 1) 2D information from each camera is processed with CV to detect the center of the ball, 2) 3D data are computed by triangulation from all available cameras, 3) these 3D data are processed to obtain accurate tracking information about the ball's motion [5]. ...
Article
This paper presents a study of the application of computer vision systems to sporting events, namely "electronic referees". An analysis of known refereeing systems was carried out. Based on the analysis of analogues, the architecture of an "electronic referee for ping-pong" system is proposed, its hardware component is described, the system's operating algorithm and software are developed, and system testing is performed.
... The primary objective is to decrease coordinate measurement uncertainty in laser scanning systems using DC motors based on dynamic triangulation theory [12]. These systems are widely used in practical applications, including robot vision and navigation [11]- [13]. The research involves deriving theoretical concepts, studying the TVS system, estimating friction parameters, and digitally implementing the nonlinear control system. ...
... For the proposed cSORPD control (12), gains κ p , κ d , κ r , and k 0 must be defined. Values of δ = 0.01 and α = 0.5 were chosen for the nonlinear function (13) to balance performance and chattering. The maximum control input was set as u max = 24V , and the upper bound for nonlinear friction was assumed as c 0 = 6.35. ...
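The gain and saturation values quoted above can be illustrated with a minimal saturated PD law plus a smooth nonlinear term (a sketch under assumed structure; the cited cSORPD controller's exact equations (12)-(13) are not reproduced here):

```python
def saturated_pd(e, de, kp=1.0, kd=0.5, kr=0.2,
                 alpha=0.5, delta=0.01, u_max=24.0):
    """Saturated PD control with a smooth fractional-power correction.
    The term |e|^alpha * e / (|e| + delta) approximates sign(e)*|e|^alpha
    while remaining continuous near e = 0 (delta limits chattering)."""
    nonlinear = (abs(e) ** alpha) * e / (abs(e) + delta)
    u = kp * e + kd * de + kr * nonlinear
    return max(-u_max, min(u_max, u))  # clamp to the +/- 24 V drive limit
```

The clamp mirrors the stated maximum control input of 24 V; all gain values here are placeholders.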
... However, traditional methods depend on pre-welding offline process predictions and path planning, which are strictly followed during welding without any adjustments. This method is inadequate for meeting quality requirements, as traditional offline methods cannot sense real-time changes due to deformation and errors such as clamping and continuous thermal deformation during welding [5]. ...
Article
Full-text available
Large steel structures inherently have errors such as clamping and continuous thermal deformation, which complicate traditional robotic multi-layer multi-pass (MLMP) welding of medium-thick plates in ensuring quality, efficiency, and universality. To address these conditions, this paper proposes an adaptive process and path planning method for MLMP welding of V-grooves utilizing line laser displacement sensors. First, the dimensions and profile of the groove are obtained, feature points are extracted, and the robotic path and welding process for root welding are planned based on the V-groove's feature information. Then, reference process parameters are used for filling and cover welding. After each pass/layer, feature points of the V-groove are extracted and the formation of the previous pass/layer is analyzed; the algorithm proposed in this paper then adjusts the process and robotic path to ensure that, before cover welding, the average groove depth remains within a specific range. Finally, MLMP experiments were conducted on two V-grooves of different sizes. The results indicate that this method effectively fills the groove, maintaining the average remaining depth of the groove between 1 and 2.5 mm before cover welding and meeting the requirements of different V-groove sizes. The entire surface and cross-section of the weld seam are free of defects, and the fusion between the weld seam and the sidewall exceeds 1.5 mm, meeting the requirements for industrial use.
... With the continuous development of multimedia technology, virtual reality, and augmented reality technology, 3D data have become a key component of today's information multimedia systems. Sergiyenko proposed a primitive optical laser sensor and intelligent data management method for 3D machine vision, which is a very effective method for exploring any spatial sector using robots [1]. The combination of optoelectronic devices and robotic systems has played an important role in the attitude estimation, 3D reconstruction, and digitization of cultural heritage. ...
Article
Full-text available
Three-dimensional models represent the shape and appearance of real-world objects in a virtual manner, enabling users to obtain a comprehensive and accurate understanding by observing their appearance from multiple perspectives. The semantic retrieval of 3D models is closer to human understanding, but semantic annotation for describing 3D models is difficult to automate, and it is still difficult to construct an easy-to-use 3D model knowledge base. This paper proposes a method for building a 3D model knowledge base to enhance the ability to intelligently manage and reuse 3D models. The sources of 3D model knowledge are obtained from two aspects: on the one hand, constructing mapping rules between the 3D model features and semantics, and on the other hand, extraction from a common sense database. Firstly, the viewpoint orientation is established, the semantic transformation rules of different feature values are established, and the representation degree of different features is divided to describe the degree of the contour approximating the regular shape under different perspectives through classification. An automatic output model semantic description of the contour is combined with spatial orientation. Then, a 3D model visual knowledge ontology is designed from top to bottom based on the upper ontology of the machine-readable comprehensive knowledge base and the relational structure of the ConceptNet ontology. Finally, using a weighted directed graph representation method with a sparse-matrix-integrated semantic dictionary as a carrier, an entity dictionary and a relational dictionary are established, covering attribute names and attribute value data. The sparse matrix is used to record the index information of knowledge triplets to form a three-dimensional model knowledge base. The feasibility of this method is demonstrated by semantic retrieval and reasoning on the label meshes dataset and the cultural relics dataset.
... Moreover, advances in 3D vision systems offer greater potential for precise and fast detection of target regions. For example, Sergiyenko and Tyrsa [7] developed a 3D optical machine vision sensor with intelligent data management, which significantly improved robotic swarm navigation performance and enhanced 3D mapping capabilities. Similarly, Ivanov et al. [8] demonstrated the effectiveness of 3D data cloud fusion technology in improving the autonomous navigation accuracy of robotic groups in unknown terrains. ...
Article
Full-text available
During the rice harvesting process, severe occlusion and adhesion exist among multiple targets, such as rice, straw, and leaves, making it difficult to accurately distinguish between rice grains and impurities. To address the current challenges, a lightweight semantic segmentation algorithm for impurities based on an improved SegFormer network is proposed. To make full use of the extracted features, the decoder was redesigned. First, the Feature Pyramid Network (FPN) was introduced to optimize the structure, selectively fusing the high-level semantic features and low-level texture features generated by the encoder. Secondly, a Part Large Kernel Attention (Part-LKA) module was designed and introduced after feature fusion to help the model focus on key regions, simplifying the model and accelerating computation. Finally, to compensate for the lack of spatial interaction capabilities, Bottleneck Recursive Gated Convolution (B-gnConv) was introduced to achieve effective segmentation of rice grains and impurities. Compared with the original model, the improved model’s pixel accuracy (PA) and F1 score increased by 1.6% and 3.1%, respectively. This provides a valuable algorithmic reference for designing a real-time impurity rate monitoring system for rice combine harvesters.
... (1) The visual recognition system, which is based on a monocular camera and SSD algorithm in [28], [29], [30], is applied to measure the direction and distance to the target floating garbage. ...
Article
Full-text available
This paper introduces an intelligent controlled cleaning tool for picking up underwater floating garbage. An inhalation-type garbage cleaning robot is designed to improve the efficiency of garbage collection and simplify the controller design. A fuzzy-based PID control strategy is proposed for the position control of the designed cleaning robot. At first, the dynamic model of the robot is built with its parameters estimated by empirical formulas and Fluent software. Then, a fuzzy logic controller is designed to achieve self-adjusting of the PID parameters according to the position error and error change ratio of the robot. On this basis, an incremental PID control technique integrated with the PID parameters given by the fuzzy controller is presented. Furthermore, the cascade PID and fuzzy logic control algorithms for the designed cleaning robot are also developed. Finally, numerical simulations are conducted for comparison with the cascade PID control, the fuzzy logic control, and the proposed fuzzy-based PID control. The simulation results are discussed from different perspectives, showing that the adaptability, robustness, and overall control performance of the robot under the fuzzy-based incremental PID control are better than those of the fuzzy logic control and the cascade PID control.
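The incremental PID update at the core of the scheme above can be sketched as follows (the class name is an assumption; the fuzzy scheduler is represented only by the per-step gains passed in):

```python
class IncrementalPID:
    """Incremental (velocity-form) PID: each step yields a control
    increment, so gains retuned online, e.g. by a fuzzy scheduler,
    take effect immediately without integral re-initialization."""
    def __init__(self):
        self.e1 = 0.0  # error at step k-1
        self.e2 = 0.0  # error at step k-2
        self.u = 0.0   # accumulated control output

    def step(self, e, kp, ki, kd):
        du = (kp * (e - self.e1)
              + ki * e
              + kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        self.u += du
        return self.u
```

Because only the increment depends on the gains, changing (kp, ki, kd) between steps never causes a jump in the accumulated output.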
... In integrated circuit wafer processing, SCARA robots, as an indispensable part, can transport and align wafers in different processes to meet the requirements of high speed and high precision in wafer handling [12]. Meanwhile, with the rapid development of 3D measurement technology, 3D vision sensors have been applied to robotic systems to perform object grasping tasks [13]. Wang et al. [14] built an experimental platform for the vision system based on the PointNetRGPE deeplearning method, combined with environmental factors, to estimate the grasping posture of SCARA robots, thereby improving the success rate of grasping. ...
Article
Full-text available
Due to the high cost of robots, the algorithm testing cost for physical robots is high, and the construction of motion control programs is complex, with low operation fault tolerance. To address this issue, this paper proposes a low-cost, cross-platform SCARA robot digital-twin simulation system based on the concept of digital twins. This method establishes a 5D architecture based on the characteristics of different platforms, classifies data and integrates functions, and designs a data-processing layer for motion trajectory calculation and data storage for a virtual-reality robot. To address the complexity of data interaction under different cross-platform communication forms, an editable, modular, cross-platform communication system is constructed, and various control commands are encapsulated into simple programming statements for easy invocation. Experimental results showed that, based on modular communication control, users can accurately control data communication and synchronous motion between virtual and physical models using simple command statements, reducing the development cost of control algorithms. Meanwhile, the virtual-robot simulation system, as a data mapping of the real experimental platform, accurately simulated the physical robot’s operating state and spatial environment. The robot algorithms tested using the virtual simulation system can be successfully applied to real robot platforms, accurately reproducing the operating results of the virtual system.
... These conditions make it impossible to acquire images normally, or the image quality obtained is very poor. If not handled properly, they can cause problems such as missing point cloud data and excessive noise [18]. The study therefore identifies that the conveyor belts' transfer points, machine heads, and other parts are prone to longitudinal tearing. ...
... Point clouds are a widely used representation of 3-D objects, offering a rich expression of their geometric information. This versatility has led to their broad adoption across various application scenarios, including autonomous driving [24], robotics [6,26], and the metaverse [28]. In the early developing stage of point cloud understanding, there are lots of related work proposed to enhance the ability of networks for understanding point cloud [9,14,21,22,35] where the networks typically need fully-supervised training from scratch. ...
Preprint
Masked autoencoder has been widely explored in point cloud self-supervised learning, whereby the point cloud is generally divided into visible and masked parts. These methods typically include an encoder accepting visible patches (normalized) and corresponding patch centers (position) as input, with the decoder accepting the output of the encoder and the centers (position) of the masked parts to reconstruct each point in the masked patches. Then, the pre-trained encoders are used for downstream tasks. In this paper, we show a motivating empirical result that when directly feeding the centers of masked patches to the decoder without information from the encoder, it still reconstructs well. In other words, the centers of patches are important and the reconstruction objective does not necessarily rely on representations of the encoder, thus preventing the encoder from learning semantic representations. Based on this key observation, we propose a simple yet effective method, i.e., learning to Predict Centers for Point Masked AutoEncoders (PCP-MAE) which guides the model to learn to predict the significant centers and use the predicted centers to replace the directly provided centers. Specifically, we propose a Predicting Center Module (PCM) that shares parameters with the original encoder with extra cross-attention to predict centers. Our method is of high pre-training efficiency compared to other alternatives and achieves great improvement over Point-MAE, particularly outperforming it by 5.50%, 6.03%, and 5.17% on three variants of ScanObjectNN. The code will be made publicly available.
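The visible/masked split that these methods rely on can be sketched as follows (a simplified illustration; real pipelines first group points into patches with farthest-point sampling and k-NN, which is omitted here):

```python
import random

def split_patch_centers(num_patches, mask_ratio=0.6, seed=0):
    """Randomly partition patch indices into masked and visible sets,
    as done before feeding only the visible patches to the encoder in
    point-cloud masked autoencoders."""
    rng = random.Random(seed)
    indices = list(range(num_patches))
    rng.shuffle(indices)
    n_masked = round(mask_ratio * num_patches)
    masked, visible = indices[:n_masked], indices[n_masked:]
    return sorted(visible), sorted(masked)
```

The decoder then receives the centers of the masked patches, which is precisely the information the cited preprint argues should be predicted rather than given.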
... Therefore, both for separately collected and for mixed MSW, further separation is performed at Materials Recovery Facilities (MRF) and comprised of many consistent sorting operations. This section aims to define a place of machine vision [165] in contemporary waste sorting and to identify relevant tasks, challenges, working conditions, and performance requirements. For this purpose, the authors: ...
Chapter
A huge variety of Municipal Solid Wastes (MSWs), caused by global urbanization and progress in the development of packaging materials, requires advanced sorting systems. Accurate and efficient recovery of recyclables from MSW, such as glass, paper, and plastics, is crucial to the development of a circular economy. Currently used optical solutions based on near-infrared (NIR) and Visible Spectrum (VIS) sensing, enriched with machine learning methods for data analysis, do not demonstrate adequate performance and efficiency in solving detection and classification tasks. Still, the sorting problem remains a complex set of tasks ranging from hardware and sensing to data analysis and inference. Moreover, the sorting process happens at a speed of around 6 m/s, requiring perfect synchronization and high performance of the individual units as well as the entire sorting system. In this chapter, we first start with a deep analysis of requirements and challenges in the scope of MSW automated sorting. Afterward we discuss hardware and sensing technologies serving as data collection tools. We then cover the image processing topic, stressing the following areas: image fusion and processing, computer vision methods, computer vision datasets, and data augmentation.
... Sergiyenko et al. proposed the use of three-dimensional optical sensor data transmission algorithms for robot group navigation and visual sensor problems. Automated three-dimensional metrology was combined to optimize the database and improve the PP capability of robots [16]. Yue et al. ...
... Foremost among these challenges is the development of robust sensor fusion algorithms capable of seamlessly integrating data from disparate sensors while compensating for errors, biases, and uncertainties inherent in each sensor modality. Furthermore, ensuring the resilience of these systems to environmental factors such as electromagnetic interference, dynamic motion, and changing lighting conditions presents a significant hurdle [11,12]. Therefore, the state estimation problem holds great significance in the INS/CNS navigation [13]. ...
Article
Full-text available
For the degradation of the filtering performance of the INS/CNS navigation system under measurement noise uncertainty, an adaptive cubature Kalman filter (CKF) is proposed based on improved empirical mode decomposition (EMD) and a resampling-free sigma-point update framework (RSUF). The proposed algorithm innovatively integrates improved EMD and RSUF into CKF to estimate measurement noise covariance in real-time. Specifically, the improved EMD is used to reconstruct measurement noise, and the exponential decay weighting method is introduced to emphasize the use of new measurement noise while gradually discarding older data to estimate the measurement noise covariance. The estimated measurement noise covariance is then imported into RSUF to directly construct the posterior cubature points without a resampling step. Since the measurement noise covariance is updated in real-time and the prediction cubature points error is directly transformed to the posterior cubature points error, the proposed algorithm is less sensitive to the measurement noise uncertainty. The proposed algorithm is verified by simulations conducted on the INS/CNS-integrated navigation system. The experimental results indicate that the proposed algorithm achieves better performance for attitude angle.
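The exponential-decay weighting of measurement-noise statistics described in the abstract can be sketched for the scalar case (a deliberate simplification; the paper estimates full covariance matrices inside the CKF):

```python
def ew_noise_variance(residuals, beta=0.95):
    """Exponentially weighted estimate of measurement-noise variance:
    new residuals are emphasized while older data are gradually
    discarded, so the estimate tracks time-varying noise."""
    r_var = 0.0
    for k, r in enumerate(residuals):
        r_var = r * r if k == 0 else beta * r_var + (1.0 - beta) * r * r
    return r_var
```

Because of the decay factor, a recent burst of large residuals raises the estimate far more than the same burst buried in the past.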
... Eqs. (3)-(6) show the border regression process: ...
Article
Full-text available
This study aims to enhance the safety and efficiency of port navigation by reducing ship collision accidents, minimizing environmental risks, and optimizing waterways to increase port throughput. Initially, a three-dimensional map of the port’s waterway, including data on water depth, rocks, and obstacles, is generated through laser radar scanning. Visual perception technology is adopted to process and identify the data for environmental awareness. Single Shot MultiBox Detector (SSD) is utilized to position ships and obstacles, while point cloud data create a comprehensive three-dimensional map. In order to improve the optimal navigation approach of the Rapidly-Exploring Random Tree (RRT), an artificial potential field method is employed. Additionally, the collision prediction model utilizes K-Means clustering to enhance the Faster R-CNN algorithm for predicting the paths of other ships and obstacles. The results indicate that the RRT enhanced by the artificial potential field method reduces the average path length (from 500 to 430 m), average time consumption (from 30 to 22 s), and maximum collision risk (from 15 to 8%). Moreover, the accuracy, recall rate, and F1 score of the K-Means + Faster R-CNN collision prediction model reach 92%, 88%, and 90%, respectively, outperforming other models. Overall, these findings underscore the substantial advantages of the proposed enhanced algorithm in autonomous navigation and collision prediction in port waterways.
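The artificial-potential-field correction used above to improve RRT can be sketched in 2-D (the function name and the gain values are illustrative assumptions, not the paper's parameters):

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Artificial potential field: an attractive pull toward the goal
    plus a repulsive push from each obstacle closer than the influence
    radius d0; the summed force biases tree growth toward free space."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < d0:
            # Standard repulsive gradient magnitude, zero beyond d0.
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

With no obstacles the force points straight at the goal; an obstacle on the direct line sharply reduces (or reverses) the forward component, which is what steers sampled RRT extensions away from collisions.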
... A number of SLAM algorithms are applied to commercial applications, which means that mobile robots have the ability of autonomous navigation in structured environments such as offices and factories [4,5]. It is also a current research trend to adopt 3D optical sensors based on dynamic triangulation for the SLAM of a robotic swarm [6]. To deal with a group of robots for navigation, the idea of data transferring was proposed [7]. ...
Article
Full-text available
SLAM (simultaneous localization and mapping) plays a crucial role in autonomous robot navigation. A challenging aspect of visual SLAM systems is determining the 3D camera orientation of the motion trajectory. In this paper, we introduce an end-to-end network structure, InertialNet, which establishes the correlation between the image sequence and the IMU signals. Our network model is built upon inertial measurement learning and is employed to predict the camera’s general motion pose. By incorporating an optical flow substructure, InertialNet is independent of the appearance of training sets and can be adapted to new environments. It maintains stable predictions even in the presence of image blur, changes in illumination, and low-texture scenes. In our experiments, we evaluated InertialNet on the public EuRoC dataset and our dataset, demonstrating its feasibility with faster training convergence and fewer model parameters for inertial measurement prediction.
... I N RECENT decades, there have been more demands for the multiview microscopic 3-D measurement (MVMs-3DM) systems from various verticals, including printed circuit board assembly and semiconductor packaging applications [1]. The triangulation [2], [3], [4], [5] associated with a digital projector [6], [7], [8] is widely adopted instead of laser scanning [9], [10] due to its inherent advantage (e.g., full-field data acquisition, high accuracy, and high speed). With the exact projectors' coordinates retrieved by the absolute phase [11], [12], [13], the surface topography can be measured accurately [2], [14], [15]. ...
Article
This article proposes a novel calibration framework for a multiview microscopic 3-D measurement system comprising multiple tilted-cameras with the Scheimpflug condition and one vertical projector. Conventionally, the accurate and independent point correspondences between images and planar targets achieve successful calibration. However, affected by our system's large imaging magnification and anisotropic perspectivity, targets’ poses must be limited in the shallow common focus volume to ensure correspondences’ extracting accuracy of all views. This imaging constraint will lead to insufficiently independent correspondences in calibration computation, where constraints will be insufficient in parameter estimation (under-constraint problems) and bundle adjustment (falling in local optima caused by ambiguous models). To this end, our method first relies on insufficiently independent correspondences to tackle the above problems. In calibrating multiple tilted-cameras, the proposed initialization method incorporates the lens equation as an additional constraint to solve the under-constraint problem. The subsequent multicamera bundle adjustment disambiguates by stereo constraints. Moreover, for the vertical projector, another novel unconventional calibration method utilizes geometric constraints to determine its projection matrix. The experiments prove that the estimation and optimization of parameters for multiple tilted-cameras are significantly enhanced with the smallest reprojection error near 0.1 pixel. In addition, the 3-D measurement result of the standard concentric cylinders has also been applied to demonstrate the accuracy of the vertical projector calibration and the whole calibration framework.
... The measuring tool for the inspection system is a laser scanner known as the Technical Vision System, or TVS [7]. This scanner works on the principle of dynamic triangulation (see Figure 1) to measure the spatial coordinates of objects highlighted by a laser. ...
... M ULTISENSOR fusion, which could take advantage of heterogeneous sensors with complementary characteristics to overcome the drawback of a standalone sensor, has become a research hotspot in recent years [1], [2], [3]. Following this trend, the robotics, augmented reality, and selfdriving car communities proposed numerous multisensor fusion systems for localization and perception applications, ranging from camera/inertial measurement unit (IMU) systems [4] and light detection and ranging (LiDAR)/IMU systems [5] to camera/IMU/LiDAR systems [6]. ...
Article
Nowadays, multisensor fusion, which could compensate for the deficiencies of stand-alone sensors, has been widely accepted as a feasible solution for navigation and perception tasks. To achieve optimal information fusion, accurate intersensor spatiotemporal transformation is a fundamental prerequisite. However, few existing calibration methods could accurately and flexibly calibrate spatiotemporal parameters for camera/inertial measurement unit (IMU)/light detection and ranging (LiDAR) sensor suites without relying on artificial targets and manual intervention, especially for those low-cost suites that employ rolling shutter (RS) cameras. To this end, building upon our previous work (i.e., two-step LiDAR-IMU-camera (LIC)-calib), we propose a spatiotemporal calibration method for heterogenous-camera/IMU/LiDAR systems, which supports both global shutter (GS) and RS cameras, and enables intrinsic refinement. In particular, a novel initialization method is developed to provide accurate initial guesses, requiring no prior knowledge or calibration infrastructure. To further improve the calibration consistency and accuracy, the split calibration of camera/IMU and LiDAR/IMU in two-step is unified into a global nonlinear optimization problem. Both simulated and real-world experiments were conducted for quantitative evaluation, and the results demonstrate that the proposed method outperforms its predecessor and other state-of-the-art methods.
... Stereo vision is a significant subfield of computer vision. It emulates the human visual system to perceive the objective world and finds extensive applications in traditional air-only environments, such as industrial inspection [1,2], medical imaging [3,4], self-driving [5,6], smart robot [7][8][9], and virtual reality [10,11]. While stereo vision suffices to meet the requirements of measurement environments with air only, the scope of its applications continues to expand, imposing increasingly stringent demands on equipment and algorithms. ...
... Although this method is relatively straightforward to use and enhances welding efficiency, it is vulnerable to weldment deformation resulting from the high temperature. Therefore, the method of filling directly following the trajectory may not be able to guarantee the welding quality [2][3]. In addition, although traditional machine vision methods can enhance the accuracy of initial weld seam localization, they may be sensitive to ambient lighting during welding, thereby resulting in error amplification. ...
Article
Full-text available
The welding process for medium and thick plates typically involves multi-layering and multi-channeling, but its quality and reliability require further improvement. Therefore, this study introduces the convolution neural network algorithm to establish a deep learning model for weld seam recognition. Additionally, the structured light imaging method is used to accurately position the V-shaped weld groove. Meanwhile, a machine vision based multi-layer and multi-pass dynamic routing planning algorithm was also studied and designed, and a contour feature point recognition algorithm for the filling layer was developed. Thus, dynamic routing planning is achieved. It is demonstrated that the difference between the coordinates acquired by the deep learning model and the ideal region decreases steadily and reaches a minimum of (200,80). The confidence level of weld seam detection gradually increases with the adjustment of the welding robot, reaching a maximum of 98%. The confidence level of the detected feature points reaches 100%. In the meantime, the remaining height of the fusion after welding is 2.5mm. There are no negative phenomena present on the surface of the weld seam, meeting the necessary process requirements. Such discrepancies as undercut, incomplete penetration, slag inclusion, and porosity are absent. It shows that the welding technology based on machine vision has strong feasibility, effectively improves the automation level and efficiency of welding technology, and provides reliable technical support for the development of modern machine vision welding technology.
... In addition, it is important that robots understand human behavior and the characteristics of environments, using different tools such as GProxemic Navigation [7], which delimits the proxemic zone according to the characteristics of the environment, and Dialogflow, which performs natural language processing. These particularities improve HRI, as they provide the ability to analyze speech and obey voice commands, as in [8,9,10], and the capacity to respect social constraints during robots' navigation (i.e., social navigation). ...
Chapter
Full-text available
Social robotics is an increasing area that has boosted the integration of robots and people in common environments. Thus, new human-robot interaction (HRI) techniques have emerged to make robots behave in acceptable and social ways. Proxemic interaction is one of these new techniques; it dictates the interaction between people and devices based on distance, which in turn defines proxemic zones. In this sense, robots must respect the proxemic zones of the people around them while navigating in shared spaces. In this work, we propose a social-aware navigation system based on proxemics that responds to voice commands integrated with a chatbot to define path planning for a wheelchair in a crowded environment. This social navigation system is integrated into GProxemic Navigation, a system that automatically provides the robot's location and decides the proxemic zones of people that robots (an autonomous wheelchair, for this work) must not traverse during their navigation, according to the environment characteristics. With this implementation, the autonomous wheelchair can be driven along the most efficient path while respecting the social constraints of the environment.
... Optical sensors generate electrical signals of varying magnitude through different photoelectric effects, collecting information on the intensity of the light that the source casts on the sensor [17]. This principle enables the conversion of light intensity into quantitative digital data. ...
Article
This article introduces a multiview 3-D microscopic profilometry system equipped with tilted cameras adhering to the Scheimpflug condition and a vertically aligned projector. It utilizes a novel 3-D cross-ratio invariant (3D-CRI) model that offers inherent self-correction, enhanced global consistency, and computational efficiency. The inherent self-correction mitigates optical contaminations like multiple reflections by using deviations from the epipolar line for optimal candidate selection. The model is also designed as spatial lines to associate the approximate linear error distribution across depths, thereby facilitating its further correction (e.g., proposed embedded linear compensation method to address data inconsistency among various binocular projector-camera setups). Additionally, the model simplifies computational demands by directly correlating phase differences to spatial coordinates. Furthermore, a conversion method from the triangular stereo to the 3D-CRI model is developed, avoiding the complex per-pixel calibration. The experiments are conducted to verify robustness, globally consistent accuracy, and speed performance. The experimental results validate that the system is robust to multiple reflections by testing on the printed circuit boards with dense components. The global consistency is also verified by the experiments with better quantitative metrics within the whole measurement range over ten times the depth of field. In particular, the quantitative metrics of the Z -axis are halved. The speed performance shows that our method is the fastest among competing techniques.
Chapter
The chapter discusses a system for measuring the three-dimensional geometry of ice that forms on the functional elements of a wind turbine’s surface within a climatic wind tunnel. A software and hardware complex has been developed to accurately measure three-dimensional ice geometry using the phase triangulation method. The phase triangulation method has been adapted for measurements under conditions of limited volume and optical signal refraction. A calibration method for the measuring system has been developed to establish a one-to-one correspondence between the coding phase of the sine wave and the three-dimensional coordinates of the object. This accounts for the limited volume and optical signal refraction. Additionally, a software module has been created to optimize measuring system parameters, considering the ice surface’s light scattering properties, external lighting conditions, and minimizing measurement errors. The software also supports automated search for optimal photodetector parameters based on intermediate measurement data. An automated search method for the optimal frequency of phase modulation based on calibration data has been developed and implemented. Another software module has been developed for analyzing the icing of an object based on the results of measuring its three-dimensional geometry. The developed software and hardware complex is extensively used in the process of scientific research within a climatic wind tunnel to evaluate the efficacy of methods for mitigating functional element icing in wind turbines.
Chapter
Laser scanner systems are widely used nowadays for object and surface detection in several applications with different goals; some of them are fault detection, pattern recognition, object detection, and photogrammetry, among others. Cameras are commonly selected as collaborative elements for such systems; they are used to enhance the quality of the information acquired by the laser scanner. Details such as color, texture, temperature, and spatial data adjustment are some of the advantages of their application. Nevertheless, the data (commonly 3D point clouds) acquired by these systems still have to be post-processed to fit the requirements of specific applications. The proposed chapter presents a quick overview of similar scanning systems and their methodologies as the introductory section. Furthermore, it presents a laser scanner's design, features, and mathematical formalism. The following section is divided into subchapters. First, the optical aspects of the scanner's functioning are brought into the discussion. Next, an electromechanical analysis of the laser manipulator is presented, in which friction compensation is applied to achieve rotational stability and improved precision. Then, the collaborative functioning and mathematical relationship between the laser scanner and the stereo vision system are presented; this provides the scanning system with a so-called "multi-view." Further data processing of the resulting measurements, which increases the system's capability for surface characterization through texture and color identification, is then shown. The final section of the chapter lists possible applications for the proposed scanning system, states the advantages of the implementation, and draws the conclusions of the chapter.
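The dynamic triangulation that such laser scanners rely on reduces, per measured point, to solving one triangle: the laser emitter and the photoreceiver sit at the two ends of a known baseline, and the two measured angles locate the illuminated spot via the law of sines. A minimal sketch under that standard formulation (function and variable names are illustrative, not taken from the chapter):

```python
import math

def triangulate_point(base: float, beta: float, gamma: float) -> tuple[float, float]:
    """Locate a laser spot from two baseline angles.

    base  -- distance between emitter and receiver (same unit as output)
    beta  -- angle at the emitter between baseline and outgoing ray (rad)
    gamma -- angle at the receiver between baseline and returning ray (rad)
    """
    apex = math.pi - beta - gamma                        # angle at the spot
    d_receiver = base * math.sin(beta) / math.sin(apex)  # law of sines
    depth = d_receiver * math.sin(gamma)                 # distance from baseline
    return d_receiver, depth
```

With beta = gamma = 45 degrees and a 2 m baseline, the spot lies 1 m from the baseline, which is easy to verify by hand.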
Article
Full-text available
The existing segmentation-based scene text detection methods mostly need complicated post-processing, and the post-processing operation is separated from the training process, which greatly reduces the detection performance. The previous method, DBNet, successfully simplified post-processing and integrated it into a segmentation network. However, the training process of the model took a long time, 1200 epochs, and sensitivity to texts of various scales was lacking, leading to some text instances being missed. Considering the above two problems, we design the text detection Network with Binarization of Hyperbolic Tangent (HTBNet). First of all, we propose the Binarization of Hyperbolic Tangent (HTB); optimized along with it, the segmentation network can expedite the initial convergence, reducing the number of epochs from 1200 to 600. Because features of different channels in the same-scale feature map focus on the information of different regions in the image, to better represent the important features of all objects in the image, we devise the Multi-Scale Channel Attention (MSCA). Meanwhile, considering that multi-scale objects in the image cannot be simultaneously detected, we propose a novel module named Fused Module with Channel and Spatial (FMCS), which can fuse the multi-scale feature maps from channel and spatial dimensions. Finally, we adopt cross-entropy as the loss function, which measures the difference between predicted values and ground truths. The experimental results show that HTBNet, compared with lightweight models, has achieved competitive performance and speed on Total-Text (F-measure: 86.0%, FPS: 30) and MSRA-TD500 (F-measure: 87.5%, FPS: 30).
Article
Full-text available
This paper introduces an autonomous robot designed for in-pipe structural health monitoring of oil/gas pipelines. This system employs a 3D Optical Laser Scanning Technical Vision System (TVS) to continuously scan the internal surface of the pipeline. This paper elaborates on the mathematical methodology of 3D laser surface scanning based on dynamic triangulation. This paper presents the mathematical framework governing the combined kinematics of the Mobile Robot (MR) and TVS. It discusses the custom design of the MR, adapted to the use of robustized mathematics, and incorporating a laser scanner produced on a 3D printer. Both experimental and theoretical approaches are utilized to illustrate the formation of point clouds during surface scanning. This paper details the application of the simple and robust RANSAC algorithm for the preliminary processing of the measured point clouds. Furthermore, it contributes two distinct and simplified criteria for detecting defects in pipelines, specifically tailored for computer processing. In conclusion, this paper assesses the effectiveness of the proposed mathematical and physical method through experimental tests conducted under varying light conditions.
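The RANSAC step mentioned above has a compact canonical form for extracting a plane from a point cloud: repeatedly fit a plane through three random points and keep the hypothesis with the most inliers. A generic sketch (threshold, iteration count, and names are illustrative, not the paper's implementation):

```python
import random
import numpy as np

def ransac_plane(points: np.ndarray, n_iter: int = 200, tol: float = 0.01):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud with plain RANSAC."""
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        i, j, k = random.sample(range(len(points)), 3)
        p0, p1, p2 = points[i], points[j], points[k]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                    # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        dist = np.abs(points @ normal + d)  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

Subtracting the inliers of the dominant plane (e.g., the pipe wall approximated locally) leaves the residual points that defect criteria can then be applied to.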
Article
This paper proposes an enhancement of a singular spectrum analysis (SSA) technique based on Hilbert Matrix (SSA-H) for autonomous navigation systems in a harsh environment. SSA is a technique that has caught the attention of different fields, and is used to extract trends and patterns in the time series domain. In this work, SSA-H is used to transform raw optical signals into meaningful information to build machine learning (ML) models. A technical vision system (TVS) with laser scanning for depth measurements is presented. According to the implementation of the TVS outdoors, some particular issues, such as interference radiation detected by its photodiode, can affect the system’s performance in determining depth measurements. This research extracts the laser beam patterns in a real environment to create ML models to solve the problem and address some of the technical challenges in a real environment. The main contribution of this work is designing an ML framework for recognizing the laser beam of the TVS based on SSA-H. Furthermore, a comparative analysis of ML techniques to discriminate sunlight interference was studied and compared with different configurations of SSA known in the literature. According to the results, the use of SSA-H can enhance the ML models such as Ensemble Learning (EL), Feed-Forward Neural Networks (FNN), Support Vector Machines (SVM), Naive Bayes (NB) in conjunction with autocorrelation function coefficients (ACC) as features.
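For reference, the SSA core that such pipelines build on is short: embed the series in a Hankel trajectory matrix, decompose it by SVD, and reconstruct selected components by diagonal averaging. This is plain SSA, not the Hilbert-matrix variant (SSA-H) or the ML stage the article describes; names are illustrative:

```python
import numpy as np

def ssa_components(x: np.ndarray, window: int, n_comp: int) -> np.ndarray:
    """Reconstruct the leading n_comp SSA components of series x."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the series.
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    # Rank-n_comp approximation of the trajectory matrix.
    approx = (u[:, :n_comp] * s[:n_comp]) @ vt[:n_comp]
    # Diagonal (anti-diagonal) averaging back to a series.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for col in range(k):
        recon[col:col + window] += approx[:, col]
        counts[col:col + window] += 1
    return recon / counts
```

Keeping only the leading components acts as the trend/pattern extractor; the discarded components carry the noise-like residual that interference-discrimination features can be computed from.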
Article
The high precision measurement of binocular camera depends on the calibration accuracy. Mainstream methods focus on the calibration of intrinsic parameters and the pose relationship of two cameras in sequence. However, the transmission of error will decrease the calibration quality. It is still challenging especially in the case of an unknown imaging model due to the intervention of baffle and multimedia refraction. In this article, a general calibration method for a binocular vision system with an unknown imaging model is proposed. Based on the one-to-one mapping between 4-D image coordinates and 3-D Cartesian coordinates of spatial points, the intersection line of two calibration planes is extracted. All calibration planes are organized in the form of a three-plane group, and in each group, the poses are solved by the intersection lines. These poses are further optimized to acquire accurate 3-D coordinates of points in calibration planes. Combined with the correspondences of image pixels and points in a calibration plane, the line-plane intersection point between the projection line and this plane is calculated. Finally, projection lines are fitted based on corresponding line-plane intersection points. By modeling the relationship between image pixels and object-end projection lines, the uncertainty of the imaging model is solved. The proposed method is universal and stable for different imaging models including refraction, and the results of simulation and experiment show its effectiveness.
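The line-plane intersection at the heart of the calibration described above is elementary once a projection line and a calibration-plane pose are available; a generic geometric sketch (not the paper's pipeline, names illustrative):

```python
import numpy as np

def line_plane_intersection(p0, d, q0, n):
    """Intersect a line (point p0, direction d) with the plane through
    q0 that has normal n; returns the 3-D intersection point."""
    p0, d, q0, n = map(np.asarray, (p0, d, q0, n))
    denom = float(n.dot(d))
    if abs(denom) < 1e-12:
        raise ValueError("line is parallel to the plane")
    t = float(n.dot(q0 - p0)) / denom
    return p0 + t * d
```

Collecting one such intersection point per calibration plane, for a fixed image pixel, yields the samples through which that pixel's object-side projection line is fitted.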
Article
Full-text available
Machine vision, an interdisciplinary field that aims to replicate human visual perception in computers, has experienced rapid progress and significant contributions. This paper traces the origins of machine vision, from early image processing algorithms to its convergence with computer science, mathematics, and robotics, resulting in a distinct branch of artificial intelligence. The integration of machine learning techniques, particularly deep learning, has driven its growth and adoption in everyday devices. This study focuses on the objectives of computer vision systems: replicating human visual capabilities including recognition, comprehension, and interpretation. Notably, image classification, object detection, and image segmentation are crucial tasks requiring robust mathematical foundations. Despite the advancements, challenges persist, such as clarifying terminology related to artificial intelligence, machine learning, and deep learning. Precise definitions and interpretations are vital for establishing a solid research foundation. The evolution of machine vision reflects an ambitious journey to emulate human visual perception. Interdisciplinary collaboration and the integration of deep learning techniques have propelled remarkable advancements in emulating human behavior and perception. Through this research, the field of machine vision continues to shape the future of computer systems and artificial intelligence applications.
Chapter
Social robotics is a growing field that aims to enhance the interaction between robots and humans in everyday settings. To achieve this goal, various new techniques for human-robot interaction (HRI) have emerged. One such technique is proxemic interaction, which governs how people and robots interact based on their distance from each other, leading to the definition of proxemic zones. In our work, we present a socially-aware navigation system built on proxemic principles. This system responds to voice commands and incorporates a Chatbot to determine the robot’s path within a crowded environment. This innovative social navigation system is seamlessly integrated into GProxemic Navigation, a system that not only provides the robot’s location but also intelligently identifies the proxemic zones that the robot should avoid while navigating. These proxemic zones are determined based on the characteristics of the environment. To showcase the functionality and suitability of our proposed proxemic navigation system, we have implemented it in an autonomous Pepper Robot. This implementation allows the Pepper Robot to navigate efficiently while respecting the social constraints imposed by the environment, enhancing the robot’s ability to coexist harmoniously with people in shared spaces.
Article
Full-text available
As a biological feature with strong spatio-temporal correlation, the current difficulty of gait recognition lies in the interference of covariates (viewpoint, clothing, etc.) with feature extraction. In order to weaken the influence of extrinsic variable changes, we propose an interval frame sampling method to capture more information about joint dynamic changes, and an Omni-Domain Feature Extraction Network. The Omni-Domain Feature Extraction Network consists of three main modules: (1) Temporal-Sensitive Feature Extractor: injects key gait temporal information into shallow spatial features to improve spatio-temporal correlation. (2) Dynamic Motion Capture: extracts temporal features of different motions and assigns weights adaptively. (3) Omni-Domain Feature Balance Module: balances fine-grained spatio-temporal features, highlighting decisive spatio-temporal features. Extensive experiments were conducted on two commonly used public gait datasets, showing that our method has good performance and generalization ability. On CASIA-B, we achieved an average rank-1 accuracy of 94.2% under three walking conditions. On OU-MVLP, we achieved a rank-1 accuracy of 90.5%.
Article
Full-text available
In this study, we introduce a novel design for a three-dimensional (3D) controller, which incorporates the omni-purpose stretchable strain sensor (OPSS sensor). This sensor exhibits both remarkable sensitivity, with a gauge factor of approximately 30, and an extensive working range, accommodating strain up to 150%, thereby enabling accurate 3D motion sensing. The 3D controller is structured such that its triaxial motion can be discerned independently along the X, Y, and Z axes by quantifying the deformation of the controller through multiple OPSS sensors affixed to its surface. To ensure precise and real-time 3D motion sensing, a machine learning-based data analysis technique was implemented for the effective interpretation of the multiple sensor signals. The outcomes reveal that the resistance-based sensors successfully and accurately track the 3D controller’s motion. We believe that this innovative design holds the potential to augment the performance of 3D motion sensing devices across a diverse range of applications, encompassing gaming, virtual reality, and robotics.
Article
Full-text available
The design of a swarm optimization-based fractional control for engineering application is an active research topic in the optimization analysis. This work offers the analysis, design, and simulation of a new neural network- (NN) based nonlinear fractional control structure. With suitable arrangements of the hidden layer neurons using nonlinear and linear activation functions in the hidden and output layers, respectively, and with appropriate connection weights between different hidden layer neurons, a new class of nonlinear neural fractional-order proportional integral derivative (NNFOPID) controller is proposed and designed. It is obtained by approximating the fractional derivative and integral actions of the FOPID controller and applied to the motion control of nonholonomic differential drive mobile robot (DDMR). The proposed NNFOPID controller’s parameters consist of derivative, integral, and proportional gains in addition to fractional integral and fractional derivative orders. The tuning of these parameters makes the design of such a controller much more difficult than the classical PID one. To tackle this problem, a new swarm optimization algorithm, namely, MAPSO-EFFO algorithm, has been proposed by hybridization of the modified adaptive particle swarm optimization (MAPSO) and the enhanced fruit fly optimization (EFFO) to tune the parameters of the NNFOPID controller. Firstly, we developed a modified adaptive particle swarm optimization (MAPSO) algorithm by adding an initial run phase with a massive number of particles. Secondly, the conventional fruit fly optimization (FFO) algorithm has been modified by increasing the randomness in the initialization values of the algorithm to cover wider searching space and then implementing a variable searching radius during the update phase by starting with a large radius which decreases gradually during the searching phase. 
The tuning of the parameters of the proposed NNFOPID controller achieved a mean square error (MSE) of 0.000059, whereas the MSE of the nonlinear neural PID (NNPID) system is 0.00079. The NNFOPID controller also decreased the control signals that drive the DDMR motors by approximately 45 percent compared to the NNPID, and thus reduced energy consumption in circular trajectories. The numerical simulations revealed the excellent performance of the designed NNFOPID controller by comparing it with nonlinear neural (NNPID) controllers on the trajectory tracking of the DDMR over different trajectories as study cases.

1. Introduction

Mobile robots are platforms with huge versatility within their environment; they are not limited to one location because they can move autonomously within their surroundings. In other words, they have the capability of implementing tasks without assistance from external operators [1]. Mobile robots are unique in moving freely in a predefined workplace to accomplish the preferred objectives. This mobility makes mobile robots appropriate for vast application fields in unstructured and structured surroundings. Ground mobile robots can be categorized into wheeled mobile robots (WMRs) and legged mobile robots (LMRs). The WMRs are prevalent because they are tailored to particular applications with reasonably small mechanical complexity and power drain [2]. Within the last decades, many different control structures were introduced into industrial societies to handle the restrictions of the classical controllers. The PID controller, which has dominated industrial practice, has been extended using the concept of differentiators and integrators of fractional power.
It was shown that the further degrees of freedom provided by differentiators and integrators of fractional power offer a degree of flexibility and performance that would otherwise be hard (even impossible) to obtain with conventional PID controllers [3, 4]. The fractional-order PID (FOPID) controllers are the generalization of the widely applicable PID controllers; in recent years, they have drawn much attention from both academia and industry [3–5]. Fractional controllers are less sensitive to parameter changes in a controlled system. A fractional control unit can easily achieve the iso-damping property. On the other hand, incorporating an integral action in the feedback loop has the advantage of eliminating steady-state errors, at the expense of reducing the relative stability of the system. It can be concluded that by designing more general control laws of the form (1/sⁿ) and (sⁿ), n ∈ R⁺, and by combining these control actions, a feedback system with a more favorable trade-off between the undesirable and constructive consequences of the above scenario can be attained. Tuning a fractional PID controller is difficult, as five parameters have to be tuned, two more than for a traditional PID controller [3–5]. Some methods were proposed for the proper choice of the PID controller's parameter values. The Ziegler–Nichols tuning strategy was introduced in 1942 for regulating the PID controller coefficients; this tuning technique is applicable if the system model is first order plus dead time. In recent years, optimization methods which differ fundamentally from classical optimization have been invented. They depend on specific properties and behaviors of organic flocks of birds and other nature-inspired and neurobiological systems.
These metaheuristic procedures have been developed over the last decade and are becoming common methods for solving numerical optimization and intricate industrial case studies. Particle swarm optimization (PSO) is a metaheuristic optimization method inspired by the motion and collective intelligence of bird flocking or fish schooling. Kennedy and Eberhart initially suggested the particle swarm optimization (PSO) technique in 1995 [6]. Advantages of PSO are as follows: (1) it does not need the derivative of the cost function, (2) it can be parallelized, and (3) it has fast convergence behavior. On the other hand, the fruit fly optimization (FFO) swarm technique is one of the state-of-the-art evolutionary computation techniques; it is based on the foraging behavior of fruit flies and was proposed by Pan [7]. The olfactory organ of a fruit fly can collect different smells from the air and even locate the source of food from a distance of 40 km. Subsequently, the fruit flies travel toward the source of the food and use their acute visual system to locate the food destination (minimum or maximum of the function), where their companions form a swarm and then travel in that direction. The FFO algorithm is an attractive optimization algorithm; it has numerous benefits such as the speed with which it acquires solutions, the simplicity of its structure, and ease of implementation. Thus, FFO has been effectively applied to a diverse class of applications [7–9]. However, the FFO algorithm suffers from some shortcomings. Firstly, its searching policy is inadequate: the step that yields new solutions uses only random perturbations of the previous solutions. Moreover, the FFO algorithm has weak exploration ability and low convergence precision, and it struggles to jump out of local minima. Finally, the candidate solutions are not generated uniformly over the domain.
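For orientation, the canonical PSO update described above fits in a few lines: each particle's velocity mixes inertia, attraction to its personal best, and attraction to the swarm's global best. A minimal sketch of plain PSO, not the MAPSO-EFFO hybrid; parameter values are illustrative defaults:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, n_iter=100, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f over [bounds]^dim with plain PSO.
    w: inertia weight; c1, c2: cognitive and social coefficients."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                  # personal bests
    pbest_val = np.array([f(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()                # global best
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

The modifications discussed in the text (an oversized initial population, adaptive coefficients, a shrinking search radius) all graft onto this loop without changing its structure.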
On the other hand, PSO experiences premature convergence, a common phenomenon among evolutionary methods in very sophisticated applications such as path planning and motion control of mobile robots. Also, it relies on user experience to find the optimum values of some parameters, like the inertia weight and the social and cognitive coefficients. Moreover, standard swarm optimization algorithms do not find the optimum solutions in a reasonable time [9]. Therefore, the structure of the FFO and PSO algorithms requires further improvement to attain optimum solutions to real-world applications. The motivation for the hybridization of the MAPSO and EFFO algorithms is to combine their beneficial features and to run the two optimization algorithms sequentially over the course of the process. Moreover, the hybridization of the MAPSO and EFFO algorithms overcomes the limitations of the individual algorithms mentioned above. This hybridization is accomplished as described later in this paper. Many researchers have studied the motion control problem of the DDMR under nonholonomic constraints, and various kinds of controllers have been demonstrated in the literature for mobile robots to track specific trajectories. Trajectory tracking of wheeled mobile robots using hybrid visual servoing with onboard vision systems is described in [10]. In [11], the authors addressed the output feedback trajectory tracking problem for a nonholonomic wheeled mobile robot in the presence of parameter uncertainty and exogenous disturbances, and without velocity measurements, using fuzzy logic techniques. The work in [12] focused on the localization, kinematics, and closed-loop motion control of a DDMR.
The authors of [13] developed an online nonlinear optimal tracking control method for unmanned ground systems (UGSs) by first establishing the nonlinear tracking error model; the tracking control problem for the UGS was then converted to a continuous nonlinear optimal control problem with the help of a symplectic pseudospectral method based on the third kind of generating function. In [14], the authors proposed a kinematic-based neural network controller for nonlinear control of the DDMR with nonholonomic constraints. In [15], iterative learning control over a wireless network is proposed for a class of unicycle-type mobile robot systems, and the study includes the channel noise effect and a robustness analysis of the proposed system. In [16], a sliding mode-based asymptotically stabilizing control law has been proposed for a mobile robot. A dynamic prediction-based model predictive control method is offered in [17] for wheeled mobile robots, taking into account tire sideslip. Fuzzy-based controllers for autonomous mobile robots have been discussed in [18, 19], where the work in [19] dealt with unstructured environments. The work in [20] proposed a disturbance observer based on biologically inspired integral sliding mode control for trajectory tracking of mobile robots. A time-optimal velocity tracking controller for the DDMR is presented in [21]. The authors in [22, 23] investigated model predictive control (MPC) for differential drive mobile robots. Backstepping nonlinear control has been investigated on the DDMR in [24]. Recently, researchers have been applying a new control paradigm named active disturbance rejection control (ADRC) [25–30] to a wide range of applications [31, 32] and particularly to the DDMR [33]. The literature proposes many algorithms for tuning the parameters of the FOPID controller in different applications: [34, 35] used genetic algorithms and [36–39] utilized the PSO algorithm.
Others, like Rajasekhar et al. [40], applied the gravitational search optimization technique based on Cauchy and Gaussian mutation, and El-Khazali [41] exploited the artificial bee colony algorithm. Frequency-domain methods for the design of FOPID controllers can be found in [42]. Finally, other algorithms like GA can be used to tune the FOPID controller and more complex controllers like [43]. The contributions of this research work are twofold:

(1) Development of a MAPSO-EFFO algorithm: a modified adaptive particle swarm optimization (MAPSO) algorithm is developed by adding an initial run phase with a massive number of particles. At the end of this initial run, a smaller group of the fittest particles is selected to continue with an adaptive PSO (APSO) algorithm. Moreover, the conventional fruit fly optimization (FFO) algorithm is modified by increasing the randomness of the algorithm's initialization values to cover a wider search space, and by implementing a variable search radius during the update phase, starting with a large radius that decreases gradually during the search. Finally, a hybridized MAPSO-EFFO algorithm is adopted by the serial blending of the MAPSO algorithm with the EFFO one, i.e., the input to the EFFO algorithm is the output of the MAPSO. The hybridized MAPSO-EFFO technique is used to evaluate the parameters of the NNFOPID controller.

(2) A new nonlinear fractional control structure: a new NNFOPID controller is proposed which employs the structure of neural networks (NNs). With suitable arrangements of the hidden-layer neurons, using sigmoid nonlinear activation functions in the hidden layer and linear functions in the output layer, and with appropriate connection weights between neurons in different layers, a new class of nonlinear neural FOPID controller is obtained by approximating the fractional derivative and integral actions of the FOPID controller.
The outputs of the neural networks are the control actions used to drive the motors of the DDMR. The paper is organized in the following structure. Section 2 gives the motivation and problem statement. Section 3 presents the fractional calculus and the theoretical background of the fractional PID controllers. The kinematic model of the DDMR is introduced in Section 4. The proposed nonlinear neural conventional and fractional PID controllers for the trajectory tracking of the DDMR are explained in Section 5. Section 6 discusses the results and simulations of the designed motion controllers for the DDMR based on different trajectories. Finally, the conclusions are given in Section 7.

2. Problem Statement

Given a nonholonomic differential drive mobile robot (DDMR) following a particular path, tracking error occurs because of many factors like noise, disturbances, slippage, and sensor measurement errors due to both interior and exterior causes. These issues also make it difficult for the mobile robot to turn left or right as commanded or by using various sensors. Therefore, the DDMR kinematic model has been employed in this paper to synthesize neural network fractional-order PID (NNFOPID) controllers that regulate its speed so that it tracks the required path in the plane as fast as possible with minimum mean square tracking error; this is called the trajectory tracking problem. Three FOPID controllers will be designed to control the position (x and y) of the DDMR in the 2D plane and its orientation θ. Moreover, the aim of the proposed tracking FOPID controllers is to reduce the energy consumed by the left and right motors of the DDMR. Given a reference path that needs to be followed by the DDMR, consisting of a set of positions in the 2D plane together with the orientation, i.e., xr, yr, and θr, the actual path consists of a set of positions x, y, and θ.
Then, it is required to design an NNFOPID controller generating the control velocities of the kinematic model of the DDMR such that the mean square error between x_r and x, y_r and y, and θ_r and θ is minimal, with minimal peaks of the left and right wheel velocities of the DDMR. In contrast to the FOPID controller, the NNFOPID controller is better able to capture the nonlinearity of the DDMR model, owing to the neural network structure employed in the design, where nonlinear activation functions are used in a hidden layer with adaptable parameters (the NNFOPID controller parameters).
3. Fractional Control Analysis
Fractional calculus is the branch of mathematical analysis that allows the order of the differentiation and integration operators to be a real or complex number. The fractional-order operator, generally written aD_t^α, is called the differintegral operator. The sign of α determines whether the differintegral acts as an identity operator (α = 0), an integrator (α < 0), or a differentiator (α > 0). The fractional integral and derivative in the Grünwald–Letnikov (GL) definition follow the same procedure as multi-derivative integer calculus. The general GL definition is stated as [42, 44]
aD_t^α f(t) = lim_{h→0} h^(−α) Σ_{j=0}^{[(t−a)/h]} (−1)^j (α choose j) f(t − jh),    (1)
where [·] refers to the integer part, a and b are the start and final limit values, and h is the sampling time. The use of the GL definition (1) in the computation of the output response of a fractional-order system can be illustrated as follows. Consider a fractional system expressed by a fractional-order linear constant-coefficient differential equation [44]
a_n D^(α_n) y(t) + … + a_0 D^(α_0) y(t) = b_m D^(β_m) u(t) + … + b_0 D^(β_0) u(t),    (2)
where the a_k and b_k are constants and the α_k and β_k are real numbers. Without loss of generality, the orders can be taken as α_n > α_{n−1} > … > α_0 and β_m > β_{m−1} > … > β_0.
Consider (2) with its right-hand side equal to u(t), i.e., a_n D^(α_n) y(t) + … + a_0 D^(α_0) y(t) = u(t) (3). Recalling the GL definition (1) and substituting (1) into (3), the numerical solution of (3) can be evaluated [44], and it can be computed in a recursive manner. Now reconsider (2) with its right-hand side equal to ǔ(t): ǔ(t) may first be calculated using (1), and then the output response due to ǔ(t) is computed from the solution of (6). The FOPID controller increases the efficiency and the possibility of better system performance because of its five parameters. The control law of the FOPID controller, denoted PI^λD^α, is
u(t) = K_P e(t) + K_I D^(−λ) e(t) + K_D D^(α) e(t).    (8)
Taking the Laplace transform of (8) yields the controller transfer function
C(s) = K_P + K_I / s^λ + K_D s^α,    (9)
where K_D, K_P, and K_I are the derivative, proportional, and integral control parameters, respectively, λ is the order of the fractional integral, and α is the fractional derivative order. It is obvious that the FOPID controller has the three standard coefficients K_P, K_I, and K_D in addition to the parameters λ and α, the fractional powers of the integral and derivative actions, respectively. The values of α and λ are nonintegers, with the restriction of being positive real numbers. Discretization methods for continuous-time fractional operators have been widely studied [45, 46]. The fundamental principle in discretizing a continuous fractional-order operator s^α (α ∈ R) is to define it by a so-called generating function s = ω(z⁻¹). Examples of such transformations are the Euler, Tustin, and Simpson transformations; a more recent transformation formula is a weighted interpolation between Euler and Tustin [45]. Usually, the aforementioned transformation schemes lead to a non-rational function of z. To get a rational polynomial, one may compute the power series expansion (PSE) of s = ω(z⁻¹) and then truncate the z-polynomial (in the form of a finite impulse response (FIR) filter) to obtain the final approximation.
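The GL definition above lends itself directly to numerical evaluation: truncating the limit at step size h gives a weighted sum over past samples, with the binomial weights computed recursively. A short sketch (the recursion w_j = w_{j−1}(1 − (α+1)/j) is the standard GL weight formula, not code from the paper):

```python
def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of D^alpha f at time t, lower
    terminal a = 0:  D^alpha f(t) ~= h**(-alpha) * sum_j w_j f(t - j*h),
    with w_0 = 1 and w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    n = int(t / h)
    w = 1.0
    acc = f(t)                      # j = 0 term
    for j in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / j
        acc += w * f(t - j * h)
    return acc / h ** alpha

# alpha = 1 recovers the ordinary first derivative: d/dt t^2 = 2t.
d1 = gl_fractional_derivative(lambda t: t * t, 1.0, 1.0)
# alpha = 0.5 gives the half-derivative, interpolating between f and f'.
dh = gl_fractional_derivative(lambda t: t * t, 1.0, 0.5)
```

The same weighted sum, applied to the error signal e(t) with orders −λ and α, is what realizes the fractional integral and derivative actions of the PI^λD^α controller numerically.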
The Tustin method is applied since it is more accurate than other transforms such as the backward and forward difference:
s = ω(z⁻¹) = (2/T) · (1 − z⁻¹) / (1 + z⁻¹),    (10)
where T is the sampling period. Based on the above analysis, the nonlinear control law that drives the wheels of the DDMR will be derived in detail in the next sections. Before that, a concise review of DDMR modeling is given.
4. Kinematic Modeling of the Nonholonomic DDMR
The position of the DDMR in the world coordinate frame {o, x, y} is illustrated in Figure 1. As shown in the figure, the DDMR consists of a castor wheel at the head of the cart and two driving wheels mounted on one axle at the back. The motion and orientation of the DDMR are produced by two DC motors, which actuate the right and left wheels. Table 1 lists the parameters used in the derivation of the DDMR kinematics.
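The standard differential-drive kinematics referred to here map the two wheel speeds to body velocities and then to pose rates; a minimal Euler-integration sketch (wheel radius r, axle length L, and the function name are illustrative, not the paper's notation):

```python
import math

def ddmr_step(x, y, theta, w_r, w_l, r=0.05, L=0.30, dt=0.01):
    """One Euler step of the standard DDMR kinematic model:
    v     = r * (w_r + w_l) / 2   (linear velocity)
    omega = r * (w_r - w_l) / L   (angular velocity)
    x' = v cos(theta), y' = v sin(theta), theta' = omega."""
    v = r * (w_r + w_l) / 2.0
    omega = r * (w_r - w_l) / L
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds -> straight-line motion along the current heading.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = ddmr_step(x, y, th, 10.0, 10.0)
```

With r = 0.05 m and both wheels at 10 rad/s, v = 0.5 m/s, so after 1 s the robot has advanced 0.5 m along x with unchanged heading. The tracking controllers act on exactly these wheel-speed inputs.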
Article
Full-text available
Planning an optimal path for a mobile robot is a complicated problem, as it allows the mobile robot to navigate autonomously by following the safest and shortest path between the starting and goal points. The present work deals with the design of intelligent path planning algorithms for a mobile robot in static and dynamic environments based on swarm intelligence optimization. A modification based on the age of the ant is introduced to standard ant colony optimization, called modified aging ant colony optimization (AACO). The AACO was implemented in association with grid-based modeling of the static and dynamic environments to solve the path planning problem. The simulations were run in the MATLAB environment to test the validity of the proposed algorithms. Simulations showed that the proposed path planning algorithms achieve superior performance by finding the shortest collision-free path under various static and dynamic scenarios. Furthermore, the superiority of the proposed algorithms was proved through comparisons with other traditional path planning algorithms in different static environments.
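The grid-based ant colony idea can be sketched as a toy planner: ants walk a grid toward the goal, successful paths deposit pheromone, and pheromone biases later ants. This is a generic ACO illustration under simplifying assumptions (monotone right/down moves, no aging mechanism, invented name `aco_grid`), not the AACO of the paper:

```python
import random

def aco_grid(grid, n_ants=20, n_iter=30, rho=0.3, seed=7):
    """Toy ant colony path planner on an n x n grid, moving only right
    or down.  grid[r][c] == 1 marks an obstacle; tau holds per-edge
    pheromone, evaporated by rho and reinforced by 1/len(path)."""
    rng = random.Random(seed)
    n = len(grid)
    tau = {}                            # pheromone per (cell, next_cell) edge
    best = None
    for _ in range(n_iter):
        paths = []
        for _ in range(n_ants):
            cell, path = (0, 0), [(0, 0)]
            while cell != (n - 1, n - 1):
                r, c = cell
                moves = [(r + dr, c + dc) for dr, dc in ((0, 1), (1, 0))
                         if r + dr < n and c + dc < n and grid[r + dr][c + dc] == 0]
                if not moves:
                    path = None         # dead end: this ant fails
                    break
                weights = [tau.get((cell, m), 1.0) for m in moves]
                cell = rng.choices(moves, weights=weights)[0]
                path.append(cell)
            if path:
                paths.append(path)
                if best is None or len(path) < len(best):
                    best = path
        for k in tau:                   # evaporation
            tau[k] *= (1.0 - rho)
        for path in paths:              # deposit on successful paths
            for a, b in zip(path, path[1:]):
                tau[(a, b)] = tau.get((a, b), 1.0) + 1.0 / len(path)
    return best

grid = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
path = aco_grid(grid)
```

The AACO modification would additionally weight an ant's deposit by its age; the evaporation/deposit cycle shown here is the common core.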
Article
Full-text available
The main aim of this paper is to solve a path planning problem for an autonomous mobile robot in static and dynamic environments. The problem is solved by determining the collision-free path that satisfies the chosen criteria for shortest distance and path smoothness. The proposed path planning algorithm mimics the real world by adding the actual size of the mobile robot to that of the obstacles and formulating the problem as a moving point in the free-space. The proposed algorithm consists of three modules. The first module forms an optimized path by conducting a hybridized Particle Swarm Optimization-Modified Frequency Bat (PSO-MFB) algorithm that minimizes distance and follows path smoothness criteria. The second module detects any infeasible points generated by the proposed hybrid PSO-MFB Algorithm by a novel Local Search (LS) algorithm integrated with the hybrid PSO-MFB algorithm to be converted into feasible solutions. The third module features obstacle detection and avoidance (ODA), which is triggered when the mobile robot detects obstacles within its sensing region, allowing it to avoid collision with obstacles. The simulation results indicate that this method generates an optimal feasible path even in complex dynamic environments and thus overcomes the shortcomings of conventional approaches such as grid methods. Moreover, compared to recent path planning techniques, simulation results show that the proposed hybrid PSO-MFB algorithm is highly competitive in terms of path optimality.
Article
Full-text available
Pose estimation is a critical problem in autonomous vehicle navigation, especially in circumstances where sensor failure or attacks exist. In this paper, a filter-based secure dynamic pose estimation approach is proposed such that vehicle pose can be resilient under the possible sensor attacks. Our proposed estimator coincides with conventional Kalman filter when all sensors on autonomous vehicles are benign. If less than half of the measurement states are compromised by randomly occurring deception attacks, it still gives stable estimates of the pose states, i.e., an upper bound for the estimation error covariance is guaranteed. Pose estimation results with single and multiple attacks on the testing route validate the effectiveness and robustness of the proposed approach.
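The benign-sensor baseline that the secure estimator reduces to is the conventional Kalman filter; a minimal scalar version is sketched below. The constant-state model, the noise covariances, and the simulated measurements are all illustrative assumptions, not the paper's vehicle model:

```python
import random

def kalman_1d(z_seq, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant state: predict by
    inflating the variance with process noise q, then correct with each
    measurement z using the gain k = p / (p + r)."""
    x, p = x0, p0
    for z in z_seq:
        p += q                      # predict step
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # measurement update
        p *= (1.0 - k)
    return x

rng = random.Random(0)
true_pose = 3.0                     # hypothetical constant pose component
zs = [true_pose + rng.gauss(0.0, 0.5) for _ in range(200)]
est = kalman_1d(zs)
```

The secure estimator described above departs from this baseline only when a measurement is judged compromised, which is what keeps the error covariance bounded under attack.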
Article
Full-text available
Determining unit position is very important for indoor autonomous aerial robots. In prior research, we used externally installed cameras to detect natural unit features to reduce the burden of measuring position, focusing on the elliptical trajectory of the rotating rotors while the unit was in flight as the natural feature of interest. Ellipse detection within images allows calculation of unit position. An outstanding problem is that ellipse detection takes a considerable amount of time, and in some environments it is difficult to distinguish between the rotor and the background. In this study, we investigate methods for addressing these problems, proposing a novel algorithm for fast ellipse detection. Furthermore, we record and visualize change in the rotating rotor pattern over time to enable detection against previously problematic backgrounds. For verification, we fly a general-purpose unit in a variety of environments and measure unit position. The results show that the proposed method reduces processing times for ellipse detection and that position can be measured without depending on the composition of flight environment, and thus that unit position detection is improved through the use of infrastructure cameras.
Article
Full-text available
This paper presents a meta-heuristic swarm algorithm, inspired by the natural activities of actual ants, to find the shortest and safest path for a mobile robot of nonzero size; further modifications account for the actual size of the mobile robot. Simulation results, carried out using MATLAB, show that the suggested algorithm outperforms the standard ACO algorithm on the same problem under the same environmental conditions by providing the shortest path in multiple test environments.
Article
Full-text available
The field of autonomous mobile robotics has recently gained many researchers' interest. Due to the specific needs of various mobile robot applications, especially in navigation, designing a real-time obstacle avoidance and path following robot system has become the backbone of controlling robots in unknown environments. Therefore, an efficient collision avoidance and path following methodology is needed to develop an intelligent and effective autonomous mobile robot system. This paper introduces a new technique for line following and collision avoidance in mobile robotic systems. The proposed technique relies on the use of low-cost infrared sensors and involves a reasonable level of calculation, so that it can be easily used in real-time control applications. The simulation setup is run over multiple scenarios to show the ability of the robot to follow a path, detect obstacles, and navigate around them to avoid collision. It also shows that the robot successfully follows very congested curves and avoids any obstacle that emerges on its path. The Webots simulator was used to validate the effectiveness of the proposed technique.
Article
Full-text available
Many laser scanners depend on their mechanical construction to guarantee their measurement accuracy; however, current computational technologies allow us to improve these measurements by mathematical methods implemented in neural networks. In this article we introduce the current laser scanner technologies, give a description of our 3D laser scanner, and adjust its measurement error by a previously trained feed-forward back-propagation (FFBP) neural network with a Widrow-Hoff weight/bias learning function. A comparative analysis with other learning functions, such as the Kohonen algorithm and the gradient descent with momentum algorithm, is presented. Finally, computational simulations are conducted to verify the performance and method uncertainty of the proposed system.
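The Widrow-Hoff (LMS/delta) rule mentioned above updates a neuron's weight and bias in proportion to the output error. A minimal single-neuron sketch, trained on hypothetical calibration pairs (raw reading vs. reference value); the linear target mapping and all names are invented for illustration:

```python
def widrow_hoff_train(samples, lr=0.05, epochs=200):
    """Single linear neuron trained with the Widrow-Hoff rule:
    w <- w + lr * e * x,  b <- b + lr * e,  where e = target - output."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = w * x + b
            e = target - y
            w += lr * e * x
            b += lr * e
    return w, b

# Hypothetical calibration data: raw scanner reading -> reference distance,
# here generated from the (assumed) mapping y = 2x + 1.
pairs = [(x / 10.0, 2.0 * (x / 10.0) + 1.0) for x in range(11)]
w, b = widrow_hoff_train(pairs)
```

In the article's setting this rule drives the output-layer weights of the FFBP network that corrects the scanner's measurement error; the single-neuron case shows the update in isolation.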
Article
Full-text available
ToF cameras are now a mature technology that is widely being adopted to provide sensory input to robotic applications. Depending on the nature of the objects to be perceived and the viewing distance, we distinguish two groups of applications: those requiring to capture the whole scene and those centered on an object. It will be demonstrated that it is in this last group of applications, in which the robot has to locate and possibly manipulate an object, where the distinctive characteristics of ToF cameras can be better exploited. After presenting the physical sensor features and the calibration requirements of such cameras, we review some representative works highlighting for each one which of the distinctive ToF characteristics have been more essential. Even if at low resolution, the acquisition of 3D images at frame-rate is one of the most important features, as it enables quick background/foreground segmentation. A common use is in combination with classical color cameras. We present three developed applications, using a mobile robot and a robotic arm, to exemplify with real images some of the stated advantages.
Article
Full-text available
Autonomous micro helicopters are about to play a major role in tasks like search and rescue, environment monitoring, security surveillance, inspection, etc. If they are further realized in small scale, they can also be used in narrow outdoor and indoor environments and represent only a limited risk for people. However, for such operations, navigating based on GPS information only is not sufficient. Fully autonomous operation in cities or other dense environments requires micro helicopters to fly at low altitudes---where GPS signals are often shadowed---or indoors, and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges on all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The SFLY project was an EU-funded project with the goal to create a swarm of vision-controlled micro aerial vehicles (MAVs) capable of autonomous navigation, 3D mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems but can fly all by themselves using only a single onboard camera and an IMU. This paper describes the technical challenges that have been faced and the results achieved, from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, datasets, and videos are publicly available to the Robotics community. Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3D mapping and optimal surveillance coverage are presented.
Article
Full-text available
Three-dimensional recording of the human body surface or anatomical areas has gained importance in many medical applications. In this paper, our 3D Medical Laser Scanner is presented. It is based on the novel principle of dynamic triangulation. We analyze the method of operation, medical applications, orthopedic diseases such as scoliosis, and the most common skin types, in order to employ the system in the most proper way. A group of medical problems related to the optimal application of optical scanning is analyzed. Finally, experiments are conducted to verify the performance of the proposed system and its method uncertainty.
Article
Full-text available
Important engineering constructions require geometrical monitoring to predict their structural health during their lifetime. Monitoring by geodetic devices, vibration-based techniques, wireless networks, or GPS technology is not always optimal, and sometimes impossible, as shown in this article. A method based on geodetic measurement automation applying local optical scanners for monitoring is proposed. Its originality and contribution rest on a novel method for the precise measurement of plane spatial angles. It considers a robust invariant angle-to-code AD conversion for dynamic angles, a signal energetic center search method, initial reference scale adjustment, and uncertainty reduction using the mediant-fraction formalism for result approximation. An algorithm of electromechanical part interaction for spatial angle encoding with a preset accuracy is described. It is shown that, by its nature, it allows practically unlimited uncertainty reduction, limited only by a reasonable 'uncertainty/operation velocity' ratio. The operation range, accuracy, scanner operation velocity, and its adaptation to objects are determined and described. The experimental results confirm that the optical device with the presented passive scanning aperture surpasses all recently known comparable optical devices for 3D coordinate recognition in terms of offset uncertainty. It is shown that the presented optoelectronic scanning aperture is a universal tool for various practical solutions.
Article
Full-text available
In densely populated cities or indoor environments, limited accessibility to satellites and severe multipath effects significantly decrease the accuracy and reliability of satellite-based positioning systems. To meet the needs of "seamless navigation" in these challenging environments, an advanced terrestrial positioning system is under development. A new principle of mobile robot navigation capable of working in a complex unknown landscape (another planet, or simply cross-country terrain) is proposed. The proposed optoelectrical method has good spatial-domain resolution and immunity to multipath, as well as new optical means for the realization of "technical vision". Two related problems are solved: creation of a technical vision system for the recognition of images of an unfamiliar landscape, and determination of the direction to the initial point of the movement trajectory of the mobile transport robot. Issues of principle design, as well as of the functioning and interaction of system units and elements, are described. A mathematical apparatus is developed for processing digital information inside the system and for determining the distances and angle measurements in the proposed system. Some important parameters are analytically determined: expected accuracy, functioning speed, range of action, power issues, etc. Keywords: laser positioning system, passive optical scanning, spatial coordinate measurement, optical signal processing, mobile robot navigation
Article
Full-text available
Bayesian filtering is a general framework for recursively estimating the state of a dynamical system. Key components of each Bayes filter are probabilistic prediction and observation models. This paper shows how non-parametric Gaussian process (GP) regression can be used for learning such models from training data. We also show how Gaussian process models can be integrated into different versions of Bayes filters, namely particle filters and extended and unscented Kalman filters. The resulting GP-BayesFilters can have several advantages over standard (parametric) filters. Most importantly, GP-BayesFilters do not require an accurate, parametric model of the system. Given enough training data, they enable improved tracking accuracy compared to parametric models, and they degrade gracefully with increased model uncertainty. These advantages stem from the fact that GPs consider both the noise in the system and the uncertainty in the model. If an approximate parametric model is available, it can be incorporated into the GP, resulting in further performance improvements. In experiments, we show different properties of GP-BayesFilters using data collected with an autonomous micro-blimp as well as synthetic data.
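The building block of GP-BayesFilters is plain Gaussian process regression: given training inputs and outputs, the posterior mean and variance at a query point follow from the kernel matrix. A minimal NumPy sketch with an RBF kernel (hyperparameters and the one-dimensional sine example are illustrative assumptions, not from the paper):

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf=1.0, noise=1e-2):
    """GP regression posterior at query points Xs:
    mean = K* K^-1 y,  var = k(x*,x*) - K* K^-1 K*^T (RBF kernel)."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = sf ** 2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# Learn a toy "observation model" y = sin(x) from samples, then query it.
X = np.linspace(0.0, 2 * np.pi, 20)
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([np.pi / 2]))
```

In a GP-BayesFilter, one such regressor (trained on logged state transitions or observations) replaces the parametric prediction or observation model, and the returned variance feeds the filter's uncertainty propagation, which is what gives the graceful degradation described above.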
Article
Full-text available
In this paper, we propose a low-cost contact-free measurement system for both 3-D data acquisition and fast surface parameter registration by digitized points. Despite the fact that during the last decade several approaches for both contact-free measurement techniques aimed at carrying out object surface recognition and 3-D object recognition have been proposed, they often still require complex and expensive equipment. Therefore, alternative low cost solutions are in great demand. Here, two low-cost solutions to the above-mentioned problem are presented. These are two examples of practical applications of the novel passive optical scanning system presented in this paper.
Article
Full-text available
In this paper, the fast least-mean-squares (LMS) algorithm was used both to eliminate noise corrupting the important information coming from a piezoresistive accelerometer for automotive applications and to improve the convergence rate of the filtering process based on the conventional LMS algorithm. The response of the accelerometer under test was corrupted by process and measurement noise, and the signal processing stage was carried out using both conventional filtering, already shown in a previous paper, and optimal adaptive filtering. The adaptive filtering process relied on the LMS adaptive filtering family, which has been shown to have very good convergence and robustness properties; here, a comparative analysis between the results of applying the conventional LMS algorithm and the fast LMS algorithm to a real-life filtering problem was carried out. In short, in this paper the piezoresistive accelerometer was tested under a multi-frequency acceleration excitation. Due to the kind of test conducted, the use of conventional filtering was discarded, and the choice of one adaptive filter over the other was based on the signal-to-noise ratio improvement and the convergence rate.
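The conventional LMS member of this family is compact enough to sketch: an FIR filter whose taps are nudged toward the desired signal by w <- w + mu * e * x. The system-identification example below (unknown 4-tap filter, uniform input) is an illustration of the algorithm, not the paper's accelerometer experiment, and the fast LMS variant is not shown:

```python
import random

def lms_filter(x, d, taps=4, mu=0.05):
    """Conventional LMS adaptive FIR filter: for each sample, output
    y = w . x_vec, error e = d - y, update w <- w + mu * e * x_vec."""
    w = [0.0] * taps
    buf = [0.0] * taps
    errors = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]          # shift new input into the tap line
        yn = sum(wi * bi for wi, bi in zip(w, buf))
        e = dn - yn
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
        errors.append(e)
    return w, errors

rng = random.Random(3)
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
h = [0.5, -0.3, 0.2, 0.1]              # unknown system to identify
d = [sum(h[j] * (x[n - j] if n >= j else 0.0) for j in range(4))
     for n in range(2000)]
w, errors = lms_filter(x, d)
```

After adaptation the taps match the unknown system and the residual error shrinks toward zero; the fast LMS variant discussed in the paper accelerates exactly this convergence.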
Article
Full-text available
In this paper, the least-mean-squares (LMS) algorithm was used to eliminate noise corrupting the important information coming from a piezoresistive accelerometer for automotive applications. This kind of accelerometer is designed to be easily mounted in hard-to-reach places on vehicles under test; they usually feature ranges from 50 to 2,000 g (where g is the gravitational acceleration, 9.81 m/s²), frequency responses to 3,000 Hz or higher, DC response, durable cables, reliable performance, and relatively low cost. However, here we show that the response of the sensor under test had a lot of noise, and we carried out the signal processing stage using both conventional and optimal adaptive filtering. Usually, designers have to build their own specific analog and digital signal processing circuits, which considerably increases the cost of the entire sensor system, and the results are not always satisfactory, because the relevant signal is sometimes buried in a broad-band noise background where the unwanted information and the relevant signal share a very similar frequency band. Thus, in order to deal with this problem, we used the LMS adaptive filtering algorithm and compared it with others based on the kinds of filters typically used for automotive applications. The experimental results are satisfactory.
Article
Nowadays, artificial intelligence and swarm robotics are becoming widespread and are being applied to civilian tasks. The main purpose of the article is to show the influence of sharing common knowledge about the surroundings on the robotic group navigation problem, by implementing data transfer within the group. The methodology provided in the article reviews a set of tasks whose implementation improves the results of robotic group navigation. The main research questions are the problems of robotic vision, path planning, data storage, and data exchange. The article describes the structure of a real-time laser technical vision system, based on the dynamic triangulation principle, as the main environment-sensing tool for the robots, and provides examples of the obtained data along with distance-based methods for resolution and speed control. Based on the data obtained by this vision system, a matrix-based approach was chosen for robot path planning, which involves the tasks of surroundings discretization and trajectory approximation. Two network structure types for data transfer are compared, and the authors propose a methodology for dynamic network formation based on a leader-changing system. To confirm the theory, a robotic group modeling application was developed. The obtained results show that sharing common knowledge between robots in a group can significantly decrease individual trajectory lengths.
Chapter
Robotic group collaboration in densely cluttered terrain is one of the main problems in mobile robotics control. The chapter describes the basic set of tasks solved in a model of robotic group behavior during the distributed search for an object (goal) with parallel mapping. The navigation scheme uses the benefits of the authors' original technical vision system (TVS), based on dynamic triangulation principles. On the TVS output data, fuzzy logic rules for resolution stabilization were implemented to improve data exchange. A modified dynamic communication network model and a propagation-of-information-with-feedback method were implemented for data exchange inside the robotic group. To form a continuous and energy-saving trajectory, the authors propose a two-step post-processing path planning method with polygon approximation. The combination of collective TVS scan fusion and the modified dynamic data exchange network forming method, dovetailed with known path planning methods, can improve robotic motion planning and navigation in unknown cluttered terrain.
Article
This paper deals with the leader-following control of multiple autonomous mobile robots based on a new sensing method. In this method, we have mounted a light source on the leader robot and installed a photovoltaic sensor at the front end of the follower robot. The photovoltaic sensor consisting of seven micro solar modules acts as a light sensor for the follower robot which helps it to move in accordance with the leader robot by detecting the light source of the leader robot. From the relative angle measurement and ultrasonic measurement of the relative distance, angle offset and distance information between leader robot and follower robot can be estimated. Then, based on an intuitive control method for angle steering proposed in this paper, the follower robot can be controlled smoothly to achieve leader following purpose. The new leader-following control method could overcome the disadvantages of other existing formation control methods. The experimental results show that the leader following based on light source tracking is a comparatively efficient, accurate, and cost-effective method.
Article
Advances in fine photoelectric displacement measurement technology have made image angular displacement measurement technology via imaging method a popular research topic; it is now easier than ever to realize small-scale high-resolution angular displacement measurement. This paper proposes a small-sized high-resolution image angular displacement measurement technique based on near-field image acquisition. A measuring mechanism based on the lensless near-field image acquisition is first established. Next, based on the error code problem, a space-value-based code value correction algorithm is proposed. Finally, the proposed device is validated on an experimental system. The proposed technology can achieve 21-bit resolution and 20.14” angle measurement accuracy. The results presented here may provide a workable foundation for further research on image-type fine photoelectric displacement measurement technology.
Article
Jointly using the data of visual and inertial sensors to achieve higher accuracy and robustness is a problem of high computational complexity. This paper presents a 3D motion tracking and reconstruction system for a mobile device using camera and Inertial Measurement Unit (IMU) sensors. The mobile device has very small active infrared projection depth sensors together with a high-performance IMU and wide field of view cameras. We utilize Visual-Inertial Odometry (VIO) to integrate visual and IMU data. However, traditional VIO methods are too complex to run in real time on a mobile device with limited computing resources. We therefore employ a Sliding Window Filter (SWF), based on delayed state marginalization, to achieve accurate 3D motion tracking and high-quality reconstruction models in real time. Moreover, we extensively evaluate the system both on datasets and in the real world. Many experiments demonstrate the accurate trajectories and high-quality 3D reconstruction models produced by the proposed system. In the end, the qualitative and quantitative experimental results indicate that the proposed SWF-based system performs much better than existing methods in motion tracking and 3D reconstruction.
Article
Information security deals with provisioning of secrecy to data, often based on digital techniques such as encryption and data hiding. There has been significant growth in applications of smart sensors for high-speed secure access and automation of indoor electromechanical systems. This work develops a preliminary study of secure sensing using an optical key or card that is illuminated by a panel of IR LEDs with a certain spatial distribution. The card provides secure access through random combinations of geometrical features mapped to binary codes in an X-Y finite field. The physical features represent various levels of secure code and are not easily detectable in a back-to-back case or seal. The card acts like a physical encryption channel that transforms optical data from the IR source to perform various predefined actions. A One-Time Password feature communicated to the user's mobile phone notifies of a possible breach if a spoofed card is used or if someone breaks in using the card. The analytic framework is given by an architecture comprising: (i) an optical section, encrypting optical signals by a card representing the generator matrix of an X-Y product code; and (ii) an electronics section, sensing voltage levels and processing the data for control and actuation. The article also examines the capabilities of the sensing architecture against some error patterns and likely attacks.
Article
This paper presents a novel, low-cost, expandable 3D motion tracking system with the ability to adapt to dynamic background illumination. The proposed system consists of multiple tracker modules equipped with a Linear Optical Sensor Arrays (LOSA) and a 9-DOF Inertial Measurement Unit (IMU) for position and orientation tracking of a wireless IR-LED illuminated active marker. This paper presents an extended Kalman filter based sensor fusion architecture for combining data acquired from multiple trackers and marker for enhanced accuracy and operational range. The marker system adaptively adjusts the LED intensity to optimize energy consumption and measurement accuracy. The proposed system’s ability to extract background illumination allows it to be robust toward differences in scene illuminations, both static and dynamic. Detailed hardware design and performance evaluation through experimentation have been discussed in this paper.
Article
The eddy-current displacement sensor (ECDS), due to features of small size and high measurement accuracy, has great potential for high precision measurement in a compact space. However, most applications of ECDS are one-dimensional (1D) or two-dimensional (2D) displacement measurements. In this paper, a measurement method of three-dimensional (3D) coordinates only by virtue of single ECDS is proposed. In order to realize the 3D coordinates measurement by ECDS, the virtual displacement measurement starting point (VDMSP) and the displacement vector of ECDS are used as the prior information of ECDS to obtain the 3D absolute measuring point coordinate of a spatial target by ECDS from the 1D displacement relative quantity of ECDS. Besides, a 3D measurement method by ECDS based on the spatial vector and optimization has been proposed to obtain the 3D absolute coordinates of the VDMSP and the spatial displacement vector (the prior information). Then the measurement of 3D absolute coordinates of the measuring point on the target by single ECDS can be realized due to the prior information. Moreover, the plane datum transformation method is analyzed to reduce the impact of data drift on the measurement method. Finally, experiments of six ECDSs are conducted. The results indicate that the proposed measurement method is universally applicable to different ECDSs. The average measurement accuracy is 0.0212mm and the mean square error (MSE) is 0.0127mm, which can meet the measurement requirement of 3D detection in a compact space.
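Once the prior information (the VDMSP p0 and the unit displacement vector u) is available, the geometric core of the method reduces to p = p0 + d·u for a 1D reading d. A minimal sketch (the function name and sample values are invented for illustration):

```python
import math

def ecds_point(vdmsp, direction, d):
    """Map a 1D ECDS displacement reading d to a 3D point:
    p = p0 + d * u, where p0 is the virtual displacement measurement
    starting point and u the normalized displacement vector (the priors)."""
    norm = math.sqrt(sum(c * c for c in direction))
    u = [c / norm for c in direction]
    return [p0 + d * c for p0, c in zip(vdmsp, u)]

# Hypothetical priors: start point (1, 2, 0), sensing axis along +z.
p = ecds_point([1.0, 2.0, 0.0], [0.0, 0.0, 2.0], 0.5)
```

The optimization step described in the abstract is what estimates p0 and u in the first place; after that, each relative reading maps to absolute coordinates by this one line of vector arithmetic.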
Article
Rigid robot pose estimation with an RGB-D camera has recently attracted substantial research attention because an RGB-D camera captures RGB and depth information simultaneously. Despite the huge progress that has been made, some issues remain unresolved, such as pose estimation in texture-less or structure-less scenes. Aiming at this problem, this paper presents a robust real-time pose estimation method with an RGB-D camera for texture-less and structure-less scenes. Our contributions are threefold. First, we present an improved ORB algorithm for extracting reliable inliers, in which an adaptive threshold-setting method for FAST corner decisions is proposed to extract sufficient keypoints; in addition, an effective inlier refinement method, based on a motion-smoothness consistency constraint, is introduced to obtain fine inliers. Second, based on the characteristics of the RGB-D camera, this paper proposes a novel hybrid reprojection errors optimization model (HREOM) that estimates pose by concurrently minimizing 3D-3D and 3D-2D reprojection errors. Third, we carry out comprehensive experiments on the TUM public datasets to demonstrate the robustness, accuracy, and real-time performance of the proposed system. The quantitative evaluations show that our system can extract sufficient inliers in these extreme scenes. Furthermore, our method performs as well as or better than other state-of-the-art solutions. Notably, our system can operate in texture-less and structure-less environments where other methods are prone to failure.
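A hedged sketch of what a hybrid 3D-3D / 3D-2D cost of this kind can look like (not the paper's actual HREOM; the weighting scheme and names are illustrative):

```python
import numpy as np

def hybrid_cost(R, t, pts3d, pts3d_obs, pts2d_obs, K, w=0.5):
    """Hybrid reprojection error: weighted sum of 3D-3D alignment
    residuals (from depth) and 3D-2D pinhole reprojection residuals."""
    P = (R @ pts3d.T).T + t                       # transform model points
    e3d = np.linalg.norm(P - pts3d_obs, axis=1)   # 3D-3D residuals
    uvw = (K @ P.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                 # perspective projection
    e2d = np.linalg.norm(uv - pts2d_obs, axis=1)  # 3D-2D residuals
    return w * e3d.sum() + (1.0 - w) * e2d.sum()

# Identity pose with perfect observations gives (near-)zero cost.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.1, -0.2, 2.0], [0.3, 0.1, 3.0]])
uv = (K @ pts.T).T
uv = uv[:, :2] / uv[:, 2:3]
c = hybrid_cost(np.eye(3), np.zeros(3), pts, pts, uv, K)
```

Minimizing such a combined cost over (R, t) is what lets depth information compensate when image texture is poor, and vice versa.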
Article
A novel three-dimensional (3D) imaging method is proposed, based on vortex electromagnetic (EM) waves and stripmap synthetic aperture radar (SAR). First, the imaging model is derived and the echo characteristics are analyzed. Then, an imaging processing algorithm is proposed to reconstruct the target's profiles, in which the 3D echoes are decomposed into two two-dimensional (2D) echoes. The azimuthal profile is obtained by performing a fast Fourier transform (FFT) in the orbital angular momentum (OAM) domain, and the target information in the y dimension is calculated from the azimuthal distribution, which is a function of the slow time. The range-Doppler (RD) algorithm is used in the fast-time and slow-time domains to obtain the range profile and cross-range profile. Finally, the target's 3D image in Cartesian coordinates is obtained according to the spatial geometric relationship. This work can benefit the applications of OAM-carrying waves in radar as well as the development of 3D imaging techniques.
Article
In this paper, we propose Li-Tect, an algorithm to detect the shape of an object located in an indoor environment using low-cost optical elements by sensing the environment's light. Relying on the predictability of optical propagation paths, the algorithm analyzes how much light is expected to propagate in the absence of obstructions caused by the presence of an object. Then, based on the light received when the object is in the room, the algorithm infers the shape of the object. In addition, the algorithm considers the paths reflected from surfaces in order to determine the object's estimated shape. We study five different scenarios characterized by different levels of complexity, room sizes, and numbers of reflection nodes. The algorithm is also tested in a real prototype, where several experiments are carried out in two scenarios to demonstrate the capabilities of Li-Tect in two- and three-dimensional monitoring and shape-detection cases. Finally, the results show that the shape and detection of objects in these scenarios can be acquired with high accuracy, even if the number of transceivers is reduced.
Article
With the rapid development of computer vision and industrial technology, the requirements on intelligent welding robots in real industrial production are increasing. The traditional teaching-playback mode and the off-line programming mode cannot meet the automation demands and self-adaptive ability required of welding robots. To improve the efficiency of welding robots, this paper proposes a novel three-dimensional (3D) path extraction method for weld seams based on a stereo structured-light sensor. To address the low efficiency of line structured light and the poor robustness of passive vision, a seam extraction method based on point cloud processing is proposed, which adapts well to weld seams of different types and different groove sizes. Meanwhile, the position and pose information of the weld seam is established to serve the 3D path teaching of the welding robot. The experimental results show that the maximum path extraction error for a V-type butt joint is less than 0.7 mm. The proposed scheme serves the 3D path-teaching task before welding well.
Article
This paper proposes a new method to automatically recognize range profiles (RPs) and inverse synthetic aperture radar (ISAR) images of targets flying in formation, despite the overlap of the RPs and ISAR images in a single radar beam. Contrary to existing methods that recognize by separating each target, the proposed method utilizes the combined RP and the combined ISAR image obtained by fusing the RPs and ISAR images in a training database based on the flight direction of the targets. Experimental results using measured data from five scale models prove the validity of the proposed method.
Article
Fault detection in robotic swarm systems is imperative to guarantee their reliability and safety, maximize operating efficiency, and avoid expensive maintenance. However, data from these systems are generally contaminated with noise, which masks important features in the data and degrades the fault detection capability. This paper introduces a fault detection approach that is effective against noise and uncertainties in data, integrating the multiresolution representation of data using wavelets with the sensitivity to small changes of an exponentially weighted moving average scheme. Specifically, to monitor swarm robotics systems performing a circle formation task under a virtual viscoelastic control model, the proposed scheme is applied to the uncorrelated residuals from a principal component analysis model. Simulated data from the ARGoS simulator are used to evaluate the effectiveness of the proposed method. We also compared the performance of the proposed approach to that of the conventional principal-component-based approach and found improved sensitivity to faults and robustness to noise. For all the fault types tested (abrupt faults, random walks, and complete-stop faults), our approach resulted in a significant enhancement in fault detection compared with the conventional approach.
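The EWMA part of such a scheme can be sketched as follows (a generic residual-monitoring chart, not the authors' full wavelet-based pipeline; baseline window, smoothing factor, and the injected fault are illustrative):

```python
import numpy as np

def ewma_alarm(residuals, lam=0.2, L=3.0):
    """Exponentially weighted moving average (EWMA) monitoring:
    flag samples whose EWMA statistic leaves the control limits."""
    mu = residuals[:50].mean()                       # fault-free baseline
    sigma = residuals[:50].std()
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))   # asymptotic control limit
    z, alarms = mu, []
    for r in residuals:
        z = lam * r + (1.0 - lam) * z                # EWMA recursion
        alarms.append(abs(z - mu) > limit)
    return alarms

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 100)
fault = rng.normal(4.0, 1.0, 20)                     # abrupt sensor fault
alarms = ewma_alarm(np.concatenate([clean, fault]))
```

The smoothing factor `lam` trades detection speed against noise rejection, which is why the EWMA chart stays sensitive to small shifts that a raw threshold would miss.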
Article
The authenticity of the Stellar-Inertial integration ground validation system ultimately determines the navigation performance in flight. The optical reference error characteristics remain a crucial but unsolved issue because of the arc-second-level accuracy required in the ground validation system. The characteristics of the star simulator celestial angle error and the north reference error are discussed and analyzed in detail. A new error control method for the star simulators' intersection angle is proposed with the help of a high-precision three-axis turntable. The north reference error can be isolated as a certain azimuth bias error in ground validation. The synthesis of all the error sources is performed to give the criterion value of the evaluation accuracy for the ground validation system. Experimental tests confirm that the error analysis and control method is effective. The evaluation accuracy is improved from 10.99 arc-seconds to 4.88 arc-seconds while keeping the ground validation system unchanged. Meanwhile, such a ground test system is close to flight conditions and can satisfy the stringent requirements of high-accuracy star sensors.
Article
The performance of centroid analysis (CA) for extracting the Brillouin frequency shift (BFS) from noisy signals is studied in this paper. The variance of the extracted BFS, i.e. the minimum detectable BFS, is deduced as a function of the signal-to-noise ratio (SNR), frequency step, Brillouin linewidth, and the data window used in the analysis. It is found theoretically that both the averaged BFS and its variance are susceptible to the deviation of the data window center from the real Brillouin central frequency, termed the data window deviation (DWD). The theoretical results are verified by experiments and simulation, showing good agreement. Since DWD often occurs for noisy signals, an iterative centroid analysis (ICA) is proposed and demonstrated to reduce this impact and improve the accuracy of centroid analysis. Compared with curve fitting methods (CFM), centroid analysis has attractive features, such as a much shorter processing time. Centroid analysis is especially suitable for extracting the BFS from the complicated spectra often observed in sensors deployed in the field, for which the usual CFM may yield wrong results.
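The ICA idea, recomputing the centroid after re-centering the data window on the previous estimate, can be sketched as follows (synthetic Lorentzian gain spectrum; linewidth, window, and iteration count are illustrative):

```python
import numpy as np

def centroid_bfs(freqs, gains):
    """Spectral centroid: gain-weighted mean frequency."""
    return np.sum(freqs * gains) / np.sum(gains)

def iterative_centroid(freqs, gains, half_width, iters=5):
    """Iterative centroid analysis (ICA): re-center the data window
    on the previous centroid estimate to reduce the bias caused by
    a data-window deviation (DWD) from the true Brillouin peak."""
    c = centroid_bfs(freqs, gains)
    for _ in range(iters):
        sel = np.abs(freqs - c) <= half_width    # re-centered window
        c = centroid_bfs(freqs[sel], gains[sel])
    return c

# Lorentzian Brillouin gain spectrum centered at 10.85 GHz.
f = np.linspace(10.6, 11.1, 501)
g = 1.0 / (1.0 + ((f - 10.85) / 0.015) ** 2)
bfs = iterative_centroid(f, g, half_width=0.05)
```

Because no curve fitting is involved, each iteration is just a weighted sum, which is where the speed advantage over CFM comes from.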
Article
Traditional received-signal-strength-indicator (RSSI) based localization and tracking of moving targets using wireless sensor networks (WSNs) generally employs lateration/angulation techniques. Although simple, this technique creates significant errors in localization estimates due to the nonlinear relationship between RSSI and distance. The Generalized Regression Neural Network (GRNN), being a one-pass learning algorithm, is well known for its ability to train quickly on sparse data sets. This paper proposes an implementation of the GRNN as an alternative to the traditional RSSI-based approach to obtain first location estimates of a single target moving in 2D in a WSN, which are then further refined in a Kalman filtering (KF) framework. Two algorithms, GRNN + Kalman filter (KF) and GRNN + unscented Kalman filter (UKF), are proposed in this research work. The GRNN is trained with the simulated RSSI values received at the moving target from beacon nodes and the corresponding actual 2D target locations. The precision of the proposed algorithms is compared against the traditional RSSI-based approach, the GRNN-based approach, and other models in the literature such as the traditional RSSI + KF and traditional RSSI + UKF algorithms. The proposed algorithms demonstrate superior tracking performance (tracking accuracy on the scale of a few centimeters) irrespective of nonlinear system dynamics and environmental dynamics.
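A GRNN prediction is a one-pass, Gaussian-weighted average of the training targets; a minimal sketch (toy RSSI vectors and positions, not the paper's training set; the bandwidth `sigma` is illustrative):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=1.0):
    """Generalized Regression Neural Network (one-pass kernel
    regression): the prediction is a Gaussian-weighted average of
    the training targets, with weights set by distance to the query."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
    return (w @ y_train) / w.sum()

# Toy mapping from RSSI vectors (3 beacons) to a 1-D coordinate.
X = np.array([[-40.0, -60.0, -70.0],
              [-50.0, -55.0, -65.0],
              [-60.0, -50.0, -60.0]])
y = np.array([0.0, 1.0, 2.0])
pos = grnn_predict(X, y, np.array([-50.0, -55.0, -65.0]), sigma=3.0)
```

Because there is no iterative training, the network is "trained" the moment the (RSSI, position) pairs are stored, which is why the GRNN suits sparse data sets.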
Article
Purpose The purpose of this paper is to present a novel application of a newly developed Technical Vision System (TVS), which uses a laser scanner and dynamic triangulation, to determine the vitality of agricultural vegetation. This vision system, installed on an unmanned aerial vehicle, measures the reflected laser energy and thereby determines the normalized difference vegetation index. Design/methodology/approach The newly developed TVS is installed on the front part of the unmanned aerial vehicle to perform a line-by-line scan of the vision system's field-of-view. The TVS uses high-quality DC motors, instead of the previously researched low-quality DC motors, to eliminate the existence of two mutually exclusive conditions for exact positioning of a DC motor shaft. The use of high-quality DC motors reduces the positioning error after control. Findings The present paper emphasizes exact laser beam positioning in the field-of-view of a TVS. By using high-quality instead of low-quality DC motors, a significantly reduced positioning time was achieved while maintaining a relative angular position error of less than 1 per cent. The best results were achieved by realizing quasi-continuous control, using a high pulse-width-modulation duty-cycle resolution and a high execution frequency of the positioning algorithm. Originality/value The originality of the present paper is represented by the novel application of the newly developed TVS in the field of agriculture. The vitality of vegetation is determined by measuring the reflected laser energy of a scanned agricultural zone. The paper's main focus is on exact laser beam positioning within the TVS field-of-view, using high-quality DC motors in a closed-loop position control configuration.
Conference Paper
Unmanned Aerial Vehicle (UAV) navigation requires machine vision systems to determine physical values of nearby objects. These machine vision systems must respond in a sufficiently short time using minimal scanning data from the observed objects. A novel Technical Vision System (TVS), which uses DC motors and laser triangulation to determine the spatial coordinates of these objects, is presented and used for UAV navigation. The present paper shows the implementation of an open-loop control algorithm to position the TVS laser ray in the UAV Field of View (FOV). Implementation issues, the experimental realization, and results are presented, and the theoretical concept of a continuous FOV is derived.
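The underlying laser-triangulation geometry can be sketched with the law of sines (planar case; the baseline and angles are illustrative, not the TVS's actual parameters):

```python
import math

def triangulate(a, beta, gamma):
    """Planar laser triangulation: an emitter and a scanning aperture
    separated by baseline a observe the same highlighted point under
    angles beta and gamma; the law of sines gives the coordinates."""
    alpha = math.pi - beta - gamma             # angle at the target point
    b = a * math.sin(gamma) / math.sin(alpha)  # emitter-to-target range
    x = b * math.cos(beta)                     # target coordinates in the
    y = b * math.sin(beta)                     # emitter-centered frame
    return x, y

# Baseline 1 m, both angles 60 deg -> equilateral triangle.
x, y = triangulate(1.0, math.radians(60), math.radians(60))
```

Only the two measured angles and the fixed baseline are needed per point, which keeps the per-sample data volume low for fast scanning.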
Article
Near-optimal state estimation in dynamic systems with Gaussian noise has been extensively studied using Kalman Filtering (KF). This method is especially useful in technological applications that use noisy data from several sensors or need the estimation of an optimal state vector in systems with multiple observations. A particular situation is that in which constraints are applied to the state vector, for instance, assigning predetermined values to part of the state vector or imposing a particular relationship between some of its variables. This paper deals with the application of an Extended Kalman Filter (EKF) to Mobile Robot (MR) indoor navigation, fusing the relative positioning obtained from the robot odometry with the absolute positioning measured with a set of Ultrasonic Local Positioning Systems (ULPS). The novelty of the system lies in the use of a map description as additional information, in such a way that the estimated trajectory of the MR never crosses walls or other physical obstacles. If such a situation is detected after a state vector estimation, an analytical method is used to impose constraints on part of the state vector so that the physical restrictions are respected. The proposal has been compared to a numerical method based on constrained optimization (using the fmincon function of Matlab) and to a Particle Filter (PF), which is often used to solve this kind of problem. Simulated and real tests have been carried out to show the effectiveness of the method; our approach achieves better accuracy than the traditional EKF at the expense of a minor increase in complexity, and also outperforms the PF.
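A common analytical way to impose such map constraints is to project the unconstrained estimate onto the constraint surface; a sketch assuming a linear constraint D x = d (this is the standard covariance-weighted projection formula, not necessarily the paper's exact formulation):

```python
import numpy as np

def project_estimate(x, P, D, d):
    """Project an unconstrained Kalman estimate x (covariance P)
    onto the linear constraint D x = d, e.g. forcing an estimated
    position back onto the accessible side of a mapped wall."""
    S = D @ P @ D.T                 # constraint-space covariance
    K = P @ D.T @ np.linalg.inv(S)  # projection gain
    return x - K @ (D @ x - d)

# A 2-D position estimate that crossed the wall y = 2: clamp y back.
x = np.array([3.0, 2.4])
P = np.diag([0.5, 0.5])
D = np.array([[0.0, 1.0]])
xc = project_estimate(x, P, D, np.array([2.0]))
```

Weighting the projection by P moves the most uncertain components the most, so the corrected estimate stays consistent with the filter's own confidence.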
Article
Many applications of Swarm Robotic Systems (SRSs) require each robot to be able to discover its own position. To provide such a capability, several localization methods have been proposed, in which the positions of the robots are estimated based on a set of reference nodes in the swarm. In this paper, a distributed and resilient localization algorithm is proposed based on the BSA-MMA algorithm, which uses the Backtracking Search Algorithm (BSA) and the Min-Max Area (MMA) confidence factor. It is designed in a novel four-stage approach that includes a new method, called Multi-hop Collaborative Min-Max Localization (MCMM), to improve resilience in case of failures during the recognition of the reference nodes. The results, obtained with real Kilobot robots, show a 28% to 36% performance improvement obtained by the MCMM. It is also shown that the final result of the localization process is better when the MCMM is executed than when it is not. The experimental outcomes demonstrate that the novel four-stage approach and the use of the MCMM algorithm represent progress in the design of distributed localization algorithms for SRSs, especially with regard to resilience.
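The Min-Max building block that such algorithms rest on can be sketched in a few lines (idealized ranges to the reference nodes; the coordinates are illustrative):

```python
def min_max_localize(anchors, ranges):
    """Min-Max localization: intersect the axis-aligned boxes drawn
    around each reference node (side 2*range) and take the center
    of the intersection as the position estimate."""
    left   = max(x - d for (x, y), d in zip(anchors, ranges))
    right  = min(x + d for (x, y), d in zip(anchors, ranges))
    bottom = max(y - d for (x, y), d in zip(anchors, ranges))
    top    = min(y + d for (x, y), d in zip(anchors, ranges))
    return (left + right) / 2.0, (bottom + top) / 2.0

# Three reference robots with ideal range readings to a node at (1, 1).
anchors = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
ranges = [2 ** 0.5, 2 ** 0.5, 2 ** 0.5]
est = min_max_localize(anchors, ranges)
```

The shrinking intersection box also yields a natural confidence measure: the smaller the remaining area, the more consistent the reference ranges are.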
Article
This paper focuses on the collision-free motion of a team of robots moving in a 2D environment with formation and non-holonomic constraints. With the proposed approach one can simultaneously control the formation of the team and generate a safe path for each individual robot. The computed paths satisfy the non-holonomic constraints, avoid collisions, and minimize the task-completion time. The proposed approach, which combines techniques from mathematical programming and CAD, consists of two main steps: first, a global team path is computed and, second, individual motions are determined for each unit. The effectiveness of the proposed approach is demonstrated using several simulation experiments.
Article
In many mission-critical applications of mobile robotic sensor networks (MRSNs), intersensor collaboration requires reliable application-level coordination based on strong network connectivity. In practice, however, due to disturbing obstacles and harsh interference, the connectivity of the MRSN can easily be compromised, especially when the failure of some critical sensors disintegrates the network into two or more disjoint segments. Existing connectivity restoration schemes fail to perform under such harsh working conditions because they overlook an important fact: sensors may encounter obstacles during relocation. Our proposed obstacle-avoiding connectivity restoration strategy (OCRS) addresses this problem using a fully exploring mobility technique that avoids any incoming convex obstacle. A backup selection algorithm (BSA) proactively determines the cut-vertex sensors within the network and assigns a backup sensor to each cut-vertex node. Then, the selected backup sensor is displaced, avoiding obstacles by means of a gyroscopic force controller, to restore the disturbed connectivity. Extensive simulation experiments verify the capability of OCRS to restore connectivity with guaranteed collision avoidance and to outperform contemporary schemes in terms of message complexity and traveling distance.
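Cut-vertex (articulation-point) detection, the graph primitive behind the backup selection step, can be sketched with a depth-first search (a textbook algorithm, not the paper's exact distributed procedure):

```python
def cut_vertices(adj):
    """Find the cut-vertex (articulation) nodes of an undirected
    graph given as {node: set(neighbors)} via low-link DFS."""
    disc, low, cuts, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)   # removing u disconnects v's subtree
            elif v != parent:
                low[u] = min(low[u], disc[v])
        if parent is None and children > 1:
            cuts.add(u)           # a DFS root with >1 child is a cut

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return cuts

# A bridge node 2 links two triangles: it is the only cut-vertex,
# so it is the node whose failure would split the network.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
```

Nodes reported here are exactly those whose failure splits the network, so they are the ones that need a pre-assigned backup.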
Article
This paper introduces a novel electronic circuit, to be embedded in a photodiode sensor as an integrated circuit board for electronic signal processing, that detects the energy center of an optical signal, which represents the most accurate position measurement of a light emitter source mounted on a structure (such as a building, a bridge, or a mine). The proposed optical scanning sensor for structural health monitoring is a flexible system that can operate with a coherent or incoherent light emitter source. It conforms to any kind of structural surface and data-storage budget: because the signal-processing stage is embedded in the sensor, no additional software processing is required, which reduces the time and memory requirements for information recording. The theoretical principle of operation, as well as the technological and experimental aspects of design, development, and validation, is presented.
Article
In our current research, we are developing a practical autonomous mobile robot navigation system capable of performing obstacle-avoidance tasks in an unknown environment. In this paper, we therefore propose a robot navigation system that works with a high-accuracy localization scheme based on dynamic triangulation. Our two main ideas are (1) the integration of two principal systems, a 3D laser scanning technical vision system (TVS) and a mobile robot (MR) navigation system; and (2) a novel MR navigation scheme that benefits from all the advantages of precise triangulation-based localization of obstacles, mostly over known camera-oriented vision systems. For practical use, mobile robots are required to continue their tasks safely and with high accuracy under temporary occlusion conditions. Prototype II of the TVS, presented in this work, is significantly improved over prototype I of our previous publications in the aspects of laser ray alignment, parasitic torque decrease, and friction reduction of the moving parts. The kinematic model of the MR used in this work is designed considering optimal data acquisition from the TVS, with the main goal of obtaining in real time the values necessary for the kinematic model of the MR immediately during the calculation of obstacles based on the TVS data.
Article
In this paper, Support Vector Machine (SVM) regression is applied to predict measurement errors for accuracy enhancement in optical scanning systems, for position detection in a real-life Structural Health Monitoring (SHM) application. The novel method is based on power spectrum centroid calculation for determining the energy center of an optoelectronic signal, in order to enhance the accuracy of optical scanning system measurements. During the development of an optical scanning system based on a 45°-sloping-surface cylindrical mirror and an incoherent light-emitting source, a novel optoelectronic scanning method emerged: it was found that, to locate a light source and reduce errors in position measurements, the best solution is to take the measurement at the energy center of the signal generated by the optical scanning system. The energy center of the signal is found at the power spectrum centroid, and the SVM regression method is used as a digital rectifier to increase the measurement accuracy of the optical scanning system.
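The power spectrum centroid itself can be sketched as follows (the SVM-regression rectifier is omitted; the test signal and sampling rate are illustrative):

```python
import numpy as np

def power_spectrum_centroid(signal, fs):
    """Energy center of a scanning signal, located as the centroid
    (power-weighted mean frequency) of its power spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)   # frequency axis
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# A pure 50 Hz tone: its power-spectrum centroid sits at 50 Hz.
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 50.0 * t)
fc = power_spectrum_centroid(x, fs)
```

In the paper's pipeline, a regressor would then map the raw centroid to a corrected position estimate, compensating systematic measurement errors.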
Article
3D measurements of the human body surface or anatomical areas have gained importance in many medical applications. Three-dimensional laser scanning systems can provide these measurements; however, these scanners usually have nonlinear variations in their measurements, and typically these variations depend on the position of the scanner with respect to the person. In this paper, the Levenberg–Marquardt method is used as a digital rectifier to adjust this nonlinear variation and increase the measurement accuracy of our 3D Rotational Body Scanner. A comparative analysis with other methods, such as the Polak–Ribière and quasi-Newton methods, and the overall system functioning are presented. Finally, computational experiments are conducted to verify the performance of the proposed system and the uncertainty of the method.
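A minimal Levenberg–Marquardt loop of the kind used as such a digital rectifier (generic nonlinear least-squares fitting; the model and data are illustrative, not the scanner's actual calibration curve):

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, x, y, iters=50, mu=1e-2):
    """Minimal Levenberg-Marquardt loop for least-squares fitting of
    a nonlinear model f(x, p): damped Gauss-Newton steps, with the
    damping adapted to whether the step reduced the residual."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(x, p)                       # residual vector
        J = jac(x, p)                         # Jacobian of the model
        A = J.T @ J + mu * np.eye(len(p))     # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        if np.sum((y - f(x, p + step)) ** 2) < np.sum(r ** 2):
            p, mu = p + step, mu * 0.5        # accept: toward Gauss-Newton
        else:
            mu *= 2.0                         # reject: toward gradient descent
    return p

# Fit y = a*x / (1 + b*x), a toy nonlinear sensor response.
model = lambda x, p: p[0] * x / (1.0 + p[1] * x)
jacob = lambda x, p: np.column_stack([x / (1.0 + p[1] * x),
                                      -p[0] * x ** 2 / (1.0 + p[1] * x) ** 2])
xs = np.linspace(0.1, 2.0, 40)
ys = model(xs, [2.0, 0.5])
p = levenberg_marquardt(model, jacob, [1.0, 1.0], xs, ys)
```

The adaptive damping `mu` is what distinguishes this from plain Gauss-Newton: far from the solution the steps behave like gradient descent, near it they converge quadratically.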
Conference Paper
In the last decade, many approaches to non-contact measurement of object surfaces and to 3D object recognition have been proposed, but they often still require complex and expensive equipment. Not least due to the rapidly increasing number of efficient 3D hardware and software system components, alternative low-cost solutions are in great demand. We propose such a low-cost system for 3D data acquisition and fast surface registration, which stores the Cartesian coordinates of the digitized points in a matrix. Two examples of the practical application of the proposed passive optical scanning aperture are presented.
Article
Inertial and vision sensors are nowadays considered essential for navigation and guidance measurements in autonomous aerial and ground vehicles. In this paper, the concept of aiding Inertial Navigation with Vision-Based Simultaneous Localization and Mapping to compensate for Inertial Navigation divergence is introduced. We describe the changes to the augmented state vector that this sensor fusion algorithm requires and show that repeated measurements of map points during certain maneuvers around or near a map point are crucial for constraining the Inertial Navigation position divergence and reducing the covariance of the map point position estimates. In addition, it is shown that such an integrated navigation system requires coordination between the guidance and control measurements and the vehicle task itself to achieve better navigation accuracy.
Article
Tool condition monitoring has gained considerable importance in the manufacturing industry over the preceding two decades, as it significantly influences the process economy and the machined part quality. Recent advances in the field of image processing technology have led to the development of various in-cycle vision sensors that can provide a direct and indirect estimate of the tool condition. These sensors are characterised by their measurement flexibility, high spatial resolution and good accuracy. This paper provides a review of the basic principle, the instrumentation and the various processing schemes involved in the development of these sensors.