Article

Track-to-Track Fusion With Asynchronous Sensors Using Information Matrix Fusion for Surround Environment Perception

Abstract

Driver-assistance systems and automated driving applications in the future will require reliable and flexible surround environment perception. Sensor data fusion is typically used to increase reliability and the observable field of view. In this paper, a novel approach to track-to-track fusion in a high-level sensor data fusion architecture for automotive surround environment perception using information matrix fusion (IMF) is presented. It is shown that IMF achieves the same state-estimation accuracy as a low-level centralized Kalman filter, which is widely known to be the most accurate method of fusion. Additionally, as opposed to state-of-the-art track-to-track fusion algorithms, the presented approach guarantees a globally maintained track over time as an object passes in and out of the field of view of several sensors, as required in surround environment perception. As opposed to the often-used cascaded Kalman filter for track-to-track fusion, it is shown that the IMF algorithm has a smaller error and maintains consistency in the state estimation. The proposed approach using IMF is compared with other track-to-track fusion algorithms in simulation and is shown to perform well using real sensor data in a prototype vehicle with a 12-sensor configuration for surround environment perception in highly automated driving applications.

... The estimates are then fused, which is often called track-to-track fusion. Usually, the state vectors can be combined, with different methods available to handle correlation between the estimates via, e.g., covariance intersection [12]-[15] or inverse covariance intersection [16], [17], tracking of the cross-covariances via square-root decomposition of the noise terms [18], [19], or tracking and subtracting of known duplicate information via the information form [10], [20], [21]. A challenge that can occur here if the targets are extended objects is ambiguity in the parameterization, i.e., the same shape could be described by different realizations of the state. ...
... A straightforward way to handle known correlations from, e.g., a common prior or process model, is to represent the state in information form and simply subtract the duplicate information [21]. To describe the information filter [10], we define x^s_{k|k} as the local state at time k given all measurements up to time k at sensor s, and x^g_{k|k} as the global estimate created from a fusion of the information from all sensors. ...
... If a common prior and process model are known, we propagate the duplicate information to the current time step and subtract it, i.e., we can fuse the sensor estimate at time k into the global state using [21] ...
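The subtract-the-duplicate-information step described in these excerpts can be sketched in a few lines. The following is a minimal numpy illustration of an information-form (IMF-style) fusion step, assuming linear-Gaussian tracks; the function name and variable layout are illustrative, not taken from the paper.

```python
import numpy as np

def imf_update(y_g, Y_g, x_s, P_s, x_s_pred, P_s_pred):
    """Fuse one sensor track into the global track in information form.

    y_g, Y_g           : global information vector/matrix at time k
    x_s, P_s           : sensor-local estimate at k|k
    x_s_pred, P_s_pred : sensor-local prediction at k|k-1 (the duplicate
                         prior/process-model information to subtract)
    """
    Y_s = np.linalg.inv(P_s)            # sensor information matrix, k|k
    Y_s_pred = np.linalg.inv(P_s_pred)  # sensor information matrix, k|k-1
    # Add the sensor's new information; subtract what the global track
    # already contains via the common prior and process model.
    Y_fused = Y_g + (Y_s - Y_s_pred)
    y_fused = y_g + (Y_s @ x_s - Y_s_pred @ x_s_pred)
    return y_fused, Y_fused

# State and covariance are recovered by P = inv(Y_fused), x = P @ y_fused.
```

If the sensor prediction equals the global prior, only the innovation information survives the subtraction, which is what keeps repeated fusion of the same track consistent over time.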
... Such an object-level fusion method is the mainstream approach [9,17,19,12,31] and is still widely used in many Advanced Driver Assistance Systems (ADAS). In object-level fusion, object detection is independently performed on each sensor, and the fusion algorithm combines these object detection results to create so-called global tracks for kinematic tracking [1]. ...
... Data association is the most critical and challenging task in object-level fusion. Precise association can easily lead to 3D object detection and multiple objects tracking solutions [1,5]. Traditional approaches tend to manually craft various distance metrics to represent the similarities between different sensory outputs. ...
... Since the radar used here performs internal clustering, each output radar pin is on the object level. There are several tens of radar pins per frame depending on the actual scene and traffic. The attributes of each radar pin are listed in Table 1. ...
Preprint
Full-text available
Radars and cameras are mature, cost-effective, and robust sensors and have been widely used in the perception stack of mass-produced autonomous driving systems. Due to their complementary properties, outputs from radar detection (radar pins) and camera perception (2D bounding boxes) are usually fused to generate the best perception results. The key to successful radar-camera fusion is accurate data association. The challenges in radar-camera association can be attributed to the complexity of driving scenes, the noisy and sparse nature of radar measurements, and the depth ambiguity from 2D bounding boxes. Traditional rule-based association methods are susceptible to performance degradation in challenging scenarios and failure in corner cases. In this study, we propose to address rad-cam association via deep representation learning, to explore feature-level interaction and global reasoning. Concretely, we design a loss sampling mechanism and an innovative ordinal loss to overcome the difficulty of imperfect labeling and to enforce critical human reasoning. Despite being trained with noisy labels generated by a rule-based algorithm, our proposed method achieves a performance of 92.2% F1 score, which is 11.6% higher than the rule-based teacher. Moreover, this data-driven method also lends itself to continuous improvement via corner case mining.
... The data processing for each sensor system is performed individually at the sensor level in a high-level fusion system. Each sensor system outputs one or more tracks based on the sensor measurements, and the state estimates from multiple sensor tracks are associated and combined with a track-to-track fusion algorithm. A high-level fusion of multiple sensors has been successfully implemented in many studies dealing with automotive applications [58][59][60][61][62]. Important advantages of the high-level fusion approach lie in spatial and temporal alignment, modularity, and communication overhead [63,64]. ...
Article
Full-text available
Driving environment perception for automated vehicles is typically achieved by the use of automotive remote sensors such as radars and cameras. A vehicular wireless communication system can be viewed as a new type of remote sensor that plays a central role in connected and automated vehicles (CAVs), which are capable of sharing information with each other and also with the surrounding infrastructure. In this paper, we present the design and implementation of driving environment perception based on the fusion of vehicular wireless communications and automotive remote sensors. A track-to-track fusion of high-level sensor data and vehicular wireless communication data was performed to accurately and reliably locate the remote target in the vehicle surroundings and predict the future trajectory. The proposed approach was implemented and evaluated in vehicle tests conducted at a proving ground. The experimental results demonstrate that using vehicular wireless communications in conjunction with the on-board sensors enables improved perception of the surrounding vehicle located at varying longitudinal and lateral distances. The results also indicate that vehicle future trajectory and potential crash involvement can be reliably predicted with the proposed system in different cut-in driving scenarios.
... The first fusion formula was derived subsequently from [14] in [39] for the case of two sensors and generalized to the case of n-sensors in [40]. However, in [41], it is shown that [39] is only optimal in ML (maximum likelihood) sense and not in terms of MSE (mean squared error), proposing another formula, called the information matrix fusion (IMF), adapted to a sensor-to-global configuration in [42]. Besides IMF, there are three algorithms often used in automotive applications in sensor-to-sensor configuration: the covariance intersection, covariance union, and the use of cross-covariance. ...
... Although it can work asynchronously, updating the common global track list whenever a sensor transmits its data, out-of-sequence data must be handled [54]. The correlation problem must also be considered, as the state estimates of a local track are not independent over time [42]. Moreover, the local sensor and global system track of the same target also have dependent errors due to the common process noise [14]. ...
Article
Full-text available
Advanced driver assistance and autonomous systems require an enhanced perception system, fusing the data of multiple sensors. Many automotive sensors provide high-level data, such as tracked objects, i.e., tracks, usually fused in a track-to-track manner. The core of this fusion is the track-to-track association, intending to create assignments between the tracks. In conventional track-to-track association, the assignment likelihood function is derived from "diffuse prior," neglecting that the sensors may provide duplicated tracks of a target. Moreover, conventional association algorithms are usually computationally demanding due to the combinatorial nature of the problem. The first motivation of this work is to obtain a computationally efficient and robust solution, reflecting these problems. Another crucial element of track-to-track fusion is track management, which maintains the list of tracks by initializing and deleting tracks, thus having a great impact on the reliability of the fusion output. In this paper, we propose a novel track-to-track fusion architecture in which the fused tracks are fed back to the association. The proposed method comprises two main contributions. First, a computationally efficient association algorithm is provided in which the "diffuse prior" is replaced with an informative prior, exploiting the feedback loop of the fused tracks. Moreover, it tackles duplicated tracks. Second, a track management system relying on a revamped track existence probability fusion is proposed, contributing to efficient false track filtering and continuous object tracking. The proposed methodology is evaluated on real-world data of a frontal perception system. The results show that the proposed association outperforms the conventional methods; still, it maintains a favorable complexity, contributing to real-time applicability. 
The track management system relying on the revamped existence probability fusion can efficiently filter false tracks and continuously track objects. Moreover, the resulting overall track-to-track fusion outperforms the state-of-the-art multi-object tracking-based fusion algorithms.
... One of the most prevalent methods, particularly when the correlation between tracks is unknown or cannot be accurately determined, is covariance intersection (CI) [26]. CI approximates the correlation between tracks by geometrically fusing their state estimates and covariances [26,27]. A special case of CI is the two-sensor case, where the updated mean x̂(t|t) and covariance P(t|t) at time t for sensors i and j are estimated by ...
... As applied to this work, a global track for each vehicle must be maintained as the vehicle moves through the FoVs of different radars. Following [27], Equations (2) and (3) are modified to incorporate information from the global track: ...
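The two-sensor covariance intersection update referenced in these excerpts has a standard form: the fused information matrix is a convex combination of the two sensor information matrices. A hedged numpy sketch follows, with the weight ω chosen by a simple trace-minimizing grid search (one common choice among several); the exact equations in the citing work are not reproduced here.

```python
import numpy as np

def covariance_intersection(x_i, P_i, x_j, P_j, n_omega=50):
    """Two-sensor covariance intersection.

    Fuses (x_i, P_i) and (x_j, P_j) without knowledge of their
    cross-correlation; omega is chosen by grid search minimizing
    the trace of the fused covariance.
    """
    best = None
    for w in np.linspace(0.01, 0.99, n_omega):
        Y = w * np.linalg.inv(P_i) + (1 - w) * np.linalg.inv(P_j)
        P = np.linalg.inv(Y)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), w, P)
    _, w, P = best
    x = P @ (w * np.linalg.inv(P_i) @ x_i
             + (1 - w) * np.linalg.inv(P_j) @ x_j)
    return x, P
```

CI trades optimality for consistency: the fused covariance is never overconfident regardless of the (unknown) cross-correlation, which is why it is the default when correlations cannot be tracked.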
Article
Full-text available
This work presents a methodology for extracting vehicle trajectories from six partially-overlapping roadside radars through a signalized corridor. The methodology incorporates radar calibration, transformation to the Frenet space, Kalman filtering, short-term prediction, lane classification, trajectory association, and a covariance intersection-based approach to track fusion. The resulting dataset contains 79,000 fused radar trajectories over a 26-h period, capturing diverse driving scenarios including signalized intersections, merging behavior, and a wide range of speeds. Compared to popular trajectory datasets such as NGSIM and highD, this dataset offers extended temporal coverage, a large number of vehicles, and varied driving conditions. The filtered leader–follower pairs from the dataset provide a substantial number of trajectories suitable for car-following model calibration. The framework and dataset presented in this work have the potential to be leveraged broadly in the study of advanced traffic management systems, autonomous vehicle decision-making, and traffic research.
... Track-to-Track Fusion (T2TF) has been extensively studied before, e.g., in [4], [5] and different systems such as central fusion or sensor-to-sensor fusion are possible [6], [7]. ...
... k = |T| + 1; for n = 1, …, N do: for each t ∈ T, calculate p_s according to (7); for c = 1, …, |C|, calculate p_m^c according to (8) ...
... For multiple target tracking (MTT), track-to-track (T2T) fusion strategies are a common way to fuse the on-board sensors and infrastructure sensors. The T2T fusion methods involved are optimal fusion using knowledge of the cross-covariance, covariance intersection (CI) [18], information matrix fusion (IMF) [19], and equivalent measurement reconstruction [20]. CI needs no knowledge about the underlying correlations between state estimates, but it yields a more conservative fusion result than the others. ...
... The MMSE estimate and MAP estimate are defined in Eqs. (19) and (20), respectively: ...
Article
Full-text available
Environment perception is crucial for the development of autonomous driving and advanced driver assistance systems. The cooperative perception using the infrastructure sensors can significantly expand the field of view of on-board sensors and improve the accuracy of target tracking. In this paper, we propose a hybrid vehicular perception system that incorporates both received feature-level information from infrastructure sensors and track-level data from the multi-access edge computing server (MEC-Server). An infrastructure-enhanced multiple-model probability hypothesis density is proposed to handle the feature-level data from heterogeneous infrastructure sensors. The problem of kinematic state estimation is improved with the prior information of the road environment. Furthermore, a generic communication interface between the infrastructure sensor and MEC-Server is designed, which allows the object data to have the same notion of locality through the use of a generic object state model. Simulation results show that the presented algorithm provides higher accuracy and reliability after considering the prior information of the road environment.
... The objective of the fusion node is to fuse a global result given all the local estimates, then send this fused estimate back to each local sensor for performance improvement. In this work, it is assumed that the local sensors work in a synchronous manner, which means that both sensor 1 and sensor 2 operate with an identical sampling interval T. In addition, the out-of-sequence problem [28] is omitted for the sake of brevity, that is, the communication links between the sensors and the fusion node are without delay or data loss. Denote Z^i_k ≜ {z^i_1, z^i_2, …, z^i_k} as the historical measurements up to time step k for the i-th sensor. ...
... As shown in Fig. 4, in the proposed framework, we construct a sampling set based on the result of the generalised CI as mentioned in (37). After that, the value of the Copula fused density for each element in the sampling set is calculated according to (24)- (26), where the bivariate Gaussian Copula is calculated based on the distribution function of the pseudo-measurement given in (27) and (28). ...
Article
Full-text available
This study deals with the problem of heterogeneous sensor fusion based on Copula theory and importance sampling. The proposed fusion algorithm is grounded in Copula statistical modelling and Bayesian probabilistic theory. The distinctive advantage of this Copula-based methodology is that it formulates the internal dependency between the local sensors' data, which is usually unknown but essential for accurate track fusion. To this end, a joint distribution of the local sensors' observations is constructed based on Copula functions, and the corresponding fusion rule is derived with a specific correlation term. In addition, a Monte Carlo importance sampling technique is adopted to improve the computational efficiency by drawing fewer random samples from the local estimates to be fused. After that, a procedure of kernel density estimation is applied to learn a Gaussian approximation of the fused density. In the end, extensive Monte Carlo simulations are conducted to evaluate the proposed sensor fusion method in a distributed target-tracking scenario.
... To efficiently accomplish this in complex and dynamic battlefield environments, reconnaissance aircraft utilize advanced sensors and imaging technologies [4], [5]. By comprehensively utilizing various sensor data, such as optical, infrared, and radar data, and combining image restoration and enhancement techniques, the accuracy and real-time performance of target recognition are significantly improved [6], [7], [8]. However, most battlefield targets are highly maneuverable and capable of rapid responses, meaning the images of these high-speed moving targets captured by the aircraft's imaging systems often exhibit motion blur, resulting in unpredictable distortions during the imaging process [9], [10], [11]. ...
Article
Full-text available
In military operations, the detection of enemy targets by reconnaissance aircraft usually results in obvious motion blur in imaging due to the rapid movement or position change of the target, which poses a challenge for accurate target identification. To address the imaging blur issue in high-speed dynamic scenarios and enhance the visual quality of enemy target imagery, this study proposes a blurred image restoration method based on reconnaissance imaging models. First, there is a lack of reliable simulation models and quantitative frameworks to assess the extent of such blurring. To this end, we developed an optical imaging simulation model for imaging systems that incorporates motion blur effects to more accurately reflect real-world scenarios. To fulfill the need for quantitative metrics, we introduce a velocity estimation method that calculates the blur length and accurately measures the degree of blur removal to estimate the target motion velocity to validate the real-world imaging of a reconnaissance aircraft imaging system. Additionally, to address the problem of imaging blurring, we propose a motion deblurring optimization strategy based on the pre-trained DeblurGAN-v2 model, which is specifically designed to restore the clarity of target imaging under high-speed dynamic conditions. The effectiveness of the simulation model and the deblurring strategy is rigorously verified through a series of low-speed UAV tests simulating battlefield conditions. The results show that our equivalent imaging model is able to achieve minimal relative error and accurately reproduce motion-induced blur when detecting a variety of ground targets.
... Generally, perception in adverse time and weather conditions, such as nighttime [2], rainy [3], and foggy [4] environments, is considered one of the main issues to deal with. Apart from these conditions present in actual driving scenarios, ensuring accurate and timely sensing under high-speed motion and diverse lighting environments [5] remains a significant challenge. The integration of multi-modal data sources, such as frame-based cameras, LiDAR, and event-based cameras, is essential for achieving reliable and precise sensing in dynamic traffic environments [6]. ...
Article
Full-text available
The precise perception of the surrounding environment in traffic scenes is an important part of an intelligent transportation system. The event camera could provide complementary information to traditional frame‐based cameras, such as high dynamic range, and high time resolution, in the perception of traffic targets. To improve the precision and reliability of perception as well as facilitate lots of RGB camera‐based studies introduced to event cameras directly, a refined registration method for event‐based cameras and RGB cameras on the basis of pixel‐level region segmentation is proposed, to provide a fusion method at pixel level. A total of eight sequences and a dataset containing 260 typical traffic scenes are contained in the experiment dataset, both selected from DSEC, a traffic event‐based dataset. The registered event image shows a better spatial consistency with RGB images visually. Compared to the baseline, the evaluation indicators, such as the performance of the contrast, the proportion of overlapping pixels, and average registration accuracy have been improved. In the traffic object segmentation task, the average boundary displacement error of our method has decreased and the max decline value has reached 79.665%, compared to the boundary displacement error between ground truth and baseline. These results indicate prospective applications in the perception of intelligent transportation systems combined with event and RGB cameras. The traffic dataset with pixel‐level semantic annotations will be provided soon.
... However, it still has limitations such as low resolution and an inability to classify targets accurately. To address these limitations, multi-source information fusion theory is applied to integrate the advantages of different types of sensors [9,10]. By acquiring information from multiple ...
Article
Full-text available
This study presents a novel multimodal heterogeneous perception cross-fusion framework for intelligent vehicles that combines data from millimeter-wave radar and camera to enhance target tracking accuracy and handle system uncertainties. The framework employs a multimodal interaction strategy to predict target motion more accurately and an improved joint probability data association method to match measurement data with targets. An adaptive root-mean-square cubature Kalman filter is used to estimate the statistical characteristics of noise under complex traffic scenarios with varying process and measurement noise. Experiments conducted on a real vehicle platform demonstrate that the proposed framework improves reliability and robustness in challenging environments. It overcomes the challenges of insufficient data fusion utilization, frequent leakage, and misjudgment of dangerous obstructions around vehicles, and inaccurate prediction of collision risks. The proposed framework has the potential to advance the state of the art in target tracking and perception for intelligent vehicles.
... It is imperative to find a method for generating new datasets to tackle this challenge. Multi-sensor information fusion [41][42][43][44][45][46][47][48][49][50] analyzes and processes the multi-source information collected by sensors and combines it. The combination of multi-source information can be carried out automatically or semi-automatically [51][52][53][54]. ...
Article
Full-text available
Face recognition has been rapidly developed and widely used. However, there is still considerable uncertainty in computational intelligence based on human-centric visual understanding. Emerging challenges for face recognition result from information loss. This study aims to tackle these challenges with a broad learning system (BLS). We integrated two models, IR³C with BLS and IR³C with a triplet loss, to control the learning process. In our experiments, we used different strategies to generate more challenging datasets and analyzed the competitiveness, sensitivity, and practicability of the proposed two models. In the model of IR³C with BLS, the recognition rates for the four challenging strategies are all 100%. In the model of IR³C with a triplet loss, the recognition rates are 94.61%, 94.61%, 96.95%, and 96.23%, respectively. The experimental results indicate that the proposed two models achieve good performance in tackling the considered information-loss challenges in face recognition.
... In our past work, we successfully established a track-to-track multi-sensor data fusion architecture [37]. Under this architecture, we implemented multi-sensor data fusion based on the joint integrated probabilistic data association (JPDA) algorithm and the Kalman filter [38]. ...
Article
Full-text available
The perception module plays an important role in vehicles equipped with advanced driver-assistance systems (ADAS). This paper presents a multi-sensor data fusion system based on the polarization color stereo camera and the forward-looking light detection and ranging (LiDAR), which achieves the multiple target detection, recognition, and data fusion. The You Only Look Once v4 (YOLOv4) network is utilized to achieve object detection and recognition on the color images. The depth images are obtained from the rectified left and right images based on the principle of the epipolar constraints, then the obstacles are detected from the depth images using the MeanShift algorithm. The pixel-level polarization images are extracted from the raw polarization-grey images, then the water hazards are detected successfully. The PointPillars network is employed to detect the objects from the point cloud. The calibration and synchronization between the sensors are accomplished. The experiment results show that the data fusion enriches the detection results, provides high-dimensional perceptual information and extends the effective detection range. Meanwhile, the detection results are stable under diverse range and illumination conditions.
... This is a viable option in such a small system, but in more complex situations, a filter might have to deal with many sensors that have different modalities and frequencies. To allow for better separation and adaptability, the classical solution is to use a Track-To-Track Fusion (T2TF) architecture [22], in which observations are used in one filter and the resulting estimates are used by other filters. Figure 3b illustrates the results of such an architecture compared to the previous one. ...
... In real situations, the sensors combined in an automotive application are usually asynchronous to one another. Aeberhard et al. used the information matrix fusion (IMF) algorithm to get a stable global track [18]. They found that the IMF approach provides similar performance to a centralized Kalman filter in a low-layer fusion. ...
Article
Full-text available
Nowadays, driving safety has become an important topic in the field of intelligent driving. The key step in an intelligent transportation system (ITS) is to detect and track the vehicles on the road through various sensors equipped on the vehicle, while placing the sensors beside the road can be beneficial in the vehicle-road collaborative scenario. In this paper, an efficient and applicable multi-sensor fusion protocol for a vehicle-road collaborative system is proposed, where roadside units (RSUs) are equipped with cameras and radars. To fully use the history information and the multi-source heterogeneous sensing data to improve the reliability of multi-sensor fusion, a Bayesian-based two-layer fusion scheme is further proposed. Moreover, the efficiency and robustness of the proposed scheme are evaluated in a real highway environment in Changsha, China.
... Both versions of RET in [8,14] are for the synchronized-sensor case. This is not realistic in a decentralized architecture [15][16][17]. Therefore, in this paper, we focus on the asynchronous-sensor case. ...
Article
Full-text available
In multi-sensor multi-target tracking systems, different sensors provide their detections at different times due to having different sampling periods or communication delays. Therefore, for track-to-track association algorithms, it is not realistic to assume that sensors are synchronous. In this paper, we propose an asynchronous track-to-track association algorithm based on a reference topology feature. For synchronization of sensor tracks, a one-step, memoryless track propagation scheme is used. The proposed method handles both asynchronous and synchronous sensors. An improved performance is achieved by using the generalized sub-patterns assignment metric to calculate the association costs between two reference topologies. The proposed algorithm also provides improved performance for the synchronized-sensor case.
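The one-step, memoryless track propagation used here for synchronization amounts to predicting each track forward to a common fusion time with its motion model. A minimal sketch under an assumed constant-velocity model follows; the state layout and noise parameters are illustrative, not taken from the paper.

```python
import numpy as np

def propagate_track(x, P, dt, q=1.0):
    """Predict a [position, velocity] track forward by dt seconds
    under a constant-velocity model (one-step, memoryless propagation
    to a common fusion time).
    """
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    # Discrete white-noise-acceleration process noise with intensity q.
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])
    x_pred = F @ x                 # propagate the mean
    P_pred = F @ P @ F.T + Q       # propagate and inflate the covariance
    return x_pred, P_pred
```

After all local tracks are propagated to the same timestamp, any synchronous association metric (such as the reference-topology cost above) can be applied unchanged.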
... Nowadays, in practical applications, obtaining environmental information benefits from the fusion of multiple sensor types. [13][14][15] However, this also increases the complexity and difficulty of information processing, since information sensed by different sensors needs to be integrated. Some researchers are also turning their attention to other types of sensors. ...
Article
Full-text available
Recently, the working scenes of robots have become diverse and complex as robotic control technology gradually matures. The challenge of robot adaptability emerges, especially in complicated and unknown environments. Among the numerous studies on improving the adaptability of robots, obstacle avoidance, which aims at avoiding collisions between the robot and the external environment, has drawn much attention. Compared with global planning, which requires known environmental information, local obstacle avoidance is a promising method because the environment may be dynamic and unknown. This study reviews research progress on local obstacle avoidance methods for wheeled mobile robots (WMRs) in complex unknown environments over the last 20 years. Sensor-based obstacle perception and identification is first introduced. Then, obstacle avoidance methods related to WMRs' motion control are reviewed, mainly including artificial potential field (APF)-based, population-involved metaheuristic-based, artificial neural network (ANN)-based, fuzzy logic (FL)-based, and quadratic optimization-based methods. Next, the relevant research on Unmanned Ground Vehicles (UGVs) is surveyed. Finally, conclusions and prospects are given. Appropriate obstacle avoidance methods should be chosen based on the specific requirements or criteria. At present, effective fusion of multiple obstacle avoidance methods is emerging as a promising direction.
... Likewise, numerous engineers and researchers have developed efficient technologies to make driving easy and safe for drivers by improving advanced driver assistance systems (ADASs). The main goal of an ADAS is to assist drivers in recognizing relevant aspects of the surroundings, making driving decisions in certain situations, and controlling the vehicle if required [1][2][3][4][5]. Thus, an ADAS must first accurately perceive the surrounding environment. ...
Article
Over the past few years, numerous technologies have emerged to enable safe and convenient driving. However, there still exist various problems autonomous vehicles must overcome, and precise detection and perception of the surrounding environment is the essential foundation for overcoming them. Consequently, many sensor fusion algorithms have been developed to handle more complex situations, with sensor manufacturers also making strenuous efforts to enhance sensor performance. Although Light Detection and Ranging (LiDAR) sensors generally outperform other sensor types, they remain prohibitively expensive from the perspective of car manufacturers. Therefore, camera and radar sensors have been enhanced and are starting to provide free-space information, similar to LiDAR sensor data and somewhat different from the target information they have previously provided. The aim of this paper is to utilize the free-space information to improve track information for vehicles. We employ a probability model with two occupancy grid map (OGM) types, based on Bayesian theory and Dempster-Shafer theory, to classify free-space information states and to handle free-space information efficiently. The final output from the proposed algorithm is the target vehicle's compensated track. Experimental results verify superior performance compared with non-compensated algorithms.
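The Bayesian-theory-based OGM mentioned in this abstract is commonly maintained in log-odds form, where each observation adds a constant increment per cell. The sketch below assumes illustrative inverse-sensor-model constants; the exact model used in the paper may differ.

```python
import numpy as np

def update_ogm(logodds, free_cells, occ_cells, l_free=-0.4, l_occ=0.85):
    """Bayesian occupancy-grid update in log-odds form.

    logodds    : 2-D grid of log-odds occupancy values
    free_cells : (row, col) indices observed free in this scan
    occ_cells  : (row, col) indices observed occupied in this scan
    """
    for r, c in free_cells:
        logodds[r, c] += l_free   # evidence for "free"
    for r, c in occ_cells:
        logodds[r, c] += l_occ    # evidence for "occupied"
    return logodds

def occupancy_prob(logodds):
    """Recover P(occupied) from log-odds."""
    return 1.0 / (1.0 + np.exp(-logodds))
```

Log-odds addition makes repeated observations of the same cell commutative and numerically stable; unobserved cells stay at log-odds 0, i.e., P(occupied) = 0.5.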
... Systems that ignore such correlation would be expected to have poor performance. Many methods have been developed to address the correlation problem (Bar-Shalom 1981; Aeberhard et al. 2012). However, such methods still cannot achieve better performance than the low-level "scan" fusion method. ...
Chapter
This chapter presents an alternative framework to the traditional centralised Kalman filtering (CKF) approach for implementing the optimal state estimation algorithm in support of multisensor integration. The data fusion algorithm is implemented via a series of transformations of vectors in the so-called information space (iSpace). This chapter describes how the conventional decentralised Kalman filtering (DKF) algorithm can be derived using a unified approach. A new global optimal fusion (GOF) algorithm is derived using the iSpace approach. Two case studies are presented to illustrate applications of the multisensor algorithms for GNSS/Locata/INS and GNSS/WiFi/INS integration.
... The authors of [16] used the joint probabilistic data association algorithm to complete the association of different sensor tracks and the state estimation of the fusion track. The authors of [17] proposed a new sensor fusion method based on an information matrix, but it did not involve data association. ...
Article
Full-text available
Accurate target detection is the basis of normal driving for intelligent vehicles. However, the sensors currently used for target detection each have defects at the perception level, which can be compensated for by sensor fusion technology. In this paper, the application of sensor fusion technology to intelligent vehicle target detection is studied with a millimeter-wave (MMW) radar and a camera. A target-level fusion hierarchy is adopted, and the fusion algorithm is divided into two tracking processing modules and one fusion center module based on a distributed structure. The measurement information output by the two sensors enters the tracking processing modules and, after processing by a multi-target tracking algorithm, local tracks are generated and transmitted to the fusion center module. In the fusion center module, a two-level association structure is designed based on regional collision association and weighted track association. The association between the two sensors' local tracks is completed, and a non-reset federated filter is used to estimate the state of the fused tracks. The experimental results indicate that the proposed algorithm can complete track association between the MMW radar and the camera, and that the fused track state estimation performs excellently.
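As a rough illustration of the gating that typically precedes such track-to-track association, local tracks from two sensors are often compared with a chi-square test on their Mahalanobis distance. This is a generic sketch, not the authors' algorithm; it assumes zero cross-covariance between the tracks, and the threshold and track values are made up:

```python
import numpy as np

def gate_tracks(x_a, P_a, x_b, P_b, gate=9.21):
    """Chi-square gating of two local track estimates. With the
    cross-covariance assumed zero, the track difference has covariance
    P_a + P_b; gate=9.21 is the 99% chi-square quantile for 2 degrees
    of freedom (an illustrative choice)."""
    d = x_a - x_b
    d2 = float(d @ np.linalg.solve(P_a + P_b, d))
    return d2 <= gate, d2

# Radar and camera tracks of an (x, y) position; values are made up
x_radar = np.array([10.0, 5.0]); P_radar = np.diag([1.0, 1.0])
x_cam   = np.array([10.5, 4.8]); P_cam   = np.diag([2.0, 2.0])
associated, d2 = gate_tracks(x_radar, P_radar, x_cam, P_cam)
```

Pairs passing the gate would then proceed to the weighted association and fusion stages.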
... The centralized fusion [295]- [297] fuses the raw data or extracted features from each sensor and processes the combined information by centralized processing, as illustrated in Fig. 4. Compared to the decentralized fusion, the centralized fusion has less information loss and is suitable for non-orthogonal sensors, but it has high data volume [293], [294]. The decentralized fusion [298], [299] fuses the output information from each sensor, and the output information is processed separately by each sensor, as illustrated in Fig. 5. The decentralized fusion can achieve more suitable processing in each sensor and better robustness to malfunction of any sensors, but it has more information loss, compared to the centralized fusion [293], [294], [300]. ...
... Also, lane markings within a crossroad often do not exist or do not represent the real driving behavior of the passing vehicles. Other fusion methods, such as those shown by Elfring et al. [5] or Aeberhard et al. [6], may also be applied. A method that is more likely to remain up to date is shown by Schrödl et al. [7]. ...
Conference Paper
The future of individual mobility lies in intelligent and automated transportation. Current highly automated driving systems rely heavily on accurate map information. To ensure that these systems are safer than human drivers, aggregated knowledge of the infrastructure is indispensable. A lane-accurate road map is therefore a key enabler for highly automated driving systems, as well as for novel location-based services. To generate such a map, current approaches in research are based on single sensor vehicles equipped with lidar and camera sensors. However, this approach has significant deficiencies in both scalability and the ability to adapt to local changes. To achieve the required adaptability, high penetration of the complete road network is necessary, and such high coverage can only be obtained with an extremely large vehicle fleet. Making use of only the basic sensors of current series vehicles – including wheel speed, yaw rate, and turning angle sensors – provides exactly this advantage. To avoid the bias induced by inaccurate GPS measurements, their usage is strictly constrained to limited situations. With this method, we were able to cover individual vehicle traces across the complete road network of a large city. Road intersections are one of the main components of current road maps. In this paper, we therefore focus on the construction of crossroad maps by identifying and localizing all individual lane courses. All covered crossroads are unique and hence require an automated and unsupervised mapping algorithm. We analyzed more than 30000 individual vehicle traces recorded by a diverse set of vehicle types, driven by a representative group of different drivers. We prove the feasibility of our approach by comparing the results with ground truth data of several real-world crossroads on satellite images.
The paper furthermore shows that even very basic sensor information reveals road geometries with lane-level accuracy, usable for creating a lane-accurate road map.
... At present, there are three major solutions: Multiple Hypothesis Tracking (MHT) [3]-[5], Probabilistic Data Association (PDA) [6], [7], and Random Finite Sets (RFS) [8]. These approaches are well developed and have been widely applied in fields such as sonar [9], [10], radar [11], biology [12], autonomous vehicles [13], [14], automotive recognition [15], and sensor networks [16], [17]. ...
Article
Full-text available
Because distributed multisensor detection systems let sonar detect farther and estimate the target state more accurately, distributed multisensor data fusion algorithms are widely applied to sonar detection systems. However, most asynchronous algorithms focus on converting asynchronous data fusion into synchronous data fusion, while sonar detection systems suffer from more serious asynchronous data problems (such as more severe random delays and packet loss) than radar and other fields. The traditional asynchronous fusion methods therefore have limitations. For sparse targets, this paper proposes a novel asynchronous multi-sonar data integration approach, in which the Gaussian Mixture Probability Hypothesis Density (GMPHD) filter is used to filter clutter at each local sonar sensor. Then, a Gaussian Mixture Model (GMM) is used to model the asynchronous data over a period of time. Finally, all local sonar detection data are integrated into a surveillance region image to help detect targets. Several simulation tests and a sea trial are presented to evaluate the performance of this approach.
... Information filtering, which is essentially a Kalman filter expressed in terms of the inverse of the covariance matrix, has been widely used in multiple sensor estimation [10]. A novel approach to track-to-track fusion in a high-level sensor data fusion architecture for automotive surround environment perception using information matrix fusion (IMF) is presented in [11]. The paper [12] utilizes the iterative joint-integrated probabilistic data association technique for multisensor distributed fusion systems. ...
Article
Full-text available
In order to save radar resources and obtain better low-probability-of-intercept performance in the network, a novel radar selection method for target tracking based on improved interacting multiple model information filtering (IMM-IF) is presented. First, a model of the relationship between radar resources and tracking accuracy is built, and the IMM-IF method is presented. Then, the information gain of every radar is predicted according to the IMM-IF, and the radars with the largest information gain are selected to track the target. Finally, weight parameters for the tracking fusion are designed after predicting the error covariance of every working radar, in order to improve the IMM-IF. Simulation results show that the proposed algorithm not only saves many more radar resources than other methods but also has excellent tracking accuracy.
Article
The importance of precise localization technology for the autonomous driving of industrial mobile robots is steadily increasing. Notably, research into enhancing accuracy and robustness by fusing multiple systems is actively conducted rather than relying on a single localization system. We highlight the use of track-to-track (T2T) fusion, which takes the localization results of independent systems as input. This approach eliminates system adjustments with sensor changes, offering benefits for industrial mobile robots. However, existing T2T-based fusion methods suffer from overlooking slowly changing biases that can gradually increase over time due to sensor drift errors, map biases, etc. Since biases have different values and frequencies for each system, they are challenging for conventional T2T methods to handle. This article proposes a localization fusion framework that tackles such slowly varying biases. First, estimating the distinct biases inherent to each system poses a challenging problem; therefore, we align them to a single common bias. Second, localization estimates with a common bias are fused using a split covariance intersection filter, one of the T2T fusion techniques, considering the independence and correlation within each system to ensure fusion consistency. The proposed method has been validated in both simulation and real-world environments, confirming superior performance compared to existing algorithms.
Article
Cooperative perception techniques empower connected and automated vehicles (CAVs) perception capabilities through Vehicle-to-Everything (V2X) communication. However, this advancement introduces a significant influx of information within CAVs, posing a new challenge in managing potentially intricate aspects of asynchrony and correlation in this information landscape. In this context, this paper proposes a cooperative perception algorithm that considers localization uncertainty and information correlation and asynchrony. Specifically, a hierarchical split covariance intersection (SCI) approach is proposed to efficiently fuse the information from local sensors and connected devices. To bridge the reference disparities of information among CAVs, we incorporate localization uncertainty into the connected information fusion through a coordinate transformation approach based on the cubature rule, which unifies the references. Then, the covariance boundedness of the whole proposed algorithm is theoretically analyzed, demonstrating to some extent the safety guaranteed by our algorithm in practical applications. Finally, we build a high-fidelity driving simulator and collected real trajectory data from 80 drivers. The simulation and driver data testing results show the effectiveness and superiority of the proposed algorithm.
Article
Full-text available
In an automated driving system, while the vehicle's external environment is being perceived, there is another perception component silently at work in the system: the positioning module. This paper explores the role of SLAM (Simultaneous Localization and Mapping) technology in autonomous vehicles, particularly in automatic lane-change behavior prediction and environment perception. It emphasizes the limitations of traditional methods and the advantages of SLAM, especially visual SLAM, for accurate positioning and mapping. The discussion covers SLAM fundamentals, challenges, and the significance of visual SLAM's higher perception ability. Case studies on Waymo and Tesla illustrate the application of visual SLAM in achieving high-precision navigation and lane-change prediction. The paper concludes by highlighting future research directions to enhance the intelligence and adaptability of automated lane-change systems through advances in AI and sensor technology, alongside optimizing SLAM algorithms for reliable driving in various scenarios.
Article
Cooperative perception techniques incorporating Vehicle-to-Everything (V2X) information offer new possibilities for enhancing the perception capability of automated vehicles (AVs), but also present a new challenge: how to maximize the benefits of connected information with a limited communication burden. In this context, this paper proposes a novel cooperative perception solution based on consensus theory to improve the accuracy and consistency of the detection and tracking of non-connected targets by combining V2X information. Given the common multi-sensor configurations of AVs, we design a consensus-based distributed cooperative perception (DCP) algorithm in a framework of multi-layer information fusion for local sensors and connected nodes, and give a nonlinear form based on cubature rules to accommodate more accurate nonlinear system models and nonlinear sensors. Considering the high maneuverability of vehicle targets, we then extend the DCP algorithm to a multi-model form (DMMCP) to handle the model uncertainty of maneuvering targets by combining prior knowledge of multiple models, and give a calculation method for the model probability and its average consensus in the context of multiple local sensors. Besides, a new consensus information weight strategy and the properties resulting from different consensus information weights are discussed. The simulation results demonstrate the superiority of both our algorithms over traditional algorithms in accuracy and consistency; moreover, the DMMCP algorithm, which takes model uncertainty into account, shows better performance than DCP in complex conditions.
Thesis
Full-text available
Autonomous driving is, besides electrification, currently one of the most competitive areas in the automotive industry. A substantially challenging part of autonomous vehicles is the reliable perception of the driving environment. This dissertation is concerned with improving the sensor-based perception and classification of the static driving environment. In particular, it focuses on recognizing road boundaries and obstacles on the road, which is indispensable for collision-free automated driving. Moreover, an exact perception of static road infrastructure is essential for accurate localization in a priorly built, highly precise navigation map, which is currently commonly used to extend the environment model beyond the limited range of sensors. The first contribution is concerned with environment sensors with a narrow vertical field of view, which frequently fail to detect obstacles with a small vertical extent from close range. As an inverse beam sensor model infers free space if there is no measurement, those obstacles are deleted from an occupancy grid even though they have been observed in past measurements. The approach presented here explicitly models those errors using multiple hypotheses in an evidential grid mapping framework that requires neither a classification nor a height estimate of obstacles. Furthermore, the grid mapping framework, which usually assumes mutually independent cells, is extended with information from neighboring cells. The evaluation in several freeway scenarios and a challenging scene with a boom barrier shows that the proposed method is superior to evidential grid mapping with an inverse beam sensor model. The second contribution approaches a common shortcoming of occupancy grid mapping. Multi-sensor fusion algorithms, such as a Kalman filter, usually perform obstacle association and gating to improve the obstacle position if multiple sensors detected it. However, this strategy is not common in occupancy grid fusion.
In this dissertation, an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor detected the same obstacle and derived free space. The quantitative evaluation with an exact navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid mapping. Whereas the first two contributions were concerned with multi-sensor fusion approaches for collision avoidance, the third uses occupancy grids for situation interpretation. In particular, this work proposes to use occupancy maps to classify the driving environment into the categories freeway, country or rural road, parking area, and city street. Identifying the current road type is essential for autonomous driving systems designed for limited environment types. Inspired by the success of deep learning approaches in image classification, end-to-end Convolutional Neural Networks are compared to Support Vector Machines trained on hand-crafted features. Two novel CNN architectures for occupancy-grid-based situation classification, designed for embedded applications with limited computing power, are proposed. Furthermore, the occupancy-grid-based classification is fused with camera-image-based classification, and the road type probabilities are recursively estimated over time with a Bayes filter. The evaluation of the approaches on an extensive data set yielded accuracies of up to 99%.
Article
Accurately identifying and tracking different types of vehicles is the basis of safe driving for intelligent vehicles. Because of the defects of traditional rule-based radar-camera association methods, a vehicle detection and tracking method based on the local trajectory information of radar and camera targets is proposed. The local trajectory information of radar and camera targets is obtained through data preprocessing. Then, a two-layer data association structure combining spatial location association and local trajectory association is designed. First, a direct linear transformation is used to align the coordinates: the radar projection coordinates define the region of interest for the camera target, and an initial screening of target associations is carried out. Then, dynamic time warping is used to compute the local trajectory similarity for targets that satisfy the spatial position relationship, in order to decide the final target association. Finally, a federated filter is used to fuse the successfully associated targets. Real-vehicle test results show that the proposed algorithm improves positioning accuracy by fusing vehicle trajectory information.
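The dynamic time warping used for the local trajectory similarity can be sketched with the classic textbook recurrence. This is a generic illustration on 1-D trajectories, not the authors' implementation, and the example signals are made up:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D trajectories:
    D[i, j] accumulates the cheapest alignment cost of prefixes
    a[:i] and b[:j], allowing stretch/compression in time."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same motion sampled with a small temporal offset stays close
# under DTW, whereas a different target's trajectory does not.
t = np.linspace(0, 2 * np.pi, 50)
traj_radar  = np.sin(t)
traj_camera = np.sin(t + 0.2)   # slight time offset (illustrative)
traj_other  = np.cos(t) + 2.0   # a different target (illustrative)
```

DTW's tolerance to time offsets is what makes it suited to comparing tracks from sensors with different latencies and sampling instants.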
Article
Faced with the cost limitations and 360-degree fusion and tracking requirements of production intelligent vehicles, this paper proposes a low-cost hybrid multi-sensor fusion method that provides flexibility and scalability for multi-level information input and different sensor configurations. By combining radar, lane detection, and ego-vehicle information, we design a centralized fusion algorithm for 360-degree object tracking, which provides a low-computation multi-sensor association algorithm through a scheme of local-sensor-to-global-track association, and we prove an optimal centralized estimator under the condition of multi-coordinate-system Doppler information. To make the sensor fusion system compatible with multi-level information input and to enhance redundancy for improved safety, a distributed fusion algorithm based on the covariance intersection method is also designed as part of the fusion framework for the secondary fusion of high-level information. We validate the feasibility of our fusion method on an industrial-grade controller using only 10% of its computing power and build a ground-truth system to evaluate the method quantitatively; the results show that our fusion method performs well on 360-degree object tracking, guarantees stable object IDs, and greatly improves accuracy, especially in sensor boundary regions. Furthermore, we test and evaluate the performance of our fusion method in a variety of complex open-road scenarios, including expressway, curved-road, and lane-fading scenarios.
Article
Multi-radar data fusion techniques are important for self-driving vehicles to better perceive the environment. In the process of obtaining target locations and constructing environmental models, multi-radar data fusion can be regarded as a homogeneous time-series data fusion problem, commonly solved by weighted fusion methods. However, when most sensor data are biased, especially when the data are all larger or all smaller than the accurate values at the same time, sensor weights defined by sensor consistency and sensor stability cannot reflect the real reliability of the sensors. Thus, in this study, the correlation change between different observed parameters of a sensor, calculated using the shape-based distance, is leveraged together with sensor consistency and sensor stability to judge the reliability of the sensors and calculate their weights. An experimental study is carried out on the nuScenes dataset, using a multi-radar system consisting of one LIDAR and two millimeter-wave radars. When most sensor data are biased, the relative error and the root-mean-square error of the proposed method are significantly lower than those of the arithmetic averaging method, the trimmed mean method, and the weighted fusion method that considers only sensor consistency and stability. Under other conditions, the fusion accuracy is also improved.
Article
This paper presents and evaluates a unified cooperative perception framework that employs vehicle-to-vehicle (V2V) connectivity. At the core of the framework is a decentralized data association and fusion process that is scalable with respect to participation variances. The evaluation considers the effects of the communication losses in the ad-hoc V2V network and the random vehicle motions in traffic by adopting existing models along with a simplified algorithm for individual vehicle's on-board sensor field of view. Furthermore, a multi-target perception metric is adopted to evaluate both the errors in the estimation of the motion states of vehicles in the surrounding traffic and the cardinality of the fused estimates at each participating node/vehicle. The extensive analysis results demonstrate that the proposed approach minimizes the perception metric for a much larger percentage of the participating vehicles than a baseline approach, even at modest participation rates, and that there are diminishing returns in these benefits. The computational and data traffic trade-offs are also analyzed.
Preprint
Environment perception, as the essential functionality of advanced driver assistance systems (ADAS) and autonomous driving (AD) systems, should be designed to understand the surrounding environment accurately and efficiently. Automotive radar, the only sensor able to sense range information in almost any weather condition at comparably low cost, is a key factor in meeting these requirements of environment perception. In this paper, we go through every aspect required for designing an automotive-radar-based environment perception module in detail, summarize the most relevant papers published recently and in years past, and highlight the most significant technologies that provide the foundation of the architecture of an environment perception system.
Article
Full-text available
The time interval of observational data changes irregularly because of differences in sensor sampling rates, communication delays, and the target occasionally leaving a sensor's observation region. These asynchronous observation data problems greatly reduce the tracking accuracy of multi-sensor systems. Asynchronous data fusion is therefore more practical than synchronous data fusion, and worthier of study. By establishing an asynchronous track fusion model with irregular observation intervals and combining it with Track Quality with Multiple Model (TQMM), an asynchronous track fusion algorithm with information feedback is proposed, in which the TQMM is used for weight allocation to improve the performance of the asynchronous multi-sensor fusion system. Simulation results show that the algorithm has better tracking performance than other algorithms, effectively solving this kind of track-to-track fusion problem for asynchronous sensors.
Article
Trajectory planning of an automated vehicle requires behavior knowledge of preceding target vehicles (PTVs), and lateral states such as lateral velocity and yaw rate are key enablers for precise behavior description. However, lateral states of a PTV can hardly be measured directly by common onboard sensors. In addition, although these states can be transmitted via V2V, the accuracy is limited by the state holding process during the communication interval. Aiming at improved PTV lateral states, this study considers the estimation of these states using visual and V2V measurements, and proposes a three-stage fusion architecture consisting of input estimator, heterogeneous model-based local estimators and fusion center. Specifically, to cope with the low rate of the received steering angle, the input estimator is constructed to obtain high rate control inputs. In the local estimator design, unlike conventional modeling problems mixing control input errors with model errors in process noise, this article separates them and develops a model adaption algorithm to compensate the time-varying covariance of the errors of the estimated control inputs. Then, to fuse the heterogeneous local estimates in the fusion center, a sub-full model-based information matrix filter is designed to address this specific heterogeneous fusion problem, in which the local models are treated as sub-models of a common full model. Hardware-in-the-loop experimental results show that the proposed method gives more accurate estimates in comparison with other approaches.
Chapter
Full-text available
Asynchronous data fusion is more practical than synchronous data fusion. A model of track-to-track fusion for this case is established, the concept of Track Quality with Multiple Model (TQMM) is put forward, and a data fusion algorithm is proposed in which the TQMM is used to assign weights, improving tracking precision in asynchronous multi-sensor data fusion systems. Simulation results show that the algorithm has better tracking performance than the original algorithms.
Article
Object tracking is crucial for planning safe maneuvers of mobile robots in dynamic environments, in particular for autonomous driving with surrounding traffic participants. Multi-stage processing of sensor measurement data is thereby required to obtain abstracted high-level objects, such as vehicles. This also includes sensor fusion, data association, and temporal filtering. Often, an early-stage object abstraction is performed, which, however, is critical, as it results in information loss regarding the subsequent processing steps. We present a new grid-based object tracking approach that, in contrast, is based on already fused measurement data. The input is thereby pre-processed, without abstracting objects, by the spatial grid cell discretization of a dynamic occupancy grid, which enables a generic multi-sensor detection of moving objects. On the basis of already associated occupied cells, presented in our previous work, this paper investigates the subsequent object state estimation. The object pose and shape estimation thereby benefit from the freespace information contained in the input grid, which is evaluated to determine the current visibility of extracted object parts. An integrated object classification concept further enhances the assumed object size. For a precise dynamic motion state estimation, radar Doppler velocity measurements are integrated into the input data and processed directly on the object-level. Our approach is evaluated with real sensor data in the context of autonomous driving in challenging urban scenarios.
Article
Cooperative space object tracking using a sensor network plays an important role in space situational awareness and improves the accuracy, robustness, and dependability of space object tracking over a single sensor. However, different sensors may have different sampling rates and may work asynchronously. It is difficult to fuse these asynchronous measurements together and track space objects in time using traditional consensus-based filters, which require synchronous measurements. To overcome this restriction, a universal Kalman consensus filter (UKCF) is proposed. Based on the state transition matrix, each sensor can fuse the received information within a measurement period and track the object over time. Sensors with arbitrary sampling rates and working times can be incorporated into the cooperative tracking system. Optical observations are an efficient way to track space objects, especially medium- or high-orbit objects. The ground-based and space-based optical (SBO) tracking models are established first. Then, a centralized fusion algorithm for an asynchronous sensor network is deduced. Based on this algorithm, the topology for an asynchronous sensor network is established and the UKCF is produced. To demonstrate its performance, cooperative tracking scenarios for a geosynchronous object that use SBO sensors with different working times and use both SBO and ground-based optical sensors with different sampling rates are simulated. As shown in the paper, the UKCF, which has a similar performance relative to the KCF but expands its application range, is more suitable for real cooperative space object tracking.
Article
Full-text available
Current driver assistance functions, such as Active Cruise Control or Lane Departure Warning, are usually composed of independent sensors and electrical control units for each function. However, new functions will require more sensors in order to increase the observable area and the robustness of the system. Therefore, an architecture must be realized for the fusion of data from several sensors. The goal of this paper is to give a brief overview of different types of sensor data fusion architectures, and then propose an architecture that can be best realized, in performance and in practice, for future advanced driver assistance systems. The proposed architecture is then demonstrated on a test vehicle designed for highly automated driver assistance systems, such as the emergency stop assistant.
Article
Full-text available
This article presents the architecture of Junior, a robotic vehicle capable of navigating urban environments autonomously. In doing so, the vehicle is able to select its own routes, perceive and interact with other traffic, and execute various urban driving skills including lane changes, U-turns, parking, and merging into moving traffic. The vehicle successfully finished and won second place in the DARPA Urban Challenge, a robot competition organized by the U.S. Government. © 2008 Wiley Periodicals, Inc.
Chapter
Full-text available
In exteroceptive automotive sensor fusion, sensor data are usually only available as processed, tracked object data and not as raw sensor data. Applying a Kalman filter to such data leads to additional delays and generally underestimates the fused objects’ covariance due to temporal correlations of individual sensor data as well as inter-sensor correlations. We compare the performance of a standard asynchronous Kalman filter applied to tracked sensor data to several algorithms for the track-to-track fusion of sensor objects of unknown correlation, namely covariance union, covariance intersection, and use of cross-covariance. For our simulation setup, covariance intersection and use of cross-covariance turn out to yield significantly lower errors than a Kalman filter at a comparable computational load.
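Of the fusers compared above, covariance intersection has a particularly compact closed form; below is a minimal sketch with a fixed weight `w=0.5` (in practice, `w` is usually chosen by optimization, e.g., minimizing the trace or determinant of the fused covariance). The example estimates are made up:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=0.5):
    """Fuse two estimates whose cross-correlation is unknown.
    The convex combination of information matrices guarantees a
    consistent (non-overconfident) fused covariance for any w in (0, 1)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
    x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
    return x, P

# Two tracks of the same target from different sensors (values made up)
x1 = np.array([1.0, 0.0]); P1 = np.diag([4.0, 1.0])
x2 = np.array([1.2, 0.1]); P2 = np.diag([1.0, 4.0])
x_f, P_f = covariance_intersection(x1, P1, x2, P2)
```

The appeal for track-to-track fusion is exactly what the chapter above measures: consistency without tracking the unknown inter-sensor correlations, at the cost of a more conservative fused covariance than an optimal fuser.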
Article
Full-text available
In this paper, we present a vehicle safety application based on data gathered by a laser scanner and two short-range radars that recognize unavoidable collisions with stationary objects before they take place to trigger restraint systems. Two different software modules that perform the processing of raw data and deliver a description of the vehicle's environment are compared. A comprehensive experimental evaluation based on relevant crash and noncrash scenarios is presented.
Conference Paper
Full-text available
This paper focuses on the problem of multiple-sensor data fusion systems onboard moving vehicles. The proposed application uses distributed architectures that operate with sensors or sensor systems and provide redundant or complementary information about moving objects. This architecture ensures a modular approach allowing exchangeability and benchmarking using the output of individual trackers, whereas the fusion algorithm gives a solution to the track management problem and the coverage of wide perception areas. The test case is a multi-sensor configuration which monitors the rear and lateral areas of traffic. Results from simulations and real data show that the given approach allows maintenance of object IDs and recognition of the vehicle environment with acceptable rates of false alarms and misses.
Conference Paper
Full-text available
This paper discusses three algorithms for the problem of asynchronous Track-to-Track Fusion (AT2TF) with track delays, where the information configuration of T2TF with partial information feedback (AT2TFpf) is used. This is the most practical configuration for AT2TF with time delays, since communication delays make full information feedback very complicated. The first algorithm is the optimal memoryless AT2TF under the Linear Gaussian (LG) assumption (denoted AT2TFpfwoMopt), which is also the linear minimum mean square error (LMMSE) fuser without the Gaussian assumption. The second is Information Matrix Fusion for asynchronous T2TF (denoted AT2TFpfIMF), which is an extension of the synchronous IMF algorithm. Both algorithms are novel. The third is an approximate AT2TF algorithm proposed by Novoselsky (denoted AT2TFpfAppr). Among the three algorithms, only the first theoretically guarantees the consistency of the fuser; the latter two involve certain degrees of heuristics. Simulation results show that, for the scenarios considered, AT2TFpfIMF is also consistent and has a similar level of tracking accuracy to AT2TFpfwoMopt. On the other hand, AT2TFpfAppr has consistency problems. Furthermore, due to the simplicity of AT2TFpfIMF compared to AT2TFpfwoMopt, it is an appealing candidate for practical applications.
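The core idea of information matrix fusion, which the asynchronous algorithm above extends, can be sketched in the linear case: the global track absorbs only the information a local track has gained since its last communicated state, which avoids double counting. A scalar sketch with illustrative numbers and our own variable names, not the paper's implementation:

```python
import numpy as np

def imf_update(xg, Pg, xl, Pl, xl_prev, Pl_prev):
    """One information matrix fusion step (linear case):

        Pf^-1      = Pg^-1      + (Pl^-1 - Pl_prev^-1)
        Pf^-1 * xf = Pg^-1 * xg + (Pl^-1 * xl - Pl_prev^-1 * xl_prev)

    Only the *new* local information (current minus previously
    communicated) is added to the global track."""
    Ig = np.linalg.inv(Pg)
    Il = np.linalg.inv(Pl)
    Il_prev = np.linalg.inv(Pl_prev)
    Pf = np.linalg.inv(Ig + Il - Il_prev)
    xf = Pf @ (Ig @ xg + Il @ xl - Il_prev @ xl_prev)
    return xf, Pf

# Illustrative scalar example: a global track, and a local track last
# communicated as (10.2, var 3.0) that has since improved to (10.1, var 1.5).
xg, Pg = np.array([10.0]), np.array([[2.0]])
xl_prev, Pl_prev = np.array([10.2]), np.array([[3.0]])
xl, Pl = np.array([10.1]), np.array([[1.5]])
xf, Pf = imf_update(xg, Pg, xl, Pl, xl_prev, Pl_prev)
```

The fused covariance is smaller than the global one because genuinely new information was added, while the previously communicated information is subtracted out rather than counted twice.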
Article
Full-text available
This paper presents the TerraMax vision systems used during the 2007 DARPA Urban Challenge. First, a description of the different vision systems is provided, focusing on their hardware configuration, calibration method, and tasks. Then, each component is described in detail, focusing on the algorithms and sensor fusion opportunities: obstacle detection, road marking detection, and vehicle detection. The conclusions summarize the lessons learned from the development of the passive sensing suite and its successful fielding in the Urban Challenge.
Article
Full-text available
Advanced driver assistance systems aim at improved traffic safety, enhanced comfort and driving pleasure. Sensors perceive the objects surrounding the vehicle and produce an environment description. The assistance systems support the driver by assessing the situation recognized by this vehicle environment description. Current research in the area of advanced driver assistance systems aims at increased functionality. Comfort systems, such as ACC, are expected to support the driver not only in normal driving phases, but also in more complex situations such as traffic jams. Safety systems will trigger collision avoidance or mitigation measures in a number of potential crash configurations and not only in well-defined rear-end collision situations. These future advanced applications impose strong requirements on the sensors and demand novel and complex signal processing algorithms for vehicle detection, tracking and situation assessment. The present work addresses challenging traffic scenarios. Novel sensor signal processing, sensor data fusion and tracking algorithms were developed which provide precise and consistent motion estimates for complex intersection scenarios, lane change maneuvers of vehicles, trucks on neighbouring lanes and highly dynamic situations. A novel emergency brake application offers a general solution to the situation assessment and aims in particular at so far unresolved intersection scenarios.
Article
Full-text available
Track-to-track fusion is an important part in multisensor fusion. Much research has been done in this area. Chong et al. (1979, 1986, 1990), among others, presented an optimal fusion formula under an arbitrary communication pattern. This formula is optimal when the underlying systems are deterministic, i.e., the process noise is zero, or when full-rate communication (two sensors exchange information each time they receive new measurements) is employed. However, in practice, the process noise is not negligible due to target maneuvering, and sensors typically communicate infrequently to save communication bandwidth. In such situations, the measurements from two sensors are not conditionally (given the previous target state) independent due to the common process noise from the underlying system, and the fusion formula becomes an approximate one. This dependence phenomenon was also observed by Bar-Shalom (1981), where a formula was derived to compute the cross-covariance of two track estimates obtained by different sensors. Based on these results, a fusion formula was subsequently derived (1986) to combine the local estimates which took into account the dependency between the two estimates. Unfortunately, the Bayesian derivation made an assumption that is not met. This work points out the implicit approximation made and shows that the result turns out to be optimal only in the ML (maximum likelihood) sense. A performance evaluation technique is then proposed to study the performance of various track-to-track fusion techniques. The results provide performance bounds of different techniques under various operating conditions which can be used in designing a fusion system.
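The fusion formula that accounts for the cross-covariance of the two track estimates (often called the Bar-Shalom-Campo formulas) can be sketched as follows; the numbers are illustrative, not taken from the paper:

```python
import numpy as np

def fuse_with_cross_covariance(x1, P1, x2, P2, P12):
    """Fuse two correlated track estimates using their cross-covariance
    P12 (Bar-Shalom-Campo):

        xf = x1 + (P1 - P12) S (x2 - x1)
        Pf = P1 - (P1 - P12) S (P1 - P12^T)
        S  = (P1 + P2 - P12 - P12^T)^-1
    """
    S = np.linalg.inv(P1 + P2 - P12 - P12.T)
    K = (P1 - P12) @ S
    xf = x1 + K @ (x2 - x1)
    Pf = P1 - K @ (P1 - P12.T)
    return xf, Pf

# Illustrative scalar example: equal track variances with a positive
# cross-covariance caused by the common process noise.
x1, P1 = np.array([5.0]), np.array([[2.0]])
x2, P2 = np.array([5.4]), np.array([[2.0]])
P12 = np.array([[0.8]])
xf, Pf = fuse_with_cross_covariance(x1, P1, x2, P2, P12)
# For comparison: the same fusion with the correlation ignored (P12 = 0).
xf0, Pf0 = fuse_with_cross_covariance(x1, P1, x2, P2, np.zeros((1, 1)))
```

Ignoring the positive correlation yields an optimistically small fused covariance, which is the inconsistency this line of work is concerned with.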
Article
Full-text available
This paper presents a Kalman-based data fusion method applied to obstacle detection in road situations. We describe the principle and the implementation of the fusion process. It is based on tracks produced separately by single sensors. Results with infrared and radar sensors are shown, proving the benefit of this dual approach. Another application with radar and LIDAR is also illustrated.
Conference Paper
Contemporary driver assistance systems exhibit a steadily increasing degree of automation. A novel system, the emergency stop assistant, performs completely autonomous driving in emergency situations. This system takes over control of the vehicle, steers it safely to the roadside and stops if the driver suffers a health emergency such as a heart attack. In order to navigate through traffic, sensors have to gather information concerning the road and the surrounding vehicles. Subsequently, the gathered data is used to analyze and interpret the traffic situation. Depending on the current situation, driving strategies are derived which navigate the vehicle autonomously to the roadside. The information concerning suitable areas to stop the vehicle is obtained from a digital map. This paper presents a novel method of situation interpretation based on static and dynamic state space assessment. Furthermore, the decision-making process in driving strategies using a combination of hybrid automata and decision trees will be presented.
Article
In the project Stadtpilot, introduced in [1], the object-based environment perception system developed by the Urban Challenge team CarOLO at Technische Universität Braunschweig, as presented in [2], has been enhanced. The context of this new project is more challenging, as it now includes public traffic on large inner-city loops. Other vehicles are described by the project's sensor data fusion as an open polyline (contour) with many points. Some of these points lie on straight lines, or they represent contour noise that does not contribute to the object's description. These extra points complicate effective tracking and deform the contour of the object hypothesis. Because of the dense traffic, and due to the change in environment type, surrounding vehicles very often cause a change of view. This results in few or no measurement updates for some points of the contour and can deform it. In an effort to overcome this problem, the contour-estimating Kalman filter presented in [3] has been enhanced with improved point-update algorithms as well as a contour classifier based upon evidence theory. These enhancements allow a reduction in the number of points used. Changes of view due to passing traffic are better identified because the classifier identifies the most likely shape explicitly.
Article
A system, particularly a decision-making concept, that facilitates highly automated driving on freeways in real traffic is presented. The system is capable of conducting fully automated lane change (LC) maneuvers with no need for driver approval. Due to the application in real traffic, a robust functionality and the general safety of all traffic participants are among the main requirements. Regarding these requirements, the consideration of measurement uncertainties demonstrates a major challenge. For this reason, a fully integrated probabilistic concept is developed. By means of this approach, uncertainties are regarded in the entire process of determining driving maneuvers. While this also includes perception tasks, this contribution puts a focus on the driving strategy and the decision-making process for the execution of driving maneuvers. With this approach, the BMW Group Research and Technology managed to drive 100% automated in real traffic on the freeway A9 from Munich to Ingolstadt, showing a robust, comfortable, and safe driving behavior, even during multiple automated LC maneuvers.
Article
Integration of wireless communications in vehicles presents the greatest challenge today in the field of vehicular safety, efficiency and comfort. Data fusion should be able to benefit from this additional source of information. This involves the association and fusion of received vehicle positions and other relevant data with the targets tracked by the onboard sensors' tracking systems. This paper describes a fusion approach for using wirelessly transmitted data in a vehicle perception system, with a focus on data association and tracking methodology. The results were extracted using two vehicles equipped with wireless communication hardware. Vehicle perception systems usually consist of a set of onboard sensors. These sensors either provide ego-vehicle information such as velocity or yaw rate, or perceive the vehicle's surrounding environment, such as lanes and adjacent objects. Object sensing is performed by active sensors such as radars and laser scanners or by passive sensors such as cameras. Unfortunately, all of these sensors pose certain limitations on detection range and field of view. Multi-sensor topologies overcome these drawbacks to some extent, but at the price of increased complexity. Modern vehicular safety applications pose requirements that sometimes surpass the capabilities of a vehicular multi-sensor network. Inter-vehicle communications promise to overcome the barriers of range and coverage, extending the perception range with vehicles forming a mobile network and exchanging information. Wireless communications offer new potential for further development of new or already existing applications. When all these new technologies come into production (Figure 1), the result will be increased safety and comfort in future transportation. Recently, both industry and research institutes have been strongly interested in exploiting the benefits coming from Vehicular Ad Hoc Networks (VANETs) for automotive safety applications.
Within Europe, many projects funded by the European Commission during the 6th Framework Programme, such as SAFESPOT (1), CVIS (2) and COOPERS (3), included such research activities. Vehicular Ad-hoc Networks (VANETs) are an area in which a lot of research is ongoing, especially in the fields of
Article
Future driver assistance systems need to be more robust and reliable because these systems will react to increasingly complex situations. This requires increased performance in environment perception sensors and algorithms for detecting other relevant traffic participants and obstacles. An object's existence probability has proven to be a useful measure for determining the quality of an object. This paper presents a novel method for the fusion of the existence probability based on Dempster-Shafer evidence theory in the framework of a high-level sensor data fusion architecture. The proposed method is able to take into consideration sensor reliability in the fusion process. The existence probability fusion algorithm is evaluated for redundant and partially overlapping sensor configurations.
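A minimal sketch of how existence probabilities can be fused under Dempster-Shafer theory, with sensor reliability modeled by the standard discounting operation. The numbers and function names are illustrative and not the paper's implementation:

```python
def discount(m_exists, m_not_exists, reliability):
    """Shafer discounting: an unreliable sensor's mass is partly moved
    to 'unknown'. Returns (m_exists, m_not_exists, m_unknown)."""
    m_e = reliability * m_exists
    m_n = reliability * m_not_exists
    return m_e, m_n, 1.0 - m_e - m_n

def dempster_combine(a, b):
    """Dempster's rule of combination on the frame {exists, not exists};
    a and b are (m_exists, m_not_exists, m_unknown) triples."""
    ae, an, au = a
    be, bn, bu = b
    conflict = ae * bn + an * be
    k = 1.0 - conflict                       # normalization constant
    m_e = (ae * be + ae * bu + au * be) / k
    m_n = (an * bn + an * bu + au * bn) / k
    return m_e, m_n, 1.0 - m_e - m_n

# Illustrative numbers: a reliable sensor strongly supporting existence
# combined with a less reliable one weakly supporting it.
fused = dempster_combine(discount(0.9, 0.05, 0.9), discount(0.7, 0.1, 0.8))
```

Because the unknown mass carries the sensors' unreliability, a poor sensor pulls the fused belief toward ignorance rather than toward a wrong decision.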
Article
Track and tracklet fusion filtering is complicated because the estimation errors of tracks from two sources for the same target may be cross-correlated. This cross-correlation of these errors should be taken into account when designing the filter used to combine the track data. This paper addresses the various track and tracklet fusion methods and their impact on communications load and tracking characteristics. In track and tracklet fusion a sequence of measurements is processed at the sensor or platform level to form tracks; then sensor level track data (in the form of tracks or tracklets) for a target is distributed and fused with each other or with a global track. Track Fusion and Tracklet Fusion are also sometimes called Hierarchical Fusion, Federated Fusion, or Distributed Fusion. Track data can include features or other information useful for target classification. One characteristic of track and tracklet fusion that distinguishes one method from another is whether the local track data is combined or if global tracks are maintained and updated using the sensor track data. Another important issue is whether there is filter process noise to accommodate target maneuvers. Some filtering methods are designed for maneuvering targets and others are not. This paper enumerates the various track and tracklet fusion methods for processing data from distributed sensors and their impact on filter performance and communications load.
Article
Most existing track-to-track fusion (T2TF) algorithms for distributed tracking systems are given assuming that the local trackers are synchronized. However, in the real world, synchronization can hardly be achieved and local trackers usually work in an asynchronous fashion, where local measurements are obtained and local tracks are updated at different times with different rates. Communication delays between local trackers and the fusion center (FC) also cause delays in the arrival of the local tracks at the FC. This paper presents the optimal asynchronous T2TF algorithm for distributed tracking systems under the linear Gaussian (LG) assumption, which is also the linear minimum mean square error (LMMSE) fuser without the Gaussian assumption. The information configuration of asynchronous T2TF with partial information feedback (AT2TFpf) is used. This is the most practical configuration for AT2TF with time delays, since communication delays make full information feedback very complicated. To illustrate the algorithm, a basic scenario of the fusion of two asynchronous local tracks is used, where one is available at the FC with no delay and the other is transmitted from a local tracker with a time delay. The algorithm can be extended to scenarios with more than two local trackers. The optimal asynchronous T2TF algorithm is compared with the approximate algorithms proposed by Novoselsky (denoted as AT2TFpfApprA-C) and is shown to have performance benefit in consistency as well as in fusion accuracy. The drawback of the optimal fusion algorithm is that it has high communication and computational cost. Two new approximate algorithms, AT2TFpfApprD and AT2TFpfApprE, are also proposed which significantly reduce the cost of the optimal algorithm with little loss in fusion performance.
Conference Paper
This contribution introduces a novel approach to cross-calibrate automotive vision and ranging sensors. The resulting sensor alignment allows the incorporation of multiple sensor data into a detection and tracking framework. As an example, we show how a real-time vehicle detection system, intended for emergency braking or ACC applications, benefits from the low-level fusion of multibeam lidar and vision sensor measurements in discrimination performance and computational complexity.
Conference Paper
In this paper the exact algorithms for track-to-track fusion (T2TF) which can operate at an arbitrary rate for various information configurations are investigated. First, the exact algorithms for T2TF without memory (T2TFwoM) are presented for three information configurations: with no, partial and full information feedback from the fusion center (FC) to the local trackers. As one major contribution of this paper, the impact of information feedback on fusion accuracy is studied. It is shown that in T2TFwoM, which uses only the track estimates at the fusion time, information feedback will have a negative impact on the fusion accuracy. Then, the exact algorithms for T2TF with memory (T2TFwM) are derived for configurations with and without information feedback. Operating at full rate, T2TFwM is equivalent to the centralized measurement fusion (CMF) regardless of information feedback. However, at a reduced rate, a certain amount of degradation in fusion accuracy is unavoidable. In contrast to the case of T2TFwoM, information feedback is beneficial in T2TFwM.
Article
This paper describes the obstacle detection and tracking algorithms developed for Boss, which is Carnegie Mellon University's winning entry in the 2007 DARPA Urban Challenge. We describe the tracking subsystem and show how it functions in the context of the larger perception system. The tracking subsystem gives the robot the ability to understand complex scenarios of urban driving to safely operate in the proximity of other vehicles. The tracking system fuses sensor data from more than a dozen sensors with additional information about the environment to generate a coherent situational model. A novel multiple-model approach is used to track the objects based on the quality of the sensor data. Finally, the architecture of the tracking subsystem explicitly abstracts each of the levels of processing. The subsystem can easily be extended by adding new sensors and validation algorithms.
Article
Modern driver assistance and safety systems are using a combination of two or more sensors for reliable tracking and classification of relevant road users like vehicles, trucks, cars and others. In these systems, processing and fusion stages are optimized for the properties of the sensor combination and the application requirements. A change of either sensor hardware or application involves expensive redesign and evaluation cycles. In this contribution, we present a multi sensor fusion system which is implemented to be independent of both sensor hardware properties and application requirements. This supports changes in sensor combination or application requirements. Furthermore, the environmental model can be used by more than one application at the same time. A probabilistic approach for this generic fusion system is presented and discussed.
Conference Paper
In an automotive pre-crash application, it is vital to quickly and accurately estimate the position and velocity of objects in the frontal area of the vehicle. To improve such estimations, several radar sensors are fused to detect objects. Due to their different performance characteristics, their measurements can arrive at the pre-crash processing unit out-of-sequence. This work presents several techniques to integrate measurements into a tracking algorithm that arrive with such an out-of-sequence measurement (OOSM) scenario. A comprehensive complexity analysis of the algorithms is also presented. Most importantly, the algorithms are run on a test vehicle during real crash scenarios. The algorithms' performance is evaluated against reference data from a highly accurate laser scanner. It is shown that using advanced OOSM algorithms in pre-crash systems significantly increases performance and reduces computational cost compared to previous approaches.
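One of the simplest baselines such OOSM work compares against is to buffer recent measurements and reprocess them in time order whenever a delayed one arrives. A minimal scalar sketch; the random-walk model and all numbers are illustrative, not the paper's algorithms:

```python
# Buffer-and-reprocess baseline for out-of-sequence measurements (OOSM):
# keep a short measurement history and re-run the filter in time order
# when a delayed measurement arrives.

def kf_run(measurements, x0=0.0, p0=10.0, q=0.1, r=1.0):
    """Filter (timestamp, value) pairs in time order with a scalar
    random-walk Kalman filter; returns (mean, variance)."""
    x, p = x0, p0
    for _, z in sorted(measurements):
        p += q                    # predict: add random-walk process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # measurement update
        p *= 1.0 - k
    return x, p

buffer = [(0.0, 1.0), (2.0, 1.2)]     # measurements received in sequence
x_before, _ = kf_run(buffer)
buffer.append((1.0, 1.1))             # out-of-sequence (delayed) measurement
x_after, p_after = kf_run(buffer)     # reprocess the buffer in time order
```

This baseline is statistically clean but expensive in storage and computation, which is precisely what the retrodiction-style OOSM algorithms evaluated in such papers aim to avoid.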
Article
One of the greatest obstacles to the use of Simultaneous Localization And Mapping (SLAM) in a real-world environment is the need to maintain the full correlation structure between the vehicle and all of the landmark estimates. This structure is computationally expensive to maintain and is not robust to linearization errors. In this tutorial we describe SLAM algorithms that attempt to circumvent these difficulties through the use of Covariance Intersection (CI). CI is the optimal algorithm for fusing estimates when the correlations among them are unknown. A feature of CI relative to techniques which exploit full correlation information is that it provides provable consistency with much less computational overhead. In practice, however, a tradeoff typically needs to be made between estimation accuracy and computational cost. We describe a number of techniques that span the range of tradeoffs from maximum computational efficiency with straight CI to maximum estimation efficiency with the maintenance of all correlation information. We present a set of examples illustrating benefits of CI-based SLAM.
Conference Paper
Fusing out-of-sequence information is a very important problem for multi-sensor target tracking systems. The challenge is in dealing with measurements that arrive from the various sensors at a central processor out-of-sequence; that is, signals arriving with a measurement relating to a time previous to the time of the current state. The problem of how to deal with these updates has received much attention in recent years and has been solved optimally and sub-optimally by various authors. Most of the solutions, however, treated only the problem of updating a track by out-of-sequence measurements (OOSM), but did not deal with the updating of a central track by local tracks. In this paper we adopt three solutions proposed by Bar-Shalom, for the OOSM problem, and adapt them to solving the out-of-sequence track (OOST) problem. We then demonstrate their application to the case of track-to- track fusion.
Conference Paper
This paper presents algorithms and techniques for single-sensor tracking and multi-sensor fusion of infrared and radar data. The results show that fusing radar data with infrared data considerably increases detection range, reliability and accuracy of the object tracking. This is mandatory for further development of driver assistance systems. Using multiple model filtering for sensor fusion applications helps to capture the dynamics of maneuvering objects while still achieving smooth object tracking for non-maneuvering objects. This is important when safety and comfort systems have to make use of the same sensor information. Comfort systems generally require smoothly filtered data, whereas for safety systems it is crucial to capture maneuvers of other road users as fast as possible. Multiple model filtering and probabilistic data association techniques are presented, and all presented algorithms are tested in real-time on standard PC systems.
Article
Target tracking using multiple sensors can provide better performance than using a single sensor. One approach to multiple target tracking with multiple sensors is to first perform single sensor tracking and then fuse the tracks from the different sensors. Two processing architectures for track fusion are presented: sensor to sensor track fusion, and sensor to system track fusion. Technical issues related to the statistical correlation between track estimation errors are discussed. Approaches for associating the tracks and combining the track state estimates of associated tracks that account for this correlation are described and compared by both theoretical analysis and Monte Carlo simulations
Article
The testing of the hypothesis whether two tracks represent the same target is considered. Previous works in the literature assumed that the estimates of the same target's state from two track files are uncorrelated. A test that includes their correlation is presented.
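Such a test can be sketched as a chi-squared gate on the track difference, whose covariance includes the cross-covariance terms. With positively correlated errors the difference covariance shrinks, so ignoring the correlation can wrongly accept the same-target hypothesis. The numbers below are illustrative:

```python
import numpy as np

def same_target_test(x1, P1, x2, P2, P12, gate):
    """Chi-squared test that two tracks represent the same target,
    including the cross-covariance P12 of their estimation errors."""
    d = x1 - x2
    S = P1 + P2 - P12 - P12.T        # covariance of the track difference
    d2 = float(d @ np.linalg.inv(S) @ d)
    return d2 <= gate, d2

# Illustrative numbers; gate = 5.99 is the 95% chi-squared point, 2 dof.
x1, P1 = np.array([0.0, 0.0]), np.eye(2)
x2, P2 = np.array([3.0, 0.0]), np.eye(2)
accept_naive, d2_naive = same_target_test(x1, P1, x2, P2, np.zeros((2, 2)), 5.99)
accept_corr, d2_corr = same_target_test(x1, P1, x2, P2, 0.7 * np.eye(2), 5.99)
```

Here the uncorrelated test accepts the hypothesis while the correlated one rejects it, illustrating why the correlation term matters for track-to-track association.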
Article
This note deals with the effect of the common process noise on the fusion (combination) of the state estimates of a target based on measurements obtained by two different sensors. This problem arises in a multisensor environment where each sensor has its own information processing (tracking) subsystem. In the case of an α-β tracking filter, the effect of the process noise is that, over a wide range of its variance, the uncertainty area corresponding to the fused estimates is about 70 percent of the single-sensor uncertainty area, as opposed to the 50 percent obtained if the dependence is ignored.
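The 70-percent-versus-50-percent observation can be reproduced in the scalar, equal-variance case, where the fused variance reduces to sigma2 * (1 + rho) / 2. The correlation value rho = 0.4 below is illustrative, not taken from the note:

```python
def fused_variance(sigma2, rho):
    """Variance of the fused estimate of two equally accurate tracks
    whose errors have correlation rho (scalar case of fusion with
    cross-covariance p12 = rho * sigma2):

        Pf = sigma2 - (sigma2 - p12)^2 / (2*sigma2 - 2*p12)
           = sigma2 * (1 + rho) / 2
    """
    return 0.5 * sigma2 * (1.0 + rho)

independent = fused_variance(1.0, 0.0)   # dependence ignored: 50% of sigma2
correlated = fused_variance(1.0, 0.4)    # rho = 0.4 (illustrative): 70%
```

Any positive correlation from the common process noise thus raises the honest fused uncertainty above the naive 50 percent figure.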
Article
In multisensor target tracking systems measurements from the same target can arrive out of sequence. Such "out-of-sequence" measurement (OOSM) arrivals can occur even in the absence of scan/frame communication time delays. The resulting problem - how to update the current state estimate with an "older" measurement - is a nonstandard estimation problem. It was solved first (suboptimally, then optimally) for the case where the OOSM lies between the two last measurements, i.e, its lag is less than a sampling interval - the 1-step-lag case. The real world has, however, OOSMs with arbitrary lag. Subsequently, the suboptimal algorithm was extended to the case of an arbitrary (multistep) lag, but the resulting algorithm required a significant amount of storage. The present work shows how the 1-step-lag algorithms can be generalized to handle an arbitrary (multistep) lag while preserving their main feature of solving the update problem without iterating. This leads only to a very small (a few percent) degradation of MSE performance. The incorporation of an OOSM into the data association process is also discussed. A realistic example with two GMTI radars is presented. The consistency of the proposed algorithm is also evaluated and it is found that its calculated covariances are reliable.
Article
In target tracking systems measurements are typically collected in "scans" or "frames" and then they are transmitted to a processing center. In multisensor tracking systems that operate in a centralized manner, there are usually different time delays in transmitting the scans or frames from the various sensors to the center. This can lead to situations where measurements from the same target arrive out of sequence. Such "out-of-sequence" measurement (OOSM) arrivals can occur even in the absence of scan/frame communication time delays. The resulting "negative-time measurement update" problem, which is quite common in real multisensor systems, was solved previously only approximately in the literature. The exact state update equation for such a problem is presented. The optimal and two suboptimal algorithms are compared on a number of realistic examples, including a GMTI (ground moving target indicator) radar case.
S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems. Norwood, MA: Artech House, 1999.
This article has been accepted for inclusion in a future issue of IEEE Transactions on Intelligent Transportation Systems. Content is final as presented, with the exception of pagination.
M. Mählisch, T. Kauderer, W. Ritter, and K. Dietmayer, "Feature-level video and multibeam lidar sensor fusion for full-speed ACC state estimation."
M. Darms, P. Rybski, C. Baker, and C. Urmson, "Obstacle detection and tracking for the urban challenge."
S. Julier and J. Uhlmann, in D. Hall and J. Llinas (Eds.), Handbook of Data Fusion, pp. 12-1–12-
M. Muntzinger, M. Aeberhard, S. Zuther, M. Mählisch, M. Schmid, J. Dickmann, and K. Dietmayer, "Reliable automotive pre-crash system with out-of-sequence measurement processing."
M. Munz, M. Mählisch, and K. Dietmayer, "Generic centralized multi sensor data fusion based on probabilistic sensor and environment models for driver assistance systems."