Project

Optimal Data Fusion for Relative Pose Estimation for Uncooperative Targets


Project log

Lorenzo Pasqualetto Cassinis
added a research item
The estimation of the relative pose of an inactive spacecraft by an active servicer spacecraft is a critical task for close-proximity operations, such as In-Orbit Servicing and Active Debris Removal. Among all the challenges, the lack of available space images of the inactive satellite makes the on-ground validation of current monocular camera-based navigation systems a challenging task, mostly because standard Image Processing (IP) algorithms, which are usually tested on synthetic images, tend to fail when implemented in orbit. This paper reports on the testing of a novel Convolutional Neural Network (CNN)-based pose estimation pipeline with realistic lab-generated 2D monocular images of the European Space Agency's Envisat spacecraft. Following the current need to bridge the reality gap between synthetic images and images acquired in space, the main contribution of this work is to test the performance of CNNs trained on synthetic datasets with more realistic images of the target spacecraft. The validation of the proposed pose estimation system is supported by the introduction of a calibration framework, which ensures an accurate reference relative pose between the target spacecraft and the camera for each lab-generated image, allowing a comparative assessment at both the keypoint detection and pose estimation levels. By creating a laboratory database of the Envisat spacecraft under space-like conditions, this work further aims at facilitating the establishment of a standardized on-ground validation procedure that can be used in different lab setups and with different target satellites. The lab-representative images of the Envisat are generated at the Orbital Robotics and GNC lab of ESA's European Space Research and Technology Centre (ESTEC). The VICON Tracker System is used together with a KUKA robotic arm to track and control, respectively, the trajectory of the monocular camera around a scaled 1:25 mockup of the Envisat spacecraft.
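The calibration idea above, with both the camera and the mockup tracked in a common motion-capture frame, reduces to composing rigid-body transforms. A minimal sketch, assuming the tracker reports each pose as a 4x4 homogeneous transform (function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def inverse_transform(T):
    """Invert a 4x4 rigid-body transform [R t; 0 1] without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv

def reference_relative_pose(T_world_cam, T_world_target):
    """Reference pose of the target expressed in the camera frame:
    T_cam_target = (T_world_cam)^-1 @ T_world_target."""
    return inverse_transform(T_world_cam) @ T_world_target
```

For each lab image, the transform returned this way would serve as the ground-truth relative pose against which the CNN pipeline's estimate is scored.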
Lorenzo Pasqualetto Cassinis
added a research item
The relative pose estimation of an inactive spacecraft by an active servicer spacecraft is a critical task in the design of current and planned space missions, due to its relevance for close-proximity operations, such as In-Orbit Servicing and Active Debris Removal. This paper introduces a novel framework to enable robust monocular pose estimation for close-proximity operations around an uncooperative spacecraft, which combines a Convolutional Neural Network (CNN) for feature detection with a Covariant Efficient Procrustes Perspective-n-Points (CEPPnP) solver and a Multiplicative Extended Kalman Filter (MEKF). The performance of the proposed method is evaluated at different levels of the pose estimation system. A Single-stack Hourglass CNN is proposed for the feature detection step in order to decrease the computational load of the Image Processing (IP), and its accuracy is compared to the standard, more complex High-Resolution Net (HRNet). Subsequently, heatmap-derived covariance matrices are included in the CEPPnP solver to assess the pose estimation accuracy prior to the navigation filter. This is done in order to support the performance evaluation of the proposed tightly-coupled approach against a loosely-coupled approach, in which the detected features are converted into pseudomeasurements of the relative pose prior to the filter. The performance results of the proposed system indicate that a tightly-coupled approach can guarantee an advantageous coupling between the rotational and translational states within the filter, whilst reflecting a representative measurement covariance. This suggests a promising scheme to cope with the challenging demand for robust navigation in close-proximity scenarios. Synthetic 2D images of the European Space Agency's Envisat spacecraft are used to generate datasets for training, validation and testing of the CNN. Likewise, the images are used to recreate a representative close-proximity scenario for the validation of the proposed filter.
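The heatmap-derived covariance matrices mentioned above can be pictured as treating each CNN heatmap as an unnormalised probability distribution over pixel coordinates: its mean gives the keypoint location and its second moment a 2x2 covariance. A minimal sketch of that idea, not the paper's actual implementation:

```python
import numpy as np

def keypoint_from_heatmap(H):
    """Extract a keypoint estimate (mean) and a 2x2 covariance from a heatmap.

    The non-negative heatmap is normalised into a discrete probability
    distribution over (x, y) pixel coordinates; mean and central second
    moments then give the detection and its uncertainty.
    """
    H = np.clip(H, 0.0, None)
    p = H / H.sum()
    ys, xs = np.mgrid[0:H.shape[0], 0:H.shape[1]]
    mean = np.array([(p * xs).sum(), (p * ys).sum()])
    dx, dy = xs - mean[0], ys - mean[1]
    cov = np.array([[(p * dx * dx).sum(), (p * dx * dy).sum()],
                    [(p * dx * dy).sum(), (p * dy * dy).sum()]])
    return mean, cov
```

A sharp, confident heatmap yields a small covariance, while a diffuse or ambiguous response yields a large one, which is exactly the statistical information the CEPPnP solver and the filter can exploit.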
Lorenzo Pasqualetto Cassinis
added 2 research items
This paper introduces a novel framework which combines a Convolutional Neural Network (CNN) for feature detection with a Covariant Efficient Procrustes Perspective-n-Points (CEPPnP) solver and an Extended Kalman Filter (EKF) to enable robust monocular pose estimation for close-proximity operations around an uncooperative spacecraft. The relative pose estimation of an inactive spacecraft by an active servicer spacecraft is a critical task in the design of current and planned space missions, due to its relevance for close-proximity operations, such as In-Orbit Servicing and Active Debris Removal. The main contribution of this work lies in deriving statistical information from the Image Processing step, by associating a covariance matrix to the heatmaps returned by the CNN for each detected feature. This information is included in the CEPPnP to improve the accuracy of the pose estimation step during filter initialization. The derived measurement covariance matrix is used in a tightly-coupled EKF to allow an enhanced representation of the measurement error from the feature detection step. This increases the filter robustness in the case of inaccurate CNN detections. The proposed method is capable of returning reliable estimates of the relative pose as well as of the relative translational and rotational velocities, under adverse illumination conditions and partial occlusion of the target. Synthetic 2D images of the European Space Agency's Envisat spacecraft are used to generate datasets for training, validation and testing of the CNN. Likewise, the images are used to recreate representative close-proximity scenarios for the validation of the proposed method.
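In a tightly-coupled scheme like the one described above, the per-feature covariances enter the filter directly through the measurement-noise matrix R of the update step. A generic EKF measurement update, shown here only to illustrate where that covariance plugs in (the paper's actual state vector and measurement model are more involved):

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """Generic EKF measurement update.

    x, P   : prior state estimate and covariance
    z      : stacked 2D feature measurements
    h      : measurement model, h(x) -> predicted measurements
    H_jac  : Jacobian of h evaluated at x
    R      : block-diagonal measurement covariance assembled from the
             per-feature 2x2 heatmap-derived covariances
    """
    y = z - h(x)                       # innovation
    H = H_jac(x)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

Features with large heatmap covariance inflate the corresponding blocks of R and are automatically down-weighted by the gain, which is how the filter stays robust to inaccurate CNN detections.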
Lorenzo Pasqualetto Cassinis
added a research item
This paper reports on a comparative assessment of Image Processing (IP) techniques for the relative pose estimation of uncooperative spacecraft with a monocular camera. Currently, keypoints-based algorithms suffer from partial occlusion of the target, as well as from the different illumination conditions between the required offline database and the query space image. Moreover, algorithms based on corners/edges detection are highly sensitive to adverse illumination conditions in orbit. An evaluation of the critical aspects of these two methods is provided with the aim of comparing their performance under changing illumination conditions and varying views between the camera and the target. Five different keypoints-based methods are compared to assess the robustness of feature matching. Furthermore, a method based on corner extraction from the lines detected by the Hough Transform is proposed and evaluated. Finally, a novel method, based on an hourglass Convolutional Neural Network (CNN) architecture, is proposed to improve the robustness of the IP during partial occlusion of the target as well as during feature tracking. It is expected that the results of this work will help assess the robustness of keypoints-based, corners/edges-based, and CNN-based algorithms within the IP prior to the relative pose estimation.
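The corner-extraction step mentioned above amounts to intersecting pairs of detected lines. A minimal sketch, assuming lines in the standard Hough (rho, theta) parameterisation, where x*cos(theta) + y*sin(theta) = rho (this is an illustrative reduction, not the paper's full algorithm):

```python
import numpy as np

def line_intersection(l1, l2, eps=1e-9):
    """Intersect two lines given in Hough (rho, theta) form.
    Returns None when the lines are near-parallel."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < eps:
        return None            # parallel lines: no corner
    return np.linalg.solve(A, np.array([r1, r2]))

def corners_from_lines(lines):
    """Corner candidates as pairwise intersections of detected lines."""
    corners = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = line_intersection(lines[i], lines[j])
            if p is not None:
                corners.append(p)
    return corners
```

In practice the candidate list would still need filtering (image bounds, proximity to edge support) before the corners are matched against the target model.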
Lorenzo Pasqualetto Cassinis
added a research item
The relative pose estimation of an inactive target by an active servicer spacecraft is a critical task in the design of current and planned space missions, due to its relevance for close-proximity operations, e.g., rendezvous with space debris and/or in-orbit servicing. Pose estimation systems based solely on a monocular camera are recently becoming an attractive alternative to systems based on active sensors or stereo cameras, due to their reduced mass, power consumption and system complexity. In this framework, a review of the robustness and applicability of monocular systems for the pose estimation of an uncooperative spacecraft is provided. Special focus is placed on the advantages of multispectral monocular systems as well as on the improved robustness of novel image processing schemes and pose estimation solvers. The limitations and drawbacks of the validation of current pose estimation schemes with synthetic images are further discussed, together with the critical trade-offs for the selection of visual-based navigation filters. The state-of-the-art techniques are analyzed in order to provide an insight into the limitations involved under adverse illumination and orbit scenarios, high image contrast, background noise, and low signal-to-noise ratio, which characterize actual space imagery, and which could jeopardize the image processing algorithms and affect the pose estimation accuracy as well as the navigation filter's robustness. Specifically, a comparative assessment of current solutions is given at different levels of the pose estimation process, in order to bring a novel and broad perspective as compared to previous works.