Manuel Sanchez Gestido’s research while affiliated with Netherlands Institute for Space Research and other places


Publications (6)


ESA Technology Developments in Vision-Based Navigation
Chapter · January 2024 · 43 Reads · 1 Citation
Solid Mechanics and its Applications
Manuel Sanchez Gestido
The European Space Agency (ESA) has been developing on-board Guidance, Navigation and Control (GNC) technologies to support space activities in Earth orbit and beyond. One area of development has been the use of vision-based systems to improve the performance and autonomy of the navigation function. This paper focuses on two classes of missions enabled by vision-based navigation (VBN): rendezvous (in Earth or planetary orbit, with collaborative or non-collaborative targets) and precision descent and landing on planetary surfaces. It presents an overview of both recently completed and ongoing ESA technology activities raising the Technology Readiness Level (TRL) of VBN systems for these missions.


Comet Interceptor: AOCS/GNC Design Challenges for Flying through the Dust Environment of a Comet

July 2023 · 1 Read
Per Bodin · Francesco Giuliano · Lisa Stenqvist Hanbury · [...] · Robin Courson

Comet Interceptor is an ESA mission with payload contributions from ESA member states and international participation by JAXA. It was selected by the Science Programme Committee (SPC) in 2019 as a Fast-track (F) mission. Following a preliminary design phase, a consortium led by OHB Italia has been selected for the implementation phase, with OHB Sweden leading the development of the AOCS/GNC and propulsion subsystems. The mission aims to intercept a Dynamically New Comet or an interstellar body in a fly-by scenario, using the main spacecraft complemented by two smaller probes, one of which is provided by JAXA, thereby gathering multi-point observations of the comet and its coma. The spacecraft will be launched towards the Sun-Earth L2 (SEL2) Lagrange point, where it will wait, typically up to 3 years, until an interesting target object is identified. The spacecraft will then initiate a transfer phase lasting between 0.5 and 4 years. The final approach starts approximately 60 days before the encounter and includes ground-based navigation using on-board cameras. The encounter for the main spacecraft occurs at a nominal distance of 1000 km from the comet nucleus, with a relative velocity between 10 and 70 km/s. The fly-by through the central coma region involves a dynamic dust environment, which strongly drives the spacecraft design to ensure that the spacecraft survives and fulfills its performance requirements. Considerations include potential blinding or reduced performance of star trackers due to straylight, sufficient control authority and bandwidth to react to dust impacts, and autonomous navigation challenges given the uncertainty in target shape and illumination. The general functionality of the Comet-I AOCS/GNC is to a large extent based on functions with flight heritage from different OHB LEO and GEO missions.
The functionality required during the encounter with the comet, however, calls for new development of several functions as well as significant modification of existing designs. The relative navigation function is a new design that uses measurements from the navigation cameras to determine the direction to the comet, so that the derived guidance of the spacecraft and payloads allows observation of the target during the fly-by. The attitude control functionality requires significant modifications to fulfill the performance requirements while ensuring the safety of the spacecraft under the mechanical impact of the dust environment. The modifications include control laws that allow simultaneous use of reaction wheels and thrusters for attitude control, a robust high-bandwidth controller, and the use of hot-redundant equipment to ensure seamless operation in the presence of failures in AOCS/GNC sensors and actuators. This paper describes the challenges of designing an AOCS/GNC that allows flying through the cometary dust environment. The key driving requirements are identified, and the resulting subsystem architecture is outlined. The paper also describes the approach for performance verification and provides some preliminary simulation results.
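The relative navigation function described above derives the direction to the comet from navigation-camera measurements. In its simplest form this reduces to converting the target's centroid pixel into a line-of-sight unit vector; a minimal sketch assuming an ideal pinhole camera (the function name and all numbers below are illustrative, not Comet Interceptor parameters):

```python
import numpy as np

def pixel_to_los(u, v, cx, cy, f_px):
    """Convert a target centroid pixel (u, v) into a unit line-of-sight
    vector in the camera frame, assuming an ideal pinhole camera.
    cx, cy: principal point in pixels; f_px: focal length in pixels."""
    d = np.array([u - cx, v - cy, f_px], dtype=float)
    return d / np.linalg.norm(d)

# Illustrative numbers only:
los = pixel_to_los(530.0, 498.0, cx=512.0, cy=512.0, f_px=2048.0)
```

The guidance function would then command the spacecraft and payload pointing so as to track this vector as it evolves during the fly-by.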


MSR-ERO Rendezvous Navigation Sensors and Image Processing

July 2023 · 10 Reads

The ESA/NASA Mars Sample Return campaign aims to return samples of Mars material to Earth. The samples will be collected by the NASA-provided Perseverance rover and assembled in the Orbiting Sample (OS) container, which will then be injected into Mars orbit by the NASA-provided Mars Ascent Vehicle. The ESA-provided Earth Return Orbiter (ERO) vehicle will then autonomously detect and rendezvous with the OS container in low Mars orbit, capture it, seal it, and safely bring it back to Earth. Airbus Defence and Space, under ESA contract, is designing and developing the ERO spacecraft and, in particular, its vision-based GNC system to be used during rendezvous (RDV). The GNC system includes two types of vision sensors, a Narrow Angle Camera (NAC) and a Light Detection and Ranging device (LiDAR), each integrating its own image processing to provide ERO navigation with measurements of the OS relative position. From the navigation standpoint, the MSR-ERO rendezvous is a non-cooperative rendezvous scenario with a target vehicle (the OS) characterized by an uncontrolled, fast tumbling motion. The rendezvous has been divided into a far-range phase (from ~50 km to ~500 m) and a close-range phase (from ~500 m to capture). In the far range, the vision system is required to provide only the OS relative line of sight (LoS), using the NAC, provided by Sodern, as the main sensor. The NAC has a 4.5° field of view (FoV) and a 1 Mpx detector (Faintstar2). It integrates a centre-of-brightness algorithm robust to proton impacts on the detector. In the close range, the main sensor is a scanning LiDAR provided by Jena-Optronik, with an adaptable FoV from 0.5° to 40° and a maximum scan frequency of 2 Hz, which integrates 3D barycentring processing to assess the OS position from the 3D point cloud directly measured by the LiDAR. This paper focuses on the vision sensors of the MSR-ERO spacecraft.
The first section describes the MSR mission, including its requirements, and recalls the vision-sensor trade-off made in the early phases of the MSR-ERO project. The second section presents the vision-sensor architecture within the ERO system and the sensors' operation concept (on board/on ground), and the third section describes the selected sensors and their image processing in more detail. An overview of the development plans is provided in the conclusions, together with preliminary performance test results.
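In their simplest form, the two image-processing steps mentioned above reduce to weighted and unweighted means: a centre of brightness is an intensity-weighted centroid of the target's image, and a 3D barycentre is the mean of the LiDAR point cloud. A toy sketch (the thresholding below is a crude stand-in for the unspecified proton-impact rejection logic, not the flight algorithm):

```python
import numpy as np

def centre_of_brightness(img, threshold):
    """Intensity-weighted centroid (x, y) of pixels above a threshold.
    Thresholding here is a crude stand-in for robust outlier rejection."""
    ys, xs = np.nonzero(img > threshold)
    w = img[ys, xs].astype(float)
    return np.array([np.sum(xs * w), np.sum(ys * w)]) / np.sum(w)

def barycentre(points):
    """Geometric barycentre of an (N, 3) point cloud."""
    return np.asarray(points, dtype=float).mean(axis=0)

# Toy example: a bright 2x2 blob centred near pixel (10.5, 20.5)
img = np.zeros((64, 64))
img[20:22, 10:12] = 100.0
cob = centre_of_brightness(img, threshold=50.0)
```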


On the use of plenoptic imaging technology for 3D vision based relative navigation in space

May 2017 · 123 Reads · 1 Citation

Plenoptic technology has attracted the interest of researchers in recent years. It is based on plenoptic cameras, which are one of several methods of capturing light fields. A light field is a function that registers the intensity of the light rays in free space, which makes it possible to synthesize several outputs from the same data, such as focal stacks, depth maps, wavefront phases, or even navigation vectors. ESA has been developing vision-based navigation solutions using diverse sensors, and the plenoptic camera has also become of interest. A proof of concept of the use of a plenoptic camera for navigation in space is shown for two application scenarios: the ENVISAT uncooperative rendezvous and Moon landing. The results show that the plenoptic camera has potential, but also some drawbacks. Several error metrics and test scenarios under different lighting conditions are presented, providing insight into the working conditions of the technology. The conclusion is that plenoptic cameras could be useful as a complementary sensor whose measurements must be fused with those of other sensors.
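The fusion suggested in the last sentence can be illustrated by the standard inverse-variance weighting of two independent measurements of the same quantity; a minimal sketch with invented numbers (this is a generic estimator, not a method from the paper):

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance (minimum-variance) fusion of two independent
    scalar measurements z1, z2 of the same quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)  # fused variance is smaller than either input
    return z, var

# e.g. a plenoptic depth of 101 m (variance 4) fused with a
# hypothetical LiDAR range of 100 m (variance 1)
z, var = fuse(101.0, 4.0, 100.0, 1.0)   # -> (100.2, 0.8)
```

The fused estimate leans towards the less noisy sensor, which is exactly the role a complementary sensor plays.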


Lessons-learned from On-ground Testing of Image-based Non-cooperative Rendezvous Navigation with Visible-spectrum and Thermal Infrared Cameras

May 2017 · 261 Reads · 5 Citations

In the context of Active Debris Removal (ADR) for de-orbiting defunct satellites, vision-based relative navigation is a key technology. Besides the overall system design, one key engineering step is comprehensive testing of the algorithms. In particular, the ESA activity Image Recognition and Processing for Navigation (IRPN) deals with relative navigation between a chaser spacecraft and an uncooperative target object using different complementary vision-based sensors (cameras in the visible and infrared spectrum) and LIDAR. For verification of the image processing functions, representative images with ground truth are needed. Images taken in orbit are, of course, the most realistic, but real space image data is rare and usually lacks accurate ground-truth data. Alternatives are synthetic (rendered) images or images taken on ground in a rendezvous simulator using real cameras and spacecraft mock-ups. However, it is not straightforward to realistically simulate spacecraft and the space environment. Thus, for both image-generation processes, compromises have to be made between the representativeness of the images and the effort of generating them. This paper describes the overall IRPN system set-up, the verification and validation strategies, and especially the difficulties faced in on-ground image generation.
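Verification against ground truth of the kind described above typically reduces to per-frame error metrics. Two common ones, sketched here as an assumption rather than the metrics IRPN actually reports, are the translation-error norm and the rotation angle between the estimated and true attitude matrices:

```python
import numpy as np

def position_error(p_est, p_true):
    """Euclidean norm of the translation error."""
    return float(np.linalg.norm(np.asarray(p_est) - np.asarray(p_true)))

def attitude_error_deg(R_est, R_true):
    """Angle (degrees) of the rotation taking the true attitude to the
    estimated one: theta = arccos((trace(R_est @ R_true.T) - 1) / 2)."""
    R_err = R_est @ R_true.T
    c = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(c)))

# A 90-degree yaw error against an identity ground-truth attitude
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
err = attitude_error_deg(Rz, np.eye(3))   # -> 90.0
```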


Figure 1: A general overview of the use case.
Improving a Satellite Mission System by means of a Semantic Grid Architecture
Article · Full-text available · 35 Reads · 2 Citations

The use of a semantic grid architecture can ease the deployment of complex applications in which several organizations are involved and diverse resources are shared. This paper presents the application of the architecture defined in the Ontogrid project (S-OGSA) to a scenario for analysing the quality of the products of satellite missions.


Citations (1)


... Palmerini [5] combined infrared and visible images to attain continuous tracking of the relative position during space proximity operations. Frank et al. [6] used infrared sensors to provide complementary visual information for state estimation during relative navigation. However, these approaches treat the visible and infrared images as two separate measurements and process them separately, which fails to make full use of the primitive visual information. ...

Reference:

Visible and Infrared Image Fusion-Based Image Quality Enhancement with Applications to Space Debris On-Orbit Surveillance
Lessons-learned from On-ground Testing of Image-based Non-cooperative Rendezvous Navigation with Visible-spectrum and Thermal Infrared Cameras
Citing Conference Paper · May 2017