Figure - available from: Engineering Reports
Visual representation of the nozzle image labeled using a two-step learning method. Figure 5a shows a feature identified as 'A,' while Figure 5b illustrates features labeled 'A0,' 'A1,' 'A2,' 'A3,' 'B0,' 'B1,' 'B2,' and 'B3.'
Source publication
This study develops and evaluates a deep-learning-based visual servoing (DLBVS) control system for guiding industrial robots during aircraft refueling, aiming to enhance operational efficiency and precision. The system employs a monocular camera mounted on the robot's end effector to capture images of target objects—the refueling nozzle and bottom...
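The abstract above describes a visual-servoing loop in which image features of the target drive the robot's motion. As a hedged illustration only (this is the classic image-based visual servoing (IBVS) proportional law, not the paper's DLBVS network, whose architecture and gains are not given here), the camera velocity can be computed from the feature error s − s* via the pseudoinverse of an interaction matrix:

```python
import numpy as np

def ibvs_step(s, s_star, L, gain=0.5):
    """One image-based visual-servoing step (illustrative sketch).

    s      : current image-feature vector (e.g. pixel coordinates of
             detected nozzle keypoints)
    s_star : desired feature vector at the target (refueling) pose
    L      : interaction matrix (image Jacobian) mapping camera velocity
             to feature velocity; assumed known or estimated here
    Returns the commanded camera velocity v = -gain * pinv(L) @ (s - s_star).
    """
    error = s - s_star
    return -gain * np.linalg.pinv(L) @ error

if __name__ == "__main__":
    # Toy simulation: 4 point features (8-dim vector) with an identity
    # interaction matrix, so the feature error shrinks geometrically.
    rng = np.random.default_rng(0)
    s = rng.uniform(-1.0, 1.0, 8)
    s_star = np.zeros(8)
    L = np.eye(8)
    for _ in range(20):
        v = ibvs_step(s, s_star, L, gain=0.5)
        s = s + L @ v  # simulated feature motion under the commanded velocity
    print(np.linalg.norm(s - s_star) < 1e-3)
```

With gain 0.5 each step halves the error, so after 20 iterations the residual is negligible; a learned controller such as DLBVS would replace this hand-derived law with a network mapping images to velocity commands.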