Conference Paper

Assessment of Tracking Small UAS Using IR Based Laser and Monocular-Vision Pose Estimation

... Autonomous navigation of drones (UAVs) requires precise knowledge of the agent's position, orientation, and exerted movements, commonly referred to as its state [10]. Visual line-of-sight tracking is suitable in GPS-denied environments and allows localization to be integrated independently of the sensors onboard the vehicle [3]. The pose of the UAV can be estimated using a system model in conjunction with monocular image data [5]. ...

... This requires estimating the state of a system whose full dynamics are partially unknown or modeled under simplifying assumptions [10]. Extrinsic tracking using markers detected by a camera, coupled with a kinematic motion model, is used in [3] and [5]. An external motion tracking system is used for indoor experiments in [13], which also proposes multiple motion models for quadcopters. ...
Conference Paper
Full-text available
The use of unmanned aerial vehicles (or drones) in inspection as well as search and rescue tasks has been increasing rapidly in recent years. Moreover, the trend toward fully automating these applications continues, and with it the demand for secure and reliable localization technology. Vision-based tracking is a promising approach for relative localization of drones. It requires integrating a system model and extracting the relevant information from the images to compute the pose of the tracked vehicle with respect to the camera. The presented work evaluates state estimation methods for drones. Several approaches are presented; the Kalman filter, extended Kalman filter, and particle filter are then evaluated using synthetically generated ground-truth data and compared with respect to the accuracy of the final estimation result. The evaluation shows that the rigid-body-state Kalman filter is more accurate than the other presented alternatives when comparing the mean squared error over the entire trajectory. Based on these insights, a model for practical implementation is recommended.
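As a rough illustration of the evaluation described above, the sketch below runs a simple constant-velocity Kalman filter over noisy position measurements and scores it by mean squared error against synthetic ground truth. The filter dimensions, noise levels, sample rate, and trajectory are placeholder assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code): a linear constant-velocity Kalman
# filter on one position axis, evaluated by mean squared error against
# synthetic ground truth.
import numpy as np

def kalman_cv(measurements, dt=1.0 / 60.0, q=1e-2, r=1e-1):
    """Constant-velocity Kalman filter; state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                   # state transition
    H = np.array([[1.0, 0.0]])                              # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # process noise
    R = np.array([[r]])                                     # measurement noise
    x = np.array([[measurements[0]], [0.0]])                # initial state
    P = np.eye(2)                                           # initial covariance
    estimates = []
    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = np.array([[z]]) - H @ x                         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0, 0])
    return np.array(estimates)

# Evaluate against synthetic ground truth by MSE over the whole trajectory.
t = np.arange(0, 5, 1.0 / 60.0)
truth = np.sin(t)                                           # synthetic trajectory
noisy = truth + np.random.normal(0, 0.1, t.size)            # simulated measurements
mse = np.mean((kalman_cv(noisy) - truth) ** 2)
print(f"MSE over trajectory: {mse:.5f}")
```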
... This framework enables sensor, position, and other applicable data transfer between systems. The robotic and simulation platform has since provided valuable flight and sensor data for a wide range of organizations and engineering teams on the Virginia Tech campus [24], [40], [41]. To make the system cheaper and easier to deploy at other academic institutions across Virginia, the SpaceDrones architecture was updated to include SteamVR's [5] motion tracking API. ...
... This sacrifices sub-millimeter accuracy for a price reduction of several orders of magnitude, a capability developed and deployed in 2019 [24]. This project endeavors to add a machine learning/computer vision capability to the SpaceDrones platform as a proof of concept for space-domain applications and as a real-world flying robotics motion solution with three translational and one rotational degrees of freedom, capable of providing useful testbed capabilities, as well as a complete end-to-end solution that can be deployed for a wide range of applications requiring autonomous drone technology with onboard computer vision. ...
Thesis
Full-text available
The proliferation of reusable space vehicles has fundamentally changed how we inject assets into orbit and beyond, increasing the reliability and frequency of launches and leading to the rapid development and adoption of new technologies in the aerospace sector, such as computer vision (CV), machine learning (ML), and distributed networking. All these technologies are necessary to enable genuinely autonomous decision-making for space-borne platforms as our spacecraft travel further into the solar system and our mission sets become more ambitious, requiring true "human out of the loop" solutions for a wide range of engineering and operational problem sets. Deployment of systems proficient at classifying, tracking, capturing, and ultimately manipulating orbital assets and components for maintenance and assembly in the persistent dynamic environment of space and on the surface of other celestial bodies, tasks commonly referred to as On-Orbit Servicing and In-Space Assembly, has unique automation potential. Given the inherent dangers of the manned spaceflight and extravehicular activity (EVA) methods currently employed to perform spacecraft construction and maintenance tasking, coupled with the current limitation of long-duration human flight outside of low Earth orbit, space robotics armed with generalized sensing and control machine learning architectures is a tremendous enabling technology. However, the large amounts of sensor data required to adequately train neural networks for these space-domain tasks are either limited or non-existent, requiring alternate means of data collection or generation. Additionally, the tools and methodologies required for wide-scale hardware-in-the-loop simulation, testing, and validation of these new technologies outside of multimillion-dollar facilities are largely in their developmental stages. This dissertation proposes a novel approach for simulating space-based computer vision sensing and robotic control using both physical and virtual reality testing environments. The methodology is designed to be both affordable and expandable, enabling hardware-in-the-loop simulation and validation of space systems at large scale across multiple institutions. While the specific computer vision models in this work are narrowly focused on solving imagery problems found on orbit, the work can be expanded to any problem set that requires robust onboard computer vision, robotic manipulation, and free-flight capabilities.
Conference Paper
Full-text available
This document describes an approach to developing a fully distributed, multipurpose autonomous flight system. With a set of hardware, software, and standard flight procedures for multiple unmanned aerial vehicles (UAVs), it is possible to achieve a relatively low-cost, plug-and-play, fully distributed architecture for multipurpose applications. The resulting system comprises an OptiTrack motion capture system, a Pixhawk flight controller, a Raspberry Pi companion computer, and the Robot Operating System (ROS) for inter-node communication. The architecture leverages a secondary PID controller, implemented with MAVROS, an open-source Python plugin for ROS, for onboard processing and interfacing with the flight controller: a procedure receives the position vector from the OptiTrack system and returns the desired velocity vector for each time-step, which facilitates integration for researchers. The result is a reliable, easy-to-use autonomous system for multipurpose engineering research. To demonstrate its versatility, this paper shows a robotics navigation experiment based on the fundamentals of Markov Decision Processes (MDP), running wirelessly at 60 Hz with a network latency below 2 ms. The paper argues that fully distributed systems should be embraced, as they maintain system reliability at lower cost and with an easier ground-station implementation. Combined with careful software architecture choices, this encourages and facilitates the use of autonomous systems for transdisciplinary research.
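A minimal sketch of the kind of secondary loop described above: a PID controller that turns a motion-capture position measurement into a velocity setpoint each time-step. The gains, the 60 Hz rate, and the saturation limit are illustrative assumptions, not the paper's values, and no MAVROS interfacing is shown.

```python
# Illustrative only: position-error -> velocity-setpoint PID at an assumed 60 Hz.
import numpy as np

class PositionToVelocityPID:
    def __init__(self, kp=1.2, ki=0.0, kd=0.3, dt=1.0 / 60.0, v_max=1.0):
        self.kp, self.ki, self.kd, self.dt, self.v_max = kp, ki, kd, dt, v_max
        self.integral = np.zeros(3)
        self.prev_error = np.zeros(3)

    def step(self, position, setpoint):
        """Return the desired velocity vector for this time-step."""
        error = np.asarray(setpoint, dtype=float) - np.asarray(position, dtype=float)
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        v = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Saturate so the flight controller never receives extreme commands.
        norm = np.linalg.norm(v)
        return v if norm <= self.v_max else v * (self.v_max / norm)

# Example: one control step from the current mocap position toward a target.
pid = PositionToVelocityPID()
print(pid.step(position=[0.0, 0.0, 1.0], setpoint=[1.0, 0.0, 1.5]))
```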
Article
Full-text available
With the rapid development of robotics in the field of automation engineering, ego-motion estimation has become one of the most challenging tasks. In this review, we present a model to help describe the PnP problem and introduce the two most common solutions. The P3P solution uses the smallest subset of control points that yields a finite number of solutions. The EPnP solution reduces complexity by expressing the n 3D points as a weighted sum of four virtual control points. The former is widely applied when there are three pairs of corresponding points in the problem; in most real cases, however, the latter is more commonly used.
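For concreteness, the sketch below shows the two solvers named above via OpenCV's solvePnP. It is not drawn from this article: the intrinsics and the object points are made-up values, and the 2D detections are synthesized from an assumed ground-truth pose so the solvers are guaranteed a consistent input.

```python
# Hedged illustration of P3P and EPnP with OpenCV; all numbers are placeholders.
import numpy as np
import cv2

# Known 3D control points in the object frame (metres).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.0, 0.1, 0.0],
                          [0.1, 0.1, 0.05]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                 # assumed pinhole intrinsics
dist = np.zeros(5)                               # assume no lens distortion

# Synthesize consistent 2D detections from a known pose, for demonstration.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.02, -0.01, 0.5])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)
image_points = image_points.reshape(-1, 2)

# EPnP: expresses the n points as a weighted sum of four virtual control points.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
print("EPnP:", rvec.ravel(), tvec.ravel())

# P3P: minimal solver; OpenCV's variant takes four correspondences and uses the
# fourth to select among the finite set of candidate solutions.
ok, rvec_p3p, tvec_p3p = cv2.solvePnP(object_points, image_points, K, dist,
                                      flags=cv2.SOLVEPNP_P3P)
print("P3P: ", rvec_p3p.ravel(), tvec_p3p.ravel())
```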
Conference Paper
Full-text available
Positional tracking systems could hugely benefit a number of niches, including performance art, athletics, neuroscience, and medicine. Commercial solutions can precisely track a human inside a room with sub-millimetric precision. However, these systems can track only a few objects at a time; are too expensive to be easily accessible; and their controllers or trackers are too large and inaccurate for research or clinical use. We present a light and small wireless device that piggybacks on current commercial solutions to provide affordable, scalable, and highly accurate positional tracking. This device can be used to track small and precise human movements, to easily embed custom objects inside of a VR system, or to track freely moving subjects for research purposes.
Conference Paper
Full-text available
We present an accurate, efficient, and robust pose estimation system based on infrared LEDs. They are mounted on a target object and are observed by a camera that is equipped with an infrared-pass filter. The correspondences between LEDs and image detections are first determined using a combinatorial approach and then tracked using a constant-velocity model. The pose of the target object is estimated with a P3P algorithm and optimized by minimizing the reprojection error. Since the system works in the infrared spectrum, it is robust to cluttered environments and illumination changes. In a variety of experiments, we show that our system outperforms state-of-the-art approaches. Furthermore, we successfully apply our system to stabilize a quadrotor both indoors and outdoors under challenging conditions. We release our implementation as open-source software.
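The refinement step mentioned above (a P3P initialization followed by minimizing the reprojection error) can be sketched as below. This is not the released open-source implementation: the LED layout, intrinsics, and noise are assumed placeholders, and SciPy's Levenberg-Marquardt solver stands in for whatever optimizer the authors use.

```python
# Minimal sketch: P3P initial pose, then nonlinear least-squares refinement of
# the LED reprojection error. All numeric values are illustrative.
import numpy as np
import cv2
from scipy.optimize import least_squares

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
# Assumed LED layout on the target object, in metres.
leds_object = np.array([[0.0, 0.0, 0.0], [0.12, 0.0, 0.0],
                        [0.0, 0.12, 0.0], [0.12, 0.12, 0.03]])

# Synthesize noisy LED detections from a known pose, for demonstration only.
rvec_true = np.array([0.2, -0.1, 0.3])
tvec_true = np.array([0.05, 0.02, 0.8])
detections, _ = cv2.projectPoints(leds_object, rvec_true, tvec_true, K, dist)
detections = detections.reshape(-1, 2) + np.random.normal(0, 0.5, (4, 2))

def reprojection_residuals(pose):
    """Residuals between projected LED positions and their image detections."""
    projected, _ = cv2.projectPoints(leds_object, pose[:3], pose[3:], K, dist)
    return (projected.reshape(-1, 2) - detections).ravel()

# Initial pose from a minimal P3P solve, then Levenberg-Marquardt refinement.
ok, rvec0, tvec0 = cv2.solvePnP(leds_object, detections, K, dist,
                                flags=cv2.SOLVEPNP_P3P)
refined = least_squares(reprojection_residuals,
                        np.hstack([rvec0.ravel(), tvec0.ravel()]), method="lm")
print("refined pose (rvec | tvec):", refined.x)
```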
Conference Paper
Full-text available
At the current state of the art, the agility of an autonomous flying robot is limited by its sensing pipeline, because the relatively high latency and low sampling frequency limit the aggressiveness of the control strategies that can be implemented. To obtain more agile robots, we need faster sensing pipelines. A Dynamic Vision Sensor (DVS) is a very different sensor from a normal CMOS camera: rather than providing discrete frames like a CMOS camera, the sensor output is a sequence of asynchronous, timestamped events, each describing a change in the perceived brightness at a single pixel. The latency of such sensors can be measured in microseconds, thus offering the theoretical possibility of creating a sensing pipeline whose latency is negligible compared to the dynamics of the platform. However, to use these sensors we must rethink the way we interpret visual data. This paper presents a method for low-latency pose tracking using a DVS and Active LED Markers (ALMs), which are LEDs blinking at high frequency (>1 kHz). The sensor's time resolution allows the different frequencies to be distinguished, thus avoiding the need for data association. This approach is compared to traditional pose tracking based on a CMOS camera. The DVS performance is not affected by fast motion, unlike the CMOS camera, which suffers from motion blur.
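A simplified sketch of the data-association idea above: estimate a marker's blink frequency from the timestamps of events observed at one image location, then match it to the nearest known marker frequency instead of searching over geometric correspondences. The marker frequencies, the use of ON events only, and the median-period estimator are assumptions for illustration, not the paper's exact algorithm.

```python
# Classify an active LED marker by blink frequency from event timestamps.
import numpy as np

MARKER_FREQS_HZ = [1000.0, 2000.0, 4000.0]    # hypothetical marker frequencies

def classify_marker(event_timestamps_s):
    """Return the closest known marker frequency for a stream of ON events."""
    periods = np.diff(np.sort(event_timestamps_s))
    periods = periods[periods > 0]
    f_est = 1.0 / np.median(periods)           # robust period -> frequency estimate
    idx = int(np.argmin([abs(f_est - f) for f in MARKER_FREQS_HZ]))
    return MARKER_FREQS_HZ[idx], f_est

# Example: synthetic ON-event timestamps from a 2 kHz blinking LED with jitter.
t = np.arange(20) / 2000.0 + np.random.normal(0, 2e-5, 20)
print(classify_marker(t))
```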
Conference Paper
Accurate positioning of objects within an indoor environment is essential for applications such as virtual reality, robotics, and multirotor drones. Previous methods of obtaining the six-degrees-of-freedom pose of a tracked object have been susceptible to latency and accumulated error; computationally intensive, inaccurate, or costly; or have required exotic components, setups, and calibration procedures. This paper presents a novel architecture and implementation of an indoor tracking system that consists of a rotary-laser base station and a tracked object equipped with photodiode sensors. The base station emits horizontal and vertical laser lines that sweep across the environment in sequence. The base station also emits an infrared synchronization beacon that floods the environment between each sweep. The tracked object consists of multiple photodiode sensors and a processing unit. Based on the timings between the synchronization beacons and the sweeps observed by each photodiode, along with the known configuration of the photodiode constellation, the position and orientation of the tracked object can be determined with high accuracy, low latency, and low computational overhead. In addition, the system allows a large number of such objects to be tracked within the same space, as each tracked object can be a separate embedded device. The Nikon iGPS tracking system, along with the more recently announced Lighthouse technology from Valve Corporation, uses multiple base stations to triangulate a tracked object, while we will show that one is sufficient for basic tracking. In addition, this paper is the first to describe the specifications for a low-cost version of such a system. Observations and performance characteristics of the constructed prototypes are discussed.
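The basic measurement behind the timing scheme described above can be sketched as follows: the elapsed time between the synchronization flash and the moment the sweeping laser crosses a photodiode maps linearly to a sweep angle. The rotor period below is an assumed placeholder, and the full pose solve from multiple photodiodes is not shown.

```python
# Convert sync-to-hit timing into a laser sweep angle (simplified, assumed geometry).
import math

ROTOR_PERIOD_S = 1.0 / 60.0   # one full rotation of the sweeping laser (assumed)

def sweep_angle_rad(t_sync_s, t_hit_s):
    """Angle of the rotating laser plane when it crossed the photodiode."""
    dt = t_hit_s - t_sync_s
    return 2.0 * math.pi * (dt % ROTOR_PERIOD_S) / ROTOR_PERIOD_S

# Example: a photodiode hit 4.2 ms after the synchronization beacon.
print(math.degrees(sweep_angle_rad(0.0, 0.0042)))  # roughly 90.7 degrees
```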
Conference Paper
Charge-coupled devices (CCDs) are presently the technology of choice for most imaging applications. In the 23 years since their invention in 1970, they have evolved to a sophisticated level of performance. However, as with all technologies, we can be certain that they will be supplanted someday. In this paper, the Active Pixel Sensor (APS) technology is explored as a possible successor to the CCD. An active pixel is defined as a detector array technology that has at least one active transistor within the pixel unit cell. The APS eliminates the need for nearly perfect charge transfer, the Achilles' heel of CCDs. It is this requirement for nearly perfect charge transfer that makes CCDs radiation 'soft,' difficult to use under low-light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response. With the active pixel, the signal is driven from the pixel over metallic wires rather than being physically transported in the semiconductor. This paper makes a case for the development of APS technology. The state of the art is reviewed, and the application of APS technology to future space-based scientific sensor systems is addressed.
RPG Monocular Pose Estimator
  • M. Faessler
  • E. Mueggler
  • K. Schwabe
  • D. Scaramuzza