ARL-TR-8931 APR 2020
Implementation and Evaluation of the World’s
Largest Outdoor Optical Motion-Capture
System
by Daniel Everson and Barry Kline
Approved for public release; distribution is unlimited.
NOTICES
Disclaimers
The findings in this report are not to be construed as an official Department of the
Army position unless so designated by other authorized documents.
Citation of manufacturer’s or trade names does not constitute an official
endorsement or approval of the use thereof.
Destroy this report when it is no longer needed. Do not return it to the originator.
ARL-TR-8931 APR 2020
Implementation and Evaluation of the World’s
Largest Outdoor Optical Motion-Capture System
Daniel Everson
Weapons and Materials Research Directorate, CCDC Army Research Laboratory
Barry Kline
SURVICE Engineering
Approved for public release; distribution is unlimited.
REPORT DOCUMENTATION PAGE
Form Approved OMB No. 0704-0188
Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing the burden, to Department of Defense, Washington Headquarters Services, Directorate for Information Operations and Reports (0704-0188), 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number.
PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.
1. REPORT DATE (DD-MM-YYYY)
April 2020
2. REPORT TYPE
Technical Report
3. DATES COVERED (From - To)
May 2017–October 2019
4. TITLE AND SUBTITLE
Implementation and Evaluation of the World’s Largest Outdoor Optical
Motion-Capture System
5a. CONTRACT NUMBER
5b. GRANT NUMBER
5c. PROGRAM ELEMENT NUMBER
6. AUTHOR(S)
Daniel Everson and Barry Kline
5d. PROJECT NUMBER
5e. TASK NUMBER
5f. WORK UNIT NUMBER
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES)
CCDC Army Research Laboratory
ATTN: FCDD-RLW-LF
Aberdeen Proving Ground, MD 21005
8. PERFORMING ORGANIZATION REPORT NUMBER
ARL-TR-8931
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES)
10. SPONSOR/MONITOR'S ACRONYM(S)
11. SPONSOR/MONITOR'S REPORT NUMBER(S)
12. DISTRIBUTION/AVAILABILITY STATEMENT
Approved for public release; distribution is unlimited.
13. SUPPLEMENTARY NOTES
ORCID ID: Daniel Everson, 0000-0003-0466-1132
14. ABSTRACT
The US Army Combat Capabilities Development Command Army Research Laboratory Guidance Technologies Branch has
implemented a one-of-a-kind outdoor motion-capture system for multi-agent unmanned aerial system testing. This system
consists of 96 cameras housed within 16 tracking pods positioned around the perimeter of the desired capture volume. The
cameras track actively illuminated LED marker strobes attached to test articles moving throughout the volume. Recent
evaluation of this system demonstrated accurate marker tracking within a 460- × 110- × 70-m volume at a measurement rate
of 100 Hz.
15. SUBJECT TERMS
outdoor motion capture, multi-agent tracking, swarming, unmanned aerial system (UAS) operations, navigation ground truth
16. SECURITY CLASSIFICATION OF:
a. REPORT: Unclassified
b. ABSTRACT: Unclassified
c. THIS PAGE: Unclassified
17. LIMITATION OF ABSTRACT: SAR
18. NUMBER OF PAGES: 65
19a. NAME OF RESPONSIBLE PERSON: Daniel Everson
19b. TELEPHONE NUMBER (Include area code): (410) 278-4693
Standard Form 298 (Rev. 8/98)
Prescribed by ANSI Std. Z39.18
Contents

List of Figures
List of Tables
1. Introduction
2. Background
  2.1 Motion-Capture Technology Survey
  2.2 Motion-Capture System Performance Requirements
  2.3 Contracting for System Development
3. Large Outdoor Motion-Capture Technical Solution
  3.1 Tracking Pod Design
  3.2 Active LED Marker Design
  3.3 Marker Detection
  3.4 System Configuration
  3.5 System Implementation for Collection of Motion-Capture Data
4. Performance Evaluation Test Event
  4.1 System Calibration
  4.2 System Performance Evaluation Methods
  4.3 Collection of Data for Evaluation of Tracking Accuracy
  4.4 Collection of Data for Evaluation of Tracking Precision
  4.5 Tracking a Marker Integrated in a Projectile Form Factor
5. Analysis of Collected Data to Establish Observed System Performance
  5.1 Evaluation of Position-Tracking Accuracy
  5.2 Discussion of Tracking Accuracy Error
  5.3 Evaluation of Position-Tracking Precision
  5.4 Demonstration of Tracking a Projectile Configuration
  5.5 Performance Criteria Not Directly Evaluated
  5.6 Practical Considerations and Opportunities for Improvement
6. Conclusion
7. References
Appendix. Method for Correcting Timestamp Errors of Leica TS16 Total Station Data
List of Symbols, Abbreviations, and Acronyms
Distribution List
List of Figures

Fig. 1 Internal components of a tracking pod including six cameras with overlapping FOVs
Fig. 2 Complete tracking pod atop aluminum pod stand
Fig. 3 Active marker strobe consisting of 96 LEDs located around the perimeter of a custom PCB. A second PCB with a GPS receiver provides timing for synchronization with cameras.
Fig. 4 Model of camera FOVs for the six cameras within each tracking pod
Fig. 5 Camera FOV for all tracking pods in a notional 465- × 110-m system layout
Fig. 6 Planned system layout for evaluation of motion-capture system performance
Fig. 7 GTB R600 UAS outfitted with marker strobe and 360° survey prism
Fig. 8 UAS flight plan for collection of system-calibration data
Fig. 9 360° survey prism mounted in a stacked configuration with a marker strobe for the simultaneous collection of motion-capture and ground-truth measurements
Fig. 10 Rigid rotating-arm device for collecting data to evaluate motion-capture system measurement precision
Fig. 11 Surrogate projectile for demonstration of marker integration in a projectile configuration and collection of position measurements for a high-speed object
Fig. 12 Orthogonal components of position measurement error as a function of time
Fig. 13 3-D position-error magnitude (total position error) as a function of time for the position accuracy test data set
Fig. 14 Relationship between position-error magnitude and location of the marker within the capture volume
Fig. 15 Total position error as a function of time for motion-capture position measurements after a moving average filter was applied to the position measurements in an attempt to reduce measurement noise
Fig. 16 Histograms of position error for each of the orthogonal components
Fig. 17 Histogram of 3-D position error magnitude (total position error) for the position accuracy test data set
Fig. 18 Fit of Maxwell–Boltzmann probability density function to the position-error distribution
Fig. 19 Highlighted data segment with higher than typical position error
Fig. 20 Position error components for segment of data with higher than typical error
Fig. 21 Horizontal plane representation of motion-capture data, ground-truth position, and measurement error for a segment with higher than typical position error
Fig. 22 Example marker detection scenario for a measurement with low position error
Fig. 23 Example marker detection scenario with few cameras detecting the marker and a relatively high dilution of precision resulting in high position error
Fig. 24 Example marker detection scenario with few cameras detecting the marker and detection by a camera a long distance from the marker resulting in high position error
Fig. 25 Marker detection scenario used for evaluation of position measurement precision
Fig. 26 Result of fitting a planar circular motion to position measurements associated with the rigid rotating-arm apparatus
Fig. 27 Distribution of error between position measurements associated with the rigid rotating-arm apparatus and the planar circular motion fit best-case scenario
Fig. 28 Measurement precision of the motion-capture system was demonstrated by the consistency of position measurements associated with the rigid rotating-arm apparatus
Fig. 29 Series of trajectories for a manually launched projectile with an integrated tracking marker
Fig. 30 Trajectory profile of manually launched projectile selected for detailed analysis (direction of travel is from right to left)
Fig. 31 Fit of point-mass ballistic model to trajectory profile captured by the motion-capture system
Fig. 32 Configuration of camera rays tracking two markers mounted on a rigid rotating arm. Counterclockwise rotation of the arm will result in loss of marker detection by cameras to the left and right shortly after this frame.
Fig. 33 Configuration of camera rays tracking two markers mounted on a rigid rotating arm. Position of the two markers has overlapped from the perspective of cameras on the right and left, resulting in loss of marker detection by those cameras.
Fig. 34 Configuration of camera rays tracking two markers mounted on a rigid rotating arm. As the arm has continued rotating counterclockwise the position of the two markers has once again become distinct from the perspective of cameras on the right and left, resulting in reacquisition of the markers by those cameras.
Fig. A-1 Segment of raw data recorded by the TS16 displayed both as a function of (top) time and (bottom) in the horizontal plane to demonstrate the effect of timing errors for position measurements that are otherwise accurate
Fig. A-2 Comparison of raw TS16 position measurements and the same measurements with timestamp corrections applied (shown in black)
Fig. A-3 Distribution of time correction applied to individual TS16 position measurements
Fig. A-4 Representative example of position-error traces between motion-capture measurements and the ground-truth data implying the relationship between position and time for calculation of the position error

List of Tables

Table 1 Motion-capture system performance requirements established by ARL
1. Introduction
The US Army Combat Capabilities Development Command Army Research
Laboratory (ARL) is actively pursuing technologies under the Precision and
Cooperative Weapons in a Denied Environment (PCWDE) mission program to
perform navigation in GPS-denied and -degraded conditions.1,2 The objective of
guided lethality research is to provide assured delivery of a projectile payload to
increase performance and widen the engagement space. Several of the key research
areas, including vision-based navigation and swarming behavior, attempt to solve
technical challenges associated with operating in degraded or compromised
environments. To achieve research goals, researchers must be able to validate
navigation and swarming technical solutions.
To meet the PCWDE objectives, a unique experimental research capability has
been established. This large-scale, outdoor motion-tracking system will provide
researchers with the ability to precisely localize agents of an aerial swarm. Small
and medium-size unmanned aerial vehicles equipped with navigation devices will
serve as development platforms for vision-based navigation solutions and
swarming-guidance strategies. The ability to deliver precision tracking of large-
scale, multi-agent experiments in an outdoor environment will enable research that
could not be accomplished with previously existing facilities at ARL or elsewhere.
2. Background
Motion capture, at a fundamental level, is the process of recording the movement of
objects through space. Modern motion-capture systems expand on this concept to
simultaneously track multiple objects or multiple points on a single object at high
precision and high measurement rates. Tracking multiple points on a rigid body
enables a motion-capture system to estimate the attitude of the object in addition to
its position as a function of time. Tracking multiple points on an articulated body
such as a robot or a human actor enables the mapping of motion-capture data to a
kinematic model of the articulated body.
Motion-capture systems are commonly used by the entertainment industry to
generate animated elements of cinematography based on the motion of human
actors.3 Academic researchers developing complex robotic devices leverage
motion-capture systems to provide near-real-time estimation of the state of the
devices with respect to their environment. This technique enables the
implementation of control techniques requiring feedback that exceeds the
capability of onboard sensors.4 Simultaneous tracking of multiple independent
robotic devices also allows researchers to investigate swarming concepts.5–7
Motion-capture systems are well suited to provide a ground-truthing capability for
researchers developing navigation sensors for autonomous systems. By
independently recording the motion of a test vehicle at high spatial and temporal
resolutions, a motion-capture system can provide valuable data for evaluation of
sensor data recorded onboard the test vehicle.8
There are a variety of fundamental technologies used by different motion-capture
systems to estimate the position of objects as a function of time. These include
systems based on inertial measurement units, sensing of magnetic fields, and radio
ranging devices.9 However, the majority of motion-capture systems employed by
the entertainment industry and academia use optical sensors and stereo-vision
techniques to produce position estimates for special optical markers at the frame
rate of the vision sensors.
Optical motion-capture systems must rapidly and reliably detect markers as they
move within a designated capture volume. This requires the markers to be easily
distinguishable from the visible background. This is typically achieved by imaging
the markers in a specific wavelength and relying on optical filters to increase the
relative sensitivity of the optical sensors in that specific wavelength. Markers used
with optical motion-capture systems are typically either passive or active based on
the application and a series of advantages and disadvantages for each type. Passive
markers reflect light generated by a strobe that emits light in a wavelength
consistent with the design of the optical sensor. Active markers directly emit light
in the appropriate wavelength.
In 2016 ARL identified an emerging need for an experimental capability that did
not then exist. Evaluation of technologies developed under the PCWDE mission
research program will require experiments consisting of multiple simultaneously
operating unmanned aerial systems (UASs) acting as projectile surrogates.10–13
Additional experiments will use soft-launched projectiles outfitted with candidate
electronics components. These experiments will require accurate ground truth
measurements of agent position collected at a data rate and level of precision that
exceeds the capability of current GPS technologies. The measurement attributes of
motion-capture systems make those technologies attractive options to meet this
need, but no existing system was capable of meeting ARL’s requirements.
Implementation of a motion-capture system to service a large outdoor range space
presents inherent challenges that no existing technology was capable of overcoming.
2.1 Motion-Capture Technology Survey
ARL conducted a detailed technology survey to identify candidate technologies
with potential to meet the experimental needs of the PCWDE research effort. The
majority of possible options were either technically immature or could not be scaled
to the size of the capture volume required by ARL. The one approach that was
found to be feasible was optical motion-capture. Several vendors had demonstrated
implementation of their optical motion-capture systems in outdoor environments.
Specifically, outdoor motion-capture using actively illuminated markers was found
to be the most viable strategy. The use of high-power active markers, properly
implemented, generates enough signal above the noise of ambient solar radiation
to allow for optical detection at distances sufficient for instrumentation of a large
capture volume.
2.2 Motion-Capture System Performance Requirements
ARL established a set of requirements for an outdoor motion-capture system to
support the needs of the PCWDE research effort. Required functional performance
parameters are shown in Table 1.
Table 1   Motion-capture system performance requirements established by ARL

Requirement                                      Objective     Threshold
Provides near-real-time tracking results         Yes           Yes
System coverage area length/width/height (m)     500/200/200   300/60/75
Marker integration volume (cm3)                  63            270
Maximum agent velocity (m/s)                     100           50
Number of agents                                 100           10
Position accuracy (m)                            0.01          0.5
Measurement of vehicle attitude
  (6 degrees of freedom [DOF])                   Yes           No
Measurement update rate (Hz)                     100           10
In addition to these performance characteristics, a series of additional design
considerations was developed to inform development of a capability suitable for
ARL’s specific needs. These considerations included the following:
• Tracking reliability of markers in ambient daylight conditions
• Disambiguation of markers located on multiple agents operating in close proximity
• Integration of markers into projectile flight hardware. The sample case provided was an 83-mm-diameter × 50-mm-tall "puck" section of a cylindrical projectile
• Setup/breakdown time for the system. The intended use is temporary instrumentation of a shared-use range facility for test events of 1 week's duration
• Consideration of environmental conditions and associated weatherproofing to support year-round testing at Aberdeen Proving Ground, Maryland
2.3 Contracting for System Development
Because the survey of existing technologies indicated that a system to meet these
requirements did not exist, a two-phase contracting approach was used to pursue a
solution. The intent of the first phase was to fund multiple vendors to demonstrate
their existing technologies and develop a proposal to extend the capability of those
technologies to meet the full-system requirements. The results of the first phase
would then be used to select a single vendor to deliver on their proposed solution
in a second phase.
In May 2017, ARL awarded a Phase 1 contract to PhaseSpace, Inc., as the only
vendor who submitted a viable response to the ARL request for proposals. At the
conclusion of the first phase, ARL subject matter experts determined that the
proposed full solution presented by PhaseSpace was viable given the maturity level
of their demonstrated existing technology. In August 2017, ARL awarded a second
phase contract for development, implementation, and demonstration of the
proposed system.
3. Large Outdoor Motion-Capture Technical Solution
The outdoor motion-capture system developed by PhaseSpace to meet ARL’s
requirements consists of 16 individual tracking pods housing six cameras each, a
total of 96 cameras. Each camera consists of a 5-megapixel vision sensor outfitted
with a 21-mm prime lens, providing a horizontal field of view (FOV) of
approximately 33°. The six cameras within each pod are arranged in two rows of
three, with the cameras oriented such that the FOV of each slightly overlaps the
adjacent cameras. This arrangement provides a total FOV for each pod of
approximately 100° horizontal by 42° vertical. Spacing 16 pods around the
perimeter of a desired capture volume with inward-facing, overlapping FOVs
enables 3-D position tracking of actively illuminated markers using stereo-vision
techniques.
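For reference, the quoted horizontal FOV follows from the pinhole relation FOV = 2 arctan(w / 2f). The short Python sketch below reproduces that figure; the sensor width is an assumed value (it is not given in this report), chosen to be consistent with the stated 21-mm focal length and approximately 33° FOV.

```python
import math

# Pinhole-camera horizontal field of view: FOV = 2 * atan(w / (2 * f)).
f_mm = 21.0          # focal length of the prime lens (stated in the report)
sensor_w_mm = 12.4   # ASSUMED active sensor width for the 5-MP imager

fov_deg = 2 * math.degrees(math.atan(sensor_w_mm / (2 * f_mm)))
print(f"horizontal FOV ~ {fov_deg:.1f} deg")  # ~33 deg

# Six such cameras per pod, slightly overlapped, yield the combined
# ~100 deg x 42 deg pod FOV described above.
```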
3.1 Tracking Pod Design
Sixteen individual tracking pods serve as the primary component of the outdoor
motion-capture system. These pods allow a team tasked with setting up the system
to quickly place an array of six cameras and the associated support hardware at an
appropriate location as a single unit. The pods also serve as a weathertight enclosure
for the cameras, allowing the units to be left in place over the duration of a
multiple-day test event.
Each pod consists of an aluminum platform that mounts to a pod stand
approximately 36 inches tall. Adjustable feet on the base of the pod stand allow the
pod to be leveled as necessary to accommodate uneven ground. An internal
aluminum frame structure provides mounting surfaces for the electronics
components that make up the functional elements of each pod. Six custom-designed
cameras with purpose-built flanged housings rigidly affix to the internal frame in a
configuration that provides adjoining FOVs with a small amount of overlap. The
configuration of cameras and other internal components is shown in Fig. 1. Two
motherboards control the cameras and process image frames for detection of
markers. A network switch enables connectivity between each pod and a central
server through a system of daisy-chaining multiple pods together. Thermal
management of the internal electronics is provided by a series of fans and a large
heat-sink located on the rear exterior of the pod. An aluminum outer case provides
a weathertight enclosure to protect the internal electronics from the ambient
environment, as shown in Fig. 2. A GPS receiver is located in a separate exterior
enclosure affixed to the top of the pod with a data feed into the pod to provide
timing data necessary for synchronization of cameras and markers.
Fig. 1 Internal components of a tracking pod including six cameras with overlapping FOVs
Fig. 2 Complete tracking pod atop aluminum pod stand
3.2 Active LED Marker Design
Actively emitting LED markers provide a bright light source that is readily
detectable by the tracking pods at extended ranges in normal daylight conditions. A
marker configuration was developed by PhaseSpace to meet ARL’s requirement
for marker implementation in a projectile form factor. This configuration is also
suitable for implementation on UASs. The marker consists of an 83-mm-diameter
printed circuit board (PCB) with 96 surface-mounted LEDs located around the
annulus of the PCB, as shown in Fig. 3. A quarter-radius aluminum reflector mounted above the LEDs provides good visibility of the emitted light from viewing angles within approximately ±40° of the plane of the PCB. A GPS receiver located on a
second, companion PCB provides a timing signal that allows the marker strobe to
synchronize the LEDs with cameras in the tracking pods. The markers achieve an
extremely bright output to enable detection by the cameras by strobing the 96 LEDs
at 3 W each in sync with frame exposures on the cameras. Although the
instantaneous power consumption of the marker is high, the duty cycle is very short,
which keeps overall power consumption at an acceptable level.
Fig. 3 Active marker strobe consisting of 96 LEDs located around the perimeter of a
custom PCB. A second PCB with a GPS receiver provides timing for synchronization with
cameras.
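As a rough illustration of this power trade, average power is peak power multiplied by duty cycle. The LED count and per-LED drive power are stated above; the pulse width in the sketch below is an assumed value, since the report states only that the duty cycle is very short.

```python
# Average strobe power: P_avg = P_peak * duty cycle.
n_leds = 96
p_led_w = 3.0            # watts per LED during the strobe pulse (from the report)
pulse_s = 100e-6         # ASSUMED pulse width per frame
frame_rate_hz = 100      # camera/measurement rate

p_peak = n_leds * p_led_w              # 288 W instantaneous
duty = pulse_s * frame_rate_hz         # 0.01 under these assumptions
print(f"peak {p_peak:.0f} W, duty {duty:.2%}, average {p_peak * duty:.1f} W")
```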
PhaseSpace has implemented a patented14 method that allows disambiguation of
multiple markers being simultaneously tracked by the system. Each marker
broadcasts a unique code by modulating the output intensity of its LEDs on
successive frames. Detection and interpretation of this modulation allows unique
identification of markers to be established from a sequence of frames.
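The patented encoding itself is not described in this report. The toy sketch below illustrates only the general principle of broadcasting a marker ID through per-frame intensity modulation; the bit depth, bright/dim levels, and detection threshold are all hypothetical.

```python
import numpy as np

# NOT PhaseSpace's patented scheme; a generic illustration only.
def encode_id(marker_id: int, n_bits: int = 8) -> np.ndarray:
    """Per-frame intensity scale factors (1.0 = bright, 0.6 = dim)."""
    bits = [(marker_id >> i) & 1 for i in range(n_bits)]
    return np.where(np.array(bits) == 1, 1.0, 0.6)

def decode_id(intensities: np.ndarray) -> int:
    """Threshold observed per-frame intensities back into an ID."""
    bits = (intensities > 0.8).astype(int)
    return int(sum(int(b) << i for i, b in enumerate(bits)))

pattern = encode_id(0x2D)        # intensity sequence over 8 frames
assert decode_id(pattern) == 0x2D
```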
3.3 Marker Detection
The primary technical barrier to implementation of an outdoor motion-capture
system of the proposed scale is the detection of markers at extended range in
ambient outdoor lighting conditions. Radiant energy available to a sensor decreases
as a function of distance from the source according to the inverse square law.15 As
the distance between the active markers and the tracking pods increases, the relative
intensity of the marker with respect to ambient solar radiation decreases. When this
signal-to-noise ratio reaches a critically low level, the marker can no longer be
reliably detected. PhaseSpace has addressed this challenge by optimizing the
camera configuration to increase relative sensitivity to the output of the active
markers while decreasing sensitivity to solar radiation. This is achieved by using
optical notch filters tuned to the emission wavelength of the active markers. The
cameras also implement a global shutter synchronized with the output pulses of the
active markers. Minimization of the sensor integration time while maintaining an
adequate exposure interval to capture the entire pulse of the LED marker increases
the signal-to-noise ratio. Because the output pulses from the active markers are very
short in duration to maximize the marker intensity while controlling the overall duty
cycle of the LEDs, the markers and cameras need to be synchronized to within a
few microseconds. Timing output from GPS receivers on both the active markers
and the tracking pods allows for this level of synchronization even though the
devices are operating remotely.
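A short numeric illustration of the inverse-square falloff follows. The ranges are representative of those discussed later in this report; the signal ratios are purely geometric, not measured values.

```python
# Received marker irradiance scales as 1/d^2, while the ambient solar
# background is independent of marker range, so the signal-to-noise
# ratio falls off quadratically with distance.
for d in (100.0, 200.0, 250.0):
    rel = (100.0 / d) ** 2        # signal relative to that at 100 m
    print(f"range {d:5.0f} m: relative marker signal {rel:.2f}")
```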
3.4 System Configuration
Wired connections between tracking pods are required to provide both power and
a data path between the pods and a central tracking server. A pair of 7000-W
inverter generators are used to provide power to the system in range environments
where line power is not readily available. Each tracking pod has external connectors
for input and output AC power, allowing power for multiple pods to be
daisy-chained between pod locations using a series of 75-m-long power cables.
Similarly, a pair of external Ethernet ports are available on each pod. These ports,
in conjunction with the internal network switch present within each pod, allow a
daisy-chain data network in which each pod is connected to neighboring pods via
75-m-long Category 6 Ethernet cables. The pods closest to the central tracking
server are connected to the server directly, providing connectivity between all 16
pods and the tracking server.
Each tracking pod is capable of detecting and classifying the light output by active
markers present within the FOV of the pod’s six cameras. These data are
transmitted via the Ethernet network to the tracking server, which processes this
information using calibrated models of the cameras, pods, and system layout, to
perform a stereo-vision optimization and estimate the positions of the markers
within the capture volume.
3.5 System Implementation for Collection of Motion-Capture
Data
Implementation of the motion-capture system for a given test event starts with
pretest planning of the system layout based on the test objectives and the physical
constraints of the range environment. ARL developed a model of the camera FOV
for each tracking pod in MATLAB. A depiction of the FOVs of all six cameras
associated with a single tracking pod is shown in Fig. 4. This model is used to
visualize the camera FOVs for multiple pods in a proposed system layout. The
overlapping FOV from the array of tracking pods is assessed to verify that adequate
camera coverage in combination with orthogonal perspectives between pods is
available to support the test objectives. A notional system layout for a range space
with a footprint of 465 × 110 m is shown in Fig. 5.
Fig. 4 Model of camera FOVs for the six cameras within each tracking pod
Fig. 5 Camera FOV for all tracking pods in a notional 465- × 110-m system layout
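ARL's FOV model was implemented in MATLAB and is not reproduced in this report. The simplified Python sketch below illustrates the underlying coverage check, treating each pod as a single 100° × 42° frustum; the pod layout, headings, and test point are hypothetical values.

```python
import numpy as np

def pod_sees(pod_xyz, heading_deg, point_xyz,
             h_fov_deg=100.0, v_fov_deg=42.0):
    """True if a world point falls inside a pod's combined FOV."""
    d = np.asarray(point_xyz, float) - np.asarray(pod_xyz, float)
    az = np.degrees(np.arctan2(d[1], d[0])) - heading_deg
    az = (az + 180.0) % 360.0 - 180.0          # wrap to [-180, 180)
    el = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))
    return abs(az) <= h_fov_deg / 2 and abs(el) <= v_fov_deg / 2

# One hypothetical row of pods along x at y = 0, all facing +y
# (heading measured from the +x axis).
pods = [((float(x), 0.0, 2.0), 90.0) for x in range(0, 465, 62)]
point = (230.0, 55.0, 20.0)
n_cover = sum(pod_sees(p, h, point) for p, h in pods)
print(f"{n_cover} pods cover the point")   # >= 2 needed for stereo
```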
Survey equipment is used to stake out the planned location of the tracking pods at
the selected test range. ARL uses a Leica TS16 total station for this task. This
instrument is not only effective for laying out the system configuration, but is also
used to accurately measure the position of the pods in a local coordinate system
once they have been set up. ARL has outfitted a box trailer for storage and
transportation of the tracking system hardware and associated equipment. If
feasible, this trailer is used to haul the tracking pods directly to each planned
location. Physical setup of each tracking pod consists of the following basic steps:
• Place the pod stand at the planned location
• Attach the pod to the stand
• Orient the pod according to the planned layout
• Level the pod using the adjustable feet on the pod stand
• Unspool Ethernet and power cables and connect to adjacent pods as appropriate
Prior to powering up the system and performing pretest calibration, the location of
each tracking pod must be measured in the local coordinate system that will be used
throughout the test event. This task is accomplished by using the TS16 and a Leica
360° mini-prism placed on a reference mark on the top of each pod. The pod
locations are a required input into the system-calibration routine and allow the
system to be oriented to the local coordinate system for the collection of
motion-capture data in that frame.
Calibration of the system requires collection of a multitude of frames in which a
common marker is visible from multiple cameras. The calibration data set should
preferably include instances when the marker is located throughout the planned
capture volume and spanning the FOV of each camera. These data are then fed into
an optimization routine that estimates the pose of each tracking pod and its
associated cameras. For typical motion-capture systems, this task is accomplished
by manually moving a calibration wand outfitted with multiple markers throughout
the desired capture volume. However, due to the large outdoor capture volume, this
approach is not viable. To efficiently move a marker throughout the capture
volume, a UAS is outfitted with a marker and flown in a pattern designed to cover
as much of the capture volume as is practical.
Once adequate calibration data have been collected, the tracking server processes
these data in conjunction with the surveyed location of each tracking pod to produce
a calibration solution. The calibration solution consists of the pose of each of the
96 cameras comprising the motion-capture system in addition to a model of the
camera intrinsics for each camera. This information is used by the system to
estimate the position of detected markers in near-real time once the test event
begins.
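The vendor's calibration solver itself is proprietary and is not detailed in this report. The toy sketch below illustrates only the general principle behind such a routine, least-squares minimization of reprojection error; it solves a one-camera, yaw-only problem with an idealized pinhole projection, and all values are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def project(cam_xyz, yaw, f_px, pts):
    """Idealized pinhole projection to a 1-D pixel coordinate."""
    d = pts - cam_xyz
    c, s = np.cos(yaw), np.sin(yaw)
    x = c * d[:, 0] + s * d[:, 1]     # camera-frame forward component
    y = -s * d[:, 0] + c * d[:, 1]    # camera-frame lateral component
    return f_px * y / x

rng = np.random.default_rng(0)
truth_yaw, f_px = 0.1, 1500.0
cam = np.array([0.0, 0.0, 0.0])
# Synthetic marker positions spread through a volume in front of the camera
markers = rng.uniform([50, -30, 0], [200, 30, 40], size=(40, 3))
obs = project(cam, truth_yaw, f_px, markers) + rng.normal(0, 0.5, 40)

# Recover the camera yaw by minimizing pixel reprojection residuals.
res = least_squares(lambda p: project(cam, p[0], f_px, markers) - obs,
                    x0=[0.0])
print(f"recovered yaw = {res.x[0]:.4f} rad (truth {truth_yaw})")
```

The fielded system solves the analogous problem for the full 6-DOF pose and intrinsics of all 96 cameras simultaneously, using the surveyed pod locations as constraints.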
4. Performance Evaluation Test Event
A test event was conducted at Aberdeen Proving Ground, Maryland, during August
2019 to demonstrate the functionality of the outdoor motion-capture system and to
characterize the system performance. The range location selected for this test event
consisted of an open field approximately 120 × 700 m bordered by a perimeter
gravel road. This site is typically used by ARL for conducting navigation research
experiments with UASs acting as surrogates for tactical systems. It is expected that
future test events leveraging the capability of the outdoor motion-capture system
will also use this site, making it a suitable location for evaluation of system
performance.
Pretest planning generated a system configuration for two parallel rows of eight
tracking pods each, with pods placed at the edge of the open field. The system layout
overlaid on satellite imagery of the selected test range is shown in Fig. 6. This
configuration provides an effective capture volume of approximately 465 m long
by 110 m wide by 60 m tall.
Setup of the motion-capture system using the process described in Section 3.5
required a team of four personnel and 4.5 h of continuous effort to transition from
arrival of the equipment on site in the box trailer to a fully set up system ready for the system-calibration sequence. The planned location of all equipment had been surveyed and
marked by paint on a previous day.
Fig. 6 Planned system layout for evaluation of motion-capture system performance
4.1 System Calibration
Calibration data were collected using ARL's GTB R600 UAS outfitted with an
active marker mounted on the underside of its reconfigurable payload platform, as
shown in Fig. 7. Prior evaluation efforts conducted during system development
identified a multi-tiered “lawnmower” pattern as an effective flight path to generate
adequate calibration data by moving the calibration marker throughout all regions
of the capture volume. The UAS flight plan for collection of system calibration data
for this test event is shown in Fig. 8. This UAS flight lasts approximately 23 min,
resulting in more than 130,000 frames of marker position to feed into the
calibration-optimization routine. Processing this calibration requires approximately
20 additional min of computer processing time on the tracking server located in the
control center. Once a calibration solution is achieved, the system is ready to track
markers in support of the test event.
Fig. 7 GTB R600 UAS outfitted with marker strobe and 360° survey prism
Fig. 8 UAS flight plan for collection of system-calibration data
4.2 System Performance Evaluation Methods
ARL developed three separate approaches to evaluate the performance of the
motion-capture system. A UAS outfitted with a tracking marker was simultaneously tracked by both the motion-capture system and traditional survey equipment, with the survey equipment providing ground-truth data to evaluate the accuracy of the motion-capture system position estimates. A rigid
rotating arm was used to move a marker through a repeatable circular path of known
radius to evaluate the precision of the motion-capture system position estimates. A
manually launched projectile surrogate was used to demonstrate the ability to
integrate an active marker into a projectile form factor and to demonstrate the
ability of the system to track a marker at higher velocities than could be achieved
with the other devices.
4.3 Collection of Data for Evaluation of Tracking Accuracy
The same GTB R600 UAS was used to generate position measurements for
evaluation of the motion-capture system tracking accuracy. The UAS was flown on
a flight path consisting of five concentric circles of decreasing diameter in the
horizontal plane, with that pattern repeated at six altitude levels. This flight path
was designed to allow the UAS to efficiently generate position data throughout a
large portion of the capture volume, resulting in an effective data set for evaluation
of the motion-capture system performance throughout the motion-capture volume
as a whole.
The GTB R600 was outfitted with a 360° reflective prism mounted just below a
tracking marker, as shown in Fig. 9. ARL’s TS16 robotic total station16 is capable
of autonomously tracking this reflective prism and collecting survey-grade
measurements of the prism position at a measurement rate of approximately 5 Hz.
Measurements collected by the TS16 are referenced to the same local coordinate
system used for calibration of the motion-capture system and serve as an effective
ground truth to evaluate the measurement accuracy of the motion-capture system.
Note that the measurement accuracy of the TS16 is estimated by the instrument for
each recorded measurement and is a function of the range between the instrument
and the reflective prism. For the scale of this test, the largest 3-D measurement error
bound provided by the TS16 instrument was 0.03 m. This position measurement
accuracy is approximately an order of magnitude better than the required accuracy
of the motion-capture system, providing adequate accuracy to serve as a
ground-truth system.
Fig. 9 360° survey prism mounted in a stacked configuration with a marker strobe for the
simultaneous collection of motion-capture and ground-truth measurements
It was determined after the conclusion of the test event that the timing of the TS16 measurements was not as accurate as anticipated, resulting in frequent timing errors of
up to 0.08 s and occasional errors up to 0.3 s. Given that the UAS velocity during
the data-collection flights was typically between 4 and 6 m/s, timing errors of this
magnitude result in the inability to directly compare measurements produced by the
motion-capture system against raw measurements from the survey instrument.
Postprocessing of the TS16 measurements was required to correct timing errors, as
detailed in Section 5.1. In hindsight it would have been beneficial to collect an
additional series of measurements with the UAS hovering in several static
locations. These measurements would allow for spot checking of measurement
accuracy without a strong dependence on the timing accuracy of the TS16
measurements. However, due to the time investment required, repeating the test to
collect these additional measurements was not possible.
4.4 Collection of Data for Evaluation of Tracking Precision
A rigid rotating-arm device was implemented to provide repeatable, constrained
motion of a tracking marker for the generation of data suitable for the evaluation of
repeatability and precision of measurements generated by the motion-capture
system. This device consisted of a 10-ft aluminum bar with a marker attached to
one end and a counterweight attached to the other, as shown in Fig. 10. This bar
was affixed to a rotating stage mounted atop a large tripod. The result of this
configuration was a device that could be manually rotated to produce repeatable
circular motion of the marker with a fixed radius of rotation of 2.28 m.
Fig. 10 Rigid rotating-arm device for collecting data to evaluate motion-capture system
measurement precision
4.5 Tracking a Marker Integrated in a Projectile Form Factor
A fin-stabilized surrogate projectile was designed for integration of the 83-mm-
diameter active marker developed by PhaseSpace. This projectile was fabricated
from glass-filled nylon using a selective-laser-sintering additive manufacturing
technique. The tracking marker was located in the middle of the projectile body, as
would be conceptually feasible for an actual projectile-based test event. This
projectile as configured for testing is shown in Fig. 11. It was manually launched
using an 11-ft-long fishing rod by attaching a short piece of Dacron line to the aft
of the projectile and casting the projectile as would be done while surf casting with
a heavy weight. This method was capable of producing projectile velocities greater
than 30 m/s, trajectories of up to 90 m long, and times of flight of approximately
4 s.
Fig. 11 Surrogate projectile for demonstration of marker integration in a projectile
configuration and collection of position measurements for a high-speed object
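As a consistency check on these figures, a drag-free point-mass model gives similar numbers; the launch angle below is an assumed value, and neglecting drag makes these results upper bounds.

```python
import math

v0 = 30.0                     # m/s, launch velocity reported above
theta = math.radians(45.0)    # ASSUMED launch angle
g = 9.81

t_flight = 2 * v0 * math.sin(theta) / g       # ~4.3 s
dist = v0 ** 2 * math.sin(2 * theta) / g      # ~92 m
print(f"time of flight ~ {t_flight:.1f} s, range ~ {dist:.0f} m")
# Consistent with the reported ~4-s flights and trajectories up to 90 m.
```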
5. Analysis of Collected Data to Establish Observed System
Performance
Position-tracking performance of the motion-capture system was evaluated by
postprocessing data in MATLAB. Data sets generated by both the motion-capture
system and the TS16 were converted into ASCII text files and then imported into
the MATLAB workspace for further manipulation. Specific analysis methods were
used on the data sets collected as described in Sections 4.3–4.5 to evaluate
measurement accuracy, measurement precision, and tracking of a projectile
configuration.
5.1 Evaluation of Position-Tracking Accuracy
Measurements produced by the TS16 survey instrument serve as the ground truth
for position-tracking accuracy. Ideally, these measurements would be directly
compared against measurements generated by the motion-capture system as a
function of time. Even though the measurements from the two systems are not
synchronous, simple interpolation techniques could be used to generate a basis for
comparison. However, as mentioned in Section 4.3, it was immediately obvious
upon attempting this approach that the timing accuracy of the total station
measurements was not adequate for the associated measurements to serve as a
ground truth in their raw state. Note that a GS16 survey GPS receiver was
connected to the TS16 while data were being collected in an attempt to provide a
GPS timing signal and ensure timing accuracy of the TS16 measurements, but even
this approach proved to be inadequate.
Leica technical support was contacted for an in-depth investigation into the way in
which the TS16 timestamps its recorded measurements. Based on this information
and a manual evaluation of the TS16 data, it is assumed that the position values of
collected measurements are valid within the accuracy of the instrument and only
the timestamps are inaccurate. Leveraging this assumption, an algorithm was
created to correct the timestamps of the TS16 data. Details of this algorithm and
the resulting postprocessed data used for ground-truthing are presented in the
Appendix. Note that the position estimates generated by the motion-capture system
were used as an input to the algorithm for correcting the TS16 measurement
timestamps. Accordingly, the timestamp corrections are inherently biased to reduce
the error between the motion-capture measurements and the ground truth. The
postprocessing algorithm to correct the TS16 timestamp errors was designed to
minimize this effect, as described in the Appendix. However, the measurement
accuracy results presented in this section are likely a slight underestimate of the
true measurement error.
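The full correction algorithm is documented in the Appendix. The fragment below sketches only its central idea under simplifying assumptions: shift each TS16 fix in time, within a bounded window, so that it best agrees with the interpolated motion-capture track. Names and the search granularity are illustrative.

```python
import numpy as np

def correct_timestamp(t_ts16, p_ts16, t_mocap, p_mocap, max_shift=0.3):
    """Return a corrected timestamp for one TS16 position fix.

    t_mocap must be sorted; p_mocap is the (N, 3) mocap track.
    """
    best_t, best_err = t_ts16, np.inf
    for dt in np.arange(-max_shift, max_shift + 1e-9, 0.01):
        # Interpolate the mocap track at the candidate time, per axis.
        q = np.array([np.interp(t_ts16 + dt, t_mocap, p_mocap[:, k])
                      for k in range(3)])
        err = np.linalg.norm(q - np.asarray(p_ts16))
        if err < best_err:
            best_t, best_err = t_ts16 + dt, err
    return best_t
```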
Linear interpolation in 3-D was used on the postprocessed TS16 measurements to
generate a basis for comparison against the motion-capture measurements at the
native measurement rate of the motion-capture system. Linear interpolation
between data points of the TS16 data assumes that the UAS was operating at
constant velocity between data points. This assumption will produce negligible
error when the data points are densely spaced in time. However, larger time gaps
will result in intermediate data points that do not reflect the true position of the
UAS. To prevent these gaps in the ground-truth data from skewing the analysis
results, all time gaps greater than 0.4 s were excluded from the comparison data set.
After applying this filter, the comparison data set included 98,105 data points
captured by the motion-capture system, representing more than 16 min of
cumulative data collection.
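A sketch of this construction follows, with generic array names: it interpolates the ground truth at the motion-capture timestamps and masks any motion-capture sample bracketed by a ground-truth gap longer than 0.4 s.

```python
import numpy as np

def ground_truth_at(t_mocap, t_gt, p_gt, max_gap=0.4):
    """Interpolate (N, 3) ground truth p_gt at mocap times; flag gaps."""
    p_i = np.column_stack([np.interp(t_mocap, t_gt, p_gt[:, k])
                           for k in range(3)])
    idx = np.clip(np.searchsorted(t_gt, t_mocap), 1, len(t_gt) - 1)
    valid = (t_gt[idx] - t_gt[idx - 1]) <= max_gap      # bracketing gap
    valid &= (t_mocap >= t_gt[0]) & (t_mocap <= t_gt[-1])
    return p_i, valid

# The error components of Eq. 1 then follow directly:
# err = p_mocap[valid] - p_i[valid]          (N_valid x 3 array)
# e_total = np.linalg.norm(err, axis=1)      (Eq. 2)
```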
Comparison of position measurements from the motion-capture system and the
ground truth in each of the principal directions of the local coordinate system was
used to generate orthogonal elements of position error as a function of time. The
error components are calculated using Eq. 1, where $p$ represents the true position, $\hat{p}$ represents the estimated position, $\epsilon$ represents the position error, and the subscripts $N$, $E$, and $D$ indicate the orthogonal components in the Northing, Easting, and Down directions, respectively. Position error along each principal axis as a function of time is shown in Fig. 12.

$$\epsilon_N = \hat{p}_N - p_N, \qquad \epsilon_E = \hat{p}_E - p_E, \qquad \epsilon_D = \hat{p}_D - p_D \tag{1}$$
Fig. 12 Orthogonal components of position measurement error as a function of time
The three error components were combined to generate an estimate of the 3-D position-error magnitude at each sample point using Eq. 2, where the subscript $T$ represents the total position error, shown as a function of time in Fig. 13.

$$\epsilon_T = \sqrt{\epsilon_N^2 + \epsilon_E^2 + \epsilon_D^2} \tag{2}$$
Fig. 13 3-D position-error magnitude (total position error) as a function of time for the
position accuracy test data set
It is apparent from examination of the position-error components that the position
error contains a small-magnitude high-frequency measurement noise component,
as well as a position-bias error that varies more slowly as the UAS moves through
the capture volume. The magnitude of this bias error is partially a function of the
specific location of the marker within the capture volume. As the marker moves
throughout the volume, the set of cameras detecting the marker changes, as does the geometry of the stereo-vision solution used to estimate the marker position.
This effect can be seen in Fig. 14 by examining those portions of the capture volume
associated with higher than typical position error.
Fig. 14 Relationship between position-error magnitude and location of the marker within
the capture volume
The position measurement rate of 100 Hz is relatively fast compared with the
frequency content of the flight dynamics of the UAS operating at a speed
of approximately 5 m/s. Accordingly, it may be desirable to apply a low-pass filter
to the position measurements in postprocessing to smooth the data and minimize
the effect of the measurement noise. To investigate the effect of this approach on
the measurement accuracy, a moving average filter with a filter window of 0.4 s
duration was used to smooth out the raw position measurements from the
motion-capture system. A plot of the total position error associated with this
postprocessed data is shown in Fig. 15. Comparing this plot against Fig. 13, it is
apparent that filtering the motion-capture data reduces some instances of
momentary high position error. The largest position error observed for the raw data
was 0.99 m compared with 0.67 m for the filtered position data. However, the
majority of the position error is associated with position bias as the UAS moves
throughout the capture volume and is not affected by applying a moving average
filter to the raw data.
Fig. 15 Total position error as a function of time for motion-capture position measurements
after a moving average filter was applied to the position measurements in an attempt to reduce
measurement noise
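At the 100-Hz measurement rate, a 0.4-s moving average corresponds to a 40-sample window. An equivalent operation in Python (array names assumed; SciPy's uniform filter handles window centering and edge behavior) is:

```python
from scipy.ndimage import uniform_filter1d

def smooth_positions(p, rate_hz=100.0, window_s=0.4):
    """Apply a centered moving average to an (N, 3) position array."""
    n = max(1, int(round(window_s * rate_hz)))   # 40 samples here
    return uniform_filter1d(p, size=n, axis=0, mode="nearest")
```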
Characterization of the overall position-tracking accuracy of the system for
comparison against the established performance requirements requires a statistical
assessment of the position error over the duration of the test event. Histograms of
the orthogonal position error components for the entire measurement comparison
data set are shown in Fig. 16.
Fig. 16 Histograms of position error for each of the orthogonal components
A histogram of the total position error is shown in Fig. 17. The mean total position
error was 0.207 m. Because the total position error is the norm of component errors
in 3-D, it is appropriate to characterize this distribution as a chi distribution with
3 DOF, also known as a Maxwell–Boltzmann distribution. However, the quality of
the fit for the Maxwell–Boltzmann distribution was less than desirable, as shown in Fig. 18. This is likely because the assumption that all three error components are normally distributed with zero mean and equal variance does not hold for this data set.
Fig. 17 Histogram of 3-D position error magnitude (total position error) for the position
accuracy test data set
Fig. 18 Fit of Maxwell–Boltzmann probability density function to the position-error
distribution
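For reference, such a fit can be reproduced with SciPy, whose maxwell distribution is the chi distribution with 3 DOF. The data below are synthetic placeholders standing in for the measured error magnitudes; fixing the location at zero reflects the zero-mean assumption discussed above.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the measured 3-D error magnitudes.
e = stats.norm.rvs(size=(10000, 3), random_state=0) * 0.1
e_total = np.linalg.norm(e, axis=1)

loc, scale = stats.maxwell.fit(e_total, floc=0)
print(f"fitted scale = {scale:.3f}")
# A poor fit on real data, as in Fig. 18, indicates the components are
# not zero-mean, equal-variance normal.
```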
An alternative to characterizing the position-tracking accuracy with a probability
distribution is to use spherical error probable (SEP) as the performance metric. This
metric refers to a sphere of given radius, centered on the true value, within which a
certain percentage of the measurements occur. SEP is commonly used for
characterization of the accuracy of navigation systems17 and is similarly appropriate
for this application. Commonly used percentile thresholds for calculation of SEP
are 50% and 90%, referred to as SEP50 and SEP90, respectively.
The measurements collected during this test event exhibited an SEP50 value of
0.189 m and an SEP90 value of 0.362 m. It is also useful to assess the measurements against the threshold position-tracking accuracy requirement of 0.5 m, established by ARL as a performance benchmark. It was found that 98.6% of the observed measurements
had less than this level of error. The demonstrated level of tracking accuracy
exceeds the threshold requirement established for development of the system and
is an impressive result for position tracking within a motion-capture volume of
greater than 3 million m3.
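Computed directly, the SEP metrics are percentiles of the 3-D error magnitudes, and the threshold check is the fraction of samples under 0.5 m; a minimal sketch, with e_total standing in for the measured error-magnitude array, follows.

```python
import numpy as np

def sep_metrics(e_total, threshold=0.5):
    """SEP50/SEP90 and fraction of samples under an accuracy threshold."""
    sep50, sep90 = np.percentile(e_total, [50, 90])
    return sep50, sep90, float(np.mean(e_total < threshold))

# Reported values for this test event:
# SEP50 = 0.189 m, SEP90 = 0.362 m, 98.6% of samples < 0.5 m.
```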
5.2 Discussion of Tracking Accuracy Error
It is useful to examine the instances of higher than typical position error to gain
insight into specific scenarios that may be contributing to the position error.
Figure 19 highlights a specific time period where a number of samples had position
error greater than 0.6 m. Figure 20 depicts the component position errors for this
specific time period of the test as a function of time. Figure 21 depicts the horizontal
plane position estimate produced by the motion-capture system and the
postprocessed ground truth data generated by the TS16.
Fig. 19 Highlighted data segment with higher than typical position error
Fig. 20 Position error components for segment of data with higher than typical error
Fig. 21 Horizontal plane representation of motion-capture data, ground-truth position, and
measurement error for a segment with higher than typical position error
Visualization of the camera rays indicating detection of the marker by a given
tracking pod reveals that during the segment of time highlighted in Figs. 19–21, the
number of tracking pods detecting the marker was frequently changing. The
high-frequency measurement noise observed in this example is associated with
instantaneous jumps in the estimated position when the number of tracking pods
that contribute to the estimate changes. This characteristic of the marker-position
measurements is consistent throughout the recorded data. When the number of
tracking pods detecting the marker changes, the stereo-vision solution adjusts to the
new number of inputs resulting in a jump in the position estimate.
Investigating the segment of higher than typical error further, it is apparent that for
the specific instances of error greater than 0.6 m only two or three pods were
detecting the marker. Often, one of these pods was located over 200 m from the
marker. This scenario highlights a number of underlying factors associated with the
accuracy of position estimates generated from computer stereo vision.
Generally, computer stereo vision relies on a calibrated camera model to
characterize a projective transformation from the camera pixel space into the world
frame. It also relies on a calibration of the system configuration to estimate the pose
of each camera with respect to the local frame. Using these models, any pixel
location on the camera sensor represents a ray originating at the camera location
and projecting into the world frame. In the case of a perfect camera model, an object
detected by the sensor will lie somewhere along this camera ray in the world frame.
Practically, however, error is introduced in the transformation between the sensor
frame and the world frame. This error can be associated with the discrete resolution
of the sensor, errors in the camera model, errors in estimation of the camera pose,
or a combination of all these factors. Note that because the conversion from the
sensor frame to the world frame is a projective transformation, any error in the
model of this transformation results in error in the spatial coordinates of the world
frame that increases linearly with the distance between the camera and the detected
object.
Estimating the position of a detected marker requires that a minimum of two
tracking pods detect the marker simultaneously. An optimization algorithm is used
to estimate the location of the marker associated with the imperfect intersection of
camera rays emanating from each camera that detects the marker. This optimization
benefits from a perspective between cameras that is increasingly orthogonal
according to the principles of dilution of precision. Increasing the number of
cameras detecting the marker typically reduces the error in the estimated position
by reducing the dilution of precision and providing additional inputs to the
optimization algorithm. If the error associated with the contribution from each
camera is assumed to be zero mean white noise of equal variance, the measurement
error will decrease as additional observations are added to the optimization
solution.
The current implementation of the motion-capture system uses an optimization
algorithm for estimation of marker position that weights all observations equally in
the solution. However, as described previously, the error associated with a given
observation is expected to increase linearly with the distance between the camera
and the marker. This is particularly noteworthy given the scale of the system and
the ability of the cameras to detect markers at long distances. Markers were often
detected by cameras up to 250 m away. Observations from these cameras are expected to contribute more than double the measurement error of a camera at a more typical detection range of 100 m. It is likely that the accuracy of the position
measurements could be improved in the future by applying a weighting function in
the optimization solution based on the range of the marker from a given camera.
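The sketch below illustrates the triangulation principle described above, a generic least-squares intersection of camera rays, together with the kind of range-based weighting suggested here. It is not PhaseSpace's actual solver; as noted, the fielded system weights all observations equally.

```python
import numpy as np

def triangulate(origins, dirs, weights=None):
    """Weighted least-squares point nearest a set of rays.

    Minimizing sum_i w_i * ||(I - u_i u_i^T)(x - o_i)||^2 gives the
    linear system (sum_i w_i P_i) x = sum_i w_i P_i o_i, P_i = I - u u^T.
    Requires at least two non-parallel rays.
    """
    origins = np.asarray(origins, float)
    dirs = np.asarray(dirs, float)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    if weights is None:
        weights = np.ones(len(origins))      # equal weighting (as fielded)
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, u, w in zip(origins, dirs, weights):
        P = np.eye(3) - np.outer(u, u)       # projector normal to the ray
        A += w * P
        b += w * P @ o
    return np.linalg.solve(A, b)

# Hypothetical range-based weighting about an initial estimate x0:
# weights = 1.0 / np.linalg.norm(origins - x0, axis=1) ** 2
```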
Qualitative examination of specific segments of data from the test reveals results
consistent with the practical limitations of a motion-capture system based on stereo computer vision. Measurements with low levels of total position error often
had up to seven cameras detecting the marker, in a configuration producing a low
dilution of precision, with limited distances between the cameras and the marker.
A representative example is shown in Fig. 22. Conversely, measurements with
higher levels of total position error often result from few cameras detecting the
marker and either a camera configuration producing an increased dilution of
precision or large distances between the marker and some of the cameras
contributing to the solution. Representative examples of these scenarios are shown
in Figs. 23 and 24, respectively.
Fig. 22 Example marker detection scenario for a measurement with low position error
Fig. 23 Example marker detection scenario with few cameras detecting the marker and a
relatively high dilution of precision resulting in high position error
Fig. 24 Example marker detection scenario with few cameras detecting the marker and
detection by a camera a long distance from the marker resulting in high position error
5.3 Evaluation of Position-Tracking Precision
The rigid, rotating-arm apparatus described in Section 4.4 was used to collect
position measurements specifically for the purpose of evaluating measurement
precision. The rotating arm was placed in a series of positions within the capture
volume and manually spun for approximately 10 revolutions. Examination of these
data segments reveals the same measurement characteristics as observed with the
marker moved throughout the space by the UAS. When the number of cameras
detecting the marker changes, the position estimate instantaneously shifts.
However, when the number of cameras detecting the marker is stable, the planar
circular motion is remarkably consistent. It is these periods of consistent
measurement over multiple revolutions that are of interest for evaluating the
measurement precision. A specific case was selected for detailed analysis. Six
cameras were tracking the marker for the majority of the data points associated with
this case, as shown in Fig. 25. This case represents the best-case measurement
precision of the system because measurement noise associated with variation in the
number of cameras detecting the marker has been eliminated.
Fig. 25 Marker detection scenario used for evaluation of position measurement precision
The rigid rotating-arm apparatus generated repeatable, circular, planar motion of
the marker with a radius of rotation of 2.28 m. It is possible to use this constrained
geometry to generate a basis for evaluation of the precision of measurements from
the motion-capture system. A custom optimization algorithm was used to establish
a centroid and plane of rotation for the rotating arm of known radius, as shown in
Fig. 26. Using only measurements when the number of cameras detecting the
marker was stable, the magnitude of the error between this reference and the
motion-capture position measurements characterizes the precision of those
measurements, a best-case scenario for measurement precision of the system. A
histogram of the measurement error is shown in Fig. 27. Using the same method to
characterize this error distribution as described in Section 5.1, the mean error was
1.3 mm with an SEP50 of 1.1 mm and an SEP90 of 2.3 mm. Note that if the
additional measurements from this segment of time are included, accounting for
changes in the number of cameras contributing to the measurement, the mean error
increases to 7.2 mm with an SEP50 of 1.4 mm and an SEP90 of 24.6 mm. To
provide practical context for the level of precision achieved, Fig. 28 depicts a
segment of the measured marker path in the horizontal plane. Note that the marker
made 11 revolutions during this segment with the motion-capture system producing
very repeatable results on each revolution.
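A minimal sketch of this style of reference fit follows; it substitutes a PCA plane fit and in-plane centroid for the custom optimization actually used, and reports sample percentiles as a simple stand-in for the SEP fitting method of Section 5.1.

```python
import numpy as np

def circle_fit_error(points, radius=2.28):
    """Distance of each measurement from an ideal planar circle of known radius.

    points : (N, 3) marker positions from frames with a stable camera count.
    The rotation plane is fit by PCA and the circle center is taken as the
    in-plane centroid (adequate when the revolutions are sampled evenly).
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    e1, e2, normal = vt  # vt rows: two in-plane axes, then the plane normal
    r_inplane = np.hypot(centered @ e1, centered @ e2)
    z = centered @ normal                  # out-of-plane deviation
    err = np.hypot(r_inplane - radius, z)  # 3-D distance to the circle
    return {"mean": err.mean(),
            "p50": np.percentile(err, 50),   # sample stand-ins for SEP50/SEP90
            "p90": np.percentile(err, 90)}
```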
Fig. 26 Result of fitting a planar circular motion to position measurements associated with
the rigid rotating-arm apparatus
Fig. 27 Distribution of error between position measurements associated with the rigid
rotating-arm apparatus and the planar circular motion fit best-case scenario
Fig. 28 Measurement precision of the motion-capture system was demonstrated by the
consistency of position measurements associated with the rigid rotating-arm apparatus
5.4 Demonstration of Tracking a Projectile Configuration
The manually launched projectile with integrated marker described in Section 4.5
was used to verify the ability of the system to track a marker in this configuration.
This projectile was also able to achieve greater flight speeds than possible with the
UAS, increasing the velocity at which the system’s tracking performance was
verified. The projectile was launched repeatedly at various locations throughout the
motion-capture volume, as shown in Fig. 29. The flight path was primarily along
the longitudinal axis of the capture volume, as would be typical for an actual flight
experiment. Eight projectile flights were captured in this data set. The
motion-capture system was able to successfully track the projectile for each of the
eight flights. Note, however, that a gap in the tracking solution exists just after
launch in each example. A single case (Fig. 30) was selected for more-detailed
analysis.
Fig. 29 Series of trajectories for a manually launched projectile with an integrated tracking
marker
Fig. 30 Trajectory profile of manually launched projectile selected for detailed analysis
(direction of travel is from right to left)
A point-mass ballistic model of the projectile trajectory including acceleration due
to gravity and aerodynamic drag was created to approximate the projectile flight
dynamics. The drag coefficient and initial conditions for this model were manually
fit to the motion-capture data to provide a clean representation of the projectile
flight path. The results of this process to approximate the flight path is shown in
Fig. 31. The launch velocity estimated by the model is 33 m/s. Using the model to
approximate the launch time, it appears that the gap in the motion-capture position-
tracking solution was 360 ms.
Fig. 31 Fit of point-mass ballistic model to trajectory profile captured by the motion-capture
system
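A minimal sketch of a point-mass model of this kind, with gravity and quadratic drag, is given below; the lumped drag parameter, time step, and initial state are hypothetical placeholders for the manually fit values.

```python
import numpy as np

def simulate_flight(p0, v0, k_drag, dt=0.001, t_end=4.0):
    """Point-mass trajectory under gravity and quadratic aerodynamic drag.

    Acceleration: a = g - k_drag * |v| * v, where k_drag lumps
    0.5 * rho * Cd * A / m into a single fit parameter. Forward-Euler
    integration at a small time step is adequate for this short flight.
    """
    g = np.array([0.0, 0.0, -9.81])
    p, v = np.asarray(p0, float), np.asarray(v0, float)
    path = [p.copy()]
    for _ in range(int(t_end / dt)):
        a = g - k_drag * np.linalg.norm(v) * v
        v = v + a * dt
        p = p + v * dt
        path.append(p.copy())
        if p[2] < 0.0:  # stop when the projectile reaches ground level
            break
    return np.array(path)

# Hypothetical launch: 33 m/s at a 45-degree elevation from 1 m height
traj = simulate_flight([0.0, 0.0, 1.0], [23.3, 0.0, 23.3], k_drag=0.01)
```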
This gap in the tracking solution raises a concern that the motion-capture system is
incapable of tracking markers at velocities over approximately 30 m/s. Engineering
analysis of the camera integration time and the spatial resolution of the cameras at
functional ranges suggests that the system should be capable of tracking markers at
velocities exceeding 100 m/s. Examination of individual frames from cameras that
should have been tracking the projectile during the interval in question revealed
that the cameras were detecting the marker but were not able to extract the
identification code of the marker from the modulation of strobe intensity. It is
speculated that rapid pitching motion of the projectile just after launch caused by
the dynamics of the trebuchet launch method generated an apparent variation in the
marker brightness when detected by the cameras. This unexpected variation in
marker intensity may have kept the cameras from properly identifying the marker
code until the pitching motion damped out. It is not possible to definitively state
the cause of the gap in the ability of the system to identify the marker code from
the currently available data, and additional testing will be required if this system
response is deemed unacceptable for specific test events. Even though the marker
was not identified in near-real time during the tracking gap, because the marker was
detected throughout the gap, the flight path of the projectile can be recreated in
postprocessing if desired.
The test of a marker integrated into a surrogate projectile demonstrated that this configuration is a viable capability of the system. Motion capture of the
projectile flight path in near-real time with positive identification of the marker
code was demonstrated up to 30-m/s projectile velocity. Marker detection without
marker code identification was demonstrated up to 33 m/s. It is expected that the
previous engineering analysis predicting tracking performance of markers
exceeding 100 m/s is still valid, although this performance has not been verified because higher velocities are difficult to achieve in practice without a more-advanced projectile launcher.
5.5 Performance Criteria Not Directly Evaluated
Some of the performance criteria established by ARL could not be directly
evaluated as part of the test event. These system-performance characteristics must
either be evaluated through engineering analysis of the system design or through
indirect observation.
As discussed in Section 5.4, the ability of the system to track a marker moving at
high speed was demonstrated up to a marker velocity of 33 m/s. It was not feasible
to achieve higher marker velocities to directly observe tracking performance up to
the objective requirement of 100 m/s. Engineering analysis predicts the ability to
track markers at velocities exceeding this requirement. During system
development, PhaseSpace performed a test to rapidly sweep a marker across the
pixel space of a camera as an approximation of tracking a marker moving at high
speed in the spatial domain. This test involved rapidly slewing a camera past a fixed
marker, which verified the ability of the camera to track a marker rapidly transiting
the camera pixel space. The maximum speed for a marker moving through the
capture volume at which the system can generate valid position measurements
remains unverified.
ARL specified that the motion-capture system must be capable of generating
position measurements of a marker in near-real time. Quantitative characterization
of the measurement latency would require a specific hardware configuration to log
the timing of camera exposures and position measurement solutions on a common
time base. This effort is outside the scope of the current test and would need to be
pursued in the future if specific uses of the motion-capture system require an
accurate characterization of the measurement latency. Engineering analysis of the
system in its current configuration predicts measurement latencies of less than
30 ms.
ARL specified a requirement to simultaneously track at least 10 markers, with an
objective requirement of 100 simultaneous markers. Testing conducted thus far has
demonstrated simultaneous tracking of only three markers, limited by the
availability of marker hardware. The bit length of the code broadcast by each
marker using modulation of the marker brightness allows for up to 64 simultaneous
marker codes. Additional simultaneous markers would be possible by extending the
bit length of the code. While the capability for detection of many simultaneous
markers remains unverified, it is expected that the system would track 64 markers
without issue.
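For context, the stated 64-code limit is consistent with a 6-bit code; the sketch below works through the capacity arithmetic, with the one-bit-per-frame broadcast rate being an assumption rather than a documented property of the system.

```python
def marker_capacity(bits):
    """Number of unique marker IDs supported by a given code bit length."""
    return 2 ** bits

def code_duration_ms(bits, frame_rate_hz=100):
    """Time to broadcast one full ID code, assuming one bit per frame."""
    return 1000.0 * bits / frame_rate_hz

assert marker_capacity(6) == 64   # matches the stated 64-code limit
assert marker_capacity(7) == 128  # one added bit doubles the capacity
```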
ARL specified the ability to capture the attitude of a vehicle as an objective
requirement. The software provided by the tracking server has the ability to define
rigid bodies represented by an array of three or more markers. Once a rigid body
has been defined, the system is capable of estimating the attitude of a rigid body by
simultaneously tracking the markers associated with the rigid body. This approach
has been demonstrated by PhaseSpace on indoor motion-capture systems using
software nearly identical to that implemented for the outdoor motion-capture
system. The limiting factor in implementing this approach for the larger scale of
the outdoor system is the requirement to position the multiple markers on the
vehicle at sufficient separation from each other to produce adequate angular
resolution for estimation of vehicle attitude given the spatial resolution of the
motion-capture system. It is expected that the system is capable of estimating
vehicle attitude, although this feature has not been explicitly tested. The accuracy
of any attitude measurements generated by the system in the future would be a
function of the position-measurement precision and accuracy for individual
markers comprising a rigid body as well as the separation distance between those
markers.
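Attitude estimation from a tracked marker array commonly reduces to the orthogonal Procrustes problem; the SVD-based (Kabsch) sketch below illustrates the general approach and is not the vendor's implementation.

```python
import numpy as np

def rigid_body_attitude(body_pts, world_pts):
    """Estimate rigid-body attitude from marker positions (Kabsch/SVD).

    body_pts  : (N, 3) marker locations defined in the body frame (N >= 3)
    world_pts : (N, 3) tracked positions of the same markers
    Returns the rotation R such that world ≈ R @ body (up to translation).
    """
    B = body_pts - body_pts.mean(axis=0)
    W = world_pts - world_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(B.T @ W)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    return (U @ S @ Vt).T
```

Consistent with the limitation noted above, the angular resolution of such an estimate degrades as the marker separation shrinks relative to the position-measurement noise.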
The ability of the system to disambiguate multiple markers located in close proximity is important if future tests are to evaluate concepts with low separation distances between agents or configurations with
multiple markers on a single agent. This capability was not explicitly characterized
during the test event presented in this report. However, an alternative configuration
of the rigid rotating-arm device was used to make qualitative observations on
scenarios where multiple markers may obscure each other. A second marker strobe
was added to the apparatus partway between the pivot and the marker on the end of the rotating arm, resulting in a separation distance of 1 m between markers. When rotated in the horizontal plane, this configuration produces a situation where
the two markers are momentarily aligned from the perspective of certain tracking
pods. By replaying data captured during this scenario it is possible to observe
instances when the two markers rotate such that they are nearly aligned, as shown
in Fig. 32. As they continue to rotate, the markers come into alignment from the
perspective of a certain pod, resulting in the detection of both markers being lost,
as shown in Fig. 33. As the rotating arm continues forward, the markers move out
of alignment and are once again detected by the tracking pod, as shown in Fig. 34.
Note that after the markers can once again be distinguished from each other, it takes
a series of frames for the system to reestablish the code uniquely identifying each
marker from the intensity modulation of the marker strobe. In the current
configuration, this process can take up to 11 data frames, a duration of
110 ms. The sequence of tracking, lost detection, and reacquisition repeats as the arm advances and the markers come into alignment from the perspective of other pods. Because each tracking pod observes the markers
from a different perspective, this scenario of lost marker detection occurs at
different times for each pod. The overall result is that even though marker tracking
is briefly lost for individual pods, the motion-capture system as a whole is able to
continuously track the position of both markers without any gaps in the data.
Fig. 32 Configuration of camera rays tracking two markers mounted on a rigid rotating
arm. Counterclockwise rotation of the arm will result in loss of marker detection by cameras
to the left and right shortly after this frame.
Fig. 33 Configuration of camera rays tracking two markers mounted on a rigid rotating
arm. Position of the two markers has overlapped from the perspective of cameras on the right
and left, resulting in loss of marker detection by those cameras.
Fig. 34 Configuration of camera rays tracking two markers mounted on a rigid rotating
arm. As the arm has continued rotating counterclockwise the position of the two markers has
once again become distinct from the perspective of cameras on the right and left, resulting in
reacquisition of the markers by those cameras.
5.6 Practical Considerations and Opportunities for
Improvement
In addition to the quantitative assessment of measurement accuracy discussed, the
test event provided an opportunity to evaluate the practical elements of successfully
implementing such a large-scale, complex motion-capture system. Specific
elements of the system design as well as the configuration for this specific test event
highlighted a need for several improvements, which will be pursued as the system
is employed for test events supporting ARL research efforts.
The most-notable issue impacting the ability to efficiently collect data with the
motion-capture system was poor reliability of network connections between
tracking pods in the daisy-chain configuration. Because the data from pods toward
the end of the chain pass through the network switch on several other pods, the
configuration creates a “weakest link” scenario. If any interpod network connection
fails, it is likely that the connectivity to multiple pods will be lost. This situation is
exacerbated by the fact that the pods are located hundreds of meters from the
command center of the experiment, requiring significant time and physical effort simply to reset a pod. For unknown reasons, multiple failures of the network
cables supplied with the system resulted in significant time spent debugging cable
issues during the test event presented in this report. Replacing these cables with
stock that ARL happened to have on hand seemed to resolve the issues. It is
recommended that all of the current network cables be replaced with ruggedized versions prior to future testing. It is also recommended that future iterations of the
tracking pods be redesigned to allow for easier network troubleshooting.
The current aluminum pod stands with adjustable feet adequately serve the intended
purpose of providing a base for the tracking pods that can be leveled to adjust for
uneven ground at the designated test range. However, there is significant room for
improvement in this element of the system design. The adjustment range of the
stand’s feet is small and difficult to manipulate. This design limits the degree to
which unlevel ground can be accommodated. These stands are also less rigid than
would be ideal for an application that relies on robust, rigid placement of
high-resolution imaging devices. A mounting system based on the use of heavy-
duty survey equipment tripods would address these concerns and result in a design
that is both easier to set up and more robust once established.
One of the realities of such a large capture volume is the scope of the effort required to calibrate the system. A UAS flight of more than 20 min is required to generate calibration data, and those data take tens of additional minutes to process once collected. It is
impractical to recalibrate the system frequently, and performing this process once
at the start of each test day is the desired level of calibration effort. It is difficult,
however, in the current configuration to verify that a previously generated system
calibration is still valid. It is possible for a pod to be bumped by personnel working
on the system or disturbed by wind. These scenarios would invalidate the current
calibration solution, but there is currently no method to verify the calibration health
of the system. Because the detection range of the cameras is adequate to detect
markers located across the width of the capture volume and beyond, it would be
possible to place markers on each tracking pod and have them be detected by other
tracking pods. Adding these static markers as a fiducial for monitoring calibration
health would allow the system to identify a pod that had been bumped because the
static markers would no longer be at the pixel coordinates they were detected at
during the calibration. It is likely that these static markers located at known
surveyed locations would also provide data to constrain the calibration solution,
resulting in more-accurate calibration solutions.
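A calibration-health monitor of the kind proposed could be as simple as comparing current detections of the static markers against the pixel coordinates recorded at calibration time; in the sketch below, the drift tolerance is a hypothetical value.

```python
import numpy as np

def calibration_drifted(ref_pixels, cur_pixels, tol_px=2.0):
    """Flag a pod whose view of the static fiducial markers has shifted.

    ref_pixels, cur_pixels : (M, 2) pixel coordinates of the M static
    markers as seen by one camera at calibration time and now.
    tol_px is a hypothetical drift tolerance in pixels.
    """
    drift = np.linalg.norm(cur_pixels - ref_pixels, axis=1)
    return bool(np.any(drift > tol_px)), float(drift.max())
```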
The intrinsics models of the cameras located within each tracking pod were
recalibrated during the week prior to the test event discussed in this report. This
calibration process consists of mounting each tracking pod on a pan/tilt apparatus,
capturing a series of images of a checkerboard pattern displayed on a large
flat-screen television, and processing the resulting imagery to characterize each
camera. This sequence is labor-intensive and requires approximately 1 h of effort
for each pod. The accuracy of this calibration is critical to the overall motion-
capture position-tracking accuracy. However, the stability of this pod intrinsic
calibration over time is unknown. As the tracking pods are handled multiple times
during transportation, setup, and breakdown, it is possible that the detailed
properties of the projective transformation associated with each camera could
change slightly due to vibration and shock. It is also possible that the calibration of
the cameras could vary with temperature. The stability of camera calibration has
not been assessed, and to do so would require a significant investment in resources.
In lieu of a more-comprehensive understanding of this effect, the age of camera
intrinsics calibration data must be considered as an element of system use in
support of future test events. Spot checking of system measurement accuracy using
survey equipment is probably justified as an element of each test.
6. Conclusion
ARL has implemented the world’s largest outdoor motion-capture system using
hardware and software custom-designed by PhaseSpace, Inc. This system
demonstrated the ability to meet the established performance requirements during
a recent test. The motion-capture system captured position measurements for a
marker mounted on a UAS with a position-tracking accuracy SEP50 value of
0.189 m and an SEP90 value of 0.362 m. Marker tracking was demonstrated over
a motion-capture volume exceeding 3 million m³. Position-tracking precision was
observed to be as good as 1.3 mm in cases where the number of cameras detecting
a marker was stable. Position tracking of a marker integrated into a surrogate
projectile configuration demonstrated tracking of the marker at velocities up to
33 m/s. Over the course of the test event, cameras within the tracking pods were
able to reliably detect active marker strobes at ranges exceeding 150 m and often
detected markers as far away as 250 m. This ability to track markers at extreme
ranges for a motion-capture system suggests the possibility of establishing even
larger capture volumes than demonstrated thus far.
Minor issues associated with system reliability and practical implementation for
efficient testing still exist, but it appears that PhaseSpace has overcome the major technical barriers that previously prevented outdoor optical motion capture at the scale that is now possible with this system. ARL intends to continue to refine
and resolve remaining limitations of the system as resources become available.
The capability to perform motion-capture measurements of large-scale outdoor test
events promises to advance multi-agent collaborative navigation technologies by
enabling ARL to conduct experiments that were not previously possible. This
capability also has the potential to advance other ARL research areas involving the
interaction of multiple moving agents. These areas include but are not limited to
heterogeneous swarming concepts, ground/aerial agent interactions, counter-UAS
systems, and human–agent teaming.
Appendix. Method for Correcting Timestamp Errors of Leica TS16
Total Station Data
The survey data collected during the test event by the Leica TS16 were intended to
be used as natively recorded for ground-truth position of the tracking marker.
However, as discussed in Section 5.1 of the main report, it was determined after the
conclusion of the test that the timing accuracy of the raw TS16 measurements was
insufficient for evaluation of the position-tracking accuracy of the motion-capture
system.
Examination of the recorded data in conjunction with information received from
Leica technical support identified that the position values of collected
measurements are valid within the accuracy of the instrument even though the
timestamps associated with those measurements are inaccurate. This effect can be
seen in Fig. A-1. In the top panel, the position of the survey prism as a function of
time exhibits the timing errors associated with the TS16 measurements, resulting
in a “noisy” position trace even though the UAS was flying in a smooth arc. In the
bottom panel, this smooth flight path is apparent by plotting the position of the UAS
in the horizontal plane independent of time for the same segment.
Fig. A-1 Segment of raw data recorded by the TS16 displayed both as a function of (top) time
and (bottom) in the horizontal plane to demonstrate the effect of timing errors for position
measurements that are otherwise accurate
Using the assumption that position values for the TS16 measurements are accurate
and only the timestamps contain error, local-time corrections can be made if the
velocity of the UAS is known. Position measurements from the motion-capture
system covering the same time period as the TS16 measurements can be used to
provide the necessary velocity input. It is undesirable to use the motion-capture
measurements for manipulating the TS16 measurements because this approach has
the potential to bias the ground truth in such a way as to minimize the position error
calculated in assessment of the motion-capture system performance. However,
alternate sources of the necessary data are not available, and the timestamp
correction algorithm has been designed to minimize the impact of using the
motion-capture data as an input. In addition to the assumption that only the
timestamps of the TS16 data are flawed, the following assumptions were leveraged
in creation of the timestamp correction algorithm:
1) The UAS was flying a circular pattern at a generally constant airspeed. The
design of the flight plan in conjunction with the flight dynamics of the UAS
results in a motion of the marker that does not contain rapid accelerations
or continuous motion in a straight line.
2) The motion-capture system has position-tracking accuracy on the order of
1 m throughout the capture volume without discontinuities in accuracy as a
function of position within the volume.
3) Motion-capture position estimates contain error in the form of bias and
noise, but these measurements can be filtered to remove noise and
differentiated to estimate an accurate marker velocity over short segments
of time. By differentiating the position measurements to estimate velocity,
position bias error that is assumed constant over short distances does not
influence the velocity estimate.
The algorithm to correct timing errors in the TS16 measurements uses the following
approach:
1) Loop through short overlapping segments of TS16 data, using a segment
length of 20 s.
2) Identify measurements collected by the motion-capture system that overlap
this segment plus or minus a short buffer.
3) Filter the motion-capture position estimates by using a two-pass moving
average filter to reduce measurement noise.
4) Bias the motion-capture estimate to the midpoint of the TS16 segment. This
creates a reference signal equivalent to integrating the motion-capture
velocity forward and backward in time from the midpoint of the TS16
segment.
5) Shift the timestamps of each point of TS16 data in the segment to minimize
horizontal position offset as a function of time between the individual point
and the reference signal. A maximum time correction limit of ±0.3 s is
enforced.
6) Shift the time bias of the entire segment to minimize average position offset
between the segment and the raw motion-capture position data. This step
addresses the timing error of the TS16 point at the segment midpoint since
all other timestamps were corrected relative to this point.
7) Save the corrected time values for the middle 50% of the segment. Using overlapping segments in this manner serves two functions in this implementation:
• Correcting only those timestamps within a short window on either side of the midpoint of the longer segment minimizes the effect of variable bias in the motion-capture data as a function of time.
• Maintaining an overall segment length adequate to ensure that a curved portion of the UAS flight is captured minimizes the possibility of shifting the entire segment to obscure bias error in the motion-capture data parallel to the direction of travel of the UAS.
8) Establish corrected timestamps for the entire TS16 data set by assembling
the corrected timestamps from each of the processed segments.
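A condensed sketch of steps 3 and 5 of this procedure is given below; the filter window, candidate-shift grid, and brute-force search are illustrative assumptions, and the segment biasing of steps 4 and 6 is omitted for brevity.

```python
import numpy as np

def smooth(x, win=11):
    """Two-pass moving-average filter (step 3); win is an assumed window."""
    k = np.ones(win) / win
    return np.convolve(np.convolve(x, k, "same"), k, "same")

def correct_timestamps(ts16_t, ts16_xy, mc_t, mc_xy, max_shift=0.3):
    """Shift each TS16 timestamp to best match the filtered motion-capture
    track in the horizontal plane (step 5). mc_t must be sorted ascending.

    ts16_t : (N,) raw timestamps     ts16_xy : (N, 2) horizontal positions
    mc_t   : (M,) mocap timestamps   mc_xy   : (M, 2) horizontal positions
    """
    mc_f = np.column_stack([smooth(mc_xy[:, j]) for j in range(2)])
    shifts = np.arange(-max_shift, max_shift + 1e-9, 0.01)  # +/- 0.3 s limit
    t_new = ts16_t.astype(float).copy()
    for i, (t0, p) in enumerate(zip(ts16_t, ts16_xy)):
        # Reference track sampled at every candidate shifted time
        ref = np.column_stack([np.interp(t0 + shifts, mc_t, mc_f[:, j])
                               for j in range(2)])
        t_new[i] = t0 + shifts[np.argmin(np.linalg.norm(ref - p, axis=1))]
    return t_new
```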
The results of the algorithm described are shown in Fig. A-2 for a portion of the
same time segment presented in Fig. A-1. Shown in black are TS16 position
measurements as a function of time with corrected timestamps.
Fig. A-2 Comparison of raw TS16 position measurements and the same measurements with
timestamp corrections applied (shown in black)
It is useful to examine the distribution of the time corrections applied to each point from the TS16 data to quantify the magnitude of the timing errors in the data. That
distribution is shown in Fig. A-3.
Fig. A-3 Distribution of time correction applied to individual TS16 position measurements
The role of timing in evaluating the accuracy of the collected motion-capture data
can be visualized by examining the error vectors tracing individual measurements
back to the ground-truth position used to calculate the error for that measurement.
Visualizing the data in this manner makes it apparent that even though the motion-
capture data exhibit a variable amount of position error in the direction parallel to
the flight path of the UAS, the error rays trace back to the ground-truth signal
incrementing steadily forward as would be expected. A representative example is
shown in Fig. A-4.
Fig. A-4 Representative example of position-error traces between motion-capture
measurements and the ground-truth data implying the relationship between position and time
for calculation of the position error
As discussed previously, it is important that the design of this algorithm be robust
against biasing the timestamp corrections in such a way as to obscure error in the
motion-capture system measurements. Three attributes of the implemented
approach serve this purpose. First, only the timestamps of the TS16 data are
adjusted. Measurement error in the motion-capture position measurements that is
orthogonal to the direction of travel of the UAS is not affected by adjusting the
TS16 timestamps. Second, only the filtered relative velocity estimated from the
motion-capture system measurements is used as an input to correct individual TS16
timestamps. Bias error and measurement noise in the position measurements do not influence the point-by-point timestamp correction. Third, segments of adequate
length are used to ensure curved portions of the flight path are represented when
adjusting the time bias of the segment midpoint. This is the only step when absolute
position measurements from the motion-capture system have the potential to skew
the timestamp corrections to reduce the apparent measurement error of the
motion-capture measurements compared with the ground truth. By using curved
segments of the UAS flight path to determine the timing error of the segment
midpoint, any tendency of the algorithm to reduce the apparent measurement error parallel to the direction of the UAS velocity is minimized.
While the TS16 timestamp correction algorithm has been specifically designed to
reduce the possibility of skewing the ground truth toward the motion-capture
measurements, there is inherently some minor reduction in the apparent
motion-capture position-measurement error associated with this approach. It is
believed that this effect is negligible compared with the accuracy of the TS16
instrument used as the source of ground-truth data. However, the position-
measurement performance results presented in this report likely contain a slight
underrepresentation of the true error due to the use of motion-capture measurements
in correcting the TS16 timestamps.
List of Symbols, Abbreviations, and Acronyms
3-D 3-dimensional
AC alternating current
ARL Army Research Laboratory
ASCII American Standard Code for Information Interchange
DOF degrees of freedom
GPS Global Positioning System
LED light-emitting diode
PCB printed circuit board
PCWDE Precision and Cooperative Weapons in a Denied Environment
SEP spherical error probable