32nd Florida Conference on Recent Advances in Robotics May 9-10, 2019, Florida Polytechnic University, Lakeland, Florida
Vision-based Guidance and Navigation for Swarm
of Small Satellites in a Formation Flying Mission
Hadi Fekrmandi (1), Skye Rutan-bedard(2), Alexander Frye(2), Michael Yoon(3), Randy Hoover(3)
(1) Department of Mechanical Engineering, South Dakota School of Mines and Technology, Rapid City, SD
(2) Department of Electrical and Computer Engineering, South Dakota School of Mines and Technology, Rapid City, SD
(3) Department of Computer Science and Engineering, South Dakota School of Mines and Technology, Rapid City, SD
Hadi.Fekrmandi@sdsmt.edu, Skye.Rutan-bedard@mines.sdsmt.edu, Alexander.Frye@mines.sdsmt.edu,
Michael.Yoon@sdsmt.edu, Randy.Hoover@sdsmt.edu
ABSTRACT
Small satellites, including the nanosatellite platforms called
CubeSats, are suitable for formation flying missions because of
their modular nature and low cost. A formation flying mission
involves a set of spatially-distributed satellites capable of
autonomous interaction and cooperation with one another in order
to maintain the desired formation. One of the fundamental
drawbacks of current guidance, navigation, and control techniques
is that they rely on a centralized process, primarily using a global
positioning system (GPS). While real-time positioning computed
by the standard GPS service is adequate for dispersed applications
(e.g., constellation missions), its inherent position discontinuities are
not acceptable for proximate formation flying or docking missions
that serve high-precision science instruments such as phased-array
antennas. Therefore, a new breed of swarm
navigation algorithms, along with accurate state estimation
methodologies, needs to be developed for reliable formation flying
of small satellites. First, obtaining an accurate relative pose
estimation for the individual CubeSats of the swarm is essential. To
accomplish this task, a vision-based fiducial system, referred to as
augmented reality tags, was used to provide a unique identifier
for each CubeSat in the swarm. We used ArUco markers for
relative pose estimation and conducted various experiments in the
laboratory environment to analyze the feasibility of the method for
proximate formation flying missions in space,
where GPS positioning is unreliable. Second, a new recursive
hybrid consensus filter is developed for distributed state estimation
using Kalman Filtering. We demonstrated a fully operational
collaborative team that can be utilized for applying any method of
swarm control. The proposed approach is scalable, robust to
network failure, and capable of handling non-Gaussian transition
and observation models and, therefore, is quite suitable for
controlling the CubeSat swarm to accomplish a proximate
formation flying mission.
Keywords
CubeSat, Guidance Navigation and Control, Formation Flying,
Vision-Based Pose Estimation, Kalman Filters, Distributed State
Estimation.
1. INTRODUCTION
The objective of this study is to develop a new vision-based pose
estimation method for distributed guidance, navigation, and control
(GN&C) of a swarm of small satellites in a formation flying
mission. Systems of multiple small satellites appeal to NASA, as
they can accomplish scientific, commercial, and academic tasks that
cannot be achieved with a single satellite [1]. For example, formations will enable
distributed sensing for Earth gravity mapping, atmospheric data
collection, magnetospheric studies, co-observations, and global
communication systems [2]. Small satellites, including the
nanosatellite platforms called CubeSats, are suitable for formation
flying missions because of their modular nature and low cost [3]. A
formation flying (FF) mission, as illustrated in Figure 1, involves a
set of spatially distributed satellites, capable of autonomous
interaction and cooperation with one another in order to maintain
the desired formation [4]. Each spacecraft in the formation typically
carries, among other sensors, a sensor which provides a
measurement of the relative position between itself and other
spacecraft in the formation (e.g., metrology sensors, laser- or
RF-ranging lidars, or an autonomous formation flying sensor) [5].
Figure 1. Illustration of formation flying of small satellites
with cross-link communication [6]
Proximate formation flying capability is essential for earth and
space science missions that require docking or precision multi-
point measurements. It also has the potential applications in larger
spacecraft inspection and service missions. Although transponders
are well established in the spacecraft world, networked swarms of
CubeSats that pass information amongst each other and then
eventually to ground have yet to be demonstrated [7]. One of the
fundamental drawbacks of current state estimation and control
techniques is their reliance on a centralized process, primarily using a global
positioning system (GPS) [5]. While real-time positioning
computed by a standard GPS service is adequate for dispersed
applications (e.g., constellation FF missions), inherent position
discontinuities are not acceptable for proximate FF missions
that demand high-precision science instruments (e.g., phased-array
antenna measurements). Moreover, developing networked
swarms is less of a hardware engineering problem than a systems
and software engineering problem, as demonstrated by NASA
Ames Research Center’s Edison Demonstration of Smallsat
Networks (EDSN) [6]. Therefore, a new breed of swarm navigation
algorithms, along with accurate proximate pose estimation methods,
needs to be developed for reliable formation flying of CubeSats.
To enable proximate formation flying and autonomous rendezvous
and capture (AR&C) and formation flying, NASA has developed
advanced video guidance sensor (AVGS) technology, which is an
optical laser sensor that calculates the relative range and attitude (6-
DOF state) between two spacecraft [8]. The smartphone video
guidance sensor (SVGS) is a low-mass, low-volume, and low-cost
implementation of the AVGS that has been developed at the
Marshall Space Flight Center (MSFC) using commercial off-the-
shelf (COTS) hardware. The SVGS [1] is particularly designed for
application on CubeSats and other small satellites. The basic
concept behind these sensors is to use a light source to illuminate a
known pattern of retroreflective targets on a target spacecraft. An
image of the illuminated targets is then captured. Using simple
geometric techniques, the 6-DOF state is extracted from the two-
dimensional image. In the method proposed in this study, a unique
identifier is assigned to each CubeSat in the swarm, and a vision-based
technique is pursued to improve the reliability and accuracy of
proximate formation flying and docking of small satellites.
The objective of this research is to develop a new
breed of vision-based pose estimation methods and an innovative
distributed state estimation and intelligent control algorithm. The
proposed method for pose estimation of the swarm is particularly
suitable for proximate state estimation considering the limited on-
board computational resources of small spacecraft and the
availability of a vision camera sensor.
2. OBJECTIVES
2.1 Vision-Based Pose Estimation and
Verification
Prior to applying any method of swarm control, obtaining an
accurate pose estimation for the entire swarm is essential. Building
on the work of the NASA Marshall Space Flight Center, we will
develop a low-cost visual state estimator that will be on-board each
CubeSat within the swarm [1].
Table 1. The state of the art for GN&C sensor systems [7]

Component        | Performance
-----------------|-----------------------------------------------
Reaction wheels  | 0.001-0.3 Nm peak torque, 0.015-8 Nms storage
Magnetorquers    | 0.1 Nm peak torque, 1.5 Nms storage
Star trackers    | 25 arcsec pointing knowledge
Sun sensors      | 0.1° accuracy
Earth sensors    | 0.25° accuracy
Gyroscopes       | 1°/h bias stability, 0.1°/√h random walk
GPS receivers    | 1.5 m position accuracy
Integrated units | 0.002° pointing capability
To accomplish this task, we will use an illuminated fiducial system,
referred to as an Augmented Reality (AR) tag [9-11], to provide
a unique identifier for each CubeSat in the swarm. ArUco
markers [12], shown in Figure 2, are an example of AR tags with
an open-source library and will be used in this study.
Figure 2. Examples of ArUco markers [12]
Prior research using visual-based systems for accurate state/pose
estimation has shown promising results [13-15]. When applied to
ground-based robotic systems (UGVs) operating in a GPS-denied
environment, with the addition of fiducial instrumentation, a fully
operational collaborative team can be developed [16-18]. Within
the context of the current study, we aim to extend these results in
two measurable ways:
• Increase the overall efficiency of the computation time required for accurate vision-based state estimation, enabling multiple AR tag estimates on low-cost computing platforms with reduced computational capability.
• Adapt our estimation algorithms from two UGVs operating in a planar environment to multiple CubeSats operating in six degrees of freedom: position plus yaw, pitch, and roll orientation.
2.2 Distributed State Estimation (DSE)
Framework
In this task, a new recursive hybrid consensus filter for distributed
state estimation (DSE) will be developed on a hidden Markov
model (HMM), which is well-suited to multi-agent applications and
swarm settings. The proposed algorithm is scalable, robust to
network failure, and capable of handling non-Gaussian transition and
observation models and, therefore, is quite suitable for controlling
the swarm of small satellites to accomplish a reliable FF mission.
In the proposed DSE algorithm, no global knowledge of the
communication network is assumed. To manage the uncertainty,
the proposed framework uses the iterative conservative fusion
(ICF) to reach consensus over potentially correlated priors, while
consensus over likelihoods is handled using weights based on a
Metropolis-Hastings Markov chain (MHMC). The proposed DSE
method has been successfully evaluated in a multi-agent tracking
problem, a high-dimensional HMM, and it is shown that its
performance is efficient yet reliable [19]. During this project, it is
envisioned that the proposed DSE will be developed for a multi-
satellite FF mission in low Earth orbit (LEO). The proposed
distributed state estimation method of this research utilizes
complementary, multi-disciplinary expertise from a bio-inspired
design [20-22], reliability analysis [23-25], and computational
efficiency [26-28] of autonomous systems. The measurable
objectives of the proposed research in this task are as follows:
• Decentralize the target pose estimation problem in a 3D grid using multiple observers connected through a changing-topology network.
• Evaluate the robustness of the proposed method for networks with different likelihoods of link failure.
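As context for the MHMC-based likelihood consensus described above, the sketch below shows one standard way of building Metropolis-Hastings consensus weights from purely local degree information. The function name and the four-agent ring example are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def metropolis_hastings_weights(adjacency):
    """Metropolis-Hastings consensus weights for an undirected network.

    adjacency: (n, n) 0/1 symmetric matrix with zero diagonal.
    Returns a doubly stochastic weight matrix W for consensus
    averaging, requiring only the degrees of neighboring agents.
    """
    n = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                # Each link weight depends only on the two endpoint degrees.
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # self-weight keeps rows summing to one
    return W

# Example: 4-agent ring network; one consensus step on local log-likelihoods.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
W = metropolis_hastings_weights(A)
log_likelihoods = np.array([0.2, -0.1, 0.4, 0.0])
print(W @ log_likelihoods)  # iterating converges to the network average
```

Because the weights use only neighbor degrees, no global knowledge of the communication network is needed, which matches the assumption stated above.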
2.3 Intelligent Planning and Control of
Swarm Formation
Data efficiency and computational efficiency are the major barriers
to applying biologically inspired control methods in the CubeSat
Swarm Network [29-30]. To address this challenge, the team
proposes a new, efficient integrated learning and association
framework, called experience replay adaptive dynamic
programming (ER-ADP). This approach is data-driven and does
not need an explicit mathematical model of the environment,
making it ideal for complex space applications. From a
mathematical viewpoint, there are two major differences in this
design compared with traditional adaptive dynamic programming
(ADP) and reinforcement learning (RL) designs. First, the weights
of the action and critic networks are updated based not only on the
current state-action pair, but also on historically experienced
state-action pairs. Second, the experience replay tuples are designed
in a model-free fashion; that is, each tuple is based on the backward
temporal difference between the current step and the previous step.
Data-driven, model-free approaches are suitable
for uncertain or unstructured systems (e.g., NASA's unknown space
environments) [31]. Preliminary studies on a cart-pole balancing
example [32-33] showed improved data and computation efficiency.
Specifically, during this task, the following measurable
milestones will be accomplished:
• Design a new experience replay tuple based on the historical data of state (current and one-step future), action, and reward. A batch of tuples will be selected and used periodically during online ADP control.
• Build a projected CubeSat model to evaluate the new learning control algorithm. The evaluation metrics include learning speed, robustness, and control performance of the proposed approach in comparison with traditional approaches.
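To make the replay-tuple design above concrete, here is a minimal sketch of a backward-looking replay buffer. The tuple layout (previous state, previous action, reward, current state) follows the description above; the class and method names are illustrative assumptions.

```python
import random
from collections import deque

class BackwardReplayBuffer:
    """Stores model-free experience tuples (s_prev, a_prev, r, s_curr).

    The backward temporal difference pairs the previous step with the
    current step, so no model network is needed to predict the future.
    """
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s_prev, a_prev, reward, s_curr):
        self.buffer.append((s_prev, a_prev, reward, s_curr))

    def sample(self, batch_size):
        # A random batch is replayed periodically while the action and
        # critic networks are trained online.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```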
3. METHOD
3.1 Vision-Based Pose Estimation
To test vision-based pose estimation with AR-tags [34], an
experimental test setup including a Raspberry Pi camera V1.3 was
used, as shown in Figure 3. The commercial range finder used has
an accuracy of ±2 mm. On the Raspberry Pi, OpenCV was installed,
and the ArUco libraries were used to calibrate the camera, find the
corners of tags in an image, and estimate the pose of the tags. An
ArUco marker [12] is a square fiducial marker composed of a wide,
black border and an inner binary matrix, which determines its
identifier (ID). The black border facilitates its fast detection in the
image, and the binary codification allows its identification and the
application of error detection and correction techniques. The
marker size determines the size of the internal matrix; the markers
used in this study have a 6×6-bit (36-bit) internal matrix.
Figure 3: Raspberry Pi camera test setup comprising a 3D-printed
camera stand and a commercial laser range finder
The first step toward estimating the pose of a tag is the
identification of the tag and its corners. Using the ArUco library,
this can be done with the detectMarkers() function. This function
takes an input image and a dictionary including all valid tags as
parameters, and then returns the corners and IDs of identified tags
along with the corners from rejected tags. Each corner is represented
by a vector denoting its position in the image. For the purpose of
determining the orientation of the tag, the corners are ordered
clockwise. The first corner is designated by the dictionary. IDs are
also defined by the tag dictionary used. The rejected corners are
typically not used but are formatted like the valid corners. Corners
are rejected when the library fails to extract a valid code from the
potential tag’s interior. Once the valid corners and IDs are
collected, it is possible to proceed to pose estimation. This is
handled by the estimatePoseSingleMarkers() method, which
requires the four corners of a marker, the side length of the tag, the
camera matrix, and distortion coefficients. The side length can use
any unit, and the output translation vector will be in Cartesian
coordinates of the same unit. This will be accompanied by a
rotation vector representing the orientation of the tag as roll, pitch,
and yaw in radians. The camera matrix represents the intrinsic
properties of the camera, while the distortion coefficients contain
the information needed to correct for the fact that the camera will
introduce aberrations not seen in the pinhole camera model used.
Both of these are found during the camera calibration process.
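A minimal sketch of this detection-and-pose pipeline using the legacy OpenCV ArUco API (opencv-contrib-python 4.x, before the 4.7 API change) is shown below. The dictionary choice, marker side length, and file names are assumptions for illustration, not values from the paper.

```python
import cv2
import cv2.aruco as aruco
import numpy as np

# Assumed inputs: a calibrated camera matrix and distortion coefficients
# (produced by the calibration process described next).
camera_matrix = np.load("camera_matrix.npy")
dist_coeffs = np.load("dist_coeffs.npy")

dictionary = aruco.getPredefinedDictionary(aruco.DICT_6X6_250)  # 6x6-bit tags
marker_length = 0.08  # tag side length in meters (an 8x8 cm tag)

frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detectMarkers() returns the corners and IDs of valid tags, plus the
# corners of rejected candidates whose interior code could not be read.
corners, ids, rejected = aruco.detectMarkers(gray, dictionary)

if ids is not None:
    # One rotation vector (roll/pitch/yaw, radians) and one translation
    # vector (same unit as marker_length) per detected tag.
    rvecs, tvecs, _ = aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    for tag_id, rvec, tvec in zip(ids.flatten(), rvecs, tvecs):
        print(f"tag {tag_id}: range = {np.linalg.norm(tvec):.3f} m")
```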
Camera calibration can be completed in several ways; however,
two methods were used and compared in this study. Calibration was
completed first with an ArUco board, and later with a ChArUco
board. The ArUco board was simply a grid of tags, as shown in
Figure 4-(a). A ChArUco board is similar, but the tags are placed
in the white portions of a chessboard pattern. The important
distinction is that the corners between black squares serve as the
key reference points for the ChArUco board, which can be more
accurately tracked than the corners of the tags themselves as used
by the ArUco board. The calibrateCameraCharuco() function
takes in data collected from multiple images with the board in
various positions. For each of these frames, the positions of the
corners in the image, the IDs of the corners, the ChArUco board
object being observed, and the size of the input image are provided.
The corners and IDs are found by interpolateCornersCharuco(), as
shown by the red overlay in Figure 4-(b). The ChArUco board
object is used as a reference and indicates the arrangement of tags
and corners the method should expect. Finally, the image size
indicates how many pixels are in the input frame. The method
produces a camera matrix [A], shown in Equation 1:
\[
A = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}. \qquad (1)
\]
Here f_x and f_y represent the focal lengths in pixel units, while c_x
and c_y represent the principal point in the image, typically the
image center. These are the intrinsic properties of the camera. The
extrinsic properties of the camera are described by the
rotation-translation matrix [R|t], which combines a 3×3 rotation
matrix with elements r_{i,j} and a 3×1 translation vector with
elements t_i. Together, they describe the position and
rotation of the camera relative to the 3D coordinate system being
used. The camera matrix along with the rotation-translation matrix
are required for the perspective transformation shown in Equation
2:
\[
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_{1,1} & r_{1,2} & r_{1,3} & t_1 \\ r_{2,1} & r_{2,2} & r_{2,3} & t_2 \\ r_{3,1} & r_{3,2} & r_{3,3} & t_3 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}. \qquad (2)
\]
Equation 2 describes how a pinhole camera projects a point in a
scene into the resulting image. On the left is the 2D position of a
point as a homogeneous coordinate, written [u v 1]^T, where (u, v)
is the Cartesian coordinate of the point in the image; s is an
arbitrary scale factor. This homogeneous coordinate is set equal, up
to scale, to the product of three matrices: the camera matrix and the
rotation-translation matrix, as previously noted, and a homogeneous
coordinate describing the 3D position of the point being projected
onto the image. Similar to the first homogeneous coordinate, this
can be expressed as [X Y Z 1]^T, where (X, Y, Z) is the position of
the point in 3D space in Cartesian coordinates. The
calibrateCameraAruco() method takes essentially the same
parameters as calibrateCameraCharuco(), but with ArUco tag
corners substituting for the chessboard corners. Additionally, the
number of tags seen is included for each image. It still produces the
camera matrix and distortion coefficients needed for pose estimation.
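The sketch below outlines the ChArUco calibration loop described above, again with the legacy OpenCV ArUco API. The board geometry and the image file list are illustrative assumptions.

```python
import cv2
import cv2.aruco as aruco

dictionary = aruco.getPredefinedDictionary(aruco.DICT_6X6_250)
# Assumed board geometry: 5x7 squares, 4 cm squares with 3 cm tags inside.
board = aruco.CharucoBoard_create(5, 7, 0.04, 0.03, dictionary)

all_corners, all_ids = [], []
image_size = None
for path in ["calib_01.png", "calib_02.png", "calib_03.png"]:  # assumed files
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    corners, ids, _ = aruco.detectMarkers(gray, dictionary)
    if ids is None:
        continue
    # Interpolate chessboard-corner positions from the detected tags;
    # these corners are the key reference points of the ChArUco board.
    n, ch_corners, ch_ids = aruco.interpolateCornersCharuco(
        corners, ids, gray, board)
    if n > 3:  # several corners per frame are needed for a useful constraint
        all_corners.append(ch_corners)
        all_ids.append(ch_ids)

# Least-squares fit of the camera matrix and distortion coefficients
# over all frames.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None)
print("reprojection RMS error:", rms)
```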
Figure 4: (a) ArUco board and (b) ChArUco board
Assuming there is no rotational or translational offset between the
camera axes and the coordinate axis to be used, only the four
elements of the camera matrix need to be found. For each frame,
the algorithm has access to a set of evenly spaced points on a plane.
While the actual distance between points is not known, it still offers
a known geometry. For any camera matrix and point in the image,
Equation 2 yields only directional information; it does not
indicate how far the point being projected onto the image is from
the camera. Given all frames, a least-squares approach can be used
to find the camera matrix that best matches the data from each
image. Both calibration methods also account for different types of
distortion. They output distortion coefficients, which are divided
into three types: six for radial, two for tangential, and four for thin
prism. Each describes how points are offset from where they would
appear for an ideal pinhole camera. For example, radial distortion
shifts points away from or toward the center of the image depending
on their distance from the center, as is easily seen in fisheye cameras. All
three types of distortion have to be calibrated in addition to the
camera matrix. This is done by replacing u and v in Equation 2 with
functions that can reverse the distortion if the coefficients are
correct. As with the camera matrix, a least-squares approach is used
to find the values that minimize the resulting error.
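As a usage note, once the coefficients are known, OpenCV can map distorted pixel coordinates back to ideal pinhole coordinates directly. A minimal sketch with invented example values follows.

```python
import cv2
import numpy as np

# Invented example calibration outputs (see the calibration sketch above).
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

# Map a distorted pixel back toward its ideal pinhole position; passing
# P=camera_matrix re-projects the result into pixel units.
distorted = np.array([[[640.0, 360.0]]])  # shape (N, 1, 2)
ideal = cv2.undistortPoints(distorted, camera_matrix, dist_coeffs,
                            P=camera_matrix)
print(ideal)
```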
3.2 Formation State Estimation
First, we introduce prior work (see [35] for the derivation) on the
translational relative state estimation problem for formations on a
circular orbit, using Kalman filters. The Kalman filter is a widely
applied concept for guidance, navigation, and control of vehicles,
particularly aircraft and spacecraft. In centralized state estimation,
the formation relative translational state is defined as the vector of
positions and velocities for each satellite relative to a reference one
[36]:
, (3)
where N is the number of spacecrafts in the formation, and
, , (4)
where and .
The discretization of system dynamics in time for the relative
formation state can be represented by:
, (5)
where k = 0, 1, 2, … is the time index, and are the control
input and Gaussian disturbance input, respectively, and A and B are
the system and input matrices:
, 


,
 
 
  ,   
  
 ,
where , R is the radius of orbit, is the gravitational
parameter of the primary body, and are the identity and zero
matrix, and is the Kronecker product. Additionally, is the
control input, and is the input disturbance relative to spacecraft
1 as shown in Equation 5.
, (6)
where , with as the control input of the ith
spacecraft and   .
The same process applies for the process disturbance vector .
Measurements for estimation can be obtained by the vision-based
camera method explained in the previous section and can be fused
from different markers. The measurement from each sensor gives a
relative position vector between a pair of spacecraft in the
formation (range-only measurements are treated subsequently). At
any discrete time index k, let C_k be a matrix that describes the
available relative position measurements: each row of C_k
corresponds to an independently obtained relative position
measurement r_j - r_i, with the jth entry of the row +1, the ith
entry -1, and the rest of the entries zero. If the measurement links
that exist at any given time index form a connected sensing graph,
it has been shown that [36]
\[
\operatorname{rank}(C_k \otimes I_3) = 3(N-1), \qquad (7)
\]
i.e., the stacked measurements determine the full set of relative
positions. The relative positions follow from the absolute ones
through
\[
T = \begin{bmatrix}
-1 & 1 & 0 & \cdots & 0 \\
-1 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
-1 & 0 & 0 & \cdots & 1
\end{bmatrix} \otimes I_3,
\]
where \rho = T r, with r = [r_1^T, \dots, r_N^T]^T the stacked absolute
positions and \rho = [\rho_2^T, \dots, \rho_N^T]^T the stacked relative
positions. This matrix encodes the relative state definition with
respect to spacecraft 1; definitions of the T matrix vary depending on the
Equation 6, the system model could be formed for the formation
estimator synthesis:
 (8)
 ,
where and are independent zero mean white noise processes
with  and . This model of the system
has linear-time-invariant state dynamics, and the measured output
is a linear but time-varying function of the state. is time-varying
to capture time-varying sensor graphs that indicates which
spacecrafts are measuring which other spacecrafts.
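To make the estimator synthesis concrete, below is a minimal sketch of one centralized Kalman filter cycle for the model in Equation 8, together with a helper that builds C_k from the sensing-graph edge list. The function names, state ordering, and edge-list format are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def relative_measurement_matrix(edges, N):
    """Builds C_k for stacked relative-position measurements r_j - r_i.

    State ordering follows Equations 3-4: six entries [rho_i; rho_dot_i]
    per spacecraft i = 2..N. Velocities are not measured directly, so
    their columns stay zero; spacecraft 1 is the reference (rho_1 = 0).
    """
    I3 = np.eye(3)
    C = np.zeros((3 * len(edges), 6 * (N - 1)))
    for row, (i, j) in enumerate(edges):  # 1-based spacecraft indices
        if j > 1:
            C[3 * row:3 * row + 3, 6 * (j - 2):6 * (j - 2) + 3] = I3
        if i > 1:
            C[3 * row:3 * row + 3, 6 * (i - 2):6 * (i - 2) + 3] = -I3
    return C

def kalman_step(x, P, u, y, A, B, Q, C_k, R_k):
    """One predict/update cycle for x+ = A x + B(u + w), y = C_k x + v."""
    # Predict: propagate the relative formation state and its covariance.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + B @ Q @ B.T
    # Update: fuse the stacked relative-position measurements; C_k (and
    # hence the gain) changes whenever the sensing graph changes.
    S = C_k @ P_pred @ C_k.T + R_k
    K = P_pred @ C_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C_k @ x_pred)
    P_new = (np.eye(len(x)) - K @ C_k) @ P_pred
    return x_new, P_new
```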
4. RESULTS
Figure 5 shows initial depth estimation results with calibration from
an ArUco board as shown in Figure 4-(a). A 6×6 cm ArUco tag was
positioned in front of the camera facing directly towards it. This tag
was placed at varying distances from the camera, the pose was
estimated, and the actual distance was recorded. The estimated
distance (Z) is consistent at small distances and could be made more
accurate with a nonlinear approximation; error within the first
meter was within ±3 cm. With the ArUco board for calibration
and the 720p camera resolution, OpenCV was limited to identifying
the tag out to 2 meters.
Figure 5: Initial pose estimation using ArUco Board
calibration
While this early result showed a clear need for improvement, it
nevertheless demonstrated that ArUco pose estimation is feasible.
In order to improve the pose estimation accuracy, better calibration
was required. Therefore, a ChArUco board as shown in Figure 4-
(b) was used for calibration. Figure 6 shows the results of testing
after calibration with the ChArUco board. The tag size was
increased from 6×6 cm to 8×8 cm to better cover the area available
on a 1U CubeSat. For each distance tested, 16 individual pose
estimations were made, and their mean computed. The tests were
repeated for three different resolutions: 640×480, 960×720, and
1280×960. While these tests went out to 3.5 m, tags can be detected
at greater distances.
Figure 6: Updated mean pose estimation using ChArUco
Board calibration
As can be seen, this represents a significant improvement over the
original results. Errors for all resolutions start below 0.5 cm and
linearly increase to no more than 1.25 cm at a 1-meter range. The
linear relationship between error and distance ends at
approximately 1 meter regardless of resolution. At distances greater
than a meter, the collected data become more erratic. The error
increases with distance, and more so at lower resolutions. By
moving the corners of a detected tag outwards or inwards a single
pixel, pose estimations switch between two discrete values. As the
distance to the tag increases, so does the difference between each
of these discrete values. While this effect results in an increased
error at greater distances for all resolutions, the effect is more
noticeable for lower resolutions. At its most extreme, the difference
between two discrete pose estimations was observed to be about 15
cm during the 640×480 test at approximately 320 cm.
Figure 7: Mean pose estimation for 1-meter range
As mentioned before, the data collected is notable for its linear
behavior within a 1-meter range. Figure 7 shows this portion of the
collected data with trendlines to match the data from each
resolution. There is no apparent relationship between the resolution
and the vertical offset of each line. It is believed that these
differences result from a slight variation in the calibrations
performed for each resolution. However, the slope seems to
decrease with increased resolution, though this potential
relationship requires further investigation. The linearity of this
portion of the data is important because it can be exploited by
error-reduction techniques.
Figure 8: Mean pose estimation for 1-meter range with error
reduction
Figure 8 demonstrates how error reduction could be applied. For
each resolution, the trendline of error versus distance was extracted as
\[
y = m x + b. \qquad (9)
\]
Each point (x, y) could then be replaced with an error-corrected
point,
\[
\left( x, \; y - (m x + b) \right). \qquad (10)
\]
This results in a significant improvement. In this range, the
accuracy of the ArUco pose estimation approaches the ±2 mm
accuracy of the laser range finder used to provide reference
distance data.
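A minimal sketch of this linear error-reduction step is shown below; the paired distances are invented for illustration, and the trendline is fit with numpy's polyfit.

```python
import numpy as np

# Assumed sample data: true distances (m) from the laser range finder and
# corresponding mean ArUco estimates within the linear (< 1 m) region.
true_d = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
est_d = np.array([0.203, 0.406, 0.608, 0.811, 1.013])

# Fit the error trendline e = m*x + b (Equation 9).
m, b = np.polyfit(true_d, est_d - true_d, 1)

# Apply the correction of Equation 10: subtract the fitted trend.
corrected = est_d - (m * true_d + b)
print("max residual after correction (m):", np.abs(corrected - true_d).max())
```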
5. DISCUSSION
The proposed research is primarily focused on the algorithm
development for the vision-based distributed state estimation
method in collaborative and proximate formation missions of
small-satellite networks (i.e., distributed control systems for
formation flying in 3D space). This is an important first step toward
approaching the general, fully collaborative autonomous mission
operation in space. The proposed research will provide the software
and simulation foundation for the complete design of a small
satellite with an ability to function within a swarm, which includes
communication systems, attitude and position sensors, a power
unit, a command and data unit, and a propulsion unit. Discussions
about the possibility of extending the proposed research to
encompass the hardware and fabrication aspects of swarm-capable
small satellites have already begun.
6. FUTURE WORK
Using an innovative distributed approach, the goal is to leverage
the efficiency and communication among the CubeSats in the
swarm and to develop proximate formation flying and docking
capabilities. The following work is planned in continuation of the
current effort:
6.1 Algorithm Integration, Verification, Validation, and Testing
To ensure an effective collaboration leading to the development of
a reusable flight software product that meets standards and complies
with the hardware, NASA's core Flight System (cFS) platform will
be used as a cloud-based framework during the development of the
algorithms. This Linux-based software development framework has
been successfully tested on a GPU computing-enabled PC. The
flight software is unaware that it is not actually being run in space,
as it obtains all the same inputs that it would during nominal
operations. The NASA Operational Simulation for Small Satellites
(NOS3) is a hardware simulation and visualization tool for
software-based validation and verification of small satellites. In the
first phase, the investigative team will work closely with MSFC to
evaluate and improve the performance of the software product in
challenging scenarios of proximate FF missions. In the second
phase of the project, the distributed vision-based state estimation
and swarm control algorithms will be integrated and validated using
the visualization and simulation capabilities of NOS3.
6.2 Distributed State Estimation (DSE) on a Hidden Markov Model (HMM)
In this task, a new recursive hybrid consensus filter for distributed
state estimation (DSE) will be developed on a hidden Markov
model (HMM), which is well-suited to multi-agent application and
swarm settings. The proposed algorithm is scalable, robust to
network failure, and capable of handling non-Gaussian transition and
observation models and, therefore, is quite suitable for controlling
a swarm of CubeSats to accomplish a reliable FF mission. In the
proposed DSE algorithm, no global knowledge of the
communication network is assumed. To manage the uncertainty,
the proposed framework uses the iterative conservative fusion
(ICF) to reach consensus over potentially correlated priors, while
consensus over likelihoods is handled using weights based on a
Metropolis-Hastings Markov chain (MHMC). The proposed DSE
method has been successfully evaluated in a multi-agent tracking
problem, a high-dimensional HMM, and it is shown that its
performance is efficient yet reliable. During this project, it is
envisioned that the proposed DSE will be developed for a multi-
CubeSat FF mission in three-dimensional space.
7. ACKNOWLEDGMENTS
Funding for this research was provided by the NASA EPSCoR
program of the state of South Dakota in the form of the Research
Initiation Grant (RIG). The authors gratefully acknowledge the NASA
EPSCoR program director, Dr. Edward Duke, for the guidance and
support of this effort. This research is being conducted
collaboratively at the South Dakota School of Mines and
Technology (SDSM&T), South Dakota State University (SDSU),
L3 Technologies Communication West, and the NASA Marshall
Space Flight Center (MSFC). The authors greatly appreciate the
supervision of Mr. John Rakoczy at the control systems division of
NASA MSFC.
8. REFERENCES
[1] Becker, Christopher, Richard Howard, and John Rakoczy.
"Smartphone Video Guidance Sensor for Small Satellites."
(2013). 27th Annual AIAA/USU Conference on Small
Satellites, Salt Lake City, UT.
[2] Becker, C. Description of Clyde Space's Outernet project: a
low-cost, mass-producible constellation of 1U CubeSats providing
near-continuous broadcast of humanitarian data. Utah, 2017.
[3] Bandyopadhyay, Saptarshi, Rebecca Foust, Giri P.
Subramanian, Soon-Jo Chung, and Fred Y. Hadaegh.
"Review of formation flying and constellation missions using
nanosatellites." Journal of Spacecraft and Rockets 53, no. 3 (2016):
567-578.
[4] Bandyopadhyay, Saptarshi, Soon-Jo Chung, and Fred Y.
Hadaegh. "Probabilistic and distributed control of a large-
scale swarm of autonomous agents." IEEE Transactions on
Robotics 33, no. 5 (2017): 1103-1123.
[5] Winternitz, Luke, Bill Bamford, Samuel Price, Anne Long,
Mitra Farahmand, and Russell Carpenter. "GPS Navigation
Above 76,000 km for the MMS Mission." (2016).
[6] Hanson, John, James Chartres, Hugo Sanchez, and Ken
Oyadomari. "The EDSN intersatellite communications
architecture." (2014).
[7] Yost, B. "State of the Art of Small Spacecraft Technology."
NASA report (2017).
[8] Rumford, Timothy E. "Demonstration of autonomous
rendezvous technology (DART) project summary." In Space
Systems Technology and Operations, vol. 5088, pp. 10-20.
International Society for Optics and Photonics, 2003.
[9] Richardson, Andrew, Johannes Strom, and Edwin Olson.
"AprilCal: Assisted and repeatable camera calibration." In
2013 IEEE/RSJ International Conference on Intelligent
Robots and Systems, pp. 1814-1821. IEEE, 2013.
[10] Wang, John, and Edwin Olson. "AprilTag 2: Efficient and
robust fiducial detection." In 2016 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS), pp.
4193-4198. IEEE, 2016.
[11] Babinec, Andrej, Ladislav Jurišica, Peter Hubinský, and
František Duchoň. "Visual localization of mobile robot using
artificial markers." Procedia Engineering 96 (2014): 1-9.
[12] Garrido-Jurado, Sergio, Rafael Muñoz-Salinas, Francisco
José Madrid-Cuevas, and Manuel Jesús Marín-Jiménez.
"Automatic generation and detection of highly reliable
fiducial markers under occlusion." Pattern Recognition 47,
no. 6 (2014): 2280-2292.
[13] Hoover, Randy C. Pose estimation of spherically correlated
images using eigenspace decomposition in conjunction with
spectral theory. Colorado State University, 2009.
[14] Hoover, Randy C., Anthony A. Maciejewski, Rodney G.
Roberts, and Ryan P. Hoppal. "An illustration of eigenspace
decomposition for illumination invariant pose estimation." In
2009 IEEE International Conference on Systems, Man and
Cybernetics, pp. 3415-3420. IEEE, 2009.
[15] Hoover, Randy C., Anthony A. Maciejewski, and Rodney G.
Roberts. "Designing eigenspace manifolds: with application
to object identification and pose estimation." In 2009 IEEE
International Conference on Systems, Man and Cybernetics,
pp. 3409-3414. IEEE, 2009.
[16] Brech, Dale E., and Randy C. Hoover. "Development of a
virtual reality simulation testbed for collaborative UGV and
UAV research using Matlab." In ASME 2013 International
Mechanical Engineering Congress and Exposition, pp.
V04AT04A029-V04AT04A029. American Society of
Mechanical Engineers, 2013.
[17] Brech, Dale E., Jiayi Liu, Alex A. Brech, and Randy C.
Hoover. "Visual Feedback Navigation and Control of a
Quadrotor Using Fuzzy Expert Systems." In ASME 2012
International Mechanical Engineering Congress and
Exposition, pp. 357-368. American Society of Mechanical
Engineers, 2012.
[18] Brech, Dale E., and Randy C. Hoover. "Development of a
virtual reality simulation testbed for collaborative UGV and
UAV research using Matlab." In ASME 2013 International
Mechanical Engineering Congress and Exposition, pp.
V04AT04A029-V04AT04A029. American Society of
Mechanical Engineers, 2013.
[19] Tamjidi, Amirhossein, Reza Oftadeh, Suman Chakravorty,
and Dylan Shell. "Efficient recursive distributed state
estimation of hidden Markov models over unreliable
networks." Autonomous Robots (2019): 1-18.
[20] Fekrmandi, Hadi, and Phillip Hillard. "A pipe-crawling robot
using bio-inspired peristaltic locomotion and modular
actuated non-destructive evaluation mechanism." In
Bioinspiration, Biomimetics, and Bioreplication IX, vol.
10965, p. 1096508. International Society for Optics and
Photonics, 2019.
[21] Fekrmandi, Hadi, John Hillard, and William Staib. "Design
of a Bio-Inspired Crawler for Autonomous Pipe Inspection
and Repair Using High Pressure Cold Spray."
[22] Fekrmandi, Hadi, Javier Rojas, Jason Campbell, Ibrahim Nur
Tansel, Bulent Kaya, and Sezai Taskin. "Inspection of the
integrity of a multi-bolt robotic arm using a scanning laser
vibrometer and implementing the surface response to
excitation method (SuRE)." International Journal of
Prognostics and Health Management 5, no. 1 (2014): 1-10.
[23] Fekrmandi, Hadi, Rafael Gonzalez, Sebastian Rojas, Ibrahim
Nur Tansel, David Meiller, and Kyle Lindsay. "Automation
of the interpretation of surface response to excitation (SuRE)
method by using neural networks." In 2015 7th International
Conference on Recent Advances in Space Technologies
(RAST), pp. 11-16. IEEE, 2015.
[24] Fekrmandi, H., and Y. S. Gwon. "Reliability of surface
response to excitation method for data-driven prognostics
using Gaussian process regression." In Health Monitoring of
Structural and Biological Systems XII, vol. 10600, p.
106002R. International Society for Optics and Photonics,
2018.
[25] Gwon, Y. S., and H. Fekrmandi. "A data-driven approach of
load monitoring on laminated composite plates using support
vector machine." In Smart Structures and NDE for Industry
4.0, vol. 10602, p. 1060206. International Society for Optics
and Photonics, 2018.
[26] Fekrmandi, Hadi, Muhammad Unal, Amin Baghalian,
Shervin Tashakori, Kathleen Oyola, Abdullah Alsenawi, and
Ibrahim Nur Tansel. "A non-contact method for part-based
process performance monitoring in end milling operations."
The International Journal of Advanced Manufacturing
Technology 83, no. 1-4 (2016): 13-20.
[27] Fekrmandi, Hadi, Muhammet Unal, Sebastian Rojas Neva,
Ibrahim Nur Tansel, and Dwayne McDaniel. "A novel
approach for classification of loads on plate structures using
artificial neural networks." Measurement 82 (2016): 37-45.
[28] Fekrmandi, Hadi, Javier Rojas, Ibrahim N. Tansel, Ahmet
Yapici, and Balemir Uragun. "Investigation of the
computational efficiency and validity of the surface response
to excitation method." Measurement 62 (2015): 33-40.
[29] Powell, Warren B. Approximate Dynamic Programming:
Solving the curses of dimensionality. Vol. 703. John Wiley
& Sons, 2007.
[30] He, Haibo. Self-adaptive systems for machine intelligence.
John Wiley & Sons, 2011.
[31] Fong, Terry. "Human-Robot Teams for Unknown and
Uncertain Environments." (2015).
[32] Ni, Zhen, Naresh Malla, and Xiangnan Zhong. "Prioritizing
Useful Experience Replay for Heuristic Dynamic
Programming-Based Learning Systems." IEEE Transactions
on Cybernetics 99 (2018): 1-12.
[33] Malla, Naresh, and Zhen Ni. "A new history experience
replay design for model-free adaptive dynamic
programming." Neurocomputing 266 (2017): 141-149.
[34] Marchand, Eric, Hideaki Uchiyama, and Fabien Spindler.
"Pose estimation for augmented reality: a hands-on survey."
IEEE transactions on visualization and computer graphics 22,
no. 12 (2015): 2633-2651.
[35] Açıkmeşe, Behçet, Daniel P. Scharf, John M. Carson III, and
Fred Y. Hadaegh. "Distributed estimation for spacecraft
formations over time-varying sensing topologies." IFAC
Proceedings Volumes 41, no. 2 (2008): 2123-2130.
[36] Hadaegh, Fred Y., Singh Gurkirpal, Behcet Açikmeşe,
Daniel P. Scharf, and Milan Mandić. "Guidance and Control
of Formation Flying Spacecraft." In The Path to Autonomous
Robots, pp. 1-19. Springer, Boston, MA, 2009.