AAS 19-529
EXPECTED ACCURACY OF DENSITY RECOVERY USING
SATELLITE SWARM GRAVITY MEASUREMENTS
William Ledbetter∗, Rohan Sood†, and Jeffrey Stuart‡
Asteroids are of particular interest in the modern space industry due to their poten-
tial for advancing knowledge about the origins of the solar system. Identifying and
characterizing asteroids with sustainable resources for future space exploration is
critical. Additionally, limited knowledge of near-Earth asteroids’ physical characteristics, such as shape, density, gravity field, and composition, poses a challenge to any manned exploration. A stochastic gradient method from the field of deep
learning is applied to the problem of asteroid density recovery, with implications
for target selection and mining. The algorithm outperforms the predicted observ-
ability, and is minimally affected by noisy measurements.
INTRODUCTION
Questions about the origins of life on Earth have been the overarching motivation that has driven
the majority of space-faring missions over the last half-century. Scientists and engineers have rec-
ognized that asteroids, the leftover building blocks from the time of planetary accretion, may be the
richest source of information capable of providing a glimpse into the prehistoric solar system. De-
veloping a comprehensive understanding of the composition of asteroids may offer insight into the
early processes of solar system formation, with implications for planetary exploration, exoplanetary
investigations, and, ultimately, the origins of life.
Missions such as NEAR/Shoemaker [1] and Hayabusa [2] have been invaluable to the scientific community, and successful completion of the OSIRIS-REx and Hayabusa 2 missions will catalyze further research. However, while the number of cataloged asteroids recently surpassed 780,000, fewer than 20 have been visited by spacecraft, even counting these recent missions. Small-body exploration missions, such as Lucy [3], often utilize flybys to tour multiple bodies with a single spacecraft. Lucy plans to visit multiple targets in the Jupiter-Trojan system by utilizing sequential flybys. Although
the asteroid-tour-style mission is an elegant solution to the problem of broad exploration, some
trade-offs are made with respect to the traditional single-target mission. In order for the spacecraft
to continue on its tour trajectory, it must maintain significant velocity as it passes each body of in-
terest, substantially reducing the time available for close-proximity observations. In an attempt to augment the quality of data obtained during such short encounters, the authors propose a method of in-situ gravimetry to reveal internal structures and density variations.
∗Ph.D. Student, Astrodynamics and Space Research Laboratory, Department of Aerospace Engineering and Mechanics, The University of Alabama, Tuscaloosa, AL 35487, USA. wgledbetter@crimson.ua.edu
†Assistant Professor, Astrodynamics and Space Research Laboratory, Department of Aerospace Engineering and Mechanics, The University of Alabama, Tuscaloosa, AL 35487, USA. rsood@eng.ua.edu
‡Research Technologist, Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., MS 301-121, Pasadena, CA 91109, USA. jeffrey.r.stuart@jpl.nasa.gov
Previous work [4] outlined the principles of density recovery for a heterogeneous body, and revised the polyhedral gravity model [5,6] to express such a body. Preliminary developments showed promis-
ing results using a simple matrix inversion recovery technique; however, the method had limitations
when complex recoveries were attempted. In this paper, the authors expand the research on two
fronts: gravity model observability, and recovery algorithm development. Analysis of the gravity
model’s observability will help define bounds on the expected performance of an optimal recov-
ery algorithm, and research into alternative recovery methods will aim to achieve the best expected
performance.
THEORY
Dynamical Model
Recent developments in asteroid gravity modeling have utilized the polyhedral formulation of Werner and Scheeres [5]. Updates to this model have investigated techniques for expressing variable density [6,7], but still maintain a priori assumptions about the internal geometry, such as core/shell or hemispherical distributions. The authors’ contribution expands the techniques of Takahashi [6] to express fully heterogeneous polyhedra at the same level of resolution as the shape model itself. The expression of heterogeneous polyhedra relies on vectorization of Werner’s basic formulation, Equation (1):
$$ U = G\rho \sum_{e \in \text{edges}} \bar{r}_e \cdot E_e \cdot \bar{r}_e \, L_e \;-\; G\rho \sum_{f \in \text{faces}} \bar{r}_f \cdot F_f \cdot \bar{r}_f \, \omega_f, \qquad (1) $$
where $G$ is the gravitational constant, $\rho$ is the average density of the body, $\bar{r}_e$ is a vector from the point in space to a point on edge $e$, $E_e$ is a matrix describing the geometry of edge $e$, and $L_e$ is the potential of a 1D ‘wire’ expressed in terms of the distances to the edge’s endpoints. In the second term, $\bar{r}_f$ is a vector from the query point in space to a point on face $f$, $F_f$ is a matrix describing the geometry and orientation of face $f$, and $\omega_f$ is the signed solid angle subtended by face $f$, from the perspective of the point in space. Equation (1) assumes homogeneity, but by distributing the density term into the summations, each face and edge can be assigned a unique density, $\bar{\rho}_f$ and $\bar{\rho}_e$, respectively, thus enabling density variation with latitude and longitude across the central body. Furthermore, the potential equation can now be expressed as a vector dot product:
$$ U = \bar{u} \cdot \bar{\rho} = [\,\bar{u}_e \,|\, \bar{u}_f\,] \cdot [\,\bar{\rho}_e \,|\, \bar{\rho}_f\,], \qquad (2) $$
where the elements of the $\bar{u}$ vectors are given as
$$ u_e = G L_e \, (\bar{r}_e \cdot E_e \cdot \bar{r}_e), \qquad (3) $$
$$ u_f = -G \omega_f \, (\bar{r}_f \cdot F_f \cdot \bar{r}_f). \qquad (4) $$
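To make the vectorized evaluation concrete, the following sketch assembles $\bar{u}$ and evaluates Equation (2). The implementation described later in this paper is in C++, but the specific data structures here (the `Edge` and `Face` structs, the Eigen linear algebra library, and the function names) are illustrative assumptions, and the geometry factors are presumed to have been precomputed for a single query point.

```cpp
#include <Eigen/Dense>
#include <vector>

// Hypothetical containers for precomputed polyhedral geometry factors,
// evaluated at one query point in space.
struct Edge { Eigen::Vector3d re; Eigen::Matrix3d Ee; double Le; };   // r_e, E_e, L_e
struct Face { Eigen::Vector3d rf; Eigen::Matrix3d Ff; double wf; };   // r_f, F_f, omega_f

constexpr double G = 6.674e-20;   // gravitational constant; km^3/(kg s^2) is an assumed unit choice

// Assemble u = [u_e | u_f] per Equations (3) and (4).
Eigen::VectorXd assembleU(const std::vector<Edge>& edges, const std::vector<Face>& faces) {
    const int nE = static_cast<int>(edges.size());
    const int nF = static_cast<int>(faces.size());
    Eigen::VectorXd u(nE + nF);
    for (int e = 0; e < nE; ++e)          // u_e = G * L_e * (r_e . E_e . r_e)
        u(e) = G * edges[e].Le * edges[e].re.dot(edges[e].Ee * edges[e].re);
    for (int f = 0; f < nF; ++f)          // u_f = -G * w_f * (r_f . F_f . r_f)
        u(nE + f) = -G * faces[f].wf * faces[f].rf.dot(faces[f].Ff * faces[f].rf);
    return u;
}

// Equation (2): the potential is a dot product with the density vector rho = [rho_e | rho_f].
double potential(const Eigen::VectorXd& u, const Eigen::VectorXd& rho) { return u.dot(rho); }
```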
Radial variation is achieved through layering of multiple polyhedra. Assuming the shape of each inner layer is a scaled and concentric copy of the outermost layer preserves the $E_e$ and $F_f$ matrices. Therefore, only the relative position vectors, $\bar{r}_e$ and $\bar{r}_f$, and the scalars, $L_e$ and $\omega_f$, vary with each subsequent layer. The procedure necessary for layering is illustrated in Figure 1, and can be mathematically expressed, for a three-layer body, by Equation (5):
$$ U = (\bar{u}_1 - \bar{u}_2) \cdot \bar{\rho}_1 + (\bar{u}_2 - \bar{u}_3) \cdot \bar{\rho}_2 + \bar{u}_3 \cdot \bar{\rho}_3 = \bar{u} \cdot \bar{\rho}, \qquad (5) $$
$$ \bar{u} = [\,(\bar{u}_1 - \bar{u}_2) \,|\, (\bar{u}_2 - \bar{u}_3) \,|\, \bar{u}_3\,], \qquad (6) $$
Figure 1: Visualization of Polyhedral Layering
$$ \bar{\rho} = [\,\bar{\rho}_1 \,|\, \bar{\rho}_2 \,|\, \bar{\rho}_3\,]. \qquad (7) $$
In Figure 1, which illustrates Equations (5) to (7), each color wheel’s size is analogous to $\bar{u}_i$ and the orientation of the color pattern is analogous to $\bar{\rho}_i$. Takahashi describes a similar technique, called the block model, which is used to simulate zones of homogeneous density.
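A minimal sketch of the concatenation in Equation (6), assuming `assembleU` from the previous sketch has been evaluated once per nested polyhedron at the same query point (the function name `layeredU` is, again, illustrative):

```cpp
// Equation (6): u = [(u1 - u2) | (u2 - u3) | u3] for a three-layer body,
// where u1, u2, u3 are the homogeneous vectors of the nested polyhedra.
Eigen::VectorXd layeredU(const Eigen::VectorXd& u1,
                         const Eigen::VectorXd& u2,
                         const Eigen::VectorXd& u3) {
    Eigen::VectorXd u(u1.size() + u2.size() + u3.size());
    u << (u1 - u2), (u2 - u3), u3;   // Eigen's comma initializer concatenates the segments
    return u;
}
```

The dot product with $\bar{\rho} = [\,\bar{\rho}_1 \,|\, \bar{\rho}_2 \,|\, \bar{\rho}_3\,]$ then recovers Equation (5) directly.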
Observability and Correlation
In 2010, Park et al. [8] presented an analysis of finite-cube and finite-sphere gravity models wherein observability of internal density was estimated from on-orbit data. The method utilized a batch-least-squares filter applied to the range and range-rate measurements; no direct gravity measurements were considered. In recent years, innovations in sensor technology have opened the possibility of measuring the gravity gradient (GG) directly from small-scale spacecraft [9]. The same mathematical technique used by Park et al. can now be applied to predict the effectiveness of this new gravimetry technique.
Assume that a gravity gradient measurement is taken at a point in space, and that the measurement can be modeled by a polyhedral gravity model. The equation that would predict that measurement is
$$ GG = \overline{gg} \cdot \bar{\rho} = [\,\overline{gg}_e \,|\, \overline{gg}_f\,] \cdot [\,\bar{\rho}_e \,|\, \bar{\rho}_f\,], \qquad (8) $$
where $GG$ is the gravity gradient matrix, $\overline{gg}$ is the concatenation of Equations (9) and (10), $\bar{\rho}$ is the density vector describing the body, and $\overline{gg}_e$ and $\overline{gg}_f$ are vectors of matrices. Each matrix in the edge and face vectors is given as
$$ gg_e = -G L_e E_e, \qquad (9) $$
$$ gg_f = G \omega_f F_f, \qquad (10) $$
respectively. The recovery problem assumes that both the position and the gravity gradient can be measured. Position is used to calculate $\overline{gg}$, the gravity measurements are substituted for $GG$, and the goal of the problem is to find the density vector $\bar{\rho}$ that most closely satisfies Equation (8).
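A sketch of the forward model in Equations (8) to (10), reusing the hypothetical `Edge`/`Face` structures from the sketch above: each density zone contributes one 3×3 matrix, and the predicted gradient is their density-weighted sum.

```cpp
// Equations (9)-(10): per-zone 3x3 gradient contributions ("vectors of matrices").
std::vector<Eigen::Matrix3d> assembleGG(const std::vector<Edge>& edges,
                                        const std::vector<Face>& faces) {
    std::vector<Eigen::Matrix3d> gg;
    gg.reserve(edges.size() + faces.size());
    for (const Edge& e : edges) gg.push_back(-G * e.Le * e.Ee);   // gg_e = -G * L_e * E_e
    for (const Face& f : faces) gg.push_back( G * f.wf * f.Ff);   // gg_f =  G * w_f * F_f
    return gg;
}

// Equation (8): GG = gg . rho, the density-weighted sum of per-zone matrices.
Eigen::Matrix3d gravityGradient(const std::vector<Eigen::Matrix3d>& gg,
                                const Eigen::VectorXd& rho) {
    Eigen::Matrix3d GG = Eigen::Matrix3d::Zero();
    for (int i = 0; i < static_cast<int>(rho.size()); ++i) GG += rho(i) * gg[i];
    return GG;
}
```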
For a given position, the relationship between the density and the gravity gradient is linear. Consequently, the partial derivatives with respect to the density are simply
$$ \frac{\partial \, GG}{\partial \bar{\rho}} = \overline{gg} = G\,[\,-L_e E_e \,|\, \omega_f F_f\,]. \qquad (11) $$
Then, following Park et al., define
$$ \Lambda = \overline{gg}^{\,T}\, \overline{gg}, \qquad (12) $$
and obtain the covariance matrix as
$$ P = \Lambda^{-1}. \qquad (13) $$
The covariance matrix provides the correlations and standard deviations of elements in the density
vector for a given set of measurement data. Analysis of these values informs predictions about the
accuracy of a recovery attempt. For example, a high standard deviation indicates high uncertainty
about the value of a density zone, and high correlation with other zones implies that the gravitational
effects caused by that zone are hard to distinguish from the effects of other zones.
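One way to realize this computation is sketched below. Each measurement contributes rows of Equation (11) to a stacked sensitivity matrix H (one row per measured gradient component per sample point); this flattening, and the use of a plain matrix inverse, are bookkeeping assumptions rather than details given in the paper.

```cpp
// Given a stacked sensitivity matrix H, where row k holds the partial of one
// measured gradient component with respect to rho (Equation (11)), form the
// information matrix (Equation (12)) and the covariance (Equation (13)).
Eigen::MatrixXd covariance(const Eigen::MatrixXd& H) {
    Eigen::MatrixXd Lambda = H.transpose() * H;   // Lambda = gg^T gg
    return Lambda.inverse();                      // P = Lambda^-1
}

// Standard deviations and correlations then follow from P:
//   sigma_i   = sqrt(P(i,i))        (non-finite when a zone is unobservable)
//   corr(i,j) = P(i,j) / (sigma_i * sigma_j)
```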
Adam Optimizer
The problem of high-resolution polyhedral density reconstruction bears some rudimentary resemblance to the problem of training a neural network: a large number of parameters must be tuned to recreate a set of noisy data. The methods that have shown the best performance in high-dimensional neural network training can be classified as stochastic gradient methods. One such algorithm, Adam, has quickly become one of the most popular methods since its introduction in 2014 [10]. By keeping a running average of the second moment of the gradient, Adam is able to scale the step size of each iteration for empirically better convergence. Certain situations [11] result in slower convergence, and prior work has sought to modify the base algorithm to accommodate such cases [12]. In this investigation, the original formulation based on adaptive moment estimation is implemented. The core loop is reproduced from the original paper as Algorithm 1 for reference.
Algorithm 1 Adam Algorithm (reproduced for reference)
Require: $\alpha$: step size
Require: $\beta_1, \beta_2 \in [0, 1)$: exponential decay rates
Require: $i_{max}$: maximum iterations
Require: $\nabla f(\bar{x})$: gradient function
Require: $\bar{x}_0$: initial guess
1: $i \leftarrow 0$
2: $\bar{m}_0 \leftarrow \bar{0}$
3: $\bar{v}_0 \leftarrow \bar{0}$
4: while $i < i_{max}$ do
5:   $i \leftarrow i + 1$
6:   $\bar{g}_i \leftarrow \nabla f_i(\bar{x}_{i-1})$
7:   $\bar{m}_i \leftarrow \beta_1 \bar{m}_{i-1} + (1 - \beta_1)\, \bar{g}_i$
8:   $\bar{v}_i \leftarrow \beta_2 \bar{v}_{i-1} + (1 - \beta_2)\, \bar{g}_i^{[2]}$  (square-bracket exponent indicates element-wise power)
9:   $\alpha_i \leftarrow \alpha \sqrt{1 - \beta_2^i} \,/\, (1 - \beta_1^i)$  (correct for initialization bias)
10:  $\bar{x}_i \leftarrow \bar{x}_{i-1} - \alpha_i\, \bar{m}_i / (\sqrt{\bar{v}_i} + \epsilon)$  (element-wise square root and division)
11: return $\bar{x}_i$
One modification from the original algorithm is the loop control parameter, which was changed from a convergence evaluation to a fixed number of iterations. This change allows the user to terminate the computation and investigate the algorithm’s initial behavior before investing significant time in a convergence-evaluated test case.
In order to minimize the stochastic gradient function, $\nabla f$, running averages of the mean, $\bar{m}$, and the uncentered variance, $\bar{v}$, are used to calculate an appropriate step size. Kingma and Ba draw an analogy between the ratio $\bar{m}_i / \sqrt{\bar{v}_i}$ and the signal-to-noise ratio (SNR) [10]. A smaller SNR (larger $\bar{v}$) indicates greater uncertainty in the direction of the true gradient, and the step size is appropriately scaled in line 10.
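A compact C++ sketch of Algorithm 1 follows. The paper’s optimizer is implemented as a C++ class; the stand-alone function, its signature, and the use of Eigen here are assumptions for illustration only.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <functional>

// Algorithm 1: Adam with a bias-corrected step size (Kingma and Ba [10]),
// run for a fixed iteration count as described above.
Eigen::VectorXd adam(const std::function<Eigen::VectorXd(const Eigen::VectorXd&)>& grad,
                     Eigen::VectorXd x,                       // initial guess x_0
                     double alpha, double beta1, double beta2,
                     int iMax, double eps = 1e-8) {
    Eigen::VectorXd m = Eigen::VectorXd::Zero(x.size());      // running mean of the gradient
    Eigen::VectorXd v = Eigen::VectorXd::Zero(x.size());      // running uncentered variance
    for (int i = 1; i <= iMax; ++i) {
        Eigen::VectorXd g = grad(x);                          // line 6
        m = beta1 * m + (1.0 - beta1) * g;                    // line 7
        v = beta2 * v + (1.0 - beta2) * g.cwiseProduct(g);    // line 8 (element-wise square)
        double ai = alpha * std::sqrt(1.0 - std::pow(beta2, i))
                          / (1.0 - std::pow(beta1, i));       // line 9 (bias correction)
        x -= ai * (m.array() / (v.array().sqrt() + eps)).matrix();  // line 10
    }
    return x;
}
```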
The optimization problem in this paper is formulated as a least-squares minimization. Because of its broad applicability, the framework for solving such problems is written in the same C++ class as the main optimizer, and is set as the default mode. The objective of the least-squares optimization is
$$ \min_{\bar{x}} \, (\bar{y} - A\bar{x})^2, \qquad (14) $$
where $\bar{x}$ is the design vector, $\bar{y}$ is the data vector, and $A$ relates the two, such that, for the optimal case, $\bar{y} = A\bar{x}$. Adam operates using only the gradient, which is
$$ \bar{g}(\bar{x}) = 2A^T A\bar{x} - 2A^T\bar{y}. \qquad (15) $$
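The gradient callback for this least-squares mode might look like the sketch below, which can be handed directly to the `adam` routine sketched earlier (assuming $A$, $\bar{y}$, and an initial guess are in scope); note that $2A^T A\bar{x} - 2A^T\bar{y} = 2A^T(A\bar{x} - \bar{y})$, which avoids forming $A^T A$.

```cpp
// Equation (15), factored as 2 A^T (A x - y) so that A^T A is never formed.
auto leastSquaresGrad = [&A, &y](const Eigen::VectorXd& x) -> Eigen::VectorXd {
    return 2.0 * (A.transpose() * (A * x - y));
};
// Hyperparameters taken from the Optimization paragraph below:
// alpha = 20, beta1 = 0.9, beta2 = 0.999, 50,000 iterations.
Eigen::VectorXd xOpt = adam(leastSquaresGrad, x0, 20.0, 0.9, 0.999, /*iMax=*/50000);
```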
Batching is often utilized when training deep neural networks. In its most common form, batching consists of choosing a few data points with which the gradient function is calculated, performing a few iterations, and then randomly re-selecting those data points. This technique can help the optimizer avoid getting trapped in non-convex regions of the solution space. For the least-squares formulation, re-sampling data points amounts to selecting rows of the $A$ matrix and the corresponding elements of the $\bar{y}$ vector, and using these truncated values in Equation (15); a sketch of this row sampling follows below. Two parameters are used to control batching: the batch size and the batch frequency. Each is fairly intuitive: batch size refers to the number of points extracted from $A$ and $\bar{y}$ in each sample, and batch frequency determines how often the data is resampled. The effects of each parameter on recovery and convergence are detailed in the Results section.
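A sketch of this row sampling is given below, under the assumption that batches are drawn uniformly at random with replacement (the paper does not specify the sampling scheme):

```cpp
#include <random>

// Draw `batchSize` random rows of A, with the matching entries of y, to form
// the truncated system used in Equation (15) until the next re-sample.
void sampleBatch(const Eigen::MatrixXd& A, const Eigen::VectorXd& y,
                 int batchSize, std::mt19937& rng,
                 Eigen::MatrixXd& Ab, Eigen::VectorXd& yb) {
    std::uniform_int_distribution<int> pick(0, static_cast<int>(A.rows()) - 1);
    Ab.resize(batchSize, A.cols());
    yb.resize(batchSize);
    for (int k = 0; k < batchSize; ++k) {
        int r = pick(rng);
        Ab.row(k) = A.row(r);
        yb(k) = y(r);
    }
}
// Inside the optimizer loop, re-sample every `batchFreq` iterations and rebuild
// the gradient callback from (Ab, yb) before taking the next Adam step.
```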
PROCEDURE
Test scenarios proceed in three steps: parameter selection, simulation, and recovery. The most significant parameters available for selection, as well as their valid values, are presented in Table 1. The number of measurement directions refers to the elements of the gravity gradient matrix (xx, xy, xz, yy, yz, zz).
Table 1: Parameters available for selection

Parameter                     Values
Body                          Any polyhedral model
Layers                        > 0
Number of probes              > 0
Noise                         on/off
Number of meas. directions    1–6
Sample rate                   > 0
Adam batch size               > 0
Adam batch frequency          ≥ 1
A typical simulation of a multiple-probe flyby of a sample asteroid is illustrated in Figure 2. In this case, 5 probes are ejected from a relative position of (50, 0, 0) km from the asteroid and are propagated forward for 6 hours. Their initial velocities are calculated to be evenly spaced around the body, resulting in more complete ground-track coverage.
Figure 2: Sample simulation using an Eros shape model

Before noise is applied to the data, the theoretical observability is analyzed using Park’s method as detailed above. The standard deviations and the correlation matrix are saved in full precision as text files, and are later visualized using MATLAB tools. Noise is sampled from a zero-mean Gaussian distribution whose level is defined by a standard deviation. A set of samples is drawn from the defined distribution and added to the nominal trajectories and gravity measurements.
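A sketch of this noise model, assuming independent draws for each component (the standard deviations are user parameters, not values prescribed by the paper):

```cpp
#include <Eigen/Dense>
#include <random>

// Add zero-mean Gaussian noise to a nominal probe position and to each measured
// gravity gradient component; sigmaPos and sigmaGG set the noise levels.
void addNoise(Eigen::Vector3d& r, Eigen::VectorXd& ggComponents,
              double sigmaPos, double sigmaGG, std::mt19937& rng) {
    std::normal_distribution<double> dPos(0.0, sigmaPos);
    std::normal_distribution<double> dGG(0.0, sigmaGG);
    for (int i = 0; i < 3; ++i) r(i) += dPos(rng);
    for (Eigen::Index i = 0; i < ggComponents.size(); ++i) ggComponents(i) += dGG(rng);
}
```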
In order for Adam to operate on this problem, it must be cast as a least-squares matrix problem. Recalling the formulation above, the objective is
$$ \min_{\bar{\rho}} \, (\overline{GG} - gg\,\bar{\rho})^2, \qquad (16) $$
where $\overline{GG}$ is the measurement vector, $\bar{\rho}$ is the density vector, and $gg$ is the system matrix defined from Equation (8). The noise in position is considered when calculating $gg$, and the noise in gravity measurements is included in $\overline{GG}$.
Optimization is initialized using the default values $\beta_1 = 0.9$ and $\beta_2 = 0.999$ given in the original paper [10]. The step size, $\alpha = 20$, was empirically found to produce good results, but this parameter is problem-specific, since Kingma and Ba suggested an $\alpha$ of 0.001. Performance is quantified by calculating the difference between the optimized density and the true density at each iteration.
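For the error traces shown in the figures that follow, a per-iteration metric of the following form is a reasonable reading of this description (the exact norm is not stated and is assumed here):

```cpp
// Recovery error at one iteration: distance between the current density
// estimate and the true density used to generate the measurements.
double densityError(const Eigen::VectorXd& rhoEst, const Eigen::VectorXd& rhoTrue) {
    return (rhoEst - rhoTrue).norm();   // Euclidean norm; the choice of norm is an assumption
}
```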
RESULTS
Three test cases of varying complexity were selected that constitute a broad sample of the possible
input parameters. For the simplest case, the base results are presented, and then the neighboring
parameter space is explored by changing certain values. The conclusions from the simple analysis
will determine the settings for more complex cases.
Case 1: 1-Layer Octahedral Body
Base Configuration: The theoretical correlations for the base case are given in Figure 3. For
each axis, the Zone ID refers to the index of that zone in the density vector, $\bar{\rho}$. Note that all diagonal components were set to zero in order for the color scale to be more intuitive. The most evident difference between the one-probe and ten-probe cases is the emergence of a visual distinction between edges and faces. Figure 3b shows consistent low-magnitude negative correlations of edges with faces, and mostly small positive correlations be-
tween faces. The outliers within the face-zone, such as (13,18), are likely caused by the symmetry
of the body.
Figure 3: Correlation comparison for Case 1. (a) 1-probe correlation matrix; (b) 10-probe correlation matrix.
The average standard deviation for the base case is 5.57 × 10^5 g/cm^3, an order of magnitude improvement over the one-probe case’s average of 4.64 × 10^6 g/cm^3. However, since the average density of the body is 2.67 g/cm^3, both deviations suggest that a recovery will be imprecise.
In the absence of noise, the Adam algorithm shows consistent convergence in Figure 4, although the same mechanics that make it robust to noise also make it relatively slow. Nonetheless, the asymptotic convergence pattern was encouraging given the wide standard deviations from the observability analysis.
Figure 4: Case 1: Adam Convergence
Adding Noise: Previous investigation by the authors suggested that noisy optimization would be difficult, if not impossible. However, adding up to 10 meters of positional uncertainty resulted in convergence profiles nearly indistinguishable from the clean case in Figure 4. Zooming in to the level of Figure 5 is required to discern a difference. In the 10 cm and 1 m cases, the optimization actually performed better than the clean case. However, the result may be misleading; if the optimization were continued, the noisy cases would overshoot the target and settle at a higher final error.
Figure 5: Case 1: Final Iterations of Noise Comparison
Varying Sample Rate: Intuition suggests that increasing the number of measurements would improve the accuracy of the recovery. With the current formulation, there are two ways to obtain more measurements: increase the number of probes, or increase the sample rate of the probes. The effects of an increased sample rate are illustrated in Figure 6, with an accuracy improvement of about 7 × 10^-7. Note that there is minimal benefit of a 1-second sample rate over the 5-second sample rate. These cases are not identical, due to random batch selection, but they appear to twist around each other, with neither displaying consistent gains over the other.
Varying Number of Probes: The other option for gathering more measurements is to increase the number of probes. Additional probes are sent on unique trajectories, thereby sampling previously unexplored space. The effects shown in Figure 7 are of a similar order of magnitude to those of the sample rate analysis above. This similarity indicates that the variation could be a product of the stochasticity of the algorithm itself, via noise sampling and batch sampling, rather than an effect of the sample rate or the number of probes. Also, the final errors after 50,000 iterations do not support the hypothesis that more probes imply better recoveries. Therefore, other factors are affecting the system.
Figure 6: Case 1: Final Iterations of Sample Rate Comparison

Figure 7: Case 1: Final Iterations of Probe Number Comparison

Varying Batch Size: The concept of batching, as discussed in the Adam Optimizer section, is implemented in the simulations, and two variables are introduced to control the process: batch size and batch frequency. In the previous two analyses, a large number of samples were available for use in the optimization, but since the batch size was constant throughout, each iteration only utilized
800 samples. By increasing the batch size, more samples are used in each iteration. The effects of
varying the batch size are illustrated in Figure 8.

Figure 8: Case 1: Batch Size Comparison

The drastic improvement in convergence speed from a larger batch size can be explained by Kingma and Ba’s SNR analogy: a more comprehensive
sample of the data results in a more consistent gradient vector. Then, if the gradient is more stable,
the algorithm will take larger steps. However, there are some trade-offs with respect to the runtime.
Increasing the batch size is the same as increasing the height of the $A$ matrix in Equation (15); as
such, the linear algebra operations take longer to compute.
Considering the standard deviation predictions from above, the result for the 8000 batch size is particularly surprising. Despite the standard deviation being on the order of 10^5 g/cm^3, the densities are recovered to an accuracy of about 6 × 10^-4 g/cm^3.
Behavior Near Answer: All the tests, thus far, have investigated the behavior of the optimization
as it leaves the initial guess and moves towards the correct answer. However, it is also of interest to
know how the algorithm behaves in the vicinity of the correct answer. This zone is tested by passing
the true density distribution as the initial guess, and then observing whether the algorithm stays in
the neighborhood of the optimum. Given the results of the cases above, all scenarios in Figure 9
utilize the maximum possible batch size, as well as noise in position and gravity measurements.
The location to which the algorithm converges is a function of the random noise sample. The three
cases shown in Figure 9 use different samples; thus, they converge to different locations. However, the behavior can still be analyzed. The 5-probe scenario converges to its final solution noticeably more slowly than the 10- and 15-probe cases.
Case 2: 3-Layer Random Sphere
The polyhedral body for case 2 is a sphere with noise applied to the radius of each vertex. The body consists of 560 unique density zones, in comparison to the 20 density zones of the octahedral body. Furthermore, two inner layers are added, resulting in a total of 1680 density zones.
Figure 9: Case 1: Behavior of Algorithm Near Optimum

The results from case 1 can assist in making an informed choice of parameters for case 2. A batch size of 2000 is arbitrarily selected as a balance between the predicted runtime and conver-
gence speed. Note that the batch size is greater than the number of density zones, resulting in an overconstrained $A$ matrix at each iteration. As a result, it is less probable that a selected batch fails to produce a reasonable gradient direction. In this case, fifteen probes were simulated, and are assumed to take a measurement every 10 seconds.
Base Configuration: Analysis of theoretical observability, using the same technique as case 1,
produced a less intuitive output. The standard deviation for many zones was returned as NaN,
implying these areas cannot be observed at all. For the areas that returned a number, the average
value was 2.18 × 10^14 g/cm^3; however, the results of case 1 suggest that a reasonably accurate answer
can still be found. The NaN zones are illustrated by dark lines in Figure 10. Each layer of the
polyhedron is represented by a third of the x and y axes, with the outermost layer shown in the top
left of Figure 10. The concentration of dark lines in the bottom right implies that the inner layers
are less discernible than the outer layer.
Due to the increased size of the $A$ matrix, computations for case 2 are much slower than those for case 1. Therefore, only 1000 iterations are shown in Figure 11. Comparing the graph to Figure 4, it is clear that case 2 converges more slowly and less consistently. Although the $A$ matrix is overconstrained, as previously mentioned, the ratio of the batch size to the total number of measurements is approximately 1:16. The low ratio implies higher turnover in each batch, resulting in a noisier gradient and a smaller step size.
Behavior Near Answer: In Figure 12, 1000 iterations are not enough to conclusively observe a
point of convergence. The divergence from the true answer is slower than the cases in Figure 9, due
to the high batch turnover and small step size. The fluctuations after the 900th iteration, marked by
the red circle, may indicate that the algorithm has settled near a point and will continue to oscillate,
but a longer simulation would be necessary to confirm this possibility.
Figure 10: Case 2: Correlations (Dark Blue = Unobservable)

Figure 11: Case 2: Initial Behavior
Figure 12: Case 2: Final Behavior
Case 3: 2-Layer Ellipsoid
Case 3 consists of a 2-layer ellipsoidal body with a total of 3600 unique density zones. The
simulation uses 20 probes and a sample rate of 15 seconds.
Base Configuration: As in case 2, many areas of the body appear unobservable from Park et al.’s [8] method. No clear patterns emerge between the outer and inner layers in Figure 13. The average deviation for the observable zones is 1.52 × 10^15 g/cm^3, only one order of magnitude larger than in case 2, leading to the expectation of similar trends in the recovery.
Figure 13: Case 3: Correlations (Dark Blue = Unobservable)
The behavior using the average density as the initial guess is illustrated in Figure 14. Although
the initial error is higher than in case 2, the reduction in error after 1000 iterations is greater, and
the minimization is relatively more consistent. The consistency is a direct result of the higher
initial error. Closer to the answer, stochastic effects dominate; however, in the initial iterations, the minimizing direction is clearly evident. Differences between the 20- and 25-probe simulations are
due to the inherent randomness of the procedure, such as noise sampling and batching.
Figure 14: Case 3: Initial Behavior
Behavior Near Answer: When the algorithm is close to the true answer, the noise in the system
is more apparent. No convergence is observed in Figure 15, and the erratic path of both cases
indicates that the gradient is inconsistent.

Figure 15: Case 3: Final Behavior

The large jump in the 20-probe case between 600 and 700 iterations (Figure 15) is due to batching effects. In this regime, multiple batches were sequentially
selected that strongly pointed away from the true answer. If the scenario were recalculated, the increase might take place more gently, or might not be present at all. Gradient inconsistencies are
partially caused by the ratio of batch size to the number of density zones in the body. The selected
batch size of 2000 is insufficient to fully constrain a body with 3600 parameters, but any tangible
increase in batch size comes at the expense of the runtime. In future work, it may be beneficial to
dedicate computing time towards obtaining accurate, high-resolution recoveries, but the aim of this
paper is to analyze trends of a prototype algorithm in simple cases.
CONCLUSION
Asteroids are of particular interest in the modern space industry due to their potential for ad-
vancing knowledge about the origins of the solar system. The techniques investigated in this paper
can support flyby missions and target selection for asteroid mining. In the authors’ previous work,
using a different optimization algorithm, a clear trend emerged that suggested more probes would
give a more accurate result. The results from this paper suggest that the choice and tuning of the optimization technique have a more significant effect, especially in noisy conditions, than simply
increasing the number of probes. The most significant conclusion from the above research is the
apparent disagreement between the theoretical observability of a body and the performance of the
Adam algorithm. Even with noise applied to the dataset, Adam was able to consistently move from
a decent initial guess to a more accurate solution. Future work will investigate this disconnect and
attempt to develop techniques for more effective accuracy prediction.
REFERENCES
[1] J. Miller, A. Konopliv, P. Antreasian, J. Bordi, S. Chesley, C. Helfrich, W. Owen, T. Wang, B. Williams,
D. Yeomans, and D. Scheeres, “Determination of Shape, Gravity, and Rotational State of Asteroid 433
Eros,” Icarus, Vol. 155, 2002, pp. 3–17.
[2] D. Scheeres, R. Gaskell, S. Abe, O. Barnouin-Jha, T. Hashimoto, J. Kawaguchi, T. Kubota, J. Saito,
M. Yoshikawa, N. Hirata, T. Mukai, M. Ishiguro, T. Kominato, K. Shirakawa, and M. Uo, “The Actual
Dynamical Environment About Itokawa,” 2006.
[3] H. F. Levison, C. Olkin, K. S. Noll, et al., “Lucy: Surveying the Diversity of the Trojan Asteroids,”
Lunar and Planetary Science XLVIII, 2017.
[4] W. G. Ledbetter, R. Sood, and J. Keane, “SmallSat Swarm Gravimetry: Revealing the Interior Structure
of Asteroids and Comets,” AAS/AIAA Astrodynamics Specialist Conference, Aug. 2018.
[5] R. A. Werner and D. J. Scheeres, “Exterior Gravitation of a Polyhedron Derived and Compared with
Harmonic and Mascon Gravitation Representations of Asteroid 4769 Castalia,” Celestial Mechanics
and Dynamical Astronomy, Vol. 65, 1997, pp. 313–344.
[6] Y. Takahashi and D. Scheeres, “Morphology driven density distribution estimation for small bodies,”
Icarus, 2014, pp. 179–193.
[7] D. Scheeres, B. Khushalani, and R. Werner, “Estimating asteroid density distributions from shape and
gravity information,” Planetary and Space Science, Vol. 48, 2000, pp. 965–971.
[8] R. S. Park, R. A. Werner, and S. Bhaskaran, “Estimating Small-Body Gravity Field from Shape Model and Navigation Data,” Journal of Guidance, Control, and Dynamics, Vol. 33, No. 1, 2010, pp. 212–221. doi: 10.2514/1.41585.
[9] K. Carroll and D. Faber, “Tidal Acceleration Gravity Gradiometry for Measuring Asteroid Gravity Field From Orbit,” Oct. 2018.
[10] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” CoRR, Vol. abs/1412.6980,
2014.
[11] S. J. Reddi, S. Kale, and S. Kumar, “On the Convergence of Adam and Beyond,” International Confer-
ence on Learning Representations, 2018.
[12] T. Dozat, “Incorporating Nesterov Momentum into Adam,” 2015.
[13] J. Atchison, R. Mitch, and A. Rivkin, “Swarm Flyby Gravimetry,” tech. rep., The Johns Hopkins Uni-
versity Applied Physics Laboratory, Apr. 2015.
[14] A. Vroom, M. D. Carlo, J. M. R. Martin, and M. Vasile, “Optimal Trajectory Planning for Multiple
Asteroid Tour Mission by means of an Incremental Bio-Inspired Tree Search Algorithm,” Dec. 2016.
[15] Y. Takahashi and D. Scheeres, “Small body surface gravity fields via spherical harmonic expansions,”
Celestial Mechanics and Dynamical Astronomy, Vol. 119, June 2014, pp. 169–206.
[16] D. Yeomans, P. Antreasian, J.-P. Barriot, et al., “Radio Science Results During the NEAR-Shoemaker
Spacecraft Rendezvous with Eros,” Science, Vol. 289, Sept. 2000, pp. 2085–2088.
[17] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, “Human-level control through deep reinforcement learning,” Nature, Vol. 518, Feb. 2015, p. 529. doi: 10.1038/nature14236.
[18] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” CoRR, Vol. abs/1412.6980,
2014.
[19] J. Duchi, E. Hazan, and Y. Singer, “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization,” Journal of Machine Learning Research, Vol. 12, July 2011, pp. 2121–2159.
[20] N. Stacey and S. D’Amico, “Autonomous Swarming for Simultaneous Navigation and Asteroid Char-
acterization,” AAS/AIAA Astrodynamics Specialist Conference, Aug. 2018.