
Expected Accuracy of Density Recovery using Satellite Swarm Gravity Measurements


AAS 19-529
William Ledbetter, Rohan Sood, and Jeffrey Stuart
Asteroids are of particular interest in the modern space industry due to their potential for advancing knowledge about the origins of the solar system. Identifying and characterizing asteroids with sustainable resources for future space exploration is critical. Additionally, limited knowledge of near-Earth asteroids' physical characteristics, such as shape, density, gravity field, and composition, poses a challenge to any manned exploration. A stochastic gradient method from the field of deep learning is applied to the problem of asteroid density recovery, with implications for target selection and mining. The algorithm outperforms the predicted observability and is minimally affected by noisy measurements.
Questions about the origins of life on Earth have been the overarching motivation that has driven the majority of space-faring missions over the last half-century. Scientists and engineers have recognized that asteroids, the leftover building blocks from the time of planetary accretion, may be the richest source of information capable of providing a glimpse into the prehistoric solar system. Developing a comprehensive understanding of the composition of asteroids may offer insight into the early processes of solar system formation, with implications for planetary exploration, exoplanetary investigations, and, ultimately, the origins of life.
Missions such as NEAR/Shoemaker [1] and Hayabusa [2] have been invaluable to the scientific community, and successful completion of the OSIRIS-REx and Hayabusa 2 missions will catalyze further research. However, while the number of cataloged asteroids recently surpassed 780,000, even with these recent missions, fewer than 20 have been visited by spacecraft. Small-body exploration missions, such as Lucy [3], often utilize flybys to tour multiple bodies with a single spacecraft. Lucy plans to visit multiple targets in the Jupiter-Trojan system by utilizing sequential flybys. Although the asteroid-tour-style mission is an elegant solution to the problem of broad exploration, some trade-offs are made with respect to the traditional single-target mission. In order for the spacecraft to continue on its tour trajectory, it must maintain significant velocity as it passes each body of interest, substantially reducing the time available for close-proximity observations. As an attempt to augment the quality of data obtained during such short encounters, the authors propose a method of in-situ gravimetry to reveal internal structures and density variations.
Ph.D. Student, Astrodynamics and Space Research Laboratory, Department of Aerospace Engineering and Mechanics, The University of Alabama, Tuscaloosa, AL 35487, USA.
Assistant Professor, Astrodynamics and Space Research Laboratory, Department of Aerospace Engineering and Mechanics, The University of Alabama, Tuscaloosa, AL 35487, USA.
Research Technologist, Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., MS 301-121, Pasadena, CA 91109, USA.
Previous work [4] outlined the principles of density recovery for a heterogeneous body and revised the polyhedral gravity model [5, 6] to express such a body. Preliminary developments showed promising results using a simple matrix inversion recovery technique; however, the method had limitations when complex recoveries were attempted. In this paper, the authors expand the research on two fronts: gravity model observability and recovery algorithm development. Analysis of the gravity model's observability will help define bounds on the expected performance of an optimal recovery algorithm, and research into alternative recovery methods will aim to achieve the best expected accuracy.
Dynamical Model
Recent developments in asteroid gravity modeling have utilized the polyhedral formulation of Werner and Scheeres [5]. Updates to this model have investigated techniques for expressing variable density [6, 7], but still maintain a priori assumptions about the internal geometry, such as core/shell or hemispherical distributions. The authors' contribution expands the techniques of Takahashi [6] to express fully heterogeneous polyhedra at the same level of resolution as the shape model itself. The expression of heterogeneous polyhedra relies on vectorization of Werner's basic formulation, Equation (1):

U = \frac{G\rho}{2} \sum_{e \in \text{edges}} \bar{r}_e \cdot E_e \bar{r}_e \, L_e - \frac{G\rho}{2} \sum_{f \in \text{faces}} \bar{r}_f \cdot F_f \bar{r}_f \, \omega_f,    (1)

where G is the gravitational constant, \rho is the average density of the body, \bar{r}_e is a vector from the point in space to a point on edge e, E_e is a matrix describing the geometry of edge e, and L_e is the potential of a 1D 'wire' expressed in terms of the distances to the edge's endpoints. In the second term, \bar{r}_f is a vector from the query point in space to a point on face f, F_f is a matrix describing the geometry and orientation of face f, and \omega_f is the signed solid angle subtended by face f, from the perspective of the point in space. Equation (1) assumes homogeneity, but by distributing the density term into the summations, each face and edge can be assigned a unique density, \bar{\rho}_f and \bar{\rho}_e, respectively, thus enabling variation in the latitude and longitude of the central body. Furthermore,
the potential equation can now be expressed as a vector dot product:
U = \bar{u} \cdot \bar{\rho} = [\bar{u}_e \,|\, \bar{u}_f] \cdot [\bar{\rho}_e \,|\, \bar{\rho}_f],    (2)

where the elements in the \bar{u} vectors are given as

\bar{u}_e = \frac{G}{2} \bar{r}_e \cdot E_e \bar{r}_e \, L_e,    (3)

\bar{u}_f = -\frac{G}{2} \bar{r}_f \cdot F_f \bar{r}_f \, \omega_f.    (4)
Radial variation is achieved through layering of multiple polyhedra. Assuming the shape of each inner layer is a scaled and concentric copy of the outermost layer preserves the E_e and F_f matrices. Therefore, only the relative position vectors, \bar{r}_e and \bar{r}_f, and the scalars, L_e and \omega_f, vary with each subsequent layer. The procedure necessary for layering is illustrated in Figure 1, and can be mathematically expressed by Equation (5):

U = (\bar{u}_1 - \bar{u}_2) \cdot \bar{\rho}_1 + (\bar{u}_2 - \bar{u}_3) \cdot \bar{\rho}_2 + \bar{u}_3 \cdot \bar{\rho}_3 = \bar{u} \cdot \bar{\rho},    (5)

\bar{u} = [(\bar{u}_1 - \bar{u}_2) \,|\, (\bar{u}_2 - \bar{u}_3) \,|\, \bar{u}_3],    (6)

\bar{\rho} = [\bar{\rho}_1 \,|\, \bar{\rho}_2 \,|\, \bar{\rho}_3].    (7)

Figure 1: Visualization of Polyhedral Layering

In Figure 1, each color wheel's size is analogous to \bar{u}_i in Equations (5) to (7), and the orientation of the color pattern is analogous to \bar{\rho}_i. Takahashi describes a similar technique called the block model, which is used to simulate zones of homogeneous density.
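As a minimal sketch of how the layered dot-product form is evaluated in code, the following C++ fragment builds the differenced coefficient vector of Equation (6) and evaluates the potential of Equations (2) and (5). The names and the fixed three-layer layout are illustrative, not the paper's implementation.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Equation (6): concatenate the differenced layer coefficients
// u = [(u1 - u2) | (u2 - u3) | u3], where u_i holds the per-zone
// potential coefficients of layer i evaluated at the field point.
std::vector<double> layered_u(const std::vector<double>& u1,
                              const std::vector<double>& u2,
                              const std::vector<double>& u3) {
    std::vector<double> u;
    u.reserve(3 * u1.size());
    for (std::size_t k = 0; k < u1.size(); ++k) u.push_back(u1[k] - u2[k]);
    for (std::size_t k = 0; k < u2.size(); ++k) u.push_back(u2[k] - u3[k]);
    u.insert(u.end(), u3.begin(), u3.end());
    return u;
}

// Equations (2) and (5): the potential is a single dot product U = u . rho,
// where rho stacks the per-layer density vectors [rho1 | rho2 | rho3].
double potential(const std::vector<double>& u, const std::vector<double>& rho) {
    return std::inner_product(u.begin(), u.end(), rho.begin(), 0.0);
}
```

With a homogeneous density vector the layer differences telescope, so the layered potential reduces to the single-layer value, which is a useful sanity check.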
Observability and Correlation
In 2010, Park et al. [8] presented an analysis of finite-cube and finite-sphere gravity models wherein observability of internal density was estimated from on-orbit data. The method utilized a batch-least-squares filter applied to the range and range-rate measurements; no direct gravity measurements were considered. In recent years, innovations in sensor technology have opened the possibility of measuring the gravity gradient (GG) directly from small-scale spacecraft [9]. The same mathematical technique used by Park et al. can now be applied to predict the effectiveness of this new gravimetry technique.
Assume that a gravity gradient measurement is taken at a point in space, and the measurement can be defined by a polyhedral gravity model. The equation that would predict that measurement is

GG = \bar{gg} \cdot \bar{\rho} = [\bar{gg}_e \,|\, \bar{gg}_f] \cdot [\bar{\rho}_e \,|\, \bar{\rho}_f],    (8)

where GG is the gravity gradient matrix, \bar{gg} is the concatenation of Equations (9) and (10), \bar{\rho} is the density vector describing the body, and \bar{gg}_e and \bar{gg}_f are vectors of matrices. Each matrix in the edge and face vector is given as

\bar{gg}_e = G L_e E_e,    (9)

\bar{gg}_f = G \omega_f F_f,    (10)

respectively. The recovery problem assumes that the position and gravity gradient can be measured. Position is used to calculate \bar{gg}, the gravity measurements are substituted for GG, and the goal of the problem is to find the density vector \bar{\rho} that most closely satisfies Equation (8).

For a given position, the relationship between the density and the gravity gradient is linear. Subsequently, the partial derivatives with respect to the density are found simply to be

\nabla_{\bar{\rho}}\, GG = G [L_e E_e \,|\, \omega_f F_f].    (11)
Then, following Park et al. [8], define the information matrix from the stacked measurement partials, H, of Equation (11),

\Lambda = H^T H,    (12)

and invert to obtain the covariance matrix,

P = \Lambda^{-1}.    (13)
The covariance matrix provides the correlations and standard deviations of elements in the density
vector for a given set of measurement data. Analysis of these values informs predictions about the
accuracy of a recovery attempt. For example, a high standard deviation indicates high uncertainty
about the value of a density zone, and high correlation with other zones implies that the gravitational
effects caused by that zone are hard to distinguish from the effects of other zones.
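A minimal sketch of this observability computation is shown below, assuming unit measurement noise and a hypothetical two-zone body so the matrix inverse stays closed-form; a real implementation would use a full linear-algebra library and one row per scalar gravity-gradient measurement. All names are illustrative.

```cpp
#include <array>
#include <vector>

// Each Row holds the partials of one scalar measurement with respect to
// the two density zones (one row of H).
using Row = std::array<double, 2>;

// Accumulate the information matrix Lambda = sum_i H_i^T H_i, then invert
// it to get the 2x2 covariance P (stored row-major). A singular Lambda
// corresponds to an unobservable configuration (NaN standard deviations).
std::array<double, 4> covariance(const std::vector<Row>& H) {
    double a = 0.0, b = 0.0, d = 0.0;   // Lambda = [[a, b], [b, d]]
    for (const Row& h : H) {
        a += h[0] * h[0];
        b += h[0] * h[1];
        d += h[1] * h[1];
    }
    double det = a * d - b * b;          // det == 0 => unobservable
    return {d / det, -b / det, -b / det, a / det};
}
```

The standard deviation of zone k is the square root of the k-th diagonal of P, and the zone-to-zone correlation is the off-diagonal entry normalized by both standard deviations.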
Adam Optimizer
The problem of high-resolution polyhedral density reconstruction bears some rudimentary resemblance to the problem of training a neural network: a large number of parameters must be tuned to recreate a set of noisy data. The methods that have shown the best performance in high-dimensional neural network training can be classified as stochastic gradient methods. One such algorithm, Adam, has quickly become one of the most popular methods since its introduction in 2014 [10]. By keeping a running average of the second moment of the gradient, Adam is able to scale the step size of each iteration for empirically better convergence. Certain situations [11] result in slower convergence, and prior work has sought to modify the base algorithm to accommodate such cases [12]. In this investigation, the original formulation based on adaptive moment estimation is implemented. The core loop is reproduced from the original paper as Algorithm 1 for reference.

Algorithm 1 Adam Algorithm (Reproduced for reference)
Require: \alpha: Step size
Require: \beta_1, \beta_2 \in [0, 1): Exponential decay rates
Require: i_{max}: Maximum iterations
Require: \nabla f(\bar{x}): Gradient function
Require: \bar{x}_0: Initial guess
1: i \gets 0
2: \bar{m}_0 \gets \bar{0}
3: \bar{v}_0 \gets \bar{0}
4: while i < i_{max} do
5:   i \gets i + 1
6:   \bar{g}_i \gets \nabla f_i(\bar{x}_{i-1})
7:   \bar{m}_i \gets \beta_1 \bar{m}_{i-1} + (1 - \beta_1) \bar{g}_i
8:   \bar{v}_i \gets \beta_2 \bar{v}_{i-1} + (1 - \beta_2) \bar{g}_i^{[2]}    (Square-bracket exponent indicates element-wise power)
9:   \alpha_i \gets \alpha \cdot \sqrt{1 - \beta_2^i} / (1 - \beta_1^i)    (Correct for initialization bias)
10:  \bar{x}_i \gets \bar{x}_{i-1} - \alpha_i \bar{m}_i / (\sqrt{\bar{v}_i} + \epsilon)    (Element-wise square root and division)
11: return \bar{x}_i

One modification from the original algorithm is the loop control parameter, which was changed from a convergence evaluation to a set number of iterations. This change allows the user to terminate the computation and investigate the algorithm's initial behavior before investing significant time into a convergence-evaluated test case.
In order to minimize the stochastic gradient function, f, running averages of the mean, \bar{m}, and the uncentered variance, \bar{v}, are used to calculate an appropriate step size. Kingma and Ba draw an analogy between the ratio \bar{m}_i/\bar{v}_i and the signal-to-noise ratio (SNR) [10]. A smaller SNR (larger \bar{v}) indicates greater uncertainty in the direction of the true gradient, and the step size is appropriately scaled in line 10.
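The core loop of Algorithm 1 can be sketched in C++ on a toy one-dimensional problem; the objective, step size, and names are illustrative, not the paper's optimizer class.

```cpp
#include <cmath>

// Sketch of Algorithm 1 applied to minimizing f(x) = (x - 3)^2, whose
// gradient is g = 2(x - 3). beta1/beta2 follow Kingma and Ba's defaults;
// the bias-corrected step alpha_i and the m/sqrt(v) scaling implement the
// SNR mechanism described above.
double adam_minimize(double x, double alpha, int imax) {
    const double beta1 = 0.9, beta2 = 0.999, eps = 1e-8;
    double m = 0.0, v = 0.0;
    for (int i = 1; i <= imax; ++i) {
        double g = 2.0 * (x - 3.0);                        // gradient at x
        m = beta1 * m + (1.0 - beta1) * g;                 // first moment
        v = beta2 * v + (1.0 - beta2) * g * g;             // second moment
        double ai = alpha * std::sqrt(1.0 - std::pow(beta2, i))
                          / (1.0 - std::pow(beta1, i));    // bias correction
        x -= ai * m / (std::sqrt(v) + eps);                // update step
    }
    return x;
}
```

Because the per-iteration step is bounded by roughly the step size, the iterate approaches the minimizer at a limited rate, which mirrors the slow-but-robust convergence observed in the recoveries.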
The optimization problem in this paper is formulated as a least-squares minimization. Because of its broad applicability, the framework for solving such problems is written in the same C++ class as the main optimizer, and is set as the default mode. The objective of the least-squares optimization is

\min_{\bar{x}} (\bar{y} - A\bar{x})^T (\bar{y} - A\bar{x}),    (14)

where \bar{x} is the design vector, \bar{y} is the data vector, and A relates the two, such that, for the optimal case, \bar{y} = A\bar{x}. Adam operates using only the gradient, which is

\bar{g}(\bar{x}) = 2A^T A \bar{x} - 2A^T \bar{y}.    (15)
Batching is often utilized when training deep neural networks. In its most common form, batching consists of choosing a few data points with which the gradient function is calculated, performing a few iterations, and then randomly re-selecting those data points. This technique can help the optimizer avoid getting trapped in non-convex regions of the solution space. For the least-squares formulation, re-sampling data points amounts to selecting rows of the A matrix and the corresponding elements in the \bar{y} vector, and using these truncated values in Equation (15). Two parameters that are used to control batching are the batch size and the batch frequency. Each is fairly intuitive: batch size refers to the number of points extracted from A and \bar{y} in each sample, and batch frequency determines how often the data is resampled. The effects of each parameter on recovery and convergence are detailed in the Results section.
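The batched evaluation of the gradient in Equation (15) can be sketched as row subsampling; the uniform sampler and all names are illustrative assumptions, not the paper's code.

```cpp
#include <cstdlib>
#include <vector>

using Mat = std::vector<std::vector<double>>;   // row-major A

// Form the gradient 2 A^T (A x - y) of Equation (15) using only the
// selected rows of A and the corresponding entries of y.
std::vector<double> batch_gradient(const Mat& A, const std::vector<double>& y,
                                   const std::vector<double>& x,
                                   const std::vector<std::size_t>& rows) {
    std::vector<double> g(x.size(), 0.0);
    for (std::size_t r : rows) {
        double resid = -y[r];                                  // (A x - y)_r
        for (std::size_t j = 0; j < x.size(); ++j) resid += A[r][j] * x[j];
        for (std::size_t j = 0; j < x.size(); ++j) g[j] += 2.0 * A[r][j] * resid;
    }
    return g;
}

// Re-select a batch of row indices uniformly at random; the caller decides
// the batch size and how often (batch frequency) to call this.
std::vector<std::size_t> sample_batch(std::size_t n_rows, std::size_t batch_size) {
    std::vector<std::size_t> rows(batch_size);
    for (std::size_t k = 0; k < batch_size; ++k)
        rows[k] = static_cast<std::size_t>(std::rand()) % n_rows;
    return rows;
}
```

Selecting every row recovers the full gradient exactly, so the batch size directly trades gradient fidelity against the per-iteration cost.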
Test scenarios can be described as occurring in three steps: parameter selection, simulation, and recovery. The most significant parameters available for selection, as well as their valid values, are presented in Table 1. The number of measurement directions refers to the elements of the gravity gradient matrix (xx, xy, xz, yy, yz, zz).
Parameter                     Values
Body                          Any polyhedral model
Layers                        > 0
Number of probes              > 0
Noise                         on/off
Number of meas. directions    1-6
Sample rate                   > 0
Adam batch size               > 0
Adam batch frequency          >= 1

Table 1: Parameters available for selection
A typical simulation of a multiple-probe flyby of a sample asteroid is illustrated in Figure 2. In this case, 5 probes are ejected from a relative position of (50, 0, 0) km from the asteroid and are propagated forward for 6 hours. Their initial velocities are calculated to be evenly spaced around the body, resulting in more complete ground track coverage.
Figure 2: Sample simulation using an Eros shape model

Before noise is applied to the data, the theoretical observability is analyzed using Park's method as detailed above. The standard deviations and the correlation matrix are saved in full precision as text files, and are visualized later using MATLAB tools. Noise is sampled from a Gaussian distribution with zero mean, where the level of noise is defined by a standard deviation. A set is sampled from the defined distribution and is then added to the nominal trajectories and gravity measurements.
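This noise model can be sketched as follows; the generator choice, seed handling, and names are illustrative assumptions.

```cpp
#include <random>
#include <vector>

// Add zero-mean Gaussian noise at a chosen standard deviation to a set of
// nominal measurements (gravity gradients here; probe positions are
// perturbed analogously, component by component).
std::vector<double> add_noise(std::vector<double> nominal, double sigma,
                              unsigned seed) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> gauss(0.0, sigma);
    for (double& m : nominal) m += gauss(rng);
    return nominal;
}
```

Fixing the seed makes a noisy scenario repeatable, which is convenient when comparing optimizer settings on the same noise sample.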
In order for Adam to operate on this problem, it must be cast as a least-squares matrix problem. Recalling the formulation above, the objective is

\min_{\bar{\rho}} (\bar{GG} - gg \, \bar{\rho})^2,    (16)

where \bar{GG} is the measurement vector, \bar{\rho} is the density vector, and gg is the system matrix defined from Equation (8). The noise in position is considered when calculating gg, and the noise in gravity measurements is included in \bar{GG}.
Optimization is initialized using the default values \beta_1 = 0.9 and \beta_2 = 0.999 given in the original paper [10]. The step size, \alpha = 20, was empirically found to produce good results, but this parameter is problem-specific, since Kingma and Ba suggested an \alpha of 0.001. Performance is quantified by calculating the difference between the optimized density and the true density at each iteration.
Three test cases of varying complexity were selected that constitute a broad sample of the possible
input parameters. For the simplest case, the base results are presented, and then the neighboring
parameter space is explored by changing certain values. The conclusions from the simple analysis
will determine the settings for more complex cases.
Case 1: 1-Layer Octahedral Body
Base Configuration: The theoretical correlations for the base case are given in Figure 3. For each axis, the Zone ID refers to the index of that zone in the density vector, \bar{\rho}. Note that all diagonal components were set to zero in order for the color scale to be more intuitive. The most evident difference is the emergence of a visual distinction between edges and faces. Figure 3b shows consistently low-magnitude negative correlations of edges with faces, and mostly small positive correlations between faces. The outliers within the face zone, such as (13, 18), are likely caused by the symmetry of the body.
(a) Case 1: 1-probe correlation matrix   (b) Case 1: 10-probe correlation matrix
Figure 3: Correlation comparison for Case 1
The average standard deviation for the base case is 5.57 x 10^5 g/cm^3, an order of magnitude improvement over the one-probe case's average of 4.64 x 10^6 g/cm^3. However, since the average density of the body is 2.67 g/cm^3, both deviations suggest that a recovery will be imprecise.
In the absence of noise, the Adam algorithm shows consistent convergence in Figure 4, although the same mechanics that make it robust to noise also make it relatively slow. Nonetheless, the asymptotic convergence pattern was encouraging given the wide standard deviations from the observability analysis.
Figure 4: Case 1: Adam Convergence
Adding Noise: Previous investigation by the authors suggested that noisy optimization would be difficult, if not impossible. However, adding up to 10 meters of positional uncertainty resulted in convergence profiles nearly indistinguishable from the clean case in Figure 4. Zooming in to the level of Figure 5 is required to discern a difference. In the 10 cm and 1 m cases, the optimization actually performed better than the clean case. However, the result may be misleading; if the optimization were continued, the noisy cases would overshoot the target and settle with a higher final error.
Figure 5: Case 1: Final Iterations of Noise Comparison
Varying Sample Rate: Intuition suggests that increasing the number of measurements would improve the accuracy of the recovery. With the current formulation, there are two ways to obtain more measurements: increase the number of probes, or increase the sample rate of the probes. The effects of increased sample rate are illustrated in Figure 6, with an accuracy improvement of about 7 x 10^-7. Note that there is minimal benefit of a 1-second sample rate over the 5-second sample rate. These cases are not identical, due to random batch selection, but they appear to twist around each other, with neither displaying consistent gains over the other.
Varying Number of Probes: The other option to gather more measurements is to increase the number of probes. Additional probes are sent on unique trajectories, thereby sampling previously unexplored space. The effects shown in Figure 7 are of a similar order of magnitude to the sample rate analysis above. This similarity indicates that the variation could be a product of the stochasticity of the algorithm itself, via noise sampling and batch sampling, rather than an effect of the sample rate or the number of probes. Also, the final errors after 50,000 iterations do not support the hypothesis that more probes imply better recoveries. Therefore, other factors are affecting the recovery accuracy.
Varying Batch Size: The concept of batching, as discussed in the Adam Optimizer section, is implemented in the simulations, and two variables are introduced that control the process: batch size and batch frequency. In the previous two analyses, a large number of samples were available for use in the optimization, but since the batch size was constant throughout, each iteration only utilized 800 samples. By increasing the batch size, more samples are used in each iteration. The effects of varying the batch size are illustrated in Figure 8.

Figure 6: Case 1: Final Iterations of Sample Rate Comparison

Figure 7: Case 1: Final Iterations of Probe Number Comparison

Figure 8: Case 1: Batch Size Comparison

The drastic improvement in convergence speed from larger batch size can be explained by Kingma and Ba's SNR analogy: a more comprehensive sample of the data results in a more consistent gradient vector. Then, if the gradient is more stable, the algorithm will take larger steps. However, there are some trade-offs with respect to the runtime. Increasing the batch size is the same as increasing the height of the A matrix in Equation (15); as such, the linear algebra operations take longer to compute.
Considering the standard deviation predictions from above, the result for the 8000 batch size is particularly surprising. Despite the standard deviation being on the order of 10^5 g/cm^3, the densities are recovered to an accuracy of about 6 x 10^-4 g/cm^3.
Behavior Near Answer: All the tests, thus far, have investigated the behavior of the optimization
as it leaves the initial guess and moves towards the correct answer. However, it is also of interest to
know how the algorithm behaves in the vicinity of the correct answer. This zone is tested by passing
the true density distribution as the initial guess, and then observing whether the algorithm stays in
the neighborhood of the optimum. Given the results of the cases above, all scenarios in Figure 9
utilize the maximum possible batch size, as well as noise in position and gravity measurements.
The location to which the algorithm converges is a function of the random noise sample. The three cases shown in Figure 9 use different samples; thus, they converge to different locations. However, the behavior can still be analyzed. The 5-probe scenario converges to its final solution noticeably more slowly than the 10- and 15-probe cases.
Case 2: 3-Layer Random Sphere
The polyhedral body for case 2 is a sphere with noise applied to the radius of each vertex. The body consists of 560 unique density zones, in comparison to the 20 density zones of the octahedral body. Furthermore, two inner layers are added, resulting in a total of 1680 density zones.
The results from case 1 can assist in making an informed choice of parameters for case 2. A batch size of 2000 is arbitrarily selected as a balance between the predicted runtime and convergence speed. Note that the batch size is greater than the number of density zones, resulting in an overconstrained A matrix at each iteration. As a result, it is less probable to select a batch permutation that does not produce a reasonable gradient direction. In this case, fifteen probes were simulated, and are assumed to take a measurement every 10 seconds.

Figure 9: Case 1: Behavior of Algorithm Near Optimum
Base Configuration: Analysis of the theoretical observability, using the same technique as case 1, produced a less intuitive output. The standard deviation for many zones was returned as NaN, implying these areas cannot be observed at all. For the areas that returned a number, the average value was 2.18 x 10^14 g/cm^3; however, the results of case 1 suggest that a reasonably accurate answer can still be found. The NaN zones are illustrated by dark lines in Figure 10. Each layer of the polyhedron is represented by a third of the x and y axes, with the outermost layer shown in the top left of Figure 10. The concentration of dark lines in the bottom right implies that the inner layers are less discernible than the outer layer.
Due to the increased size of the A matrix, computations for case 2 are much slower than those for case 1. Therefore, only 1000 iterations are shown in Figure 11. Comparing the graph to Figure 4, it is clear that case 2 converges more slowly and less consistently. Although the A matrix is overconstrained, as previously mentioned, the ratio of the batch size to the total number of measurements is approximately 1:16. The low ratio implies higher turnover in each batch, resulting in a noisier gradient and a smaller step size.
Behavior Near Answer: In Figure 12, 1000 iterations are not enough to conclusively observe a
point of convergence. The divergence from the true answer is slower than the cases in Figure 9, due
to the high batch turnover and small step size. The fluctuations after the 900th iteration, marked by
the red circle, may indicate that the algorithm has settled near a point and will continue to oscillate,
but a longer simulation would be necessary to confirm this possibility.
Figure 10: Case 2: Correlations (Dark Blue = Unobservable)
Figure 11: Case 2: Initial Behavior
Figure 12: Case 2: Final Behavior
Case 3: 2-Layer Ellipsoid
Case 3 consists of a 2-layer ellipsoidal body with a total of 3600 unique density zones. The
simulation uses 20 probes and a sample rate of 15 seconds.
Base Configuration: As in case 2, many areas of the body appear unobservable using the method of Park et al. [8]. No clear patterns emerge between the outer and inner layers in Figure 13. The average deviation for the observable zones is 1.52 x 10^15 g/cm^3, only one order of magnitude larger than case 2, leading to the expectation of similar trends in the recovery.
Figure 13: Case 3: Correlations (Dark Blue = Unobservable)
The behavior using the average density as the initial guess is illustrated in Figure 14. Although the initial error is higher than in case 2, the reduction in error after 1000 iterations is greater, and the minimization is relatively more consistent. The consistency is a direct result of the higher initial error: closer to the answer, stochastic effects dominate, but in the initial iterations, the minimizing direction is clearly evident. Differences between the 20- and 25-probe simulations are due to the inherent randomness of the procedure, such as noise sampling and batching.
Figure 14: Case 3: Initial Behavior
Behavior Near Answer: When the algorithm is close to the true answer, the noise in the system is more apparent. No convergence is observed in Figure 15, and the erratic path of both cases indicates that the gradient is inconsistent. The large jump in the 20-probe case between 600 and 700 iterations (Figure 15) is due to batching effects. In this regime, multiple batches were sequentially selected that strongly pointed away from the true answer. If the scenario were recalculated, the increase may take place more gently, or may not be present at all. Gradient inconsistencies are partially caused by the ratio of the batch size to the number of density zones in the body. The selected batch size of 2000 is insufficient to fully constrain a body with 3600 parameters, but any tangible increase in batch size comes at the expense of runtime. In future work, it may be beneficial to dedicate computing time towards obtaining accurate, high-resolution recoveries, but the aim of this paper is to analyze trends of a prototype algorithm in simple cases.

Figure 15: Case 3: Final Behavior
Asteroids are of particular interest in the modern space industry due to their potential for ad-
vancing knowledge about the origins of the solar system. The techniques investigated in this paper
can support flyby missions and target selection for asteroid mining. In the authors’ previous work,
using a different optimization algorithm, a clear trend emerged that suggested more probes would
give a more accurate result. The results from this paper suggest that the choice and tuning of the optimization technique have a more significant effect, especially in noisy conditions, than simply increasing the number of probes. The most significant conclusion from the above research is the
apparent disagreement between the theoretical observability of a body, and the performance of the
Adam algorithm. Even with noise applied to the dataset, Adam was able to consistently move from
a decent initial guess to a more accurate solution. Future work will investigate this disconnect and
attempt to develop techniques for more effective accuracy prediction.
[1] J. Miller, A. Konopliv, P. Antreasian, J. Bordi, S. Chesley, C. Helfrich, W. Owen, T. Wang, B. Williams, D. Yeomans, and D. Scheeres, "Determination of Shape, Gravity, and Rotational State of Asteroid 433 Eros," Icarus, Vol. 155, 2002, pp. 3-17.
[2] D. Scheeres, R. Gaskell, S. Abe, O. Barnouin-Jha, T. Hashimoto, J. Kawaguchi, T. Kubota, J. Saito, M. Yoshikawa, N. Hirata, T. Mukai, M. Ishiguro, T. Kominato, K. Shirakawa, and M. Uo, "The Actual Dynamical Environment About Itokawa," 2006.
[3] H. F. Levison, C. Olkin, K. S. Noll, et al., "Lucy: Surveying the Diversity of the Trojan Asteroids," Lunar and Planetary Science XLVIII, 2017.
[4] W. G. Ledbetter, R. Sood, and J. Keane, "SmallSat Swarm Gravimetry: Revealing the Interior Structure of Asteroids and Comets," AAS/AIAA Astrodynamics Specialist Conference, Aug. 2018.
[5] R. A. Werner and D. J. Scheeres, "Exterior Gravitation of a Polyhedron Derived and Compared with Harmonic and Mascon Gravitation Representations of Asteroid 4769 Castalia," Celestial Mechanics and Dynamical Astronomy, Vol. 65, 1997, pp. 313-344.
[6] Y. Takahashi and D. Scheeres, "Morphology driven density distribution estimation for small bodies," Icarus, 2014, pp. 179-193.
[7] D. Scheeres, B. Khushalani, and R. Werner, "Estimating asteroid density distributions from shape and gravity information," Planetary and Space Science, Vol. 48, 2000, pp. 965-971.
[8] R. S. Park, R. A. Werner, and S. Bhaskaran, "Estimating Small-Body Gravity Field from Shape Model and Navigation Data," Journal of Guidance, Control, and Dynamics, Vol. 33, No. 1, 2010, pp. 212-221. doi: 10.2514/1.41585.
[9] K. Carroll and D. Faber, "Tidal Acceleration Gravity Gradiometry for Measuring Asteroid Gravity Field From Orbit," Oct. 2018.
[10] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," CoRR, Vol. abs/1412.6980, 2014.
[11] S. J. Reddi, S. Kale, and S. Kumar, "On the Convergence of Adam and Beyond," International Conference on Learning Representations, 2018.
[12] T. Dozat, "Incorporating Nesterov Momentum into Adam," 2015.
[13] J. Atchison, R. Mitch, and A. Rivkin, "Swarm Flyby Gravimetry," tech. rep., The Johns Hopkins University Applied Physics Laboratory, Apr. 2015.
[14] A. Vroom, M. D. Carlo, J. M. R. Martin, and M. Vasile, "Optimal Trajectory Planning for Multiple Asteroid Tour Mission by means of an Incremental Bio-Inspired Tree Search Algorithm," Dec. 2016.
[15] Y. Takahashi and D. Scheeres, "Small body surface gravity fields via spherical harmonic expansions," Celestial Mechanics and Dynamical Astronomy, Vol. 119, June 2014, pp. 169-206.
[16] D. Yeomans, P. Antreasian, J.-P. Barriot, et al., "Radio Science Results During the NEAR-Shoemaker Spacecraft Rendezvous with Eros," Science, Vol. 289, Sept. 2000, pp. 2085-2088.
[17] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, "Human-level control through deep reinforcement learning," Nature, Vol. 518, Feb. 2015, p. 529. doi: 10.1038/nature14236.
[18] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," CoRR, Vol. abs/1412.6980, 2014.
[19] J. Duchi, E. Hazan, and Y. Singer, "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization," Journal of Machine Learning Research, Vol. 12, July 2011, pp. 2121-2159.
[20] N. Stacey and S. D'Amico, "Autonomous Swarming for Simultaneous Navigation and Asteroid Characterization," AAS/AIAA Astrodynamics Specialist Conference, Aug. 2018.
... However, the asteroid density distribution is quite unlikely to be accurately approximated by the centroid-vertex tetrahedral elements. Takahashi et al. [23,28] and Ledbetter et al. [29] divided the centroid-vertex tetrahedron element into layers, but the accuracy of density distribution estimation is still limited. ...
Full-text available
In this article, we present a free-vertex tetrahedral finite-element representation of irregularly shaped small bodies, which provides an alternative solution for estimating asteroid density distribution. We derived the transformations between gravitational potentials expressed by the free-vertex tetrahedral finite elements and the spherical harmonic functions. Inversely, the density of each free-vertex tetrahedral finite element can be estimated via the least-squares method, assuming a spherical harmonic gravitational function is present. The proposed solution is illustrated by modeling gravitational potential and estimating the density distribution of the simulated asteroid 216 Kleopatra.
Full-text available
A growing interest in small body exploration has motivated research into the rapid characterization of near-Earth objects to meet economic or scientific objectives. Specifically, knowledge of the internal density structure can aid with target selection and enables an understanding of prehistoric planetary formation to be developed. To this end, multi-layer extensions to the polyhedral gravity model are suggested, and an inversion technique is implemented to present their effectiveness. On-orbit gravity gradiometry is simulated and employed in stochastic and deterministic algorithms, with results that imply robustness in both cases.
- General topic: a new approach to measuring planetary gravitational fields from spacecraft, Tidal Acceleration Gravity Gradiometry.
- This presentation: one aspect of that approach, its particular effectiveness for small planetary bodies. See the paper for a wider discussion.
In this paper, a combinatorial optimisation algorithm inspired by the Physarum Polycephalum mould is presented and applied to the optimal trajectory planning of a multiple asteroid tour mission. The Automatic Incremental Decision Making And Planning (AIDMAP) algorithm is capable of solving complex discrete decision making problems with the use of the growth and exploration of the decision network. The stochastic AIDMAP algorithm has been tested on two discrete astrodynamic decision making problems of increased complexity and compared in terms of accuracy and computational cost to its deterministic counterpart. The results obtained for a mission to the Atira asteroids and to the Main Asteroid Belt show that this non-deterministic algorithm is a good alternative to the use of traditional deterministic combinatorial solvers, as the computational cost scales better with the complexity of the problem.
The dynamical environment about and on Asteroid 25143 Itokawa is studied using the shape and rotation state model estimated during the close proximity phase of the Hayabusa mission to that asteroid. We first discuss the general gravitational properties of the shape model assuming a constant density. Next we discuss the actual dynamical environment about this body, both on the surface and in orbit, and consider the orbital dynamics of a Hayabusa-like spacecraft. Then we detail one of the approaches used to estimate the mass of the body, using optical and lidar imaging, during the close proximity phase.
Prior to the Near Earth Asteroid Rendezvous (NEAR) mission, little was known about Eros except for its orbit, spin rate, and pole orientation, which could be determined from ground-based telescope observations. Radar bounce data provided a rough estimate of the shape of Eros. On December 23, 1998, after an engine misfire, the NEAR-Shoemaker spacecraft flew by Eros on a high-velocity trajectory that provided a brief glimpse of Eros and allowed for an estimate of the asteroid's pole, prime meridian, and mass. This new information, when combined with the ground-based observations, provided good a priori estimates for processing data in the orbit phase.
The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
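The deep Q-network described above rests on the temporal-difference Q-learning update. As a minimal illustration of that underlying update (tabular, not the deep-network version in the paper; the toy chain environment and hyperparameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tabular temporal-difference Q-learning on a toy 4-state chain:
# states 0..3, actions 0 (left) / 1 (right), reward 1 only on reaching state 3.
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps_greedy = 0.5, 0.9, 0.3

for episode in range(300):
    s = 0
    while s != 3:
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps_greedy else int(np.argmax(Q[s]))
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == 3 else 0.0
        # TD update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

# After training, the greedy policy should move right toward the reward.
greedy = [int(np.argmax(Q[s])) for s in range(3)]
```

The deep Q-network replaces the table `Q` with a neural network over raw pixel inputs, but the bootstrapped target `r + gamma * max Q(s', a')` is the same.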
Conventional gravity field expressions are derived from Laplace’s equation, the result being the spherical harmonic gravity field. This gravity field is said to be the exterior spherical harmonic gravity field, as its convergence region is outside the Brillouin (i.e., circumscribing) sphere of the body. In contrast, there exists its counterpart called the interior spherical harmonic gravity field for which the convergence region lies within the interior Brillouin sphere that is not the same as the exterior Brillouin sphere. Thus, the exterior spherical harmonic gravity field cannot model the gravitation within the exterior Brillouin sphere except in some special cases, and the interior spherical harmonic gravity field cannot model the gravitation outside the interior Brillouin sphere. In this paper, we will discuss two types of other spherical harmonic gravity fields that bridge the null space of the exterior/interior gravity field expressions by solving Poisson’s equation. These two gravity fields are obtained by assuming the form of Helmholtz’s equation to Poisson’s equation. This method renders the gravitational potentials as functions of spherical Bessel functions and spherical harmonic coefficients. We refer to these gravity fields as the interior/exterior spherical Bessel gravity fields and study their characteristics. The interior spherical Bessel gravity field is investigated in detail for proximity operation purposes around small primitive bodies. Particularly, we apply the theory to asteroids Bennu (formerly 1999 RQ36) and Castalia to quantify its performance around both nearly spheroidal and contact-binary asteroids, respectively. Furthermore, comparisons between the exterior gravity field, interior gravity field, interior spherical Bessel gravity field, and polyhedral gravity field are made and recommendations are given in order to aid planning of proximity operations for future small body missions.
We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions. The method is straightforward to implement and is based on adaptive estimates of lower-order moments of the gradients. The method is computationally efficient, has low memory requirements and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The method exhibits invariance to diagonal rescaling of the gradients by adapting to the geometry of the objective function. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. We demonstrate that Adam works well in practice when experimentally compared to other stochastic optimization methods.
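The moment-based update summarized above can be sketched in a few lines; the quadratic test objective and the hyperparameter values below are illustrative, using the defaults suggested in the Adam paper:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and its
    square, with bias correction for the zero initialization of m and v."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad**2    # second moment (uncentered variance)
    m_hat = m / (1 - beta1**t)               # bias-corrected estimates
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
theta, m, v = np.array([0.0]), np.zeros(1), np.zeros(1)
for t in range(1, 5001):
    grad = 2.0 * (theta - 3.0)
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
```

Note the invariance mentioned in the abstract: because the step is `m_hat / sqrt(v_hat)`, rescaling the gradient by a constant leaves the update essentially unchanged.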
We explore methods to detect and characterize the internal mass distribution of small bodies using the gravity field and shape of the body as data, both of which are determined from the orbit determination process. The discrepancies in the spherical harmonic coefficients are compared between the measured gravity field and the gravity field generated by a homogeneous density assumption. The discrepancies are shown for six different heterogeneous density distribution models and two small bodies, namely 1999 KW4 and Castalia. Using these differences, a constraint is enforced on the internal density distribution of an asteroid, creating an archive of characteristics associated with the same-degree spherical harmonic coefficients. Following the initial characterization of the heterogeneous density distribution models, a generalized density estimation method to recover the hypothetical (i.e., nominal) density distribution of the body is considered. We propose this method as the block density estimation, which dissects the entire body into small slivers and blocks, each homogeneous within itself, to estimate their density values. Significant similarities are observed between the block model and mass concentrations. However, the block model does not suffer errors from shape mismodeling, and the number of blocks can be controlled with ease to yield a unique solution to the density distribution. The results show that the block density estimation approximates the given gravity field well, yielding higher accuracy as the resolution of the density map is increased. The estimated density distribution also computes the surface potential and acceleration within 10% for the particular cases tested in the simulations, an accuracy that is not achievable with the conventional spherical harmonic gravity field.
The block density estimation can be a useful tool for recovering the internal density distribution of small bodies for scientific purposes, and for mapping out the gravity field environment in close proximity to a small body’s surface for accurate trajectory design and safe navigation in future missions.
A least-squares approach for estimating the internal density distribution of an asteroid is presented and applied to a simple polyhedron asteroid shape. The method assumes that the asteroid gravity field is measured to a specified degree and order and that a polyhedral model of the asteroid is available and has been discretized into a finite number of constant density polyhedra. The approach is derived using several basic properties of spherical harmonic gravitational expansions and can explicitly accommodate a fully correlated covariance matrix for the estimated gravity field. For an asteroid shape discretized into M constant density polyhedra and a gravity field measured to degree and order N, the least-squares problem is under-determined if M > (N+1)^2 and is over-determined if M < (N+1)^2.
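The dimensionality argument above can be illustrated numerically. The sketch below assumes a hypothetical linear sensitivity matrix A mapping block densities to spherical harmonic coefficients (the actual matrix depends on the polyhedral shape model); it shows that the noiseless over-determined case recovers the densities uniquely, while the under-determined case only pins down a minimum-norm solution:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4                      # gravity field measured to degree and order N
n_coef = (N + 1) ** 2      # 25 spherical harmonic coefficients

# Over-determined case: fewer density blocks than coefficients (M < (N+1)^2).
M = 10
A = rng.standard_normal((n_coef, M))        # hypothetical sensitivity matrix
rho_true = rng.uniform(1000.0, 3000.0, M)   # block densities, kg/m^3
c = A @ rho_true                            # simulated harmonic coefficients
rho_est, *_ = np.linalg.lstsq(A, c, rcond=None)
# Noiseless and full column rank, so the densities are recovered exactly.

# Under-determined case: more blocks than coefficients (M > (N+1)^2).
M_u = 40
A_u = rng.standard_normal((n_coef, M_u))
rho_u = rng.uniform(1000.0, 3000.0, M_u)
c_u = A_u @ rho_u
rho_min, *_ = np.linalg.lstsq(A_u, c_u, rcond=None)
# The minimum-norm solution reproduces the measured coefficients, but
# infinitely many density maps do so; rho_min need not equal rho_u.
```

This is why the referenced works constrain the number of blocks, or add regularization, when M exceeds the number of measured coefficients.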