69th International Astronautical Congress, Bremen, Germany. Copyright © 2018 by the authors. All rights reserved.
IAC–18–A3,IP,23,x47008
Finding the North on a Lunar Microrover: a Lunar Surface Environment
Simulator for the Development of Vision-Based Navigation Pipelines
Fabian Dubois
ispace inc, Japan, f-dubois@ispace-inc.com
Louis-Jerome Burtz
ispace inc, Japan, l-burtz@ispace-inc.com
Oriol Gasquez
ispace inc, Spain, o-gasquez@ispace-inc.com
Takahiro Miki
ispace inc, Japan, t-miki@ispace-inc.com
Over the coming years, ispace plans to deploy several microrovers to the lunar surface. Localization systems
are needed for efficient exploration of the lunar surface and are the foundation for map making, a core
ispace goal for enabling in-situ resource identification and utilization. However, the Moon lacks a global
positioning infrastructure and has no global magnetic field. In this paper, we propose an approach to estimate the heading of the rover based on data from the sensors already baselined on the rover (an Inertial Measurement Unit and fixed monocular cameras), within the rover's limited computing resources. We focus on sharing the challenges related to creating and validating a vision-based algorithm for the lunar surface. We
implement a dynamic lunar environment simulation, based on the Gazebo framework, to generate camera
images as would be obtained from the rover in various terrain and lighting conditions. The results of heading
estimation performed with a convolution method and a machine learning method are compared. Validation
activities of the simulation and the two methods in physical analogs (rover with flight model cameras and
mobility systems in a lunar lighting analog environment) are presented and discussed. The paper concludes
by summarizing the next steps needed to improve the heading estimation accuracy.
I. Introduction
I.i The ispace mission
The mission of ispace is to deploy several micro-
rovers to the lunar surface in missions of increasing
capability over the coming years. A localization and
mapping system is needed to ensure that:
•The rover is able to reach its mission objectives
on the Moon.
•We can provide the context information that
payload customers need to understand the data
from their instruments.
•The line of sight communication link between the
rover and the lander is kept at all times.
The baseline rover used for the development we present here is the Sorato Flight Model rover, seen in Fig. 1 [1]. The equipment of the rover relevant to this paper includes:
•Two wide angle cameras (150 degrees horizontal
field of view) on the left and right sides
Fig. 1: Sorato Flight Model rover (2018)
•Two narrow angle cameras (50 degrees horizon-
tal field of view) on the front and back.
•An Inertial Measurement Unit
•A clock reference synchronized with Earth
Together, the four cameras provide 360 degree
panoramic coverage (Figure 2).
Fig. 2: rover camera configuration
I.ii The lunar environment and the need for heading
estimation
The knowledge of the heading angle of the rover
is key to ensure that the rover complies with the fol-
lowing mission constraints:
•Power generation: orientation of the body-
mounted solar panels relative to the Sun to en-
sure maximum power generation
•Thermal management: orientation of the radi-
ators relative to the Sun to ensure acceptable
operational temperatures for the electronics
However, the Moon lacks a global positioning infrastructure (such as GPS on Earth) and has no global magnetic field (preventing the use of a compass).
Furthermore, the lunar regolith provides a challeng-
ing environment for wheel odometry based localiza-
tion techniques due to wheel slip in the soft terrain
or when traversing over obstacles. Roll and pitch es-
timation is readily available through gravity vector
sensing from an IMU, but the heading angle estima-
tion suffers from gyroscope drift over time.
Therefore, heading estimation (absolute yaw an-
gle of the rover with respect to the North/West/Up
reference frame of the lunar surface as illustrated in
Figure 3) is not readily available.
I.iii Constraints on heading estimation
Additional constraints on the strategy for the
heading estimation are due to the strong focus of
the ispace platform to be the most compact and
lightweight planetary rover ever flown.
This architecture (size, mass, electrical power and
computing capability) precludes the use of terrestrial
full-fledged SLAM solutions. It also requires avoiding single purpose sensors (such as a dedicated star tracker).
Fig. 3: rover heading definition
It prevents implementing some of the strategies used in recent Mars exploration missions. Sun sensing for heading estimation was implemented on NASA's Mars Exploration Rovers [2, 3] and actively contributed to the navigation capabilities of the rovers [4]. Sun sensors are also planned for future missions such as ESA's ExoMars rover, where they are expected to improve localization accuracy [5]. However, both approaches rely on an actuated (pan and tilt) mast camera that is not compatible with the ispace mass and complexity reduction requirements.
More generally, on Earth, work has been done to estimate absolute heading and thus improve odometry from Sun illumination, even in cases where the Sun is not directly visible, such as in urban environments [6, 7].
This paper builds on such previous work with a spe-
cial focus on the lunar environment and limited com-
puting resources.
Compared to other terrestrial implementations,
the work here aims at being robust to the unstruc-
tured and feature-less environment of the Moon. Fur-
thermore, the extreme lighting conditions and high
dynamic range of the lunar surface challenge the
imaging sensors. The effect of lens flare and sensor
saturation must be taken into account by the algo-
rithms.
I.iv Outline
To provide a solution within the restricted con-
text defined above, the paper is organized as follows:
Section 2 provides the overview of the heading esti-
mation pipeline. In Section 3 we describe the imple-
mentation of a dynamic lunar environment simula-
tion based on the Gazebo framework to generate
camera images as would be obtained from the rover
in various terrain and lighting conditions. Section 4
presents the method and results of heading estima-
tion performed with our convolution based method.
Section 5 presents the advantages of using a machine
learning based method. Validation activities of the
simulation and the two methods in physical analogs
(rover with flight model cameras and mobility sys-
tems in a lunar lighting analog environment) are pre-
sented and discussed together in each of the relevant
sections. The paper concludes in Sections 6 and 7 by
summarizing the next steps needed to improve the
heading estimation accuracy and robustness as well
as how to use this estimation as an input to the
overall localization and mapping pipeline.
II. Heading estimation concept
The Sorato rover is equipped with four cameras
that provide a 360 degree panoramic coverage (see
Figure 2). Intuitively, we can determine the rover-Sun
vector by either:
•Finding the disc of the Sun in the camera images.
•Detecting and estimating the direction of the
rover's own shadow on the ground.
Then, we can identify the rover's heading taking
into account:
•The camera calibration parameters (including
lens distortion).
•The rover attitude with respect to gravity (roll
and pitch given by the IMU)
•Time of day, ephemeris data and rough localiza-
tion (kilometer precision from knowledge of the
landing site).
This process is summarized in Figure 4.
Fig. 4: Overview of the visual heading estimation
pipeline
The orientation of each camera on the rover is ex-
pressed relative to the rover body frame by a quater-
nion qcam. We obtain the relative gravity vector ori-
entation qgravity from the fusion of the IMU’s ac-
celerometers and gyroscopes.
For the ephemeris system, we use the NASA SPICE toolkit [8]. Given the time t and the approximate latitude and longitude coordinates of the rover on the Moon, we can query the Sun vector in the local topocentric frame. We represent this Sun vector as \vec{S}_{topo}.
The system of equations to solve for the heading q_{heading} becomes:

\vec{S}_{camera} = q * \vec{S}_{topo}    [1]

with  q = q_{cam} \, q_{rover}    [2]

and   q_{rover} = q_{gravity} \, q_{heading}    [3]
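To make the use of these equations concrete, the sketch below solves them for the yaw angle once a Sun vector has been measured in the camera frame. It is a minimal illustration only: the function name is ours, scipy rotations stand in for whatever quaternion library runs on the rover, the topocentric Sun vector is assumed to come from the SPICE query described above, and the sign of the recovered yaw depends on the exact frame conventions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R


def estimate_heading_deg(s_camera, s_topo, q_cam, q_gravity):
    """Solve equations [1]-[3] for the heading (yaw) angle.

    s_camera : unit Sun vector measured in the camera frame (from the image)
    s_topo   : unit Sun vector in the local topocentric North/West/Up frame
               (e.g. obtained from a SPICE ephemeris query)
    q_cam    : camera-to-body mounting rotation (scipy Rotation, from calibration)
    q_gravity: roll/pitch attitude from the IMU (scipy Rotation, zero yaw)
    """
    # Undo the camera mounting and the roll/pitch rotations.  What remains is
    # the Sun vector expressed in a topocentric frame rotated only by the yaw.
    s_yawed = (q_cam * q_gravity).inv().apply(s_camera)

    # The heading is the azimuth offset between the horizontal projections of
    # the measured and the predicted Sun directions.
    az_measured = np.arctan2(s_yawed[1], s_yawed[0])
    az_expected = np.arctan2(s_topo[1], s_topo[0])
    yaw = az_expected - az_measured  # sign convention depends on frame definitions

    # Wrap to [-180, 180) degrees.
    return np.degrees((yaw + np.pi) % (2.0 * np.pi) - np.pi)
```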
The main challenge is thus the estimation of the Sun vector \vec{S}_{camera} in the camera frame from the image data. In the next section, we present the simulation environment that we developed to devise and test ways to estimate that vector in lunar lighting conditions. Then, Sections 4 and 5 present the two most promising methods. Both methods are used exclusively with the rover's side cameras: their large field of view makes them more appropriate for the task (more sky area covered) and their position near the solar panels makes them more likely to catch the Sun image given the operational constraints.
III. Simulation
III.i Existing analogue datasets
We used the lunar analogue experimen-
tal facility of the Japan Aerospace Exploration
Agency/Institute of Space and Astronautical Science
(JAXA/ISAS) Sagamihara campus to gather experi-
mental data with our Flight Model rover.
We also collated publicly available images from
previous missions (Apollo and Chang’e). These pic-
tures are taken in the real target environment but
with different hardware compared to our flight model.
Of particular interest in the rest of the paper, we
assembled the following two reduced datasets:
•A dataset of 9 images from Apollo missions con-
taining lens flares
•A dataset of 20 images taken with our flight model in Sagamihara. They constitute a spot turn dataset for which approximate ground truth is known (10 degrees accuracy)
However, these datasets are limited in:
•Size: it is expensive to generate analogs and an-
notate datasets with ground truth.
•Fidelity with reality: lunar illumination condi-
tions and regolith optical properties are chal-
lenging to replicate. Furthermore, the camera
sensor technology is radically different for the
Apollo dataset. The sensor resolution is lower
for the Chang'e dataset.
•Time: if we change the rover's camera configuration (lens, sensor, orientation), we must create the datasets again.
Given these limitations, it is difficult to test the robustness of the algorithms we develop. To remedy this problem, we also created simulated environments for the optimization and validation of algorithms.
III.ii Simulation scope and needs
Rover simulations have been developed previously for the validation of traverse planning [9] and for testing of surface reconstruction from depth images [10]. Planetary environment simulation tools have also been developed [11] with a focus on multi-scale rendering for the landing phase.
We use Gazebo [12] as our simulation framework. It combines a 3D rendering engine and a physics engine. It enables the control of the rover's joints as well as the creation of simulated sensor outputs. Of particular interest for us are the camera images, the IMU and the wheel odometry outputs. Additional benefits of this framework include being open-source and easy to integrate with the Robot Operating System (ROS) [13] ecosystem. For our purpose, the focus is on optical properties and slow-dynamics physics.
III.iii Simulation of the lunar surface
For the lunar surface environment, we use a 2 m/px Digital Terrain Model (DTM) from the NASA LRO Narrow Angle Camera (NAC) [14] as a base. The Sun is approximated by a single directional light (location at infinity). We use multiple normal maps for texturing: this enables viewing-angle and illumination dependent rendering of the surface by modelling the ground's surface with finer precision (cm or mm scale).
Fig. 5: DTM
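For illustration, the Sun azimuth and elevation used to configure this directional light (the same angles listed later in Table 1) can be converted into a light direction vector with a standard spherical-to-Cartesian conversion. The sketch below is illustrative only; the axis convention and the function name are our assumptions, not taken from our Gazebo world description.

```python
import numpy as np


def sun_light_direction(azimuth_deg, elevation_deg):
    """Direction in which the simulated directional light shines (pointing
    from the Sun towards the terrain) for a Sun at the given azimuth and
    elevation.  Axis conventions here are assumptions for illustration."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    # Unit vector pointing towards the Sun...
    to_sun = np.array([np.cos(el) * np.cos(az),
                       np.cos(el) * np.sin(az),
                       np.sin(el)])
    return -to_sun  # ...and the light travels the opposite way
```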
III.iv Rover model
For the rover and its cameras, we model:
•Simplified representation of the rover's main components in terms of volumes (body, wheels
and antenna) that are important to generate
shadows.
•Simplified representation of the rover’s mechani-
cal properties (mass, differential gearbox, motor
torque) that are important for generating the dy-
namics of the system.
•Use of geometric parameters of cameras (relative
position on the rover).
•Lens model based on physical lens of the rover
(projection function, field of view, aspect ratio).
•Lens flare: the effect is dominant over the disc
image, as confirmed by analogues and Apollo im-
ages.
Fig. 6: Rover model in the simulation
III.v Comparison with other datasets
The lunar surface is characterized by a dark sky
with a sharply defined horizon, featureless landscape
and varied relief. Special focus is placed on surface
textures and shadow rendering. For illustration, Figure 7 compares a picture taken by Apollo astronauts (top) with our simulated landscape (bottom).
Fig. 7: Landscape taken during Apollo mission (top),
simulation rendering (bottom)
Lens flare is the result of intra-lens reflection and
sensor output saturation. It is dominant over the Sun
disc image. Additional "ghost" disks and chromatic
aberrations can create challenging configurations for
algorithms. Figure 8 compares an Apollo image to
an image taken in physical analogue and an image
output by our simulation environment.
Shadows tend to be sharp because the absence of
atmospheric diffusion creates little ambient illumina-
tion. One key aspect of Sun based heading estimation
is to use the rover like the gnomon of a sundial. By
detecting the shadow of the rover, we can infer the
position of the Sun. Figure 9 compares an Apollo
image to an image taken in physical analogue and an
image output by our simulation environment.
Overall, in this first iteration, we have created an
environment that is close enough to real Moon views
to enable algorithm development. However, signifi-
cant improvement is still possible.
III.vi Generated datasets
Fig. 8: Lens flare. Image from Apollo (top), sample from Sagamihara (middle), simulation (bottom)
For the results detailed in the next sections, we created 4 simulated datasets (Table 1). However, we can generate an unlimited number of variations, given enough time. We have been running the simulation at real-time speed to generate these, but given enough computing power, we could generate the data at an accelerated pace.
ID Samples Sun el. Sun az. Position
1 3741 21 6 hill
2 707 8.6 108 near crater
3 2298 9.8 51 near crater
4 354 15 22 rock field
Table 1: Datasets description, including Sun position
in the environment. The angles are relative to the
DTM reference frame. Changes in elevation and
azimuth create variability in illumination condi-
tions. In each dataset, the trajectory of the rover
is a loop to capture all relative azimuths in the
camera frame
We only covered a limited range of Sun elevation,
but as the rover is moving across slopes in datasets
1, 2 and 3, there is additional variation in terms of
relative elevation in the camera reference frame.
Fig. 9: Shadows of the rover on the Moon. Yutu rover
from the Chang'e mission (top), sample from Sagamihara (middle), simulation (bottom)
IV. Convolution method and results
IV.i Method
We use a method similar to the sliding window method used by the Mars Exploration Rover team [3].
A major difference is that their use of hardware filters
allows them to directly image the Sun disc and thus
theoretically determine the optimal size of the win-
dow so that it matches the Sun image. In our case,
the Sun image is not separable from the lens flare ar-
tifacts. Moreover we plan to use the same images as
the ones used by operators, i.e. using the same gain
and exposure optimized for overall visual balance. It
means the final size of the bright area around the Sun
will vary according to the Sun position, landscape and
rover position and cannot be easily predicted.
The main steps of the convolution based method are as follows:
1. Convert the image to grayscale (intensity channel), resized to 128x72 pixels.
2. Threshold the image. Given a threshold thresh:

   Im_{thresh}(x, y) = \begin{cases} 1, & \text{if } Im_{source}(x, y) > thresh \\ 0, & \text{otherwise} \end{cases}    [4]

   We use half of the maximum intensity as the threshold. The goal is to prevent too many irrelevant pixels from contributing to the following operations.
3. For each pixel of the image, we compute the sum of the intensity of the pixel and its neighbours (defined by a kernel of diameter r). This can be written as the convolution of equation [5], where * is the convolution operator:

   Im_{filtered} = kernel * Im_{thresh}    [5]

4. Find the coordinates of the maximum in the image:

   u_{sun}, v_{sun} = \operatorname{argmax}(Im_{filtered})    [6]

5. We can then get homogeneous coordinates via the inverse projection function:

   x/z, y/z = f^{-1}(u, v)    [7]

   with f being the camera projection model:

   u, v = f(x/z, y/z)    [8]
6. And finally get the Sun vector relative to the
camera frame by choosing an arbitrary value for
z.
Without loss of generality, we use the stereo-
graphic fish-eye projection model. It maps accurately
to our lens in the center part of the field of view.
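As an illustration of steps 2 to 6, the sketch below implements the thresholding, the neighbourhood convolution and the argmax search with NumPy/SciPy, followed by an inverse stereographic fish-eye projection. It is a sketch under our own assumptions (function names, a disc-shaped kernel, and a principal point and focal length known from calibration), not the flight implementation.

```python
import numpy as np
from scipy.ndimage import convolve


def detect_sun_pixel(gray, r=11):
    """Steps 2-4: threshold at half the maximum intensity, sum over a
    neighbourhood, and locate the maximum response.

    gray : 2-D float array, already converted to grayscale and resized
           (128x72 in the text above).
    r    : kernel diameter in pixels.
    """
    # Step 2: binary threshold (eq. [4]).
    im_thresh = (gray > 0.5 * gray.max()).astype(np.float32)

    # Step 3: convolution with a disc-shaped kernel of diameter r (eq. [5]).
    half = r // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    kernel = (xx ** 2 + yy ** 2 <= half ** 2).astype(np.float32)
    im_filtered = convolve(im_thresh, kernel, mode='constant')

    # Step 4: pixel coordinates of the maximum response (eq. [6]).
    v_sun, u_sun = np.unravel_index(np.argmax(im_filtered), im_filtered.shape)
    return u_sun, v_sun


def pixel_to_sun_vector(u, v, cx, cy, f):
    """Steps 5-6 under an ideal stereographic fish-eye model (assumed
    principal point (cx, cy) and focal length f in pixels, from calibration).
    Returns a unit Sun vector in the camera frame (z along the optical axis)."""
    du, dv = u - cx, v - cy
    # Inverse stereographic projection: image radius rho = 2 f tan(theta / 2).
    rho = np.hypot(du, dv)
    theta = 2.0 * np.arctan2(rho, 2.0 * f)   # angle from the optical axis
    phi = np.arctan2(dv, du)                 # azimuth around the optical axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```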
The results of this method are detailed in the next
paragraphs.
IV.ii Sun detection result
We first check the capacity to detect the Sun in the images compared to known ground truth, i.e. whether there is an overlap between the detection and the flare image. We report the results in Table 2.
For the Sagamihara dataset, we used a subset of 8 images that contain the spotlight. For the Apollo dataset, we resized all pictures to a width of 128 pixels, keeping the aspect ratio. The pictures may have been taken in very different conditions and the sizes of the lens flare's main bloom vary considerably (a ratio of 5 between the largest and the smallest). In fact, the failures for kernel sizes above 11 px in the Apollo dataset always come from the same image, where the bloom appears particularly small. At this stage, this suggests that a size of 11 to 15 pixels is probably the most adequate for our task. Larger sizes are preferable as they result in a smoother activation and thus a more stable result under noise. However, too large a size can also result in erroneous detections in bright areas of the ground with few shadows.
Fig. 10: Example of detection of the Sun with lens flare: result after filter application (top); the kernel boundary overlaid on the resized input, centered on the detected position (bottom)
IV.iii Results in the analogue environment dataset
We first test on the spot turn dataset taken in Sagamihara. We show a comparison of ground truth and results but do not provide an accuracy metric due to the large ground truth error.
As seen in Figure 11, the method is able to correctly estimate the heading. The ground truth values being approximate, we cannot draw further conclusions at this stage. We plan to use an optical tracker to gather accurate ground truth in our next experiments.

r    Sagamihara   Apollo   Gazebo
5    1.0          1.0      1.0
11   1.0          1.0      1.0
15   1.0          0.83     1.0
17   0.94         0.83     1.0
21   0.71         0.83     1.0

Table 2: Detection rate of the Sun in the different datasets for different kernel radii r

Fig. 11: Measurement result on the spot turn dataset
IV.iv Results in the simulated datasets
We obtain a good match between expected Sun
positions and detected positions, with error under 1
pixel in most cases, as seen in Fig. 12.
We expected the error to be a displacement of the
detected position along the line between the Sun po-
sition and center of the image. The error we observe
seems more likely due to the decreased resolution.
We will need to investigate further and attempt better modelling of the lens flare.
Figure 13 shows that we achieve a mean absolute
heading error of 0.8 degrees with a standard deviation
of 0.7 degrees in the test data. We could further
improve the result by removing the bias or increasing
the resolution. This result includes some detections
when the Sun is in fact just outside the image frame
and the detection relies exclusively on the lens flare.
These samples exhibit lower accuracy (as seen on the right side of Figure 13).
Fig. 12: Ground truth position of the Sun in the image vs the estimated position
With a maximum absolute heading error of less than 5 degrees, this method is very promising. However, we need to make the dataset more challenging to confirm its applicability in real conditions and its robustness to edge cases. Furthermore, the method is
straightforward, but the main assumption is that the
Sun is visible in the image frame. If not, then the
result returned by this method will be meaningless.
This means that the method is limited to the times
during the mission when: 1. The Sun is low enough
on the horizon to be visible in the field of view of
the camera (maximum elevation of 32 degrees on flat
ground). 2. The rover’s roll and pitch and terrain
features do not contribute to blocking or distorting
the Sun disk.
V. Machine learning method and results
V.i Output representation and Loss function
A common difficulty in machine learning is to appropriately choose the target representation space so that similar targets share a similar representation. Orientations pose the problem that a value of 2π is in fact the same as a value of 0. The use of quaternions as a representation for learning illumination angles was proposed in [15]. While we could work with quaternions, we finally chose, similarly to [16], to learn a normalized Sun pointing vector in the camera frame. It has the advantage of not being over-parametrized, requiring only 3 parameters as a representation instead of 4.
We use the L2 distance between the target vector and the normalized output vector as a cost function:

L = \| \hat{s} - s \|_2    [9]
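As a minimal sketch of this cost function (assuming batched Chainer variables of shape (N, 3) and a raw, unnormalized network output; the function name is ours):

```python
import chainer.functions as F


def sun_vector_loss(y, t):
    """Mean L2 distance between the normalized predicted Sun vector
    and the target unit vector (eq. [9])."""
    s_hat = F.normalize(y)  # project the raw output onto the unit sphere
    return F.mean(F.sqrt(F.sum((s_hat - t) ** 2, axis=1)))
```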
Fig. 13: Azimuth error depending on angular dis-
tance from the optical axis. (The box extends from the Q1 to the Q3 quartile, with a line at the median. The whiskers extend from the edges of the box to
show the range of the error. Outliers are shown
as circles past the end of the whiskers)
V.ii Neural network architecture
We choose a neural network for machine learning due to the high level of control over the architecture design, the ability to learn highly non-linear relations between inputs and outputs, and the broad availability of tools. We expect the network to be able to use clues from the lens flare as well as the shadows in the scene, particularly the rover's own shadow. Most existing image recognition networks have been trained on largely irrelevant datasets, including urban environments (KITTI, MIT Places dataset). Manually engineered features used in [17], such as sky illumination (with the presence of an atmosphere), straight shadow edges and flat vertical surfaces, are generally not available for lunar landscapes. We thus choose to create our own model, trained without transfer learning. We implement the typical features of modern deep learning architectures (residual blocks [18], batch normalization [19], Leaky Rectified Linear Unit (ReLU) activations [20]) but at a reduced scale (tens of layers vs. the common hundreds). We want to keep the size of the model
small in order to:
•Improve the onboard inference performance.
•Improve the generalisation from simulation to
real data.
•Improve analysis of estimation errors.
To keep the model size small, we use domain knowledge of the problem. We divide our network into three main components (Fig. 14).
Fig. 14: Neural network architecture
The first component (Sun image based estimator) processes the upper half of the image via 1 residual convolution block followed by 3 fully connected layers to provide a Sun vector estimate based on the sky image. We expect it to learn to detect the Sun position and translate it into a vector (Fig. 14). The second component (shadow based estimator) processes the bottom two-thirds of the image. It is made of 2
residual convolution blocks followed by 3 fully con-
nected layers. The additional residual block is added
in expectation of more complex and less local features
in the image (bright and dark areas interactions for
shadows). The last component uses 4 fully connected layers to integrate the results of the 2 previous components into the final estimate. A code sketch of this architecture, together with the strategies listed below, is given at the end of this subsection.
We also implemented extra strategies in order to
improve transfer from simulation to real pictures:
•Input images are scaled down to a small size
(80x45) and converted to grayscale to prevent
over-fitting on simulation.
•Input images have their mean subtracted in or-
der to decrease exposure sensitivity.
•We drop out [21] 20% of input pixels during training as a further measure to prevent over-fitting.
•We use a dropout layer before the last fully con-
nected layer, once again to prevent over-fitting
and also as a lightweight ensembling (dropout
layers can have a similar effect to averaging the
outputs of an ensemble of models [21]).
Fig. 15: Sun image based estimator
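For concreteness, the sketch below assembles the two-branch architecture and the input-processing strategies described above in Chainer. Layer widths, crop rows, class names and the dropout ratio before the last layer are our own assumptions chosen for illustration, not the flight values; only the overall structure (one residual block plus three fully connected layers for the sky branch, two residual blocks plus three fully connected layers for the shadow branch, and four fully connected layers for the combiner) follows the description.

```python
import chainer
import chainer.functions as F
import chainer.links as L


class ResBlock(chainer.Chain):
    """Small residual convolution block: two 3x3 conv + batch norm layers
    with Leaky ReLU activations and an identity skip connection."""
    def __init__(self, ch):
        super().__init__()
        with self.init_scope():
            self.conv1 = L.Convolution2D(ch, ch, ksize=3, pad=1)
            self.bn1 = L.BatchNormalization(ch)
            self.conv2 = L.Convolution2D(ch, ch, ksize=3, pad=1)
            self.bn2 = L.BatchNormalization(ch)

    def __call__(self, x):
        h = F.leaky_relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return F.leaky_relu(h + x)


class SunVectorNet(chainer.Chain):
    """Two-branch Sun vector estimator (sky branch, shadow branch, combiner)."""
    def __init__(self, ch=8, fc=32):
        super().__init__()
        with self.init_scope():
            # Sun image based estimator: 1 residual block + 3 fully connected layers.
            self.sky_in = L.Convolution2D(1, ch, ksize=3, pad=1)
            self.sky_res = ResBlock(ch)
            self.sky_fc1, self.sky_fc2, self.sky_fc3 = (
                L.Linear(None, fc), L.Linear(fc, fc), L.Linear(fc, 3))
            # Shadow based estimator: 2 residual blocks + 3 fully connected layers.
            self.gnd_in = L.Convolution2D(1, ch, ksize=3, pad=1)
            self.gnd_res1, self.gnd_res2 = ResBlock(ch), ResBlock(ch)
            self.gnd_fc1, self.gnd_fc2, self.gnd_fc3 = (
                L.Linear(None, fc), L.Linear(fc, fc), L.Linear(fc, 3))
            # Combiner: 4 fully connected layers fusing both estimates.
            self.out_fc1, self.out_fc2 = L.Linear(6, fc), L.Linear(fc, fc)
            self.out_fc3, self.out_fc4 = L.Linear(fc, fc), L.Linear(fc, 3)

    def __call__(self, x):
        # x: (N, 1, 45, 80) grayscale, mean-subtracted images.
        x = F.dropout(x, ratio=0.2)                      # drop 20% of input pixels
        sky, ground = x[:, :, :22, :], x[:, :, 15:, :]   # upper half / bottom two-thirds
        s = self.sky_res(F.leaky_relu(self.sky_in(sky)))
        s = self.sky_fc3(F.leaky_relu(self.sky_fc2(F.leaky_relu(self.sky_fc1(s)))))
        g = self.gnd_res2(self.gnd_res1(F.leaky_relu(self.gnd_in(ground))))
        g = self.gnd_fc3(F.leaky_relu(self.gnd_fc2(F.leaky_relu(self.gnd_fc1(g)))))
        h = F.leaky_relu(self.out_fc1(F.concat([s, g], axis=1)))
        h = F.leaky_relu(self.out_fc2(h))
        h = F.dropout(F.leaky_relu(self.out_fc3(h)), ratio=0.5)  # dropout before the last layer
        return self.out_fc4(h)  # raw Sun vector, normalized in the loss
```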
V.iii Implementation and training
The implementation is done using the Chainer framework [22]. The model size is under 30000 parameters. We trained for 160 epochs on 2130 training samples, using 1243 samples for validation. Training took less than 30 minutes using a consumer grade GTX1050Ti GPU.
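A compact training loop in the same framework might look like the sketch below. It reuses the illustrative SunVectorNet and sun_vector_loss defined earlier; the optimizer choice, batch size and dataset variable names are assumptions, not the settings behind the reported results.

```python
from chainer import iterators, optimizers, training

# train_set is assumed to be a Chainer-style dataset of
# (image, target_sun_vector) pairs generated from the simulation.
model = SunVectorNet()
optimizer = optimizers.Adam()
optimizer.setup(model)

train_iter = iterators.SerialIterator(train_set, batch_size=32)
updater = training.StandardUpdater(
    train_iter, optimizer,
    loss_func=lambda x, t: sun_vector_loss(model(x), t))
# (a validation Evaluator extension can be attached similarly)
trainer = training.Trainer(updater, (160, 'epoch'), out='result')
trainer.run()
```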
V.iv Heading estimation results on simulation
With this method we include frames where the Sun
disk is not in the image. Despite the additional chal-
lenge, the method achieves promising results. Over-
all, we obtain a mean absolute error of 1.0 degrees
with a standard deviation of 0.7 degrees. The results
are consistent at all relative angles, even though the
error (and the number of outliers) tends to increase
with the distance from the optical axis (Fig. 17).
When studying a sequence of images (Fig. 18), the
Sun vector observation is continuous, indicating that
the model managed to learn stable features.
This method opens the way for Sun based heading
estimation without direct Sun-disc imaging. For now
we demonstrated the use of the method with low Sun
elevation and with the Sun outside the field of view of
the camera. In the future, we may be able to extend
the model to cases where Sun elevation is above 30
degrees and the shadows are shorter.
Fig. 16: Example of resized grayscale image from
simulation
Fig. 17: Estimation errors on 500 test images from
the simulated dataset (not used in training) when
the Sun is facing the camera side, the front/back
of the rover and the opposite side
V.v Relative angle estimation result on analogue
datasets
Finally, we apply our model to the Sagamihara
spot turn dataset. The transfer of the model is en-
couraging. For positions with the Sun close to the
optical axis, we obtain an average absolute error un-
der 10 degrees (knowing that our ground truth mea-
surement is imprecise). The model is mostly failing
in cases where the Sun is facing the front or back of
the rover, but manages an absolute error of less than
20 degrees in more than half of the cases when the
Sun is on the opposite side of the camera, meaning
that the model is also able to successfully use the rover's shadow as a clue (Fig. 19). Overall, we obtain
a mean absolute error of 39 degrees. We achieve a
mean absolute error of 7.9 degrees for cases where
the Sun is less than 60 degrees away from the optical
axis of the camera.
Fig. 18: Sun vector on a test sequence: ground truth
(top) vs prediction (bottom)
Fig. 19: Estimation errors on 20 test images taken as
a spot turn in Sagamihara, when the Sun is facing
the camera side, the front/back of the rover and the opposite side
The large errors when the rover's shadow is visible are in fact due to a few images that are also confusing to the human eye. In the picture in Fig. 20,
there is no Sun disk and we see a dark shadow area in
the lower right corner, which could be confused with
the rover’s shadow when the Sun is in the back of the
rover.
VI. Conclusion
We described solutions for heading estimation
without dedicated hardware in the challenging lunar
surface environment. We presented a methodology
to validate algorithm development without access to in-situ experimentation.
Fig. 20: Example of an image with high error: the dark area in the bottom left of the image can be misinterpreted as the rover's shadow
Both the convolution based method and the ma-
chine learning based method will be integrated on the rover, giving a choice depending on their performance during the mission.
In particular, the machine learning model will be tunable once we obtain our first images from the Moon. This system is an enabler for faster exploration of the lunar surface by providing the absolute yaw estimation needed for:
1. Thermal and power management
2. The navigation pipeline, where the absolute yaw compensates for the IMU drift over time
3. Guiding the human operator, who must not become disoriented (see the User Interface integration in Fig. 21)
VII. Next steps
VII.i Closing the reality gap
We have multiple points to work on to close the
reality gap:
1. We need to make the dataset more challenging
and more realistic: Adding the Earth in the sky,
reflections on the lander or other rovers, more
varied terrains.
2. Ensuring a more faithful photometric response.
It requires better modelling of the lens, sensor
and camera driver (auto gain, exposure, white
balance).
3. It was shown by Tobin (2017) [23] that domain ran-
domization can ensure smooth transfer from sim-
ulation to reality. We already implemented some
related techniques but will need to go further
in increasing the variability in generated inputs:
textures, lighting conditions...
4. Leverage the limited real world datasets to fine
tune the models with additional training of part
of the network for instance.
VII.ii Error budget
Ultimately, the error budget will be driven by:
•The camera configuration (resolution / field of view).
•Single or multiple camera use.
•Algorithmic false positives / false negatives.
•Relative elevation / Relative azimuth of the Sun
in the camera frame.
•Accuracy of the gravity vector (from the IMU).
Fig. 21: UI integration
VII.iii Operational training
The simulator developed during this study also provides valuable assistance for operational training. The mission control team is able to
rehearse lunar exploration scenarios by sending com-
mands to the rover and receiving the updated output
from cameras and other sensors.
Acknowledgements
We extend our sincere thanks to the staff of the
Sagamihara Advanced Facility for Space Exploration
(JAXA) and specifically to Yasuhiro Katayama for al-
lowing us to perform lunar analogue testing in their
facility as part of our Flight Model validation cam-
paign.
References
[1] John Walker. Flight System Architecture of the
Sorato Lunar Rover. iSAIRAS 2018, June 2018.
[2] A. Trebi-Ollennu, T. Huntsberger, Yang
Cheng, E. T. Baumgartner, B. Kennedy, and
P. Schenker. Design and analysis of a sun
sensor for planetary rover absolute heading
detection. IEEE Transactions on Robotics and
Automation, 17(6):939–947, December 2001.
[3] A. R. Eisenman, C. C. Liebe, and R. Perez. Sun
sensing on the Mars exploration rovers. In Pro-
ceedings, IEEE Aerospace Conference, volume 5,
pages 5–5, March 2002.
[4] Mark W Maimone, P Chris Leger, and Jef-
frey J Biesiadecki. Overview of the Mars Explo-
ration Rovers’ Autonomous Mobility and Vision
Capabilities. IEEE international conference on
robotics and automation (ICRA) space robotics
workshop, page 9, 2007.
[5] F Souvannavong, C Lemaréchal, L Rastel,
M Maurette, and France Magellium. Vision-
based motion estimation for the ExoMars rover.
CNES (The French Space Agency), France,
2010.
[6] Valentin Peretroukhin, Lee Clement, and
Jonathan Kelly. Inferring sun direction to im-
prove visual odometry: A deep learning ap-
proach. The International Journal of Robotics
Research, page 027836491774973, January 2018.
[7] Lee Clement, Valentin Peretroukhin, and
Jonathan Kelly. Improving the Accuracy of
Stereo Visual Odometry Using Visual Illumina-
tion Estimation. arXiv:1609.04705 [cs], Septem-
ber 2016.
[8] Charles Acton, Nat Bachman, Lee Elson, Boris
Semenov, and Edward Wright. Spice: A real
example of data system re-use to reduce the costs
of ground data systems development and mission
operations. 2003.
[9] Daniel Díaz, Maria D R-Moreno, Amedeo Cesta,
Angelo Oddi, and Riccardo Rasconi. An Empir-
ical Experience with 3DROV Simulator: Testing
an Advanced Autonomous Controller for Rover
Operations. Procs. of the 12th ESA Workshop
on Advanced Space Technologies for Robotics
and Automation, page 8, 2013.
[10] Matthias Hellerer, Martin J. Schuster, and Roy
Lichtenheldt. Software-in-the-Loop Simulation
Of A Planetary Rover. In The International
Symposium on Artificial Intelligence, Robotics
and Automation in Space (i-SAIRAS 2016),
June 2016.
[11] S.M. Parkes, I. Martin, M. Dunstan, and
D. Matthews. Planet Surface Simulation with
PANGU. In Space OPS 2004 Conference, Mon-
treal, Quebec, Canada, May 2004. American In-
stitute of Aeronautics and Astronautics.
[12] N. Koenig and A. Howard. Design and use
paradigms for gazebo, an open-source multi-
robot simulator. In 2004 IEEE/RSJ Interna-
tional Conference on Intelligent Robots and Sys-
tems (IROS) (IEEE Cat. No.04CH37566), vol-
ume 3, pages 2149–2154, Sendai, Japan, 2004.
IEEE.
[13] Morgan Quigley, Brian Gerkey, Ken Conley,
Josh Faust, Tully Foote, Jeremy Leibs, Eric
Berger, Rob Wheeler, and Andrew Ng. ROS:
An open-source Robot Operating System. Pro-
ceedings of the IEEE International Conference
on Robotics and Automation Workshop on Open
Source Robotics, 2009, page 6, 2009.
[14] T. Tran, M. R. Rosiek, Ross A. Beyer, S. Matt-
son, E. Howington-Kraus, M. S. Robinson, B. A.
Archinal, K. Edmundson, D. Harbour, and
E. Anderson. Generating digital terrain models
using lroc nac images. In International Archives
of the Photogrammetry, Remote Sensing and
Spatial Information Sciences - ISPRS Archives,
volume 38. International Society for Photogram-
metry and Remote Sensing, 2010.
[15] Alexandros Panagopoulos, Chaohui Wang, Dim-
itris Samaras, and Nikos Paragios. Illumination
estimation and cast shadow detection through
a higher-order graphical model. In Computer
Vision and Pattern Recognition (CVPR), 2011
IEEE Conference On, pages 673–680. IEEE,
June 2011.
[16] Valentin Peretroukhin, Lee Clement, and
Jonathan Kelly. Inferring sun direction to im-
prove visual odometry: A deep learning ap-
proach. The International Journal of Robotics
Research, page 027836491774973, January 2018.
[17] Jean-Francois Lalonde, Alexei A. Efros, and
Srinivasa G. Narasimhan. Estimating natural
illumination from a single outdoor image. In
Computer Vision, 2009 IEEE 12th International
Conference On, pages 183–190. IEEE, Septem-
ber 2009.
[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and
Jian Sun. Deep Residual Learning for Image
Recognition. arXiv:1512.03385 [cs], December
2015.
[19] Sergey Ioffe and Christian Szegedy. Batch
Normalization: Accelerating Deep Network
Training by Reducing Internal Covariate Shift.
arXiv:1502.03167 [cs], February 2015.
[20] Bing Xu, Naiyan Wang, Tianqi Chen, and
Mu Li. Empirical evaluation of rectified acti-
vations in convolutional network. arXiv preprint
arXiv:1505.00853, 2015.
[21] Nitish Srivastava, Geoffrey Hinton, Alex
Krizhevsky, Ilya Sutskever, and Ruslan
Salakhutdinov. Dropout: A simple way to
prevent neural networks from overfitting.
The Journal of Machine Learning Research,
15(1):1929–1958, 2014.
[22] Seiya Tokui, Kenta Oono, Shohei Hido, and
Justin Clayton. Chainer: A Next-Generation
Open Source Framework for Deep Learning. In
Proceedings of Workshop on Machine Learn-
ing Systems (LearningSys) in The Twenty-Ninth
Annual Conference on Neural Information Pro-
cessing Systems (NIPS), 2015.
[23] Josh Tobin, Rachel Fong, Alex Ray, Jonas
Schneider, Wojciech Zaremba, and Pieter
Abbeel. Domain Randomization for Transfer-
ring Deep Neural Networks from Simulation to
the Real World. arXiv:1703.06907 [cs], March
2017.