Mach Learn
DOI 10.1007/s10994-011-5249-4
EDITORIAL
Machine learning in space: extending our reach
Amy McGovern · Kiri L. Wagstaff
Received: 24 March 2011 / Accepted: 7 April 2011
© The Author(s) 2011
Abstract We introduce the challenge of using machine learning effectively in space appli-
cations and motivate the domain for future researchers. Machine learning can be used to
enable greater autonomy to improve the duration, reliability, cost-effectiveness, and science
return of space missions. In addition to the challenges provided by the nature of space itself,
the requirements of a space mission severely limit the use of many current machine learning
approaches, and we encourage researchers to explore new ways to address these challenges.
Keywords Space missions · Machine learning applications · Autonomy
1 Space operations: a challenge for machine learning
Space missions operate in an extremely challenging environment, for both human and
robotic explorers. Due to the risks, the cost, and the distance, exploration is most often
carried out remotely (e.g., the MESSENGER mission to Mercury, a multitude of Earth or-
biters, the twin Mars Exploration Rovers, the Cassini mission to Saturn, the New Horizons
mission to Pluto and beyond). For the foreseeable future, our only access to up-close obser-
vations of stars, planets, moons, and other celestial objects will be through the instruments
of robotic spacecraft. Even after we achieve the technological ability to send humans to
these remote locations, they will be assisted by a suite of rovers, orbiters, and other data
collection and analysis tools. Some locations may remain too dangerous, inhospitable, or
remote for humans to access at all. In all of these cases, autonomy for the remote robotic
agents is essential. Autonomy is useful even in missions closer to home, such as NASA’s
A. McGovern
School of Computer Science, University of Oklahoma, Norman, OK 73019, USA
e-mail: amcgovern@ou.edu
K.L. Wagstaff (✉)
Jet Propulsion Laboratory, California Institute of Technology, Mail Stop 306-463, 4800 Oak Grove
Drive, Pasadena, CA 91109, USA
e-mail: kiri.wagstaff@jpl.nasa.gov
Robonaut (a robotic humanoid torso recently launched to help astronauts onboard the Inter-
national Space Station). The teleoperation approach currently in place quickly exhausts the
teleoperator, and an autonomous or semi-autonomous Robonaut could be a more effective
assistant for the astronauts. Such a system is under development (Bluethmann et al. 2004)
but has not yet been approved for use on the Robonaut now in orbit.
Several factors make the goal of autonomous operation in space more challenging than
autonomous operations in an Earth-based desktop or web environment. First, remote space-
craft generally operate under severe computational constraints, with processors and memory
that lag a decade behind the desktop state of the art. This is due to the necessary radiation-
hardening process, which also greatly increases their cost. For example, the RAD750 pro-
cessor used by Deep Impact, Mars Reconnaissance Orbiter, the Kepler space telescope, and
other current missions and instruments runs at only 133 MHz and costs ∼$200,000 (Rhea
2002).
Second, space missions have an extremely high cost of failure. Not only is it expensive to
develop and launch the mission, but there is little or no opportunity for external aid or repair.
Any autonomy provided by machine learning or artificial intelligence techniques must be
provably reliable and constrained from posing any threat to the spacecraft’s station-keeping,
health, and core operations. This is at odds with the desire, and in some cases the need, to
enable autonomous control of spacecraft position and activities.
Third, space missions often experience extremely long communication times between
the spacecraft and the nearest human expert. These delays provide additional motivation for
autonomy, lest the remote agent expend resources and time in an unproductive state waiting
for a response. However, they also raise the bar for autonomous decisions to be correct,
since there can be no real-time human oversight or feedback.
The explicit need for adaptability, reasoning, and generalization from past experience
renders space a challenging application area that provides a prime opportunity for the field of
machine learning. We challenge our readers to address this domain. What existing methods
are suitable for this environment? What are their limitations? How can we incorporate the
need for safety? How can we trade off between risk and potential benefits? It is likely that the
space application domain calls for new ways of thinking about machine learning problems
and devising appropriate solutions.
This editorial aims to highlight to the research community the challenges of developing
and using machine learning methods for space applications and to point out avenues for
fruitful research pursuits. We also provide a context for and introduction to a new paper
by Michael C. Burl and Philipp G. Wetzler, “Onboard Object Recognition for Planetary
Exploration”, which is an example of this kind of work.
2 Existing machine learning and artificial intelligence in space
Autonomous operation enables a remote spacecraft to observe its environment and make in-
dependent decisions about which actions to take, which data to collect, and what to transmit
back to Earth. These capabilities are still in their infancy for today’s spacecraft, permitting
limited autonomy for obstacle avoidance (Maimone et al. 2004) or detection of certain real-
time events such as volcanic eruptions and floods from Earth orbit (Chien et al. 2005) or
dust devils on the Martian surface (Castaño et al. 2008). Autonomous terrain navigation has
improved the capabilities of the twin 2003 Mars Exploration Rovers, enabling them to tra-
verse significantly more terrain than the 1997 Sojourner rover and to increase their science
return. The rovers can also direct the onboard instruments autonomously and identify inter-
esting rock formations (Bajracharya et al. 2008). The AEGIS (Autonomous Exploration for
Gathering Increased Science) system provides an initial data analysis of images collected by
a rover, so that features of interest (e.g., rocks with certain properties) can be automatically
identified and targeted for additional observations (Estlin et al. 2009). AEGIS is now in use
onboard the Mars Exploration Rovers and is also slated for use on the next Mars rover, Mars
Science Laboratory, which has a planned launch of late 2011. Details about these and other
advances for rover autonomy were summarized by Kean (2010).
In most cases, deployed onboard autonomy consists of the use of a planner whose pri-
orities can be influenced by newly obtained observations, and possibly a rudimentary anal-
ysis of those observations to derive higher-level conclusions about the state of the environ-
ment. To date, very little machine learning has been incorporated into space missions. To
our knowledge, the only onboard operational machine learning is a support vector machine
(SVM) classifier on the EO-1 spacecraft. Castaño et al. trained an SVM to classify pixels
from the Hyperion instrument as snow, water, ice, or land (Castaño et al. 2005). The trained
classifier was uploaded to EO-1 in 2005 and has been operational ever since, providing an
additional data product (thematic map) in real time that enables the automatic detection
of higher-level phenomena, such as spring lake ice thaw events, which informs automatic
instrument retasking.
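The per-pixel classification approach itself is conceptually simple; the difficulty lies in validation and in fitting within onboard constraints. The following is a minimal sketch of training and applying such a classifier with scikit-learn, offered only as an illustration: the band values, labels, and scene below are synthetic stand-ins, not the Hyperion data or the classifier actually flown on EO-1.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for labeled training pixels: each row holds the
# spectral band values of one pixel; each label is its surface class.
rng = np.random.default_rng(0)
n_pixels, n_bands = 500, 12              # hypothetical; Hyperion has many more bands
X_train = rng.random((n_pixels, n_bands))
y_train = rng.integers(0, 4, n_pixels)   # 0=snow, 1=water, 2=ice, 3=land

# Train the per-pixel classifier on the ground...
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# ...then, onboard, classify every pixel of a newly acquired scene to
# produce a thematic map that downstream logic can react to
# (e.g., retask the instrument when lake ice begins to thaw).
scene = rng.random((64, 64, n_bands))    # hypothetical small scene
thematic_map = clf.predict(scene.reshape(-1, n_bands)).reshape(64, 64)
```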
Learning has been investigated for future missions, but these approaches have not yet
been fielded. One example is the use of onboard data analysis for the THEMIS instrument
on the Mars Odyssey orbiter (Castaño et al. 2007). The algorithms (including an SVM re-
gression model) were developed and tested but ultimately not uploaded to the spacecraft
due to risk considerations. Because the Mars Global Surveyor spacecraft failed in late 2006,
the single remaining orbiter (Odyssey) was designated a key asset for the 2008 landing of
the Phoenix mission and no software updates were permitted. Risk is often the barrier to
further acceptance of machine learning or autonomous methods. Even if it is highly unlikely
for a learning algorithm to do anything to jeopardize the safety of the equipment (or in the
case of Robonaut, the nearby astronauts), such capabilities cannot be fielded until they are
proven sufficiently safe. That process requires close collaboration with spacecraft experts
and a commitment to complete integration with verification and validation activities.
One subject of particular interest for machine learning in space, and one that has received
recent attention, is the impact of a high-radiation environment on the reliability of the
learning algorithms themselves. A study of the impact of radiation-corrupted RAM on different
clustering algorithms concluded that the k-means algorithm can (somewhat surprisingly)
withstand the Earth orbit environment without requiring radiation-hardened memory, which
could lead to substantial future mission cost savings (Wagstaff and Bornstein 2009b). Sim-
ilar results were found for SVM classifiers (Wagstaff and Bornstein 2009a). The clustering
study also found that kd-k-means, a faster version of the algorithm that stores the data set
as a kd-tree in memory, was much more sensitive to radiation and would not be advisable
for onboard use—another result that runs counter to the strategies one would employ in a
desktop environment. This result led to the subsequent development of a kd-tree variant that
was restructured to increase its robustness to radiation (Gieseke et al. 2010). More work on
this subject will help enable the adoption of advanced machine learning methods onboard
spacecraft.
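One way to study such radiation sensitivity on the ground is to inject simulated bit flips into an algorithm's memory and measure how much its output degrades. The sketch below flips a single random bit in a fitted k-means centroid array, mimicking a single-event upset; the data, flip rate, and agreement metric are illustrative assumptions, not the protocol used in the cited studies.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)

def flip_random_bit(arr):
    """Flip one random bit in a float64 array, mimicking a single-event upset."""
    flat = arr.ravel().view(np.uint64)              # reinterpret the floats as raw bits
    idx = rng.integers(flat.size)
    flat[idx] ^= np.uint64(1) << np.uint64(rng.integers(64))
    return arr

# Synthetic data with four well-separated clusters.
X = np.concatenate([rng.normal(c, 0.3, (100, 2))
                    for c in [(0, 0), (5, 0), (0, 5), (5, 5)]])
reference = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Corrupt the centroids of a fitted model and see how the assignments change.
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
model.cluster_centers_ = flip_random_bit(model.cluster_centers_.copy())
corrupted = model.predict(X)
print("agreement after one bit flip:", adjusted_rand_score(reference, corrupted))
```

Repeating such an experiment over many injected flips gives an empirical picture of which data structures are fragile and which degrade gracefully.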
Although machine learning for space is a new field, there have been several related work-
shops and conferences that provide venues for discussing new advances and opportunities.
These include:
– The Workshop on Machine Learning Technologies for Autonomous Space Applications
at the 2003 International Conference on Machine Learning. Participants identified robust
and efficient communication, verification/validation, and risk mitigation as the key topics
on which future machine learning contributions should focus. A full workshop summary
is available at http://www.lunabots.com/icml2003/summary.html.
– The International Symposium on Artificial Intelligence, Robotics and Automation in
Space Conference (i-SAIRAS). The most recent symposium (2010) covered methods for
planning and scheduling, docking and capture, robotic landing, navigation, autonomy,
telerobotics, and more.
– The Workshops on Artificial Intelligence in Space, held in 2007 and 2009 in conjunction
with the International Joint Conference on Artificial Intelligence. Topics included collab-
oration between multiple robots, onboard clustering and data analysis, decision making,
efficient scheduling, and more.
The need for close collaboration between space mission experts and machine learning re-
searchers has been recognized informally, but there has been a dearth of true meeting
grounds established for these communities to make contact. The successes cited above in
integrating artificial intelligence and machine learning methods into space missions have
come about through direct collaboration in the context of the mission in operation, often
with ML/AI researchers first volunteering to train as rover drivers or other mission opera-
tors to gain direct experience with the mission needs and constraints. We encourage machine
learning researchers to reach out and make these connections, since they are so critical to
the adoption and use of ML methods for space missions.
3 Opportunities for machine learning in space
Instruments and missions that must operate remotely stand to benefit greatly from the use
of advanced machine learning. There is a need for innovative, highly reliable methods that
can operate under tight resource constraints for tasks such as the following.
– Image analysis: recognition of features to inform instrument targeting, navigation, pin-
point landing
– Time series analysis: fault detection or prediction in telemetry, anomaly detection in sci-
entific sensors
– Classification: surface type mapping, mineral composition estimation
– Clustering: identification of trends and outliers
– Reinforcement learning: efficient exploration of new environments, identification of
robotic solutions to tasks
– Ranking: prioritization or subsampling of data given limited downlink bandwidth
– Active learning: selection of new observational targets
– Abstaining or introspective learning: to enable high reliability
– Multi-instrument or multi-mission ensemble learning.
This list is not exhaustive, and other applications of machine learning to space are possible.
To have a positive impact on space applications, we need to understand how well existing
machine learning methods perform as well as what their limitations are. In order to ensure
that a method will work well in space, the following challenges must also be addressed.
– Limited processing power
– Limited memory capacity
– High-radiation environment, which can perturb operations and corrupt memory
– Long round-trip communication delays, necessitating autonomous decision making
– High cost of failure, requiring high reliability and recovery from unexpected events
– Embedded operation, requiring minimal impact to computing, memory, and other onboard
resources also needed to maintain spacecraft health and communications with Earth.
The potential payoffs are considerable. Any improvement in autonomy and decision
making for remote spacecraft leads to savings in time, and therefore reductions in cost and
risk. The spacecraft can accomplish more in a shorter period of time, which is not only more
efficient but may make the difference as to whether a specific discovery can be made at all,
since most spacecraft have severely limited total lifetimes. That limit is imposed by extreme
environmental factors (radiation, dust, cold, hazards), degradation of components (due to
thermal cycling, dust, age), consumption of finite resources (e.g., fuel for attitude thrusters,
sharpness of a drill bit, reactants for chemical testing), and cost. Keeping a mission operat-
ing is a constant financial drain, and in some cases even if the hardware is still functional
the mission may be terminated due to limited funds. Increased autonomy can greatly reduce
the ongoing operational costs and may even enable unanticipated extensions in the mission
lifetime. For example, the EO-1 spacecraft was able to reduce operational costs by $1 M per
year, with a 50% increase in science return, by using the Autonomous Sciencecraft Exper-
iment to automatically plan and adaptively re-plan observations as needed (Rabideau et al.
2006).
Further, onboard advanced machine learning capabilities may enable entirely new kinds
of missions that are not currently possible. Examples could include extremely long-duration
missions that require onboard adaptation to changing sensor responses, high-risk explo-
ration of caves or other locations in which the remote agent will be entirely cut off from
Earth communications for long periods, or scaling a cliff or glacier wall for which real-time
detection and avoidance of hazards and falls is required. All such missions will require the
ability to detect anomalous sensor readings, adapt to unexpected environmental conditions,
autonomously adjust to hardware failures, and more.
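As one concrete illustration of the first of these capabilities, the sketch below flags anomalous telemetry samples with a simple trailing-window z-score test. The telemetry stream, window size, and threshold are invented for illustration only; a flight implementation would require far stronger validation than this.

```python
import numpy as np

def detect_anomalies(telemetry, window=50, threshold=4.0):
    """Flag samples that deviate strongly from a trailing window's statistics."""
    flags = np.zeros(len(telemetry), dtype=bool)
    for i in range(window, len(telemetry)):
        recent = telemetry[i - window:i]
        mu, sigma = recent.mean(), recent.std() + 1e-9   # avoid division by zero
        flags[i] = abs(telemetry[i] - mu) > threshold * sigma
    return flags

# Hypothetical sensor stream: slow oscillation plus noise, with two injected faults.
rng = np.random.default_rng(2)
stream = np.sin(np.linspace(0, 20, 1000)) + 0.05 * rng.standard_normal(1000)
stream[400] += 2.0        # sudden spike
stream[700:705] -= 1.5    # short dropout
print("anomalous indices:", np.flatnonzero(detect_anomalies(stream)))
```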
Space applications research can also yield benefits for machine learning. In develop-
ing innovative methods to meet the challenges of the space environment, we will push the
boundaries of existing machine learning algorithms and gain understanding about their own
limitations and ways they can be addressed. Thinking about problems outside of the typical
desktop computing environment can lead to new advances in machine learning with severe
resource constraints or when misclassification costs are extreme.
4 Example: Onboard Object Recognition for Planetary Exploration
The following paper, “Onboard Object Recognition for Planetary Exploration,” provides an
example of machine learning research inspired by the needs of actual space missions. This
paper introduces an SVM-based technique for identifying craters that directly addresses the
limited computation and memory of radiation-hardened processors. A rich array of crater
finding methods had been developed previously, but these focused on the ground-based
analysis of archived data, using conventional desktop or cluster computers. The authors
of the paper in this issue demonstrate that straightforward SVMs cannot be run within the
memory and processing time limitations of onboard processing, and they introduce a Fast
Fourier Transform (FFT) technique that enables the SVMs to run much more efficiently.
They demonstrate, both theoretically and empirically, that the FFT-accelerated SVMs are
dramatically more efficient than standard SVMs and than neural networks. Their approach
achieves performance comparable to that of a human labeling the craters and can enable a
remote spacecraft to quickly focus on areas of interest.
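The core efficiency idea can be illustrated independently of the paper's details: evaluating a linear decision function over every window of an image amounts to a cross-correlation with the weight template, which can be computed in the Fourier domain rather than window by window. The sketch below is only a schematic analogue of that idea, using scipy and a random template, and is not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)

# Hypothetical 16x16 linear SVM weight template and bias, as if learned offline.
w = rng.standard_normal((16, 16))
b = -0.5

# Hypothetical 512x512 single-band image from the spacecraft camera.
image = rng.standard_normal((512, 512))

# Naive evaluation would slide the 16x16 template over every window position.
# Equivalently, correlate the image with the flipped template via FFT:
scores = fftconvolve(image, w[::-1, ::-1], mode="valid") + b

# Every position whose decision value exceeds zero is a candidate detection
# (e.g., a possible crater) to be prioritized for closer observation.
candidates = np.argwhere(scores > 0)
print(scores.shape, "windows scored;", len(candidates), "candidate detections")
```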
5 Call to action
This editorial has outlined the need for advanced machine learning methods for space appli-
cations. Machine learning has the potential to greatly increase these missions’ capabilities,
as well as to enable ambitious new exploration that is not currently possible. We encourage
the machine learning community to (1) actively develop new machine learning concepts and
methods that can meet the unique challenges of the space environment; (2) identify novel
space applications where machine learning can significantly increase capabilities, robust-
ness, and/or efficiency; and (3) develop appropriate evaluation and validation strategies to
establish confidence in the remote operation of these methods in a mission-critical setting.
Acknowledgements The writing of this paper was supported by the University of Oklahoma and was
carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with
the National Aeronautics and Space Administration.
References
Bajracharya, M., Maimone, M. W., & Helmick, D. (2008). Autonomy for Mars rovers: Past, present, and
future. IEEE Computer, 41, 44–50.
Bluethmann, W., Ambrose, R., Diftler, M., Huber, E., Fagg, A. H., Rosenstein, M., Platt, R., Grupen, R.,
Breazeal, C., Brooks, A., Lockerd, A., Peters, R. A., Jenkins, O. C., Mataric, M., & Bugajska, M.
(2004). Building an autonomous humanoid tool user. In Proceedings of the IEEE-RAS international
conference on humanoid robots (pp. 402–421).
Castaño, R., Mazzoni, D., Tang, N., Doggett, T., Chien, S., Greeley, R., Cichy, B., & Davies, A. (2005).
Learning classifiers for science event detection in remote sensing imagery. In Proceedings of the 8th
international symposium on artificial intelligence, robotics, and automation in space.
Castaño, R., Wagstaff, K. L., Chien, S., Stough, T. M., & Tang, B. (2007). On-board analysis of uncalibrated
data for a spacecraft at Mars. In Proceedings of the thirteenth international conference on knowledge
discovery and data mining (KDD) (pp. 922–930).
Castaño, A., Fukunaga, A., Biesiadecki, J., Neakrase, L., Whelley, P., Greeley, R., Lemmon, M., Castaño,
R., & Chien, S. (2008). Automatic detection of dust devils and clouds on Mars. Machine Vision and
Applications, 19, 467–482.
Chien, S., Sherwood, R., Tran, D., Cichy, B., Rabideau, G., Castaño, R., Davies, A., Mandl, D., Frye, S.,
Trout, B., Shulman, S., & Boyer, D. (2005). Using autonomy flight software to improve science return
on Earth Observing One. Journal of Aerospace Computing, Information, and Communication, 2, 196–
216.
Estlin, T., Castaño, R., Bornstein, B., Gaines, D., Anderson, R. C., de Granville, C., Thompson, D., Burl,
M., Judd, M., & Chien, S. (2009). Automated targeting for the MER rovers. In Proceedings of the third
IEEE international conference on space mission challenges for information technology.
Gieseke, F., Moruz, G., & Vahrenhold, J. (2010). Resilient k-d trees: K-means in space revisited. In Proceed-
ings of the IEEE international conference on data mining (pp. 815–820).
Kean, S. (2010). Making smarter, savvier robots. Science, 329, 508–509.
Maimone, M., Johnson, A., Cheng, Y., Wilson, R., & Matthies, L. (2004). Autonomous navigation results
from the Mars Exploration Rover (MER) mission. In Proceedings of the 9th international symposium
on experimental robotics (pp. 3–13).
Rabideau, G., Tran, D., Chien, S., Cichy, B., Sherwood, R., Mandl, D., Frye, S., Shulman, S., Szwaxzkowski,
J., Boyer, D., & Van Gassbeck, J. (2006). Mission operations of Earth Observing-1 with onboard auton-
omy. In Proceedings of the IEEE international conference on space mission challenges for information
technology.
Rhea, J. (2002). BAE Systems moves into third generation rad-hard processors. Military & Aerospace
Electronics, 13.
Wagstaff, K. L., & Bornstein, B. (2009a). How much memory radiation protection do onboard machine learn-
ing algorithms require? In Proceedings of the IJCAI-09/SMC-IT-09/IWPSS-09 workshop on artificial
intelligence in space.
Wagstaff, K. L., & Bornstein, B. (2009b). K-means in space: A radiation sensitivity evaluation. In Proceed-
ings of the twenty-sixth international conference on machine learning (ICML) (pp. 1097–1104).