Mach Learn
DOI 10.1007/s10994-011-5249-4
EDITORIAL
Machine learning in space: extending our reach
Amy McGovern · Kiri L. Wagstaff
Received: 24 March 2011 / Accepted: 7 April 2011
© The Author(s) 2011
Abstract We introduce the challenge of using machine learning effectively in space appli-
cations and motivate the domain for future researchers. Machine learning can be used to
enable greater autonomy to improve the duration, reliability, cost-effectiveness, and science
return of space missions. In addition to the challenges provided by the nature of space itself,
the requirements of a space mission severely limit the use of many current machine learning
approaches, and we encourage researchers to explore new ways to address these challenges.
Keywords Space missions · Machine learning applications · Autonomy
1 Space operations: a challenge for machine learning
Space missions operate in an extremely challenging environment, for both human and
robotic explorers. Due to the risks, the cost, and the distance, exploration is most often
carried out remotely (e.g., the MESSENGER mission to Mercury, a multitude of Earth or-
biters, the twin Mars Exploration Rovers, the Cassini mission to Saturn, the New Horizons
mission to Pluto and beyond). For the foreseeable future, our only access to up-close obser-
vations of stars, planets, moons, and other celestial objects will be through the instruments
of robotic spacecraft. Even after we achieve the technological ability to send humans to
these remote locations, they will be assisted by a suite of rovers, orbiters, and other data
collection and analysis tools. Some locations may remain too dangerous, inhospitable, or
remote for humans to access at all. In all of these cases, autonomy for the remote robotic
agents is essential. Autonomy is useful even in missions closer to home, such as NASA’s
A. McGovern
School of Computer Science, University of Oklahoma, Norman, OK 73019, USA
e-mail: amcgovern@ou.edu
K.L. Wagstaff (✉)
Jet Propulsion Laboratory, California Institute of Technology, Mail Stop 306-463, 4800 Oak Grove
Drive, Pasadena, CA 91109, USA
e-mail: kiri.wagstaff@jpl.nasa.gov
Robonaut (a robotic humanoid torso recently launched to help astronauts onboard the Inter-
national Space Station). The teleoperation approach currently in place quickly exhausts the
teleoperator, and an autonomous or semi-autonomous Robonaut could be a more effective
assistant for the astronauts. Such a system is under development (Bluethmann et al. 2004)
but has not yet been approved for use on the launched Robonaut.
Several factors make the goal of autonomous operation in space more challenging than
autonomous operations in an Earth-based desktop or web environment. First, remote space-
craft generally operate under severe computational constraints, with processors and memory
that lag a decade behind the desktop state of the art. This is due to the necessary radiation-
hardening process, which also greatly increases their cost. For example, the RAD750 pro-
cessor used by Deep Impact, Mars Reconnaissance Orbiter, the Kepler space telescope, and
other current missions and instruments runs at only 133 MHz and costs $200,000 (Rhea
2002).
Second, space missions have an extremely high cost of failure. Not only is it expensive to
develop and launch the mission, but there is little or no opportunity for external aid or repair.
Any autonomy provided by machine learning or artificial intelligence techniques must be
provably reliable and constrained from posing any threat to the spacecraft’s station-keeping,
health, and core operations. This is at odds with the desire, and in some cases the need, to
enable autonomous control of spacecraft position and activities.
Third, space missions often experience extremely long communication times between
the spacecraft and the nearest human expert. These delays provide additional motivation for
autonomy, lest the remote agent expend resources and time in an unproductive state waiting
for a response. However, they place a higher requirement on autonomous decisions being
correct, as there can be no real-time human oversight or feedback.
The explicit need for adaptability, reasoning, and generalization from past experience
renders space a challenging application area that provides a prime opportunity for the field of
machine learning. We challenge our readers to address this domain. What existing methods
are suitable for this environment? What are their limitations? How can we incorporate the
need for safety? How can we trade off between risk and potential benefits? It is likely that the
space application domain calls for new ways of thinking about machine learning problems
and devising appropriate solutions.
This editorial aims to highlight to the research community the challenges of developing
and using machine learning methods for space applications and to point out avenues for
fruitful research pursuits. We also provide a context for and introduction to a new paper
by Michael C. Burl and Philipp G. Wetzler, “Onboard Object Recognition for Planetary
Exploration”, which is an example of this kind of work.
2 Existing machine learning and artificial intelligence in space
Autonomous operation enables a remote spacecraft to observe its environment and make in-
dependent decisions about which actions to take, which data to collect, and what to transmit
back to Earth. These capabilities are still in their infancy for today’s spacecraft, permitting
limited autonomy for obstacle avoidance (Maimone et al. 2004) or detection of certain real-
time events such as volcanic eruptions and floods from Earth orbit (Chien et al. 2005) or
dust devils on the Martian surface (Castaño et al. 2008). Autonomous terrain navigation has
improved the capabilities of the twin 2003 Mars Exploration Rovers, enabling them to tra-
verse significantly more terrain than the 1997 Sojourner rover and to increase their science
return. The rovers can also direct the onboard instruments autonomously and identify inter-
esting rock formations (Bajracharya et al. 2008). The AEGIS (Autonomous Exploration for
Gathering Increased Science) system provides an initial data analysis of images collected by
a rover, so that features of interest (e.g., rocks with certain properties) can be automatically
identified and targeted for additional observations (Estlin et al. 2009). AEGIS is now in use
onboard the Mars Exploration Rovers and is also slated for use on the next Mars rover, Mars
Science Laboratory, which has a planned launch of late 2011. Details about these and other
advances for rover autonomy were summarized by Kean (2010).
In most cases, deployed onboard autonomy consists of the use of a planner whose pri-
orities can be influenced by newly obtained observations, and possibly a rudimentary anal-
ysis of those observations to derive higher-level conclusions about the state of the environ-
ment. To date, very little machine learning has been incorporated into space missions. To
our knowledge, the only onboard operational machine learning is a support vector machine
(SVM) classifier on the EO-1 spacecraft. Castaño et al. trained an SVM to classify pixels
from the Hyperion instrument as snow, water, ice, or land (Castaño et al. 2005). The trained
classifier was uploaded to EO-1 in 2005 and has been operational ever since, providing an
additional data product (thematic map) in real time that enables the automatic detection
of higher-level phenomena, such as spring lake ice thaw events, which informs automatic
instrument retasking.
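The pixel-classification approach can be illustrated with a short sketch. The following toy example uses synthetic spectra with an arbitrary class structure (the flight classifier was of course trained on real Hyperion data, not these stand-ins): an SVM maps each pixel's spectral vector to one of the four surface classes, producing a small thematic map.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

classes = ["snow", "water", "ice", "land"]
n_bands = 8  # stand-in for a handful of Hyperion spectral bands

# 100 synthetic "spectra" per class, shifted so the classes are separable.
y_train = np.repeat(np.arange(4), 100)
X_train = rng.normal(size=(400, n_bands)) + 2.0 * y_train[:, None]

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# Classify a small "scene" of new pixels into a thematic map.
true_scene = rng.integers(0, 4, size=(5, 1))
scene = rng.normal(size=(5, n_bands)) + 2.0 * true_scene
thematic = [classes[c] for c in clf.predict(scene)]
print(thematic)
```

Onboard, the trained model is fixed and only the (cheap) prediction step runs, which is what makes this feasible within flight computing constraints.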
Learning has been investigated for future missions, but these approaches have not yet
been fielded. One example is the use of onboard data analysis for the THEMIS instrument
on the Mars Odyssey orbiter (Castaño et al. 2007). The algorithms (including an SVM re-
gression model) were developed and tested but ultimately not uploaded to the spacecraft
due to risk considerations. Because the Mars Global Surveyor spacecraft failed in late 2006,
the single remaining orbiter (Odyssey) was designated a key asset for the 2008 landing of
the Phoenix mission and no software updates were permitted. Risk is often the barrier to
further acceptance of machine learning or autonomous methods. Even if it is highly unlikely
for a learning algorithm to do anything to jeopardize the safety of the equipment (or in the
case of Robonaut, the nearby astronauts), such capabilities cannot be fielded until they are
proven sufficiently safe. That process requires close collaboration with spacecraft experts
and a commitment to complete integration with verification and validation activities.
One subject of particular interest to machine learning in space that has received recent
attention has been the impact of a high-radiation environment on the reliability of the learn-
ing algorithms themselves. A study of the impact of radiation-corrupted RAM on different
clustering algorithms concluded that the k-means algorithm can (somewhat surprisingly)
withstand the Earth orbit environment without requiring radiation-hardened memory, which
could lead to substantial future mission cost savings (Wagstaff and Bornstein 2009b). Sim-
ilar results were found for SVM classifiers (Wagstaff and Bornstein 2009a). The clustering
study also found that kd-k-means, a faster version of the algorithm that stores the data set
as a kd-tree in memory, was much more sensitive to radiation and would not be advisable
for onboard use—another result that runs counter to the strategies one would employ in a
desktop environment. This result led to the subsequent development of a kd-tree variant that
was restructured to increase its robustness to radiation (Gieseke et al. 2010). More work on
this subject will help enable the adoption of advanced machine learning methods onboard
spacecraft.
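The failure mode examined in these studies, a single radiation-induced bit flip in unprotected memory, is easy to illustrate. The sketch below is a toy demonstration (not the protocol of the cited experiments): it flips one bit in the IEEE-754 representation of a stored centroid coordinate and shows why the position of the upset matters, since a low-order mantissa flip is nearly harmless while an exponent flip changes the value by orders of magnitude.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 (64-bit) representation of x,
    emulating a radiation-induced upset in unprotected RAM."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

# A hypothetical centroid coordinate held in memory by k-means.
c = 3.5

# A low-order mantissa bit barely perturbs the stored value ...
low = flip_bit(c, 2)

# ... while an exponent bit (bits 52-62 in a float64) can move it
# by orders of magnitude, badly distorting subsequent assignments.
high = flip_bit(c, 55)

print(low, high)
```

An upset like `high` in a kd-tree's internal split values can silently misroute every query below that node, which is one intuition for why the tree-based variant proved so much more radiation-sensitive than plain k-means.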
Although machine learning for space is a new field, there have been several related work-
shops and conferences that provide venues for discussing new advances and opportunities.
These include:
– The Workshop on Machine Learning Technologies for Autonomous Space Applications
at the 2003 International Conference on Machine Learning. Participants identified robust
and efficient communication, verification/validation, and risk mitigation as the key topics
on which future machine learning contributions should focus. A full workshop summary
is available at http://www.lunabots.com/icml2003/summary.html.
– The International Symposium on Artificial Intelligence, Robotics and Automation in
Space Conference (i-SAIRAS). The most recent symposium (2010) covered methods for
planning and scheduling, docking and capture, robotic landing, navigation, autonomy,
telerobotics, and more.
– The Workshops on Artificial Intelligence in Space, held in 2007 and 2009 in conjunction
with the International Joint Conference on Artificial Intelligence. Topics included collab-
oration between multiple robots, onboard clustering and data analysis, decision making,
efficient scheduling, and more.
The need for close collaboration between space mission experts and machine learning re-
searchers has been recognized informally, but there has been a dearth of true meeting
grounds established for these communities to make contact. The successes cited above in
integrating artificial intelligence and machine learning methods into space missions have
come about through direct collaboration in the context of the mission in operation, often
with ML/AI researchers first volunteering to train as rover drivers or other mission opera-
tors to gain direct experience with the mission needs and constraints. We encourage machine
learning researchers to reach out and make these connections, since they are so critical to
the adoption and use of ML methods for space missions.
3 Opportunities for machine learning in space
Instruments and missions that must operate remotely stand to benefit greatly from the use
of advanced machine learning. There is a need for innovative, high-reliability, and resource-
constrained methods for the following.
– Image analysis: recognition of features to inform instrument targeting, navigation, pinpoint landing
– Time series analysis: fault detection or prediction in telemetry, anomaly detection in scientific sensors
– Classification: surface type mapping, mineral composition estimation
– Clustering: identification of trends and outliers
– Reinforcement learning: efficient exploration of new environments, identification of robotic solutions to tasks
– Ranking: prioritization or subsampling of data given limited downlink bandwidth
– Active learning: selection of new observational targets
– Abstaining or introspective learning: to enable high reliability
– Multi-instrument or multi-mission ensemble learning.
This list is not exhaustive, and other applications of machine learning to space are possible.
To have a positive impact on space applications, we need to understand how well existing
machine learning methods perform as well as what their limitations are. In order to ensure
that a method will work well in space, the following challenges must also be addressed.
– Limited processing power
– Limited memory capacity
– High-radiation environment, which can perturb operations and corrupt memory
– Long round-trip communication delays, necessitating autonomous decision making
– High cost of failure, requiring high reliability and recovery from unexpected events
– Embedded operation, requiring minimal impact to computing, memory, and other onboard resources also needed to maintain spacecraft health and communications with Earth.
The potential payoffs are considerable. Any improvement in autonomy and decision
making for remote spacecraft leads to savings in time, and therefore reductions in cost and
risk. The spacecraft can accomplish more in a shorter period of time, which is not only more
efficient but may make the difference as to whether a specific discovery can be made at all,
since most spacecraft have severely limited total lifetimes. That limit is imposed by extreme
environmental factors (radiation, dust, cold, hazards), degradation of components (due to
thermal cycling, dust, age), consumption of finite resources (e.g., fuel for attitude thrusters,
sharpness of a drill bit, reactants for chemical testing), and cost. Keeping a mission operat-
ing is a constant financial drain, and in some cases even if the hardware is still functional
the mission may be terminated due to limited funds. Increased autonomy can greatly reduce
the ongoing operational costs and may even enable unanticipated extensions in the mission
lifetime. For example, the EO-1 spacecraft was able to reduce operational costs by $1 M per
year, with a 50% increase in science return, by using the Autonomous Sciencecraft Exper-
iment to automatically plan and adaptively re-plan observations as needed (Rabideau et al.
2006).
Further, onboard advanced machine learning capabilities may enable entirely new kinds
of missions that are not currently possible. Examples could include extremely long-duration
missions that require onboard adaptation to changing sensor responses, high-risk explo-
ration of caves or other locations in which the remote agent will be entirely cut off from
Earth communications for long periods, or scaling a cliff or glacier wall for which real-time
detection and avoidance of hazards and falls is required. All such missions will require the
ability to detect anomalous sensor readings, adapt to unexpected environmental conditions,
autonomously adjust to hardware failures, and more.
Space applications research can also yield benefits for machine learning. In develop-
ing innovative methods to meet the challenges of the space environment, we will push the
boundaries of existing machine learning algorithms and gain understanding about their own
limitations and ways they can be addressed. Thinking about problems outside of the typical
desktop computing environment can lead to new advances in machine learning with severe
resource constraints or when misclassification costs are extreme.
4 Example: Onboard Object Recognition for Planetary Exploration
The following paper, “Onboard Object Recognition for Planetary Exploration,” provides an
example of machine learning research inspired by the needs of actual space missions. This
paper introduces an SVM-based technique for identifying craters that directly addresses the
limited computation and memory of radiation-hardened processors. A rich array of crater
finding methods had been developed previously, but these focused on the ground-based
analysis of archived data, using conventional desktop or cluster computers. The authors
of the paper in this issue demonstrate that straightforward SVMs cannot be run within the
memory and processing time limitations of onboard processing, and they introduce a Fast
Fourier Transform (FFT) technique that enables the SVMs to run much more efficiently.
They demonstrate that both the theoretical and empirical computational efficiency of SVMs
with the FFTs are dramatically improved over standard SVMs and over neural nets. Their
approach achieves accuracy comparable to a human labeling the craters and can enable a
remote spacecraft to quickly focus on areas of interest.
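The core trick, evaluating a learned linear template at every image location via the FFT rather than with explicit window loops, can be sketched as follows. This is a simplified illustration of the general idea (a random stand-in template and image, and a linear response only), not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
image = rng.normal(size=(64, 64))
w = rng.normal(size=(9, 9))  # stand-in for a learned linear template

# Naive evaluation: dot product of the template with every 9x9 window.
naive = np.empty((64 - 9 + 1, 64 - 9 + 1))
for i in range(naive.shape[0]):
    for j in range(naive.shape[1]):
        naive[i, j] = np.sum(image[i:i + 9, j:j + 9] * w)

# FFT-based evaluation: cross-correlation equals convolution with the
# flipped template, so one fftconvolve call replaces all window loops.
fast = fftconvolve(image, w[::-1, ::-1], mode="valid")

print(np.allclose(naive, fast))
```

The naive loop costs O(N·k²) per image (N pixel positions, k×k template), while the FFT route costs O(N log N) regardless of template size, which is what makes dense per-pixel evaluation plausible on a slow radiation-hardened processor.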
5 Call to action
This editorial has outlined the need for advanced machine learning methods for space appli-
cations. Machine learning has the potential to greatly increase these missions’ capabilities,
as well as to enable ambitious new exploration that is not currently possible. We encourage
the machine learning community to (1) actively develop new machine learning concepts and
methods that can meet the unique challenges of the space environment; (2) identify novel
space applications where machine learning can significantly increase capabilities, robust-
ness, and/or efficiency; and (3) develop appropriate evaluation and validation strategies to
establish confidence in the remote operation of these methods in a mission-critical setting.
Acknowledgements The writing of this paper was supported by the University of Oklahoma and was
carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with
the National Aeronautics and Space Administration.
References
Bajracharya, M., Maimone, M. W., & Helmick, D. (2008). Autonomy for Mars rovers: Past, present, and
future. IEEE Computer, 41, 44–50.
Bluethmann, W., Ambrose, R., Diftler, M., Huber, E., Fagg, A. H., Rosenstein, M., Platt, R., Grupen, R.,
Breazeal, C., Brooks, A., Lockerd, A., Peters, R. A., Jenkins, O. C., Mataric, M., & Bugajska, M.
(2004). Building an autonomous humanoid tool user. In Proceedings of the IEEE-RAS international
conference on humanoid robots (pp. 402–421).
Castaño, R., Mazzoni, D., Tang, N., Doggett, T., Chien, S., Greeley, R., Cichy, B., & Davies, A. (2005).
Learning classifiers for science event detection in remote sensing imagery. In Proceedings of the 8th
international symposium on artificial intelligence, robotics, and automation in space.
Castaño, R., Wagstaff, K. L., Chien, S., Stough, T. M., & Tang, B. (2007). On-board analysis of uncalibrated
data for a spacecraft at Mars. In Proceedings of the thirteenth international conference on knowledge
discovery and data mining (KDD) (pp. 922–930).
Castaño, A., Fukunaga, A., Biesiadecki, J., Neakrase, L., Whelley, P., Greeley, R., Lemmon, M., Castaño,
R., & Chien, S. (2008). Automatic detection of dust devils and clouds on Mars. Machine Vision and
Applications, 19, 467–482.
Chien, S., Sherwood, R., Tran, D., Cichy, B., Rabideau, G., Castaño, R., Davies, A., Mandl, D., Frye, S.,
Trout, B., Shulman, S., & Boyer, D. (2005). Using autonomy flight software to improve science return
on Earth Observing One. Journal of Aerospace Computing, Information, and Communication, 2, 196–216.
Estlin, T., Castaño, R., Bornstein, B., Gaines, D., Anderson, R. C., de Granville, C., Thompson, D., Burl,
M., Judd, M., & Chien, S. (2009). Automated targeting for the MER rovers. In Proceedings of the third
IEEE international conference on space mission challenges for information technology.
Gieseke, F., Moruz, G., & Vahrenhold, J. (2010). Resilient k-d trees: K-means in space revisited. In Proceed-
ings of the IEEE international conference on data mining (pp. 815–820).
Kean, S. (2010). Making smarter, savvier robots. Science, 329, 508–509.
Maimone, M., Johnson, A., Cheng, Y., Wilson, R., & Matthies, L. (2004). Autonomous navigation results
from the Mars Exploration Rover (MER) mission. In Proceedings of the 9th international symposium
on experimental robotics (pp. 3–13).
Rabideau, G., Tran, D., Chien, S., Cichy, B., Sherwood, R., Mandl, D., Frye, S., Szwaczkowski,
J., Boyer, D., & Van Gaasbeck, J. (2006). Mission operations of Earth Observing-1 with onboard auton-
omy. In Proceedings of the IEEE international conference on space mission challenges for information
technology.
Rhea, J. (2002). BAE Systems moves into third generation rad-hard processors. Military & Aerospace Electronics, 13.
Wagstaff, K. L., & Bornstein, B. (2009a). How much memory radiation protection do onboard machine
learning algorithms require? In Proceedings of the IJCAI-09/SMC-IT-09/IWPSS-09 workshop on artificial
intelligence in space.
Wagstaff, K. L., & Bornstein, B. (2009b). K-means in space: A radiation sensitivity evaluation. In Proceed-
ings of the twenty-sixth international conference on machine learning (ICML) (pp. 1097–1104).
... for the fields of image analysis, time series analysis, classification, clustering and reinforcement learning [8]. ...
... DSP Slices: FPGAs have DSP slices to implement signal processing functions. The DSP operation most commonly used is Multiply-Accumulate, or MAC operation Figure (2)(3)(4)(5)(6)(7)(8) shows the structure of the most common DSP by Xilinx, the DSP48E1, which has a 25 × 18 two's-complement multiplier, a 48-bit adder, and accumulator. Block Ram (BRAM): BRAM is a dual-port RAM module embedded throughout the FPGA fabric. ...
... limitations are used to model the design with respect to latency and resources mathematically. fpgaConvNet reconfigures the whole FPGA when data exits a subgraph, seen in Figure ( [2][3][4][5][6][7][8][9][10][11] and enters the next, this allows for the design to be split into subgraphs along its depth, each subgraph can effectively use all available resources as data is streamed through, however, constantly reconfiguring the whole FPGA adds a substantial time overhead, that is why the authors recommend that it is used for scenarios where the latency of a single input is not critical for the application and batch processing can be tolerated. ...
... In the upcoming space missions, the need of designing spacecraft with highly autonomous on-board capabilities is an emerging trend (Frost, Butt, & Silva, 2010;McGovern & Wagstaff, 2011;Shirobokov, Trofimov, & Ovchinnikov, 2021;Tipaldi & Glielmo, 2018). Such capabilities rely on the usage of tools, mainly implemented on-board the spacecraft (Eickhoff, 2011), aimed at reducing the humanin-the-loop intervention, while dealing with uncertain and complex environments (Frost et al., 2010). ...
... The potential payoffs are considerable, such as the overall improvement of spacecraft availability and reliability and the reduction of costs in ground segment operations. In addition to this, any improvement in on-board autonomy and decision making for remote spacecraft can enable entirely new kinds of missions that are not currently possible (McGovern & Wagstaff, 2011). For instance, the next generation of landers for large and small planetary bodies (e.g., Mars and small asteroids) will require more advanced and integrated Guidance, Navigation, and Control (GNC) systems, able to satisfy increasingly stringent accuracy requirements (driven by the ambition of exploring regions having the potential to yield challenging GNC functions (Silvestrini & Lavagna, 2021;Smith et al., 2021) and decision making and planning capabilities (Tipaldi & Glielmo, 2018). ...
... Space missions usually operate in challenging and changing environments with an extremely high cost of failure (McGovern & Wagstaff, 2011). Many space missions actually take place in environments with complex and time-varying dynamics that may be incompletely modeled during the mission design phase. ...
Article
This paper presents and analyzes Reinforcement Learning (RL) based approaches to solve spacecraft control problems. Different application fields are considered, e.g., guidance, navigation and control systems for spacecraft landing on celestial bodies, constellation orbital control, and maneuver planning in orbit transfers. It is discussed how RL solutions can address the emerging needs of designing spacecraft with highly autonomous on-board capabilities and implementing controllers (i.e., RL agents) robust to system uncertainties and adaptive to changing environments. For each application field, the RL framework core elements (e.g., the reward function, the RL algorithm and the environment model used for the RL agent training) are discussed with the aim of providing some guidelines in the formulation of spacecraft control problems via a RL framework. At the same time, the adoption of RL in real space projects is also analyzed. Different open points are identified and discussed, e.g., the availability of high-fidelity simulators for the RL agent training and the verification of RL-based solutions. This way, recommendations for future work are proposed with the aim of reducing the technological gap between the solutions proposed by the academic community and the needs/requirements of the space industry.
... In addition, large distances result in two obstacles: extremely long communication times between spacecraft and Earth lead to time wasted and unproductivity while messages transmit between spacecraft and Earth, and a limit on data transmitted due to bandwidth limitations at large distances leads to discarding data in order to adhere to the low bandwidth. Due to these risks and drawbacks, autonomous robotic agents such as rovers are advantageous for space exploration [2]. Additionally, Mars exploration is a data-rich field, with future missions set to collect larger and more detailed datasets than before, significantly increasing the total data available and the rate of new observations reception. ...
... The Autonomous Exploration for Gathering Increased Science system (AEGIS) is a component of the OASIS autonomous framework that provides automatic targeting for remote sensing instruments on Mars rovers and data analysis of images collected for the identification and targeting of features of interest [2], [8]. AEGIS uses the terrain feature identification of OASIS to guide the ChemCam laser of the rover, automatically selecting then vaporizing target terrain features to analyze emanated plasma. ...
... Past missions have demonstrated that using onboard autonomy to enable faster response times improves operational efficiency, optimizes costs and increases system reliability. 28 For example, by using the Autonomous Sciencecraft Experiment to automatically plan and adaptively re-plan observations as required the EO-1 spacecraft was able to cut operational costs by $1 M per year, with a 50% increase in science return 40 . 35 Anomaly Detection To enable the system to act intelligently, it should be able to recognize patterns and anomalies. ...
Conference Paper
Full-text available
This decade has seen historical worldwide involvement in the space sector. The number of commercial companies building spacecraft is at its highest, and the number of humans going to space is rapidly increasing every month. With its innate curiosity to explore our cosmos, humanity plans to venture further than ever before. Although we have been able to send rovers and other functionally smart spacecraft to the edge of our solar system, human spaceflight beyond LEO comes with added difficulties that we have yet to fully resolve. This undertaking requires a foundational understanding of the requirements for Deep Space Transportation Systems. That includes designing spacecraft that can accommodate efficient propulsion systems for long duration space travel, as well as choosing materials capable of protecting all living beings aboard the vehicle from galactic cosmic rays and solar radiation. Studying existing technologies and recognising where the complexities lie will help in the allocation of resources to tackle the most pressing of issues. Before being able to develop solutions, we must first understand existing scientific achievements that have supported deep space missions to date. This study will review technological advancements in the field, as well as critically assess which open questions are most crucial to deal with. This begins with defining the phases of interplanetary deep space mission operations, consolidating state-of-the-art protective materials for extreme environments, assessing current energy/power generation systems, and finally understanding which propulsion systems could be most efficient for this endeavour. These technologies will be summarised in a table format, to allow for ease of assessment by future researchers. This will include information about the stages of development for each technology, and whether or not it has been tested. 
This could help resurface forgotten systems that were thought to be out of bounds at the time, but could now be of great use if integrated with recent technological advancements. Finally, the outcome of this study is to define research questions in each of these areas, in order to effectively develop solutions for future deep space missions. This study is performed by the Deep Space Initiative (DSI); a non-profit company for which the goal is to increase accessibility and opportunity for space research, and its main focus is to help enable deep space exploration for all Humankind.
... Autonomous off-road navigation has improved the capabilities of the 2003 Exploration Rover on Mars, which has made it possible to overcome significantly larger terrain. [14] One of the main goals of machine vision is to create autonomous cars. The aim is for vehicles to recognize other means of transport and for driving the car to be possible without interaction with the driver. ...
Chapter
This article addresses the issue of detecting traffic signs signalling cycle routes. It is also necessary to read the number or text of the cycle route from the given image. These tags are kept under the identifier IS21 and have a defined, uniform design with text in the middle of the tag. The detection was solved using the You Look Only Once (YOLO) model, which works on the principle of a convolutional neural network. The OCR tool PythonOCR was used to read characters from tags. The success rate of IS21 tag detection is 93.4%, and the success rate of reading text from tags is equal to 85.9%. The architecture described in the article is suitable for solving the defined problem.KeywordsYOLOv5YOLOOCRObject detectionMachine learningComputer vision
... The rapid development of powerful processors as readily available commercial off-the-shelf OBCs or payloads is changing the landscape of the spacecraft computing environment and creating new opportunities for the space-segment to develop and deploy AI platforms. In a 2011 editorial, McGovern et al. issued a call to action to " (1) actively develop new machine learning concepts and methods that can meet the unique challenges of the space environment; (2) identify novel space applications where machine learning can significantly increase capabilities, robustness, and/or efficiency; and (3) develop appropriate evaluation and validation strategies to establish confidence in the remote operation of these methods in a mission-critical setting" [4]. Since then, only a few spacecraft missions have flown AI technology demonstrators all of which have restricted their experiments to model inference. ...
Conference Paper
Full-text available
OPS-SAT is a 3U CubeSat launched on December 18, 2019; it is the first nanosatellite to be directly owned and operated by the European Space Agency (ESA). The spacecraft is a flying platform that is easily accessible to European industry, institutions, and individuals for rapid prototyping, testing, and validation of their software and firmware experiments in space at no cost and with no bureaucracy. Equipped with a full set of sensors and actuators, it is conceived to break the "has never flown, will never fly" cycle. OPS-SAT has spearheaded many firsts with in-orbit applications of Artificial Intelligence (AI) for autonomous operations. AI is of rising interest for space-segment applications despite limited technology demonstrators on-board flying spacecraft. Past missions have restricted AI to inference with models trained on the ground prior to being uplinked to a spacecraft. This paper presents how the OPS-SAT Flight Control Team (FCT) breaks away from this trend with various AI solutions for in-flight autonomy. Three on-board case studies are presented: 1) image classification with Convolutional Neural Network (CNN) model inferences using TensorFlow Lite, 2) image clustering with unsupervised learning using k-means, and 3) supervised learning to train a Fault Detection, Isolation, and Recovery (FDIR) model using online machine learning algorithms. CNN inference with TensorFlow Lite running on-board the spacecraft showcases an in-space application of an industry-standard open-source solution originally developed for terrestrial edge and mobile computing. Furthermore, the solution is "openable", with an inference pipeline that can be constructed from crowdsourced trained models. This mechanism enables open innovation methods to extend on-board ML beyond its original mission requirement while stimulating knowledge transfer from established AI communities into space applications.
Further classification is achieved by re-using an open-source k-means algorithm to cluster images into groups of "cloudiness", and initial results in image segmentation (feature extraction) show promising outlooks. Results from training an FDIR model to protect the spacecraft's camera lens against direct exposure to sunlight are presented, achieving balanced accuracies ranging from 85% to 99% with models trained using the AdaGrad RDA, AROW, and NHERD online ML algorithms in multi-dimensional input spaces, with a photodiode diagnostics data stream as training data. The ability to train models in flight with data generated on-board, without human involvement, is an exciting first that stimulates a significant rethink of how future missions can be designed.
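The abstract's in-flight FDIR training uses the AROW, NHERD, and AdaGrad RDA online learners; as a rough illustration of the underlying idea (one telemetry sample at a time, mistake-driven updates, no stored training set), a minimal online perceptron can be sketched on synthetic photodiode-style data. The two channels, the danger threshold, and the perceptron itself are stand-ins for illustration, not the OPS-SAT implementation:

```python
# Minimal online linear classifier (perceptron) for a lens-protection
# style decision: "is the camera pointed too close to the Sun?"
# Synthetic stand-in for the photodiode-driven FDIR model described in
# the abstract; the real work used AROW/NHERD/AdaGrad RDA, not a perceptron.
import random

random.seed(0)

def make_sample():
    # Two hypothetical photodiode channels; label +1 = "dangerous exposure"
    # when their combined intensity crosses an (assumed) threshold.
    x = [random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)]
    y = 1 if x[0] + x[1] > 1.0 else -1
    return x, y

w, b = [0.0, 0.0], 0.0
for _ in range(5000):                          # one pass = one telemetry sample
    x, y = make_sample()
    if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:  # mistake-driven update
        w = [w[i] + y * x[i] for i in range(2)]
        b += y

# Evaluate on fresh synthetic telemetry
correct = sum(
    1 for x, y in (make_sample() for _ in range(1000))
    if (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1) == y
)
accuracy = correct / 1000
```

The appeal for flight software is that the model is a handful of floats updated in constant time per sample, with no ground-side training loop required.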
Article
Tactile and embedded sensing is a new concept that has recently appeared in the context of rovers and planetary exploration missions. Various sensors such as those measuring pressure and integrated directly on wheels have the potential to add a "sense of touch" to exploratory vehicles. We investigate the utility of deep learning (DL), from conventional Convolutional Neural Networks (CNN) to emerging geometric and topological DL, to terrain classification for planetary exploration based on a novel dataset from an experimental tactile wheel concept. The dataset includes 2D conductivity images from a pressure sensor array, which is wrapped around a rover wheel and is able to read pressure signatures of the ground beneath the wheel. Neither newer nor traditional DL tools have been previously applied to tactile sensing data. We discuss insights into advantages and limitations of these methods for the analysis of non-traditional pressure images and their potential use in planetary surface science.
Article
INTEGRAL (INTErnational Gamma-Ray Astrophysics Laboratory) is an astronomical observatory of the European Space Agency, responsible for many significant scientific discoveries in the last few decades. It has orbited Earth since 2002 in a highly elliptical orbit, passing through the Van Allen belts – areas with high-energy ionized particles that can damage the spacecraft’s on-board equipment. An essential part of mission planning and operation of INTEGRAL is thus the prediction of its radiation belt entry and exit times. We propose a novel compact representation of the data and evaluate its potential using several machine learning methods. The experimental validation identifies gradient boosted trees with quantile loss as the best performing method. By using our approach, INTEGRAL can perform 2 additional hours (on average) of scientific measurements per orbit (with adjustment for uncertainty at the 95th percentile). This approach protects INTEGRAL from damage and improves its scientific return at the same time. It can be easily extended and applied to other spacecraft with similar orbits.
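The 95th-percentile adjustment mentioned above relies on the quantile (pinball) loss used to fit gradient boosted trees. A minimal sketch of that loss (not the paper's model) shows why it yields conservative predictions: for a high quantile, under-prediction is penalized far more heavily than over-prediction, so the model errs on the safe side of a belt-entry time.

```python
# Pinball (quantile) loss: the asymmetric objective that lets a gradient
# boosted model predict, e.g., the 95th percentile of a belt-entry time
# rather than the mean.
def pinball_loss(y_true, y_pred, q):
    diff = y_true - y_pred
    return q * diff if diff >= 0 else (q - 1.0) * diff

# For q = 0.95, predicting too low (y_pred < y_true) costs 0.95 per unit,
# while predicting too high costs only 0.05 per unit:
low  = pinball_loss(10.0, 8.0, 0.95)   # under-prediction by 2 -> 1.9
high = pinball_loss(10.0, 12.0, 0.95)  # over-prediction by 2  -> 0.1
```

Minimizing this loss over training data drives the prediction toward the q-th conditional quantile rather than the conditional mean.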
Article
Full-text available
We describe four pixel-based classifiers that were developed to identify events in hyperspectral data onboard a spacecraft. One of the classifiers was developed manually by a domain expert, while the other three were developed using machine learning methods. The top two performing classifiers were uploaded to the Earth Observing-1 (EO-1) spacecraft and are now running on the satellite. Classification results are used by the Autonomous Sciencecraft Experiment of NASA's New Millennium Program on EO-1 to automatically target the spacecraft to collect follow-on imagery. This software demonstrates the potential for future deep space missions to identify short-lived science events and make decisions onboard.
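The manually built classifier the abstract mentions is, in spirit, a set of hand-set per-pixel rules over a few spectral bands. A generic sketch of such a pixel-based threshold classifier follows; the band names, the snow-like index, and all threshold values here are invented for illustration and are not the EO-1 mission's actual rules:

```python
# Sketch of a per-pixel threshold classifier in the spirit of the EO-1
# onboard classifiers: label each pixel from a few spectral bands using
# hand-set band ratios and thresholds (all values invented).
def classify_pixel(b_vis, b_nir, b_swir):
    # Normalized-difference "snow-like" index between visible and SWIR bands
    ndsi = (b_vis - b_swir) / (b_vis + b_swir + 1e-9)
    if b_nir > 0.6 and ndsi < 0.1:
        return "cloud"       # bright in NIR, not snow-like
    if ndsi > 0.4:
        return "snow/ice"
    return "land"

# Three synthetic pixels: (visible, near-infrared, shortwave-infrared)
labels = [classify_pixel(*p) for p in
          [(0.8, 0.7, 0.7), (0.9, 0.2, 0.3), (0.3, 0.4, 0.25)]]
# -> ["cloud", "snow/ice", "land"]
```

Because each pixel needs only a few comparisons, such classifiers run comfortably within the flight-processor budgets the abstract alludes to.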
Article
Full-text available
NASA's Earth Observing One Spacecraft (EO-1) has been adapted to host an advanced suite of onboard autonomy software designed to dramatically improve the quality and timeliness of science data returned from remote-sensing missions. The Autonomous Sciencecraft Experiment (ASE) enables the spacecraft to autonomously detect and respond to dynamic scientifically interesting events observed from EO-1's low Earth orbit. ASE includes software systems that perform science data analysis, mission planning, and run-time robust execution. In this article we describe the autonomy flight software, as well as innovative solutions to the challenges presented by autonomy, reliability, and limited computing resources.
Conference Paper
Full-text available
Space mission operations are extremely labor and knowledge-intensive and are driven by the ground and flight systems. Inclusion of an autonomy capability can have dramatic effects on mission operations. We describe the prior, labor and knowledge-intensive mission operations flow for the Earth Observing-1 (EO-1) spacecraft as well as the new autonomous operations as part of the Autonomous Sciencecraft Experiment (ASE).
Conference Paper
Full-text available
We propose a k-d tree variant that is resilient to a prescribed number of memory corruptions while still using only linear space. While the data structure is of independent interest, we demonstrate its use in the context of high-radiation environments. Our experimental evaluation demonstrates that the resulting approach leads to a significantly higher resiliency rate compared to previous results. This is especially the case for large-scale multi-spectral satellite data, which renders the proposed approach well-suited to operate aboard today’s satellites.
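The cited k-d tree achieves its resiliency with far more space-efficient machinery, but the underlying redundancy idea can be conveyed with a toy sketch: store each value in triplicate and recover it by majority vote, so that a single radiation-induced bit flip in any one copy is survivable. Everything below is illustrative only:

```python
# Toy illustration of tolerating memory corruption: triple-redundant
# storage with majority-vote recovery. The cited k-d tree variant uses
# much more space-efficient techniques; this only conveys the idea.
def store(value):
    return [value, value, value]          # three independent copies

def flip_bit(x, bit):
    return x ^ (1 << bit)                 # simulate a radiation-induced flip

def recover(copies):
    # Majority vote survives corruption of any single copy.
    a, b, c = copies
    if a == b or a == c:
        return a
    return b                              # either b == c, or all differ

cell = store(42)
cell[1] = flip_bit(cell[1], 3)            # corrupt one copy: 42 ^ 8 = 34
value = recover(cell)                     # majority vote still yields 42
```

Triplication costs 3x memory; the paper's contribution is precisely getting comparable resiliency within linear space.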
Conference Paper
Full-text available
Analyzing data on-board a spacecraft as it is collected enables several advanced spacecraft capabilities, such as prioritizing observations to make the best use of limited bandwidth and reacting to dynamic events as they happen. In this paper, we describe how we addressed the unique challenges associated with on-board mining of data as it is collected: uncalibrated data, noisy observations, and severe limitations on computational and memory resources. The goal of this effort, which falls into the emerging application area of spacecraft-based data mining, was to study three specific science phenomena on Mars. Following previous work that used a linear support vector machine (SVM) on-board the Earth Observing 1 (EO-1) spacecraft, we developed three data mining techniques for use on-board the Mars Odyssey spacecraft. These methods range from simple thresholding to state-of-the-art reduced-set SVM technology. We tested these algorithms on archived data in a flight software testbed. We also describe a significant, serendipitous science discovery of this data mining effort: the confirmation of a water ice annulus around the north polar cap of Mars. We conclude with a discussion on lessons learned in developing algorithms for use on-board a spacecraft.
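Part of why linear (and reduced-set) SVMs suit onboard use is that inference collapses to a single dot product plus a threshold, with all training done on the ground. A sketch of that decision function follows; the weights, bias, and the "event vs. no event" labels are invented for illustration, not any mission's actual model:

```python
# Why a linear SVM fits flight hardware: classification is one dot
# product plus a sign test, with no training machinery on-board.
# Weights and bias here are invented; in practice they would be trained
# on the ground and uplinked to the spacecraft.
def svm_decide(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1        # e.g., "event" vs "no event"

w = [0.9, -0.4, 0.2]                      # uplinked model parameters
b = -0.1
pred = svm_decide(w, b, [0.8, 0.1, 0.3])  # three observed features
```

A reduced-set SVM extends the same idea to nonlinear kernels by approximating the decision function with a small, fixed number of basis vectors, keeping the per-pixel cost bounded.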
Conference Paper
The Autonomous Exploration for Gathering Increased Science System (AEGIS) will soon provide automated targeting for remote sensing instruments on the Mars Exploration Rover (MER) mission, which currently has two rovers exploring the surface of Mars. Targets for rover remote-sensing instruments, especially narrow field-of-view instruments (such as the MER Mini-TES spectrometer or the 2011 Mars Science Laboratory (MSL) Mission ChemCam Spectrometer), are typically selected manually based on imagery already on the ground with the operations team. AEGIS enables the rover flight software to analyze imagery onboard in order to autonomously select and sequence targeted remote-sensing observations in an opportunistic fashion. In this paper, we first provide background information on the larger autonomous science framework in which AEGIS was developed. We then describe how AEGIS was specifically developed and tested on the JPL FIDO rover. Finally we discuss how AEGIS will be uploaded and used on the Mars Exploration Rover (MER) mission in early 2009.
Article
Onboard autonomy is necessary for achieving the goals of future space missions, given the communication delays imposed by the extreme distances involved (e.g., to Saturn). The high-radiation space environment can have severe negative effects on unprotected onboard computation, for example by flipping bits in memory. Radiation-hardened components that protect against the majority of these errors exist, but whether such extreme protection is needed is an open question. We developed a method for simulating radiation-induced bit flips and quantitatively assessed the sensitivity of clustering and classification algorithms likely to be used onboard spacecraft. We found that, for small data sets in a low-Earth orbit radiation environment, commercial RAM would suffice; no radiation-hardening of the memory is needed. We also found that simpler algorithms (regular k-means clustering, linear support vector machines) have less sensitivity (more tolerance) than more sophisticated versions (kd-k-means, Gaussian support vector machines). The development of algorithms with even less sensitivity to radiation is an open area of research.
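The core of such a sensitivity study is injecting single-bit flips into an algorithm's in-memory state. A minimal sketch of flipping one bit of an IEEE 754 double via its integer representation is below (the paper's actual simulator and injection model are more elaborate); it also shows why sensitivity depends on which bit flips:

```python
# Minimal sketch of simulating a radiation-induced bit flip in a 64-bit
# float by reinterpreting it as an integer, flipping one bit, and
# converting back. Illustrative only; the cited study's simulator is
# more elaborate.
import struct

def flip_float_bit(x, bit):
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))[0]

x = 1.0
low  = flip_float_bit(x, 0)    # lowest mantissa bit: negligible change
high = flip_float_bit(x, 55)   # an exponent bit: value shrinks to 2**-8
```

A flip in the low mantissa bits perturbs a stored feature or cluster centroid imperceptibly, while a flip in the exponent (or sign) can change it by orders of magnitude, which is one reason different algorithms show very different tolerance.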
Article
The Autonomous Exploration for Gathering Increased Science System (AEGIS) will soon provide automated targeting for remote sensing instruments on the Mars Exploration Rover (MER) mission, which currently has two rovers exploring the surface of Mars. Targets for rover remote-sensing instruments, especially narrow field-of-view instruments (such as the MER Mini-TES spectrometer or the 2011 Mars Science Laboratory (MSL) Mission ChemCam Spectrometer), are typically selected manually based on imagery already on the ground with the operations team. AEGIS enables the rover flight software to analyze imagery onboard in order to autonomously select and sequence targeted remote-sensing observations in an opportunistic fashion. In this paper, we first provide background information on the larger autonomous science framework in which AEGIS was developed. We then describe how AEGIS was specifically developed and tested on the JPL FIDO rover. Finally we discuss how AEGIS will be uploaded and used on the Mars Exploration Rover (MER) mission in mid 2009.
Article
The vehicles used to explore the Martian surface require a high degree of autonomy to navigate challenging and unknown terrain, investigate targets, and detect scientific events. Increased autonomy will be critical to the success of future missions. In July 1997, as part of NASA's Mars Pathfinder mission, the Sojourner rover became the first spacecraft to autonomously drive on another planet. The twin Mars Exploration Rovers (MER) vehicles landed in January 2004, and after four years Spirit had driven more than four miles and Opportunity more than seven miles, lasting well past their projected three-month lifetime and expected distances traveled. The newest member of the Mars rover family will have the ability to autonomously approach and inspect a target and automatically detect interesting scientific events. In fall 2009, NASA plans to launch the Mars Science Laboratory (MSL) rover, with a primary mission of two years of surface exploration and the ability to acquire and process rock samples. In the near future, the Mars Sample Return (MSR) mission, a cooperative project of NASA and the European Space Agency, will likely use a lightweight rover to drive out and collect samples and bring them back to an Earth return vehicle. This rover will use an unprecedented level of autonomy because of the limited lifetime of a return rocket on the Martian surface and the desire to obtain samples from distant crater walls.