Situation Awareness in Autonomous Service Robots
Verena Nitsch1
Keywords: Cognitive Modeling, Situation Awareness, Autonomous Robots
Abstract
Despite the increasing demand for service robots, considerable challenges re-
main to be solved before these robots can be ubiquitously employed in personal
households. To date, autonomously operating robots lack the flexibility to react appropriately in unforeseen situations. At the Human Factors Institute of
the Universität der Bundeswehr München, a multi-disciplinary research group of
psychologists, engineers and computer scientists aims to ascertain to what extent cognitive models of situation awareness may enhance a robot's ability to display adaptive behaviour in unforeseen situations.
Introduction
In 2011, about 2.5 million robots were sold for personal and domestic use
(International Federation of Robotics, 2012). The International Federation of
Robotics projects sales of over 15 million units for the period 2012 to 2015 with
experts indicating that service robots have an innovation and market potential
similar to that currently presented by industrial robots (Decker et al., 2011). To date, however, autonomously operating robots lack the flexibility to react appropriately in unforeseen situations, as important perceptual and decision-making structures are still designed by programmers before the robot begins its task. This
leads to fragmented abilities and brittle performance, in particular in unstruc-
tured and dynamic environments, which are prevalent in domestic contexts
(Benjamin, Monaco, Lin, Funk, & Lyons, 2012). Hence, perhaps the greatest
challenge in engineering autonomously operating robotic systems remains instilling sufficient flexibility into the robot's decision-making processes to let it adapt to the demands of a dynamically changing environment at any given moment, whilst still pursuing a particular operative goal (as opposed to purely reflexive systems). This line of research can be subsumed under the label of cognitive robotics.
In an effort to tackle this problem of limited adaptability in robots, several approaches have surfaced in the different scientific disciplines that are currently
active in the field of cognitive robotics. One school of thought, which derived
from the computer sciences, focuses primarily on the effective harnessing of
computational processing capabilities and the development of mathematical al-
gorithms to create learning and hence adaptable systems. Within the paradigm of
1 Universität der Bundeswehr München, Institut für Arbeitswissenschaft, Werner-Heisenberg-Weg 39, 85577
Neubiberg; Email: verena.nitsch@unibw.de
machine learning, numerous attempts were made to develop heuristics based on
statistical properties of sensor signals and reinforcement learning in an effort to reduce the processing time and computational power requirements that would otherwise lead to slow response times and likely ineffective behaviour in dynamically changing
situations (e.g. Bagnell, Bradley, Silver, Sofman, & Stentz, 2010; Peters,
Morimoto, & Tedrake, 2009; Bratko, 2010). Although considerable progress has
been made in this domain in recent years, real-time learning and in particular
adaptive behavior of robots in unforeseen situations remain a great challenge in
robotics (Benjamin, Monaco, Lin, Funk, & Lyons, 2012). A different approach
may be sought to complement these efforts. Cognitive scientists have tackled the
issue of system adaptability by investigating human cognitive abilities that lead
to adaptive behavior in such situations. Based on decades of experimental psy-
chological work, numerous models have been proposed in this domain that spec-
ify human cognitive processes in dynamic situations (e.g. Baumann & Krems,
2009; Gonzalez & Dutt, 2007; Durso & Sethumadhavan, 2008). As yet, how-
ever, the evaluation of these cognitive models remains chiefly theoretical work;
to our knowledge, a truly cognitive architecture has not yet been implemented in
a robotic system (Benjamin, Monaco, Lin, Funk, & Lyons, 2012). Consequently,
it is not sufficiently clear to what extent relevant models of human cognitive processes may be applicable to autonomously operating robots and improve the performance of such systems. Systematic investigations of the issues that would surface when implementing such models in robots are hence urgently required.
Situation awareness as a pivotal cognitive process in dynamic environments
Cognitive researchers have suggested that humans' decision-making ability in dynamic situations rests on their situation awareness (SA), a psychological construct which has withstood the scrutiny of several decades of empirical research. A commonly accepted definition of situation awareness has been proposed by Endsley (1995, p. 36): "Situation awareness is the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future." Endsley further specifies that situation awareness arises in three consecutive stages, which are contingent upon the successful interplay of three key cognitive components: attention, working memory (WM) and long-term memory (LTM) (see Figure 1).
The first step in achieving SA is the perception of status, attributes, and dy-
namics of goal-relevant stimuli in the environment. These stimuli are only perceived (a) if their physical characteristics capture the human's attention (bottom-up processing) and/or (b) if they are attended to and determined to be relevant according to operative goals stored in WM and LTM (top-down processing).
Figure 1. Schematic depiction of the cognitive processes involved in attaining situation awareness.
Adapted from Endsley (1995).
In a second step towards achieving SA, perceived stimuli are synthesized and
associated with operative goals in the WM, which have been retrieved from a
larger number of goals stored in LTM. In this process, WM allows attention deployment to be modified on the basis of other perceived information or a change in active goals (Braune & Trollip, 1982). The goal-dependent synthesis of infor-
mation stored in the WM leads to a comprehension of the current situation and
the stimuli’s relative significance in terms of achieving the goals.
In the third and final step towards achieving SA, future states of the elements
in the environment may be projected based on the comprehension of the situa-
tion, leading to a timely and effective decision. For this purpose, a situation
model which was created in the previous steps is stored in LTM and matched to
schemata in memory that depict prototypical situations or states of the system
model, which may be linked with goals that dictate further decision-making and
actions. Hence, SA equips the human agent with the ability to react appropriately in dynamic systems. In fact, numerous studies have found a link between SA and effective performance (e.g. Gaba, Howard, & Small, 1995; Ma & Kaber, 2007;
Gugerty & Tirre, 1996; Golightly, Wilson, Lowe, & Sharples, 2010).
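To make the three-level structure more concrete, the following minimal sketch (in Python, purely illustrative and not part of the project) represents perception, comprehension and projection as successive transformations of a situation model; all function names and the example stimuli are hypothetical.

```python
# Illustrative sketch only: the three SA levels as successive transformations.
# All names (perceive, comprehend, project, the example stimuli) are hypothetical.

def perceive(raw_stimuli, active_goals):
    """Level 1: retain stimuli that are salient (bottom-up) or goal-relevant (top-down)."""
    return {name: props for name, props in raw_stimuli.items()
            if props.get("salient") or name in active_goals}

def comprehend(percepts, active_goals):
    """Level 2: rate each percept's significance with respect to the active goals."""
    return {name: ("goal-relevant" if name in active_goals else "background")
            for name in percepts}

def project(comprehension, schemata):
    """Level 3: match the comprehended situation to stored schemata (LTM)
    to anticipate near-future states."""
    return {name: schemata.get((name, meaning), "no prediction")
            for name, meaning in comprehension.items()}

# Hypothetical example: a cup on the edge of a table.
stimuli = {"cup": {"salient": True}, "wall": {"salient": False}}
goals = {"cup"}
percepts = perceive(stimuli, goals)
meanings = comprehend(percepts, goals)
predictions = project(meanings, {("cup", "goal-relevant"): "cup may fall off the table"})
print(predictions)  # {'cup': 'cup may fall off the table'}
```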
Project SAAROS: Situation Awareness in Autonomous Service
Robots
The aim of the proposed research program is to make a substantial theoretical
and practical contribution towards improving the decision-making flexibility and
hence adaptability of autonomously operating service robots using a distinctly
interdisciplinary approach. To this end, a three-pronged approach is taken. First-
ly, empirical data are gathered that reflect human processing of sensory input in
the development of situation awareness, a cognitive process that was identified
as pivotal to achieving adaptive behavior in dynamic situations. Secondly, a
computational cognitive model which reflects these mechanisms is developed,
using ACT-R (Anderson & Lebiere, 1998), a cognitive architecture which was identified as featuring all nec-
essary properties for the simulation of situation awareness (empirically support-
ed cognitive plausibility, appropriate learning mechanisms). Thirdly, the cogni-
tive model is implemented in an autonomously operating NAO Next Gen robot
(by Aldebaran) and evaluated for several scenarios that are relevant to service
robotics and consequently adapted in order to reflect human-like adaptive be-
havior in dynamic environments. In the process of validating the developed
model, systemic as well as structural influences which impact situation aware-
ness in robotic systems are to be empirically uncovered and systematically doc-
umented. The following section details the work program of Project SAAROS.
Step 1: Identification of testing parameters
First, two benchmark scenarios and appropriate experimental setups within the
domain of service robotics are defined, which serve to validate a computational
SA model with experimental data collected from human participants and which
may be juxtaposed with data provided by a cognitively equipped NAO robot in
later stages of the project. Specifically, the first scenario is required for the de-
velopment and validation of the SA model in a particular task domain of service
robots (Step 6). A second scenario is devised at this stage, which will serve to
evaluate the extent to which the developed model generalizes to other task do-
mains within the field of service robotics (Step 7). For both scenarios, performance goals are specified that define the target level of functionality for the robot once it is equipped with a validated cognitive model of SA.
Step 2: Formulation of an ACT-R model of SA
A cognitively plausible ACT-R model of SA is formulated for the chosen con-
text. In order to formulate an SA model within a cognitive architecture, the in-
vestigated scenario first needs to be deconstructed to its lowest operative level.
For this purpose, the scenario is analyzed using goal-directed task analysis
(GDTA), as proposed by Endsley (1993). GDTA is a specific
form of cognitive task analysis that focuses on identifying the goals and critical
information needs for a particular task context. In essence, the product of this
analysis is a hierarchy that specifically outlines the processes in which basic data
needed by the user are integrated into higher SA levels of comprehension and
projection. Second, based on the task analysis, an ACT-R model of SA is formu-
lated.
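As an illustration of what the GDTA output might look like before it is translated into ACT-R productions, the sketch below encodes a goal hierarchy with its decisions and level-specific SA requirements. The domestic fetch-and-deliver scenario and all entries are hypothetical placeholders, not the project's actual benchmark scenario.

```python
# Hypothetical encoding of a GDTA result prior to ACT-R model formulation.
gdta_hierarchy = {
    "goal": "Deliver requested object to user",
    "subgoals": [
        {
            "goal": "Locate requested object",
            "decisions": ["Which room to search next?"],
            "sa_requirements": {
                "level1_perception":    ["object colour/shape", "obstacles", "own position"],
                "level2_comprehension": ["is the perceived object the requested one?"],
                "level3_projection":    ["where will the user be once the object is found?"],
            },
        },
        {
            "goal": "Navigate to user",
            "decisions": ["Which path avoids dynamic obstacles?"],
            "sa_requirements": {
                "level1_perception":    ["user position", "moving obstacles"],
                "level2_comprehension": ["is the planned path still free?"],
                "level3_projection":    ["predicted obstacle positions along the path"],
            },
        },
    ],
}
```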
Step 3: Acquisition of experimental data & model validation against human data
Following the GDTA (Step 2), the model validation will occur in two stages, which are iteratively repeated until the model is considered to reflect the human cognitive processes involved in SA satisfactorily. In the first stage (a), the model's predictions regarding the achievement and disruption of SA in humans are tested. This includes
experimental analysis with an opportunity sample of naïve participants. To
measure the different levels of SA, a combination of tools will be used. Partici-
pants’ visual focus of attention can be tracked using eye-tracking and motion
tracking. The SA levels of comprehension and projection are assessed with two measurement techniques that have proven to be reliable and valid in numerous applications: SAGAT (Situation Awareness Global Assessment Technique) and SPAM (Situation Present Assessment Measure). With SAGAT (Endsley, 1995), participants are intermittently queried during a dynamic simulation about the values of various parameters whilst being deprived of any feedback from the scene, so that they must rely on working memory to answer the questions. SPAM (Durso, Rawson, & Girotto, 2007) assesses the speed of accessing information while the scenario continues, so that the participant can seek the needed information rather than relying on memory; the SA measure in this case is the time to respond. Statistical analyses will be conducted with quantitative measures in order to ascertain their degree of generalizability.
In the second stage (b), the formulated predictions are tested with the ACT-R model. Functional-
ly equivalent measures to those used with humans (eye-tracking, SPAM,
SAGAT) can be applied to the measurement of SA in the ACT-R model: Visual
focus of attention can be tracked and recorded during the programme’s run-time.
In addition, response times, various task performance indices and buffer content (i.e. working memory content) can be assessed, and behavioural trends can be discerned from the ACT-R simulations. Upon conclusion of the
analysis of the model’s results, the ACT-R mechanisms may need to be adjusted
and the hypotheses redefined. Hence, the testing process continues with stage (a)
until the model achieves satisfactory performance (defined as overall model fit
in performance measures of R=.7 or higher and comparable behavioural tenden-
cies).
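The fit criterion stated above (overall model fit of R = .7 or higher) could be checked with a simple correlation between paired human and model measures. The sketch below assumes such paired data are available and uses only the Python standard library; the example response times are invented.

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

def model_fit_sufficient(human_scores, model_scores, threshold=0.7):
    """Check whether the correlation between human and ACT-R performance
    measures reaches the fit criterion stated in the text (R >= .7)."""
    r = correlation(human_scores, model_scores)
    return r >= threshold, r

# Invented example: SPAM response times (in seconds) for six probes.
human = [2.1, 3.4, 1.8, 2.9, 4.0, 2.5]
model = [2.3, 3.1, 2.0, 3.2, 3.8, 2.4]
ok, r = model_fit_sufficient(human, model)
print(f"model fit R = {r:.2f}, criterion met: {ok}")
```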
Step 4: Development of NAO/ACT-R Interface
Development work is required to equip an autonomously operating NAO robot with an ACT-R model of SA. The higher-level controls of
NAO are realized by an embedded PC board in the robot’s head, which runs on
a Linux operating system. Aldebaran provides an SDK called NAOqi, which offers low- and high-level interfaces to the hardware and supports different programming languages, including C/C++ and Python. ACT-R,
on the other hand, is composed of a set of functions and algorithms implemented
in Common Lisp, which is not supported by NAOqi. Hence, an interface will be devised that enables communication between the robot's controls and the perceptual-motor modules provided by ACT-R, for example by means of CLPython, an open-source implementation of Python written in Common Lisp.
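A possible shape for the robot-side half of such an interface is sketched below in Python, assuming the standard NAOqi SDK (ALProxy). The bridge class, the symbolic command names and the dispatch scheme are hypothetical; only the ALMotion and ALTextToSpeech proxies with their setAngles/say calls belong to the actual NAOqi API.

```python
# Sketch of a robot-side command bridge (hypothetical class and command names).
from naoqi import ALProxy  # standard NAOqi Python SDK


class NaoActrBridge:
    """Receives symbolic motor commands (e.g. issued by the ACT-R motor module
    via CLPython) and forwards them to the corresponding NAOqi proxies."""

    def __init__(self, robot_ip, port=9559):
        self.motion = ALProxy("ALMotion", robot_ip, port)
        self.tts = ALProxy("ALTextToSpeech", robot_ip, port)

    def execute(self, command, **params):
        """Dispatch one symbolic ACT-R command to a concrete NAOqi call."""
        if command == "turn-head":
            # setAngles(names, angles, fractionMaxSpeed)
            self.motion.setAngles("HeadYaw", params.get("yaw", 0.0), 0.2)
        elif command == "say":
            self.tts.say(params.get("text", ""))
        else:
            raise ValueError("no operative translation defined for %r" % command)


# Hypothetical usage once the bridge runs on the robot's embedded PC:
# bridge = NaoActrBridge("192.168.1.10")
# bridge.execute("turn-head", yaw=0.5)
```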
Step 5: Operationalisation of symbolic language into operative terms
With respect to the examined scenario and the results of the GDTA, operative
terms need to be defined which translate the symbolic output of ACT-R into ap-
propriate actions of the NAO robot and vice versa. This work package hence
comprises two stages: Based on the GDTA (Step 2), a database is established
which contains detailed instructions for the robot for each symbolism specified
in the ACT-R SA model. For example, if ACT-R determines during its run-time
that the robot should “scan for next visual cue”, a corresponding translation is
defined for the robot’s control and sensory processors, e.g. specifying the de-
grees of rotation of the robot’s head, the area in which it scans and a choice of
target stimuli. The second step involves the implementation of an interface link-
ing the robot’s API with an external PC on which the database is implemented.
The decision to keep this database external was made to maintain fast run-times while preserving cognitive plausibility by treating these translations as subconscious processes: humans do not, for example, consciously deliberate the degree to which they rotate their heads when they want to focus on an object of interest.
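Such an externalised database could, for instance, take the form of a simple lookup table mapping ACT-R symbols to operative parameters, as in the hypothetical sketch below. Only the "scan for next visual cue" example comes from the text; all parameter values and symbol names are invented for illustration.

```python
import math

# Hypothetical symbol-to-action database; all parameter values are invented.
ACTION_DATABASE = {
    "scan-for-next-visual-cue": {
        "head_yaw_range_rad": (-math.radians(60), math.radians(60)),  # head rotation range
        "scan_step_rad": math.radians(15),                            # scanned-area resolution
        "target_stimuli": ["red-cup", "blue-ball"],                   # choice of target stimuli
    },
    "approach-object": {
        "max_speed_fraction": 0.3,
        "stop_distance_m": 0.25,
    },
}


def translate(symbol):
    """Translate an ACT-R symbol into operative parameters for the robot's
    control and sensory processors."""
    try:
        return ACTION_DATABASE[symbol]
    except KeyError:
        raise KeyError("no operative translation stored for symbol %r" % symbol)


# Example: the ACT-R model requests "scan for next visual cue" at run-time.
print(translate("scan-for-next-visual-cue")["target_stimuli"])
```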
Step 6: Validation of ACT-R model in the NAO robot
Once the ACT-R model of SA developed in Step 2 has been validated against human data and the model's symbolic language has been translated into
operative terms for the robot, the model can be validated in the NAO robot. For
this purpose, the hypotheses that were established during the model validation
process (Step 3) are investigated with the NAO robot using comparable
measures of SA to those described in Step 3. The model is considered valid for
the presented scenario if it achieves satisfactory performance (defined as overall
model fit in performance measures of R=.7 or higher and the display of compa-
rable behavioural tendencies). If the model does not display satisfactory validity,
possible causes are identified and the model and/or physical setup may be ad-
justed accordingly.
Step 7: Assessment of the model’s ability to generalize to other task domains
In a final step, the extent to which the implemented model is task-specific is
evaluated. For this purpose, the embodied ACT-R model is tested in the second test setup defined in Step 1, which requires the same basic skill set but different reactions than the first scenario. From this final evaluation, systemic
parameters may be delineated that can be identified as affecting SA in autono-
mously operating robots.
Conclusion
The multi-disciplinary project SAAROS at the Universität der Bundeswehr
München can make contributions to three different domains: the empirical re-
search on situation awareness in humans, the computational modeling of situa-
tion awareness using a cognitive architecture, and the embodiment of cognitive
architectures in a robotic platform. Further investigations are planned to system-
atically assess the extent to which human cognitive processes may be applied to
autonomously operating robots in order to produce adaptive behaviour in un-
foreseen situations.
References
International Federation of Robotics. (2012). Retrieved October 12, 2012, from http://www.ifr.org/service-robots/
Anderson, J. R., & Lebiere, C. (1998). The Atomic Components of Thought.
Lawrence Erlbaum Associates.
Bagnell, J., Bradley, D., Silver, D., Sofman, B., & Stentz, A. (2010). Learning
for autonomous navigation. IEEE Robotics & Automation Magazine,
17(2), pp. 74-84.
Baumann, M. R., & Krems, J. F. (2009). A comprehension based cognitive
model of situation awareness. Digital Human Modeling, (pp. 192-201).
Benjamin, D., Monaco, J. V., Lin, Y., Funk, C., & Lyons, D. (2012). Using a virtual world for robot planning. SPIE Defense, Security, and Sensing. doi: 10.1117/12.923446
Bratko, I. (2010). Comparison of machine learning for autonomous robot
discovery. Advances in Machine Learning I, 262/2010, pp. 441-456.
Braune, R., & Trollip, S. (1982). Towards an internal model in pilot training.
Aviation, Space and Environmental Medicine(53), pp. 996-999.
Decker, M., Dillmann, R., Dreier, T., Fischer, M., Gutmann, M., Ott, I., &
Döhmann, I. (2011). Service robotics: do you know your new companion?
Framing an interdisciplinary technology assessment. Poiesis & Praxis.
International Journal of Ethics of Science and Technology Assessment, 8,
pp. 25-44.
Durso, F. T., & Sethumadhavan, A. (2008). Situation awareness: understanding
dynamic environments. Human Factors: The Journal of the Human
Factors and Ergonomics Society, 50(3), pp. 442-448.
Durso, F., Rawson, K., & Girotto, S. (2007). Comprehension and situation
awareness. In F. Durso, R. Nickerson, S. Dumais, S. Lewandowsky, & T.
Perfect, Handbook of applied cognition (2. ed., pp. 163-194). New York:
Wiley.
Endsley, M. (1995). Measurement of situation awareness in dynamic systems. Human Factors, 37(1), pp. 65-84.
Endsley, M. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), pp. 32-64.
Endsley, M. R. (1993). Situation awareness and workload: flip sides of the same
coin. Proceedings of the Seventh International Symposium on Aviation
Psychology, pp. 906-911.
Gaba, D. M., Howard, S. K., & Small, S. D. (1995). Situation awareness in
anesthesiology. Human Factors: The Journal of the Human Factors and
Ergonomics Society, 37(1), pp. 20-31.
Golightly, D., Wilson, J. R., Lowe, E., & Sharples, S. (2010). The role of
situation awareness for understanding signalling and control in rail
operations. Theoretical Issues in Ergonomics Science, 1, pp. 84-98.
Gonzalez, C., & Dutt, V. (2007). Learning to control a dynamic task: a system
dynamics cognitive model of the slope effect. Proceedings of the 8th
International Conference on Cognitive Modeling ICCM, (pp. 61-66).
Gonzalez, C., & Dutt, V. (2010). Instance-based learning models of training.
Proceedings of the Human Factors and Ergonomics Society Annual
Meeting, (pp. 2319-2323).
Gonzalez, C., Lerch, J. F., & Lebiere, C. (2003). Instance-based learning in
dynamic decision making. Cognitive Science, pp. 591-635.
Gugerty, L. J., & Tirre, W. C. (1996). Situation awareness: a validation study
and investigation of individual differences. Proceedings of the Human
Factors and Ergonomics Society Annual Meeting, (pp. 564-568).
Ma, R., & Kaber, D. B. (2007). Situation awareness and driving performance in a simulated navigation task. Ergonomics, 50(8), pp. 1351-1364.
Peters, J., Morimoto, J., & Tedrake, R. (2009). Robot Learning. IEEE Robotics
& Automation Magazine, 16(3), pp. 19-20.
Sofge, D., Trafton, J. G., Cassimatis, N., Perzanowski, D., Bugajska, M.,
Adams, W., & Schultz, A. (2004). Human-robot collaboration and
cognition with an autonomous mobile robot. Proceedings of the 8th
Conference on Intelligent Autonomous Systems (IAS-8) (pp. 80-87). IOS
Press.