Human - Robot Complementarity
Learning each other and collaborating
MARIA DAGIOGLOU, National Center for Scientific Research ‘Demokritos’, Greece
STASINOS KONSTANTOPOULOS, National Center for Scientific Research ‘Demokritos’, Greece
As robot capabilities increase, controlling and manipulating robots becomes increasingly complex and cumbersome, making intuitive Human-Robot Interaction all the more necessary for seamless human-robot collaboration. In this paper, we look into the ability of collaborators to understand each other’s intentions and act accordingly in order to promote the collaboration. We focus on scenarios in which human intentions are communicated through movement. In order to endow robots with an understanding of human intentions, as well as with robot behaviours that humans interpret correctly, we need to look at the mechanisms humans recruit to perceive and communicate intentions. We then need to distil the essence of these mechanisms so that they can be applied to completely non-anthropomorphic robot collaborators.
Additional Key Words and Phrases: Human-Robot Collaboration, Human intentions, Robot intentions, Assistive teleoperation
1 MOTIVATION
Intuitive Human-Robot Interaction is necessary to allow human-robot collaboration in a seamless way. As the capabilities of robots increase, controlling and manipulating them becomes more complex and cumbersome. This is further stressed in cases where the user is impaired and must be relieved from extreme physical or cognitive load.
Essential for achieving intuitive and seamless human-robot collaboration is the ability of both human and robot to understand each other’s intentions and act accordingly in order to promote the collaboration. In this paper we present our research plan for developing methods that will endow robots with this understanding, allow robots to behave in ways that humans interpret correctly, and enable shared control between a robot and a user to achieve a goal.
We are interested in applications where the medium of interaction is human movement, which guides the robot’s
actions either explicitly (e.g., assistive teleoperation) or implicitly (e.g., shared-workspace collaboration). To guarantee
seamless human-robot collaboration three high-level requirements must be satisfied:
(1) Communicate to humans the intentions of robots
(2) Communicate to robots the intentions of humans
(3) Shared control that weighs the intentions of both sides, decides on the appropriate action, and does not disrupt human or robot behaviour.
To that end, it is of great interest to explore how a human and a robot can collaborate to learn a task in the first place. In other words, how do we prompt human motor learning (and thus performance) by involving the robot in the process, and how do we manipulate robot learning to take advantage of human expertise? Such co-training gives the human and the robot an understanding of each other’s intentions and behaviour that would be extremely difficult to communicate otherwise.

The corresponding author: mdagiogl@iit.demokritos.gr
The research described in this paper has been carried out in the context of Roboskel, the robotics activity of the Software and Knowledge Engineering Lab, Institute of Informatics and Telecommunications, NCSR “Demokritos”. Please see http://roboskel.iit.demokritos.gr for more details.
2 BACKGROUND
2.1 Communicating robot intentions to humans
Humans could understand a robot’s intention by predicting its behaviour or, in the case of anthropomorphic robots, even by using their own motor system to understand intention. While interacting with a robot, a human is expected to build an internal representation of the robot’s movement behaviour, in a similar way to how the brain retains internal models of its own body (Wolpert and Ghahramani 2000), external objects (Cerminara et al. 2009), and the world in general (Berkes et al. 2011). These internal representations allow prediction of one’s own movements, movements of external objects, etc.
An interesting observation is that the control of anthropomorphic and non-anthropomorphic robots differs, possibly because (in the case of anthropomorphic robots) the user controls the robot as if it were their own body (Oztop et al. 2015). It seems that in this scenario humans could recruit processes similar to those used in social interactions. In social interactions humans can predict other people’s actions and intentions by observing, among other things, their movements (Frith and Frith 2006). Wolpert et al. (2003) have suggested that we use our motor system to understand actions and that this is an efficient mechanism for making inferences in social interactions. Internal modelling processes have been proposed to be linked with mirror neurons during cognitive processes, such as human interactions and understanding the intentions of others (Miall 2003).
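To make the internal-model idea concrete on the robot side, one minimal sketch is a forward model that predicts the next state from the current state and the issued command, and flags large prediction errors as a mismatch between expectation and observation. The sketch below is purely illustrative and not taken from the cited work; the unicycle kinematics, control period, and error threshold are all assumptions.

```python
import numpy as np

class ForwardModel:
    """Toy forward model: predicts the next planar pose (x, y, heading)
    of a differential-drive base from the current pose and a velocity command."""

    def __init__(self, dt=0.1, error_threshold=0.05):
        self.dt = dt                            # control period in seconds (assumed)
        self.error_threshold = error_threshold  # metres; assumed 'surprise' threshold

    def predict(self, pose, command):
        x, y, theta = pose
        v, omega = command                      # linear and angular velocity
        # Simple unicycle kinematics over one control period.
        x_next = x + v * np.cos(theta) * self.dt
        y_next = y + v * np.sin(theta) * self.dt
        theta_next = theta + omega * self.dt
        return np.array([x_next, y_next, theta_next])

    def surprise(self, predicted_pose, observed_pose):
        """Prediction error; a large value signals that the internal model
        no longer matches what the body (or robot) is actually doing."""
        return np.linalg.norm(predicted_pose[:2] - observed_pose[:2])


model = ForwardModel()
pred = model.predict(pose=(0.0, 0.0, 0.0), command=(0.5, 0.1))
print(model.surprise(pred, np.array([0.04, 0.01, 0.01])) > model.error_threshold)
```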
2.2 Communicating human intentions to robots
Translating human intention into appropriate signals for the robot is perhaps an even more complicated process. Research on communicating intention in physical interactions (Strabala et al. 2012) could be used as a starting point for reading human intention in human-robot interactions. In certain applications, such as robotic assistive therapy, movement intention could be provided by a brain-computer interface (Wairagkar et al. 2016).
We are mostly interested in exploring how human intention can be modelled in scenarios where the human is controlling the robot via a medium like a joystick. To the best of our knowledge there are no recent papers discussing this issue. In an older study (Horiguchi et al. 2000), the authors explored naturalistic human-robot collaboration based upon mixed-initiative interactions in a teleoperating environment.
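One plausible way to model operator intention from a low-dimensional input such as a joystick is recursive Bayesian inference over a set of candidate goals: each joystick sample updates the probability of every goal according to how well it explains motion towards that goal. The sketch below is only an illustration of this general idea, not a method from the cited papers; the goal set, the alignment-based likelihood, and the rationality constant beta are assumptions.

```python
import numpy as np

def update_goal_beliefs(beliefs, position, velocity, goals, beta=5.0):
    """One Bayesian update of P(goal | joystick motion).

    beliefs  : dict goal_name -> prior probability
    position : current 2D position of the controlled robot
    velocity : 2D velocity implied by the current joystick deflection
    goals    : dict goal_name -> 2D goal position
    beta     : assumed 'rationality' constant; higher = sharper inference
    """
    position = np.asarray(position, dtype=float)
    velocity = np.asarray(velocity, dtype=float)
    speed = np.linalg.norm(velocity)
    posterior = {}
    for name, goal in goals.items():
        to_goal = np.asarray(goal, dtype=float) - position
        dist = np.linalg.norm(to_goal)
        if speed < 1e-6 or dist < 1e-6:
            alignment = 0.0                    # no informative motion
        else:
            alignment = float(velocity @ to_goal) / (speed * dist)
        likelihood = np.exp(beta * alignment)  # motion towards a goal supports it
        posterior[name] = beliefs[name] * likelihood
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}


beliefs = {"climb_step": 0.34, "park_by_step": 0.33, "avoid_step": 0.33}
goals = {"climb_step": (2.0, 0.0), "park_by_step": (2.0, 1.0), "avoid_step": (-1.0, 0.0)}
beliefs = update_goal_beliefs(beliefs, position=(0.0, 0.0), velocity=(0.6, 0.05), goals=goals)
print(max(beliefs, key=beliefs.get))  # most likely operator intention so far
```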
2.3 Shared control and collaboration
Throughout the literature there are several interesting directions and challenges that one can pursue to build shared-control systems that promote collaboration.
A rst question is how can we manipulate behaviour to facilitate intention inference? While humans can use motor
prediction and understand action intention to interact with robots, robots can be built to further facilitate that the
inference of intention unambiguously resolves to the correct robot goals. Movements are usually planned based on cost
functions related to shortest distance, minimum energy consumption etc. An interesting idea is to modify robot motion
planning so that it explicitly prompts human inferences about robot’s expected action (Dragan et al
.
2013). According
to Dragan et al. (2013) motion planning in collaborative agents must be based in eligibility rather than predictability to
allow ‘intention reading’ at an early stage of robot’s motion execution. e authors argue that robots must be eligible
in all collaboration scenarios, including shared-workspace collaboration, robot learning, and assistive teleoperation. In a human-robot collaborative study it was actually demonstrated that legible robotic movements (rather than predictable or functional ones) mediated more fluent collaborations with humans (Dragan et al. 2015).

Fig. 1. A DrRobot Jaguar 4x4 robot can be teleoperated to climb a step.
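As a rough illustration of the legibility idea, a planner can score candidate trajectories by how quickly an observer running cost-based goal inference would assign high probability to the robot’s true goal. The sketch below is a heavily simplified take inspired by that formulation, not the formulation itself; the straight-line detour cost and the candidate trajectories are assumptions for the example only.

```python
import numpy as np

def goal_posterior(point, start, goals, beta=2.0):
    """Observer's inference: P(goal | being at `point`), using the extra path
    length over the direct start-to-goal line as a cost proxy."""
    scores = []
    for g in goals:
        detour = (np.linalg.norm(point - start) + np.linalg.norm(g - point)
                  - np.linalg.norm(g - start))
        scores.append(np.exp(-beta * detour))
    scores = np.array(scores)
    return scores / scores.sum()

def legibility(trajectory, start, goals, true_goal_index):
    """Average probability the observer assigns to the true goal along the way;
    earlier disambiguation yields a higher score."""
    probs = [goal_posterior(p, start, goals)[true_goal_index]
             for p in trajectory[1:]]
    return float(np.mean(probs))


start = np.array([0.0, 0.0])
goals = [np.array([2.0, 1.0]), np.array([2.0, -1.0])]    # true goal is goals[0]
direct = np.linspace(start, goals[0], 6)                  # predictable, ambiguous early on
exaggerated = direct + np.array([0.0, 0.4]) * np.sin(np.linspace(0, np.pi, 6))[:, None]
print(legibility(direct, start, goals, 0), legibility(exaggerated, start, goals, 0))
```

In this toy setting the exaggerated path, which bends away from the competing goal, scores higher than the direct path even though it is longer, which is the essence of the legibility/predictability trade-off.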
Another interesting idea is that of understanding each other’s capabilities interactively. Awais and Henrich (2013) give a very interesting example of interaction based on human intentions. They suggest a framework where the robot starts reacting by copying human movements. With each repetition of the task the interaction improves by using history, action randomness, and heuristic-based action predictions. Specifically in the case of assistive robotic arms, a major issue is how to collaborate with a dexterous and complicated robotic arm using only a simple joystick. Mapping a high-dimensional system to low-dimensional inputs can definitely be a barrier for collaboration. Time-optimal mode switching could create a level of shared control and allow users to express their intentions (Herlant et al. 2016).
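The mode-switching idea can be illustrated with a small greedy rule: estimate, for every control mode, how much closer to the inferred goal one more control period in that mode can bring the end effector, and switch automatically whenever another mode is expected to make clearly faster progress. This is only a sketch under an assumed one-axis-per-mode mapping; it is not the algorithm of Herlant et al. (2016).

```python
import numpy as np

# Assumed joystick mode mapping: each mode drives the end effector along one axis.
MODES = {"mode_x": np.array([1.0, 0.0, 0.0]),
         "mode_y": np.array([0.0, 1.0, 0.0]),
         "mode_z": np.array([0.0, 0.0, 1.0])}

def distance_gain(position, goal, axis, max_step=0.05):
    """How much closer one bounded step along `axis` brings us to the goal."""
    step = np.clip(float(axis @ (goal - position)), -max_step, max_step) * axis
    return np.linalg.norm(goal - position) - np.linalg.norm(goal - (position + step))

def suggest_mode(current_mode, position, goal, hysteresis=0.01):
    """Greedy automatic mode switching: change modes only when another mode is
    expected to make clearly faster progress than the current one."""
    position, goal = np.asarray(position, float), np.asarray(goal, float)
    gains = {name: distance_gain(position, goal, axis) for name, axis in MODES.items()}
    best = max(gains, key=gains.get)
    if gains[best] > gains[current_mode] + hysteresis:
        return best
    return current_mode


# The arm is mostly misaligned in y, so the sketch suggests switching to mode_y.
print(suggest_mode("mode_x", position=[0.30, 0.10, 0.20], goal=[0.32, 0.50, 0.20]))
```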
3 RESEARCH PLAN
Taking into account the body of literature discussed above, we propose a research plan that takes these findings as a starting point to develop the technical capabilities for communicating intentions and shared control. To make this more concrete, we consider a specific collaborative task, namely teleoperating a rover robot to climb a step (Figure 1). The robot’s wheels are big enough to clear the step, but there are some caveats: the operator needs to approach the step at an appropriate angle and speed, or the platform will slam against the step rather than raise its wheels over it; a sudden acceleration or sharp turn during the manoeuvre can overturn the robot; but the manoeuvre cannot be performed slowly either, as there comes a point when the wheels do not touch the ground and the platform needs to have picked up enough speed to get onto the step.
This simple experiment becomes more interesting when set up so that control is shared between the operator and the robot: the robot uses its accelerometer to predict that a teleoperation command has the potential to overturn it, and should refuse to carry out that command. The operator, on the other hand, has an understanding of the dynamics behind climbing a step that is tall enough to be challenging but still possible.
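A minimal sketch of the robot’s side of this shared control is a safety filter that estimates the platform’s pitch from the onboard accelerometer and vetoes teleoperation commands predicted to push the inclination past a safe limit. The tilt limit, the linear pitch-response model, and the command format below are assumptions for illustration only; they are not measured properties of the Jaguar 4x4.

```python
import math

MAX_SAFE_PITCH_DEG = 30.0   # assumed overturn limit for the platform
PITCH_PER_ACCEL = 6.0       # assumed extra pitch (deg) per m/s of commanded speed change

def estimated_pitch_deg(accel_xyz):
    """Estimate pitch from the gravity direction measured by the accelerometer."""
    ax, ay, az = accel_xyz
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def filter_command(command, accel_xyz, current_speed):
    """Veto a teleoperation command predicted to overturn the robot.

    command: dict with 'linear' (m/s) and 'angular' (rad/s) velocities, as from a joystick.
    """
    pitch = estimated_pitch_deg(accel_xyz)
    requested_change = command["linear"] - current_speed          # crude, per control period
    predicted_pitch = pitch + PITCH_PER_ACCEL * requested_change  # assumed linear model
    if abs(predicted_pitch) > MAX_SAFE_PITCH_DEG:
        # Refuse the command; communicating this objection to the operator is
        # exactly the problem studied in experiment (1) below.
        return {"linear": current_speed, "angular": 0.0, "vetoed": True}
    return {**command, "vetoed": False}


# On a roughly 25-degree incline, a hard acceleration request gets vetoed.
print(filter_command({"linear": 1.2, "angular": 0.0},
                     accel_xyz=(4.1, 0.0, 8.9), current_speed=0.2))
```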
The actual demonstration task that will be used to collect data will be to challenge novice users to teleoperate the robotic platform up a step. Initial experiments have shown that teleoperating the robot up the step in Figure 1 requires skilled manoeuvring, and that novice users either drive the robot into dangerous inclinations or
fail to accomplish the task. On the other hand, it is not straightforward to implement such an autonomous skill for arbitrary, previously unseen steps and environments. We believe that this paradigm provides plenty of opportunities to study human-robot collaboration in a series of studies:
(1) A non-anthropomorphic robot uses movement behaviour to communicate intentions to the user. One such experiment is to compare how well different movement behaviours communicate robot ‘objections’ to executing a teleoperation command. For example, the robot might need to communicate a deviation from its plan (i.e., the operator is moving away from the goal set) or a more urgent safety concern (i.e., the operator requested an unsafe movement).
(2) A robot understands user intention and correctly interprets teleoperation commands. One such experiment is to develop and compare different strategies for interpreting the operator’s intentions rather than blindly executing the detailed teleoperation commands. For example, the robot should use context and previous interactions with the user to decide whether the operator’s directive to move against a step should be interpreted as a command to climb the step, to park by it, or to slam against it. Experiments will include understanding the trade-off between making early and making safe predictions of the operator’s intention, and adapting to different operators.
(3) Integrating the above into a shared control interface. This more ambitious experiment will synthesize communicating robot objections and human operation into a collaboration that achieves the goal (see the arbitration sketch after this list).
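As an entirely hypothetical starting point for experiment (3), the sketch below blends the operator’s teleoperation command with the robot’s own preferred command, weighting the robot more heavily as its confidence in the inferred goal grows and letting safety vetoes dominate; the confidence model and weighting scheme are assumptions, not a designed controller.

```python
import numpy as np

def arbitrate(human_cmd, robot_cmd, goal_confidence, vetoed=False, max_autonomy=0.8):
    """Blend human and robot velocity commands for shared control.

    human_cmd, robot_cmd : arrays [linear, angular] velocity
    goal_confidence      : robot's confidence (0..1) in its inference of the operator's goal
    vetoed               : True if the safety filter rejected the human command outright
    """
    human_cmd, robot_cmd = np.asarray(human_cmd, float), np.asarray(robot_cmd, float)
    if vetoed:
        return robot_cmd                      # safety concerns override blending
    # Robot influence grows with confidence but never fully removes the operator.
    alpha = max_autonomy * goal_confidence
    return (1.0 - alpha) * human_cmd + alpha * robot_cmd


# A confident robot gently corrects approach speed and heading towards the step.
print(arbitrate(human_cmd=[1.0, 0.3], robot_cmd=[0.6, 0.0], goal_confidence=0.9))
```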
At the current state of development, we have identified a hardware platform, an environment, and a task that is impossible for standard autonomous navigation, possible but challenging for experienced operators, and impossible for novice operators. We are currently developing the safety controls that will allow novice users to try the task without risking overturning the robot, and will soon proceed to initial experiments along the lines described above.
REFERENCES
M. Awais and D. Henrich. 2013. Human-Robot Interaction in an Unknown Human Intention Scenario. In 11th International Conference on Frontiers of Information Technology. 89–94. DOI: http://dx.doi.org/10.1109/FIT.2013.24
P. Berkes, G. Orbán, M. Lengyel, and J. Fiser. 2011. Spontaneous Cortical Activity Reveals Hallmarks of an Optimal Internal Model of the Environment. Science 331, 6013 (2011), 83–87. DOI: http://dx.doi.org/10.1126/science.1195870
N. L. Cerminara, R. Apps, and D. E. Marple-Horvat. 2009. An Internal Model of a Moving Visual Target in the Lateral Cerebellum. The Journal of Physiology 587, Pt 2 (Jan. 2009), 429–442. DOI: http://dx.doi.org/10.1113/jphysiol.2008.163337
A. Dragan, S. Bauman, J. Forlizzi, and S. Srinivasa. 2015. Effects of Robot Motion on Human-Robot Collaboration. In Human-Robot Interaction.
A. Dragan, K. Lee, and S. Srinivasa. 2013. Legibility and Predictability of Robot Motion. In Human-Robot Interaction.
C. D. Frith and U. Frith. 2006. How We Predict What Other People Are Going to Do. Brain Research 1079, 1 (2006), 36–46. DOI: http://dx.doi.org/10.1016/j.brainres.2005.12.126
L. V. Herlant, R. M. Holladay, and S. S. Srinivasa. 2016. Assistive Teleoperation of Robot Arms via Automatic Time-Optimal Mode Switching. In Proc. 11th ACM/IEEE Intl Conf. Human Robot Interaction (HRI 2016). IEEE Press, 35–42. http://dl.acm.org/citation.cfm?id=2906831.2906839
Y. Horiguchi, T. Sawaragi, and G. Akashi. 2000. Naturalistic Human-Robot Collaboration Based upon Mixed-Initiative Interactions in Teleoperating Environment. In Proceedings Intl Conf. Systems, Man, and Cybernetics, Vol. 2. 876–881. DOI: http://dx.doi.org/10.1109/ICSMC.2000.885960
R. C. Miall. 2003. Connecting Mirror Neurons and Forward Models. NeuroReport 14 (2003), 2135–2137.
E. Oztop, E. Ugur, Y. Shimizu, and H. Imamizu. 2015. Humanoid Brain Science. CRC Press, Chapter 2.
K. Strabala, M. K. Lee, A. Dragan, J. Forlizzi, and S. Srinivasa. 2012. Learning the Communication of Intent Prior to Physical Collaboration. In Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication.
M. Wairagkar, I. Zoulias, V. Oguntosin, Y. Hayashi, and S. Nasuto. 2016. Movement Intention Based Brain Computer Interface for Virtual Reality and Soft Robotics Rehabilitation Using Novel Autocorrelation Analysis of EEG. In Proceedings of the 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob 2016). 685–685. DOI: http://dx.doi.org/10.1109/BIOROB.2016.7523705
D. M. Wolpert, K. Doya, and M. Kawato. 2003. A Unifying Computational Framework for Motor Control and Social Interaction. Philosophical Transactions of the Royal Society of London 358 (2003), 593–602.
D. M. Wolpert and Z. Ghahramani. 2000. Computational Principles of Movement Neuroscience. Nature Neuroscience 3 Suppl (Nov. 2000), 1212–1217. DOI: http://dx.doi.org/10.1038/81497