Direct and Indirect Communication
in Multi-Human Multi-Robot Interaction
Jayam Patel, Student Member, IEEE, Tyagaraja Ramaswamy, Zhi Li, and Carlo Pinciroli, Member, IEEE
Abstract—How can multiple humans interact with multiple
robots? The goal of our research is to create an effective interface
that allows multiple operators to collaboratively control teams of
robots in complex tasks. In this paper, we focus on a key aspect
that affects our exploration of the design space of human-robot
interfaces — inter-human communication. More specifically,
we study the impact of direct and indirect communication on
several metrics, such as awareness, workload, trust, and interface
usability. In our experiments, the participants can engage directly
through verbal communication, or indirectly by representing
their actions and intentions through our interface. We report
the results of a user study based on a collective transport task
involving 18 human subjects and 9 robots. Our study suggests
that combining both direct and indirect communication is the best
approach for effective multi-human / multi-robot interaction.
Index Terms—Multi-human multi-robot interaction, multi-
robot systems
I. INTRODUCTION
Multi-robot systems are envisioned in scenarios in which
parallelism, scalability, and resilience are key features for
success. Scenarios such as firefighting, search-and-rescue,
construction, and planetary exploration are typically imagined
with teams of robots acting intelligently and autonomously
[1]. However, autonomy is only part of the picture — human
supervision remains necessary to assess the correct progress of
the mission, modify goals, and intervene in case of unexpected
faults that cannot be handled autonomously [2].
Significant work has been devoted to interfaces that allow
single human operators to interact with multiple robots. These
interfaces typically enable intuitive interaction for specific
tasks, such as navigation [3], dispersion [4], and foraging [5].
As the complexity of the tasks involved increases, however, the
amount of information, the number of robots, and the nature
of the interactions are likely to exceed the span of apprehension
of any individual operator [6]. For this reason, it is reasonable
to envision that, in future missions with multi-robot systems,
multiple operators will be involved.
One immediate advantage of multiple operators is the oppor-
tunity to partition a large autonomous system into smaller, more
manageable parts, each assigned to a dedicated operator. A
more interesting insight is that cooperative supervisory control
can exploit individual differences among operators. These
include diversity in cognitive abilities and in the way operators
interact with automation. In this context, the main factors are
(i) the ability to focus and shift attention flexibly, (ii) the ability
The authors are with the Department of Robotics Engineering, Worces-
ter Polytechnic Institute, MA, USA. Email: {jupatel, tramaswamy,
zli11, cpinciroli}@wpi.edu
to process spatial information, and (iii) prior experience with
gaming interfaces [7]. Cooperative supervision that embraces
these factors can therefore be highly effective.
However, the presence of multiple operators introduces new
and interesting challenges. These include ineffective coopera-
tion among the operators [8], unbalanced workload [9], [10],
and inhomogeneous awareness [11], [12], [13]. Neglecting
these challenges produces an undesirable phenomenon: the
out-of-the-loop (OOTL) performance problem, caused by a
lack of engagement in the task at hand, of awareness of its
state, and of trust in the system and other operators [14], [15].
In this paper, we study the impact of inter-human communi-
cation on several metrics, such as awareness, workload, trust,
and interface usability. Although inter-human communication
has been studied extensively [16], it has sparsely been investi-
gated in the context of multi-robot systems, and, to the best of
our knowledge, no study exists that focuses on mobile teams
of robots.
For the purposes of our work, we categorize communica-
tion into two broad types: direct and indirect. In our exper-
iments, direct communication includes verbal and gesture-
based modalities, and we define it as an explicit exchange
of information among human operators. In contrast, we define
indirect communication as an exchange of information that oc-
curs implicitly, through the mediation of, e.g., a graphical user
interface. Which elements of the graphical user interface foster
efficient indirect communication is a key research question we
study in this paper.
This work offers two main contributions:
1) From the technological point of view, we present a
graphical user interface for multi-robot control, with
features designed to support information-rich indirect
communication among human operators. To the best of
our knowledge, this interface is the first of its kind, in that
it enables effective interaction between multiple humans
and multiple mobile robots in a complex scenario;
2) From the scientific point of view, we study the impact
that direct and indirect communication have both in the
design of novel interfaces and in the performance of the
interaction these interfaces produce.
Our experimental evaluation is based on a study that involved
18 operators, grouped in teams of 2, and 9 real mobile robots
involved in an object transport scenario. This paper takes
advantage of our previous work on multi-granularity [17] inter-
face design, in which we showed which interaction modalities
[18] and which graphical elements of a user interface [19]
provide the best performance in a multi-operator setting.
The paper is organized as follows. In Sec. II we discuss
relevant literature on human-robot communication. In Sec. III
we present the design of our interface. In Sec. IV we introduce
the user study, and discuss our main findings in Sec. V. We
conclude the paper in Sec. VI.
II. BACKGROUND
Granularity of control. Over the last two decades, HRI re-
search on mobile multi-robot systems has focused on identify-
ing suitable interfaces and investigating methods to effectively
interact with the robots. One of the key aspects of these interfaces
is control granularity [2], [20], and it generally includes
robot-oriented, team-oriented, and environment-oriented con-
trol. In robot-oriented control, the operator interacts with a
single robot, either statically predetermined or dynamically
selected [21], [22], [23], [24], [25]. Team-oriented control
allows an operator to assign a goal to a group of robots as if
the group were a single entity [26], [27], [28]. Environment-
oriented control occurs when the operator modifies the en-
vironment, typically using augmented or mixed reality, to
influence the behavior of the robots or to indicate goals [29],
[2], [17], [18], [19]. These control modalities are typically
studied with a single operator in mind.
Human-robot communication. Research on communica-
tion has mostly focused on the relationship between a human
operator and one or more robots. The distinction we consider
in this paper between direct and indirect communication is
a common aspect of human-robot communication modali-
ties [30], [31]. Notable works include direct human-robot com-
munication through natural language conveyed verbally [32]
and non-verbal communication through social cues [33], facial
expressions [34] and eye gaze [35], [36]. Lakhmani et al.
[37] presented a method for direct communication between
an operator and a robot in which communication can be
either directional or bidirectional. In directional communi-
cation, only the robot can send information to the operator.
In bidirectional communication, the operator and the robot
can send information to each other. The authors reported that
the operator performed better with directional communication
as compared to bidirectional communication. Indirect com-
munication has been studied in several contexts, including:
(i) a human operator attempting to infer the behavior of
the robots [38], [39], and (ii) robots attempting to predict
the actions of the human-in-the-loop [40], [41], [42], [43].
Che et al. [44] compare the effects of direct and indirect
communication between a human and a robot in a navigation
task. In this work, the human acts as a bystander and the
robot must navigate around the human. The robot can either
indirectly predict the human’s direction of movement and
navigate around them, or directly notify the human of its
intention to move in a given direction. The authors reported
that the combination of indirect and direct communication
positively impacts the human’s performance and trust.
Conflicts among multiple operators. To the best of our
knowledge, analogous studies in communication that focus
on multiple human operators are currently missing. The most
notable works in multi-human multi-robot interaction concern
inter-operator conflict in scenarios in which the operators do
not communicate. A conflict might arise when multiple oper-
ators wish to control the same robot or specify incompatible
goals. If communication is not possible, pre-assigning robots
among operators is a possible solution. Lee et al. [45] compare
the performance of two scenarios: in one, the operators control
robots from a shared pool of robots; in the other, the oper-
ators have disjoint assigned pools. The operators can engage
with robots in a robot-oriented fashion and manipulate one
robot at a time. Lee et al.’s findings show that the performance
of the operators is better when they can manipulate robots from
the assigned pools. Their performance drops when they control
robots from the shared pool, due to the risk of specifying
conflicting goals for the same robots. In a similar vein, You
et al. [46] study a scenario in which two operators control
separate robots, and with them push physical objects from one
place to another. You et al. divide the environment into two
regions, with an operator-robot pair assigned to each region,
and prevent the operators from moving from one region to another.
No conflicts are possible in this case.
Novelty. In this paper, we take inspiration from the three
research strands discussed above, and combine them in a
coherent study. We investigate whether an interface that offers
multiple granularities of control can enable the operators to
decide when and how to control the robots, and study the
role that communication plays in the design of such an interface
and in the way the operators collaborate. To the best of
our knowledge, this study is the first to address these research
questions together.
III. MULTI-HUMAN MULTI-ROBOT INTERACTION SYSTEM
A. System Overview
Our overall system, schematized in Fig. 1, consists of a
mixed-reality (MR) interface, a team of 9 Khepera IV robots,1
a Vicon motion tracking system,2 and ARGoS [47], a fast
and modular multi-robot simulator. The MR interface enables
an operator to interact with the robots, displaying useful
information on actions and intentions of robots and other
operators. The Khepera IV are differential-drive robots that
we programmed to perform behaviors such as navigation and
collective object transport. ARGoS acts as the software glue
between the MR interfaces, the robots, and the Vicon.
To this aim, we modified ARGoS by replacing its simulated
physics engine with a new plug-in that receives positional data
from the motion capture system. In addition, we created plug-
ins that allow ARGoS to communicate directly with the robots
for control and debugging purposes. These plug-ins create a
feedback loop between the robots and the environment which
allows us to programmatically control what the robots do and
sense as an experiment is running. Thanks to this feature
of our setup, the MR interfaces, once connected to ARGoS,
can both retrieve data and convey modifications made using
the control modalities offered by the interfaces. To make this
possible, ARGoS also translates the coordinates seen by the
MR interface into those seen by the Vicon and vice versa.
1https://www.k-team.com/khepera-iv
2http://vicon.com
Fig. 1: System overview.
When an operator defines a new goal position for the robots
or the objects present in the system, the MR interface transmits
the goal to ARGoS. The latter converts the request into motion
primitives and transmits them to the robots. Finally, the robots
execute the primitives. At any time during the execution, the
operators can intervene in the system by defining new goals
for the robots. This feature is particularly important when
robots temporarily misbehave due to faults or noise.
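To make this pipeline concrete, the following sketch illustrates, in Python and purely for exposition, how a goal pose defined in the interface frame could be mapped into the Vicon frame and reduced to a simple motion primitive. The frame offset, data structures, and function names are assumptions made for the example; the actual plug-ins are C++ ARGoS modules.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    theta: float  # heading in radians

# Hypothetical rigid transform between the MR interface frame (as seen by
# Vuforia through the origin marker) and the Vicon frame used by ARGoS.
FRAME_OFFSET = Pose2D(x=1.5, y=0.8, theta=math.pi / 2)  # assumed calibration values

def interface_to_vicon(p: Pose2D) -> Pose2D:
    """Convert a pose from interface coordinates to Vicon coordinates."""
    c, s = math.cos(FRAME_OFFSET.theta), math.sin(FRAME_OFFSET.theta)
    return Pose2D(
        x=FRAME_OFFSET.x + c * p.x - s * p.y,
        y=FRAME_OFFSET.y + s * p.x + c * p.y,
        theta=(FRAME_OFFSET.theta + p.theta) % (2 * math.pi),
    )

def goal_to_waypoint(robot: Pose2D, goal: Pose2D, step: float = 0.05) -> Pose2D:
    """Reduce a goal pose to the next motion primitive: a short straight step
    toward the goal (a stand-in for the primitives actually sent to the robots)."""
    heading = math.atan2(goal.y - robot.y, goal.x - robot.x)
    return Pose2D(robot.x + step * math.cos(heading),
                  robot.y + step * math.sin(heading),
                  heading)

if __name__ == "__main__":
    goal_in_interface = Pose2D(0.4, -0.2, 0.0)   # pose dragged on the iPad
    goal_in_vicon = interface_to_vicon(goal_in_interface)
    robot = Pose2D(0.0, 0.0, 0.0)
    print("goal (Vicon frame):", goal_in_vicon)
    print("next waypoint:", goal_to_waypoint(robot, goal_in_vicon))
```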
B. Operator Interface and Control Modalities
The interaction between the operators and the robots hap-
pens through the MR interface installed on an Apple iPad. We
implemented the MR interface with Vuforia,3 a software de-
velopment kit for mixed-reality applications. Vuforia provides
a functionality to recognize and track physical objects with
fiducial markers. We integrated Vuforia with the Unity Game
Engine4 for visualization. Fig. 2 shows a screenshot of the
view through the app running on an iPad.
The MR interface recognizes objects and robots through
unique fiducial markers applied to each entity of interest.
The interface overlays virtual objects on the recognized fidu-
cial markers. The operator can move these virtual objects using
a one-finger swipe and rotate them with a two-finger twist. The
operator can also select multiple robots (for, e.g., collective
motion) by drawing a closed contour. The manipulation of
robots and objects occurs in three phases: start, move, and
end. The start phase is initiated when the operator touches the
handheld device. Once the touch is detected, the move phase
3http://vuforia.com
4http://unity3d.com
Fig. 2: Screenshot of the MR interface running on an iPad. The
black arrow was added in this picture to indicate the position
of the origin marker. This marker is used by Vuforia to identify
the origin of the environment.
(a) Object recognition (b) New Goal Defined
(c) Robots approach and push (d) Transport complete
Fig. 3: Object manipulation by interaction with the virtual
object through the interface. The dotted black arrow indicates
the one-finger swipe gesture used to move the virtual object
and the red dotted arrow indicates the two-finger rotation
gesture.
is executed as long as the operator is performing a continuous
motion gesture on the device. The end phase is triggered
when the operator releases the touch. After the completion
of the end phase, the interface parses the gesture and acts
accordingly, e.g., sends the final pose of a selected object to
ARGoS. The interface enables three interaction modalities:
object-oriented (a special case of the environment-oriented
modality), robot-oriented, and team-oriented, explained in the
rest of this section.
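As a rough illustration of this three-phase gesture cycle, the sketch below mimics the start, move, and end phases with a generic touch-event handler. The actual interface is implemented in Unity with Vuforia; the class and callback names used here are hypothetical.

```python
class GestureHandler:
    """Tracks one manipulation gesture through its start, move, and end phases."""

    def __init__(self, send_goal):
        self.send_goal = send_goal   # callback that forwards the final pose to ARGoS
        self.points = []             # touch positions accumulated during the move phase

    def on_touch_down(self, x, y):
        # Start phase: the operator touches the handheld device.
        self.points = [(x, y)]

    def on_touch_move(self, x, y):
        # Move phase: runs as long as the operator keeps a continuous gesture.
        if self.points:
            self.points.append((x, y))

    def on_touch_up(self):
        # End phase: parse the gesture and act on it, e.g. send the final pose.
        if len(self.points) >= 2:
            self.send_goal(self.points[-1])  # last touch position as the new goal
        self.points = []

if __name__ == "__main__":
    handler = GestureHandler(send_goal=lambda pose: print("goal sent to ARGoS:", pose))
    handler.on_touch_down(0.10, 0.20)
    handler.on_touch_move(0.15, 0.25)
    handler.on_touch_move(0.22, 0.31)
    handler.on_touch_up()
```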
Object-oriented Interaction. The interface overlays a vir-
tual cuboid on objects equipped with a fiducial marker (see
(a) Robot recognition (b) New robot position
Fig. 4: Robot manipulation by interaction with the virtual
robots through the interface. The dotted black arrow indicates
the one-finger swipe gesture to move the virtual robot and the
arrowhead color indicates the moved virtual robot.
Fig. 3a). The dimensions and textures of the virtual objects
match the dimensions and textures of the fiducial markers. The
operators can differentiate between virtual cuboids using these
similarities while simultaneously moving multiple objects. The
operators can move these virtual objects using a one-finger
swipe and rotate them with a two-finger twist (see Fig. 3b).
If two or more operators simultaneously want to move the
same virtual object, then the robots transport the object to the
last received position. Fig. 3c and Fig. 3d show the robots
transporting the object.
Robot-oriented Interaction. The interface overlays a vir-
tual cylinder on the recognized robot tags (see Fig. 4a). The
dimensions and colors of the virtual cylinder are identical
to the dimensions and colors of the fiducial marker on the
robot. Operators can differentiate between different virtual
cylinders using these colors while controlling multiple robots
at once. The operators can use a one-finger swipe to move the
virtual cylinders to denote a desired position for the robots
(see Fig. 4b). If multiple operators simultaneously want to
move the virtual cylinder, then the robot considers the last
received request. If the robot is performing collective object
transport, then other robots in the team pause their operation
until the selected robot reaches the desired position. If the
robot is part of an operator-defined team, then the other robots
are not affected by the position change. Robot-oriented control
can also be used to manually push an object.
Robot Team Selection and Manipulation. Operators can
draw a closed contour with a one-finger continuous swipe to
select robots enclosed in the shape (see Fig. 5a). The interface
draws the closed contour in red color to show the area of
selection (see Fig. 5b). The interface displays a cube, hovering
at the centroid of the created contour, to represent the team
of robots (see Fig. 5b). The operators can move this cube
with a one-finger swipe to define a desired location for the
team to navigate to (see Fig. 5c and Fig. 5d). Each operator
can create only one custom team at a time; creating a new
team deletes the previous selection. If a robot is part of multiple
teams defined by multiple operators, then the robot moves to
the latest location.
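The closed-contour selection can be viewed as a point-in-polygon test over the tracked robot positions. The following sketch uses a standard ray-casting test; the contour and robot coordinates are made up for illustration and do not come from our system.

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test: returns True if (px, py) lies inside the closed contour."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > py) != (y2 > py)
        if crosses and px < (x2 - x1) * (py - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def select_team(contour, robot_positions):
    """Return the ids of the robots enclosed by the operator-drawn contour."""
    return [rid for rid, (x, y) in robot_positions.items()
            if point_in_polygon(x, y, contour)]

if __name__ == "__main__":
    contour = [(0, 0), (2, 0), (2, 2), (0, 2)]          # drawn with a one-finger swipe
    robots = {"kh1": (1.0, 1.0), "kh2": (3.0, 1.0), "kh3": (0.5, 1.5)}
    print(select_team(contour, robots))                  # -> ['kh1', 'kh3']
```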
C. Collective Transport
We used a finite state machine to implement our collective
transport behavior. The finite state machine is executed
(a) Robot team selection (b) Robot team creation
(c) Robot team manipulation (d) Robot team re-positioned
Fig. 5: Robot team creation and manipulation. The dotted
black arrow indicates the one-finger swipe gesture to move
the virtual cube for re-positioning the team of robots.
Fig. 6: Collective transport state machine.
independently by each robot, using information from the Vicon
and the MR interface, mediated by ARGoS. Fig. 6 shows the state
machine, along with the conditions for state transition. We
define the states as follows:
Reach Object. The robots navigate to the object upon
receiving a desired position from the operator. The robots
arrange themselves around the object. The state ends when
all the robots reach their chosen positions.
Approach Object. The robots face the centroid of the object
and start moving towards it. The state ends when all the robots
are in contact with the object.
Push Object. The robots orient themselves in the direction
of the goal position and start moving when all robots are facing
that direction. The robot in the front of the formation maintains
a fixed distance from the object, allowing all the robots to stay
in formation. The state transitions to Approach Object if
the formation breaks. The state ends when the object reaches
the desired position.
Fig. 7: Transparency features in the mixed-reality interface.
The black arrow indicates the text-based log. The red arrow
indicates the Robot Panel. The green arrow indicates the on-
robot status. The orange arrow indicates the object panel.
Fig. 8: Shared awareness demonstrated using four mixed-
reality interfaces.
Rotate Object. The robots arrange themselves around the
object and move in a circular direction to rotate the object.
The state transitions to Approach Object if the formation
breaks. The state ends when the object achieves the desired
orientation.
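A compact sketch of this per-robot state machine is given below. The transition predicates are simplified boolean stand-ins for the geometric checks the robots actually perform, and the push-then-rotate ordering is an assumption of the example; the real controller is the one summarized in Fig. 6 and runs within ARGoS.

```python
from enum import Enum, auto

class State(Enum):
    REACH_OBJECT = auto()
    APPROACH_OBJECT = auto()
    PUSH_OBJECT = auto()
    ROTATE_OBJECT = auto()
    DONE = auto()

def step(state, all_in_position, all_in_contact, formation_intact,
         at_goal_position, at_goal_orientation):
    """One update of the collective-transport state machine for a single robot."""
    if state == State.REACH_OBJECT and all_in_position:
        return State.APPROACH_OBJECT
    if state == State.APPROACH_OBJECT and all_in_contact:
        # Assumption for this sketch: push first, then rotate once in place.
        return State.PUSH_OBJECT
    if state in (State.PUSH_OBJECT, State.ROTATE_OBJECT) and not formation_intact:
        return State.APPROACH_OBJECT        # re-form around the object
    if state == State.PUSH_OBJECT and at_goal_position:
        return State.DONE if at_goal_orientation else State.ROTATE_OBJECT
    if state == State.ROTATE_OBJECT and at_goal_orientation:
        return State.DONE
    return state

if __name__ == "__main__":
    s = State.REACH_OBJECT
    s = step(s, True, False, True, False, False)   # -> APPROACH_OBJECT
    s = step(s, True, True, True, False, False)    # -> PUSH_OBJECT
    print(s)
```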
D. Information Transparency
We designed our interface to provide a rich set of data about
the robots, the task progress, and other operators’ actions.
We studied the effectiveness of these features in previous
work [19], assessing how they contribute to making the system
more transparent [48] for the operators. The interface is shown
in Fig. 7.
To foster shared awareness, when an operator manipulates a
virtual object, the action is broadcast to the other interfaces in
real-time, making it possible for every operator to see what is
being done by other operators. Fig. 8 shows four mixed-reality
interfaces demonstrating this shared-awareness feature.
The information is organized into a set of panels, updated
in real time whenever robots and operators perform actions
that modify the state of the system and progress of the tasks
in execution. The interface offers the following information
widgets:
Small ‘on-robot’ panels that follow each robot in view to
convey their current state and actions. These panels are
updated when operators assign new tasks to the robots. In
case of faults, this panel displays an error message that
helps to identify which robot has malfunctioned.
A robot panel (on the left side of the screen) that
graphically indicates functional and faulty robots. The
robots currently engaged in a task are displayed with
specific color shades. The panel also displays a blinking
red exclamation point in case a robot acts in a faulty
manner.
A text-based log (on the top-left of the screen) that
notifies the operators about other operators’ interactions
with robots and objects. The interface updates the log
every time other operators manipulate any virtual object.
The log stores the last three actions and discards the rest.
An object panel (on the right side of the screen) that
provides information on the objects being manipulated by
the interface user and by the other operators.
The panel also highlights the object icon corresponding
to the object being moved by the robots. Additionally, the
interface offers the option of selecting an object icon to
lock it for future use. An operator can lock an object by
tapping the lock icon, which turns blue to signify the lock.
An operator can lock only one object at a time; on the other
operators’ interfaces, the lock is highlighted with a red icon.
A minimal sketch of this synchronization is given below.
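The snippet broadcasts lock events to every connected operator and enforces the one-object-per-operator rule. The class and method names are hypothetical and stand in for the actual interface code.

```python
class SharedLockState:
    """Keeps object locks consistent across all operator interfaces."""

    def __init__(self, interfaces):
        self.interfaces = interfaces     # all connected MR interfaces
        self.locks = {}                  # object_id -> operator_id

    def request_lock(self, operator_id, object_id):
        if object_id in self.locks:
            return False                              # already locked by someone else
        # Enforce the one-object-at-a-time rule: release any previous lock.
        for obj, op in list(self.locks.items()):
            if op == operator_id:
                del self.locks[obj]
        self.locks[object_id] = operator_id
        for iface in self.interfaces:
            # Blue lock icon on the owner's interface, red on everyone else's.
            iface.show_lock(object_id, owned=(iface.operator_id == operator_id))
        return True

class FakeInterface:
    """Stand-in for an MR interface, used only to demonstrate the broadcast."""
    def __init__(self, operator_id):
        self.operator_id = operator_id
    def show_lock(self, object_id, owned):
        print(f"[{self.operator_id}] lock on {object_id}: {'blue' if owned else 'red'}")

if __name__ == "__main__":
    a, b = FakeInterface("op1"), FakeInterface("op2")
    state = SharedLockState([a, b])
    state.request_lock("op1", "box_3")
```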
IV. USER STUDY
A. Communication Modes and Hypotheses
This study aims to investigate the effects of direct and
indirect communication on multi-human multi-robot interac-
tion. We consider three modalities of communication: direct,
indirect, and mixed (the combination of direct and indirect).
We based our experiments on three main hypotheses.
H1: Mixed communication (MC) has the best outcome com-
pared to the other communication modalities in terms
of situational awareness, trust, interaction score, and task
load.
H2: Operators prefer mixed communication (MC) to the other
modes.
H3: Operators prefer direct communication (DC) over indirect
communication (IC).
B. Task Description
We designed a gamified scenario in which the operators
must instruct the robots to transport objects to target areas.
Fig. 9: User study experiment setup.
The environment had 6 objects (2 big objects and 4 small
objects), and the operators received points according to which
objects were successfully transported to their target. The big
objects were worth 2 points each and the small objects were
1 point each, for a total of 8 points.
The operators, in teams of 2, had a maximum of 8 minutes
to accrue as many points as possible. The operators could
manipulate big objects with any kind of interaction modality
(object-oriented, team-oriented, robot-oriented); small objects,
in contrast, could only be manipulated with the robot- and
team-oriented modalities. The operators were given 9 robots
to complete the game.
Fig. 9 shows the initial positions of the objects and the
robots. All the participants had to perform the task four times,
each with a different communication modality, as follows:
No Communication (NC): The operators are not allowed
to communicate in any way;
Direct Communication (DC): The operators can com-
municate verbally and non-verbally, but the interfaces do
not broadcast information.
Indirect Communication (IC): The operators can com-
municate indirectly through the transparency features of
the interface, but are not allowed to talk, use gestures, or
infer from body language what other operators are doing;
Mixed Communication (MC): The complete set of
communication modalities is allowed for both operators.
C. Participant Sample
We recruited 18 students from our university (11 males, 7
females) with ages ranging from 20 to 30 (M = 24.17, SD =
2.68). No participant had prior experience with our interface,
our robots, or even the laboratory layout.
D. Procedures
Each session lasted approximately 105 minutes. We ex-
plained the study to the participants after they signed a consent
form, and gave them 10 minutes to play with the
interface. We randomized the order of the tasks in an attempt
to reduce the influence of learning effects.
Did you understand your teammate’s intentions? Were
you able to understand why your teammate was taking a
certain action?
Could you understand your teammate’s actions? Could
you understand what your teammate was doing at any
particular time?
Could you follow the progress of the task? While
performing the tasks, were you able to gauge how much
of it was pending?
Did you understand what the robots were doing? At
all times, were you sure how and why the robots were
behaving the way they did?
Was the information provided by the interface clear to
understand?
Fig. 10: Questionnaire on the effect of the inter-operator
communication modalities we considered in our user study.
E. Metrics
We recorded subjective measures from the operators and
objective measures using ARGoS for each game. We used the
following scales as metrics:
Situational Awareness: We measured situational aware-
ness using the Situational Awareness Rating Technique
(SART) [49] on a 4-point Likert scale [50];
Task Workload: We used the NASA TLX [51] scale on
a 4-point Likert scale to compare the perceived workload
in each task;
Trust: We employed the trust questionnaire [52] on a
4-point Likert scale to assess trust in the system;
Interaction: We used a custom questionnaire on a 5-point
Likert scale to quantify the effects of communication.
The interaction questionnaire had the questions reported
in Fig. 10;
Performance: We measured the performance achieved
with each communication modality by using the points
earned in each game.
Usability: We asked participants to select the features
(text log, robot panel, object panel, and on-robot status)
they used during the study. Additionally, we asked them
to rank the communication modalities from 1 to 4, 1 being
the highest rank.
F. Results
We analyzed the data using the Friedman test [53] and
summarized the results based on the significance and the
mean ranks. Table I shows the summarized results along with
the ranking relationships between the communication modes. We
formed the rankings for each scale using the mean ranks of the
Friedman test. We categorized a relationship as significant
for p < 0.05 and as marginally significant for p < 0.10.
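For illustration, the analysis described above can be reproduced on synthetic data with SciPy's implementation of the Friedman test; the Likert responses below are randomly generated and only show the mechanics of obtaining the test statistic and the per-condition mean ranks (not our actual data).

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical 4-point Likert responses of 18 participants for one scale,
# one column per communication mode: NC, DC, IC, MC.
rng = np.random.default_rng(0)
scores = rng.integers(1, 5, size=(18, 4))

stat, p = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2], scores[:, 3])

# Mean ranks per condition (ranked within each participant), used to order the modes.
mean_ranks = rankdata(scores, axis=1).mean(axis=0)   # axis argument needs SciPy >= 1.4

print(f"chi2(3) = {stat:.3f}, p = {p:.3f}")
for mode, r in zip(["NC", "DC", "IC", "MC"], mean_ranks):
    print(mode, round(r, 2))
```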
Fig. 11 shows the usability results, i.e., the percentage
of operators that used a particular feature to complete a
specific task. Fig. 12 shows how the participants ranked the
communication modalities.
TABLE I: Results with relationships between communication
modes. The relationships are based on the mean ranks obtained
through Friedman’s test. The symbol ∗ denotes a significant
difference (p < 0.05) and the symbol ∗∗ denotes a marginally
significant difference (p < 0.10). The symbol † denotes a
negative scale, for which a lower ranking is better.
Attributes                    Relationship      χ2(3)   p-value
SART SUBJECTIVE SCALE
Instability of Situation †    NC>IC>DC>MC ∗     9.000   0.029
Complexity of Situation †     not significant   2.324   0.508
Variability of Situation †    IC>NC>MC>DC ∗     9.303   0.026
Arousal                       IC>NC>MC>DC ∗∗    6.371   0.095
Concentration of Attention    IC>NC>DC>MC ∗    17.149   0.001
Spare Mental Capacity         not significant   5.858   0.119
Information Quantity          MC>DC>IC>NC ∗    15.075   0.002
Information Quality           MC>DC>IC>NC ∗    15.005   0.002
Familiarity with Situation    not significant   6.468   0.101
NASA TLX SUBJECTIVE SCALE
Mental Demand †               not significant   2.226   0.527
Physical Demand †             not significant   2.165   0.539
Temporal Demand †             not significant   3.432   0.330
Performance †                 not significant   0.412   0.938
Effort †                      not significant   1.450   0.694
Frustration †                 not significant   4.454   0.216
TRUST SUBJECTIVE SCALE
Competence                    not significant   4.740   0.192
Predictability                MC>IC>DC>NC ∗    10.626   0.014
Reliability                   MC>IC>DC>NC ∗     8.443   0.038
Faith                         MC>IC>DC>NC ∗     9.451   0.024
Overall Trust                 MC>IC>DC>NC ∗∗    6.633   0.085
Accuracy                      not significant   1.891   0.595
INTERACTION SUBJECTIVE SCALE
Teammate’s Intent             DC>MC>IC>NC ∗    19.610   0.000
Teammate’s Action             MC>DC>IC>NC ∗    13.810   0.003
Task Progress                 MC>DC>IC>NC ∗     9.686   0.021
Robot Status                  not significant   0.811   0.847
Information Clarity           not significant   5.625   0.131
PERFORMANCE OBJECTIVE SCALE
Points Scored                 not significant   0.808   0.848
TABLE II: Ranking scores based on the Borda count. The
highest score in each row indicates the leading scenario for
each type of ranking.
Borda Count                                  NC   DC   IC   MC
Based on Collected Data Ranking (Table I)    18   34   33   45
Based on Preference Data Ranking (Fig. 12)   18   49   41   72
We used the Borda count [54] to aggregate the rankings
derived from the collected data and from the participants’
preferences and find an overall winner. We also inverted the
ranking of the negative scales
for consistency. Table II shows the results.
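To make the aggregation concrete: with four modes, a common Borda scheme assigns 4 points to a first-place ranking down to 1 point for last place, so each ranking contributes 10 points, which is consistent with the row totals in Table II (e.g., 18 preference ballots sum to 180). The exact point scheme is an assumption of this example. The snippet below computes such a count from a few hypothetical ballots.

```python
from collections import Counter

def borda_count(ballots, num_candidates=4):
    """Each ballot lists the modes from most to least preferred.
    First place earns num_candidates points, last place earns 1."""
    scores = Counter()
    for ballot in ballots:
        for place, mode in enumerate(ballot):
            scores[mode] += num_candidates - place
    return scores

if __name__ == "__main__":
    # Three hypothetical ballots (rank 1 listed first).
    ballots = [
        ["MC", "DC", "IC", "NC"],
        ["MC", "IC", "DC", "NC"],
        ["DC", "MC", "IC", "NC"],
    ]
    print(borda_count(ballots))   # -> MC=11, DC=9, IC=7, NC=3
```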
Fig. 11: Feature Usability.
Fig. 12: Task Preference.
V. DISCUSSION
Table I shows that mixed communication (MC) has the best
information quality and quantity, leading to the best awareness
and trust. However, with more information, the operators
experienced greater instability of situation and variability of
situation (see the SART Subjective Scale section of Table I).
The participants indicated mixed communication as the best
choice in both the data and their expressed preference (see
Table II), confirming hypotheses H1 and H2.
Direct communication (DC), compared to indirect commu-
nication (IC), had better information quality and quantity, lead-
Fig. 13: Task performance for each communication mode.
Fig. 14: Learning effect in the user study.
ing to better awareness. However, similar to mixed communi-
cation, the operators experienced greater instability of situation
and variability of situation. Although trust is higher in indirect
communication, operators prefer direct communication. The
Borda count for preference data shows direct communication
ranks better than indirect communication, supporting hypoth-
esis H3.
However, in the absence of direct communication, the oper-
ators concentrated more on the task, leading to a higher level
of arousal and better trust (see Trust Subjective Scale section
of Table I). Although indirect communication provides more
diverse and more visually augmented information, operators
relied on direct communication when working as a team.
We also observed operators directly communicating either
at the start of the task, to define a strategy, or near the
end of the task, to coordinate its final steps. One possible
reason is familiarity: humans are more accustomed to direct
communication. The participants were new
to the interface and could not use it as effectively as
communicating directly. This raises another research question
and a potential future study on comparing the effects of mixed
communication on novice operators and on expert operators.
Our experiments did not detect a substantial difference
in performance across communication modes. However, we
hypothesize that this lack of difference is because of the
learning effect across the trials each team had to perform.
Fig. 13 represents the points earned for each communication
mode, and Fig. 14 shows the learning effect as an increase in
objective performance over the order in which the tasks were performed.
VI. CONCLUSION AND FUTURE WORK
We presented a study on the effects of inter-operator
communication in multi-human multi-robot interaction. We
concentrated on two broad types of communication: direct,
whereby two operators engage in an exchange of information
verbally or non-verbally (e.g., with gestures, body language);
and indirect, whereby information is exchanged through a
suitably designed interface. We presented the design of our
interface, which offers multi-granularity control of teams of
robots.
In a user study involving 18 participants and 9 robots,
we assessed the effect of inter-operator communication by
comparing direct, indirect, mixed, and no communication.
We considered metrics such as awareness, trust, workload
and usability. Our results suggest that allowing for mixed
communication is the best approach, because it fosters flexible
and intuitive collaboration among operators.
In future work, we will study how training affects commu-
nication. In comparing direct and indirect communication, we
noticed that our novice participants preferred direct to indi-
rect communication. Our experience as online video gamers
leads us to hypothesize that novice users heavily lean on
verbal communication due to a lack of experience with the
information conveyed by the interface. As the experience of
the operators increases, we expect direct communication to
become more sporadic and high-level. At the same time, more
expert users might request advanced features that make
indirect communication even more efficient.
ACKNOWLEDGEMENTS
This work was funded by an Amazon Research Award.
REFERENCES
[1] M. Brambilla, E. Ferrante, M. Birattari, and M. Dorigo, “Swarm
robotics: A review from the swarm engineering perspective,” Swarm
Intelligence, vol. 7, no. 1, pp. 1–41, 2013.
[2] A. Kolling, P. Walker, N. Chakraborty, K. Sycara, and M. Lewis,
“Human Interaction With Robot Swarms: A Survey,” IEEE Transactions
on Human-Machine Systems, vol. 46, no. 1, pp. 9–26, Feb. 2016.
[Online]. Available: http://ieeexplore.ieee.org/document/7299280/
[3] N. Ayanian, A. Spielberg, M. Arbesfeld, J. Strauss, and D. Rus,
“Controlling a team of robots with a single input,” in Robotics and
Automation (ICRA), 2014 IEEE International Conference On. IEEE,
2014, pp. 1755–1762.
[4] G. Kapellmann-Zafra, N. Salomons, A. Kolling, and R. Groß, “Human-
Robot Swarm Interaction with Limited Situational Awareness,” in In-
ternational Conference on Swarm Intelligence. Springer, 2016, pp.
125–136.
[5] A. Kolling, K. Sycara, S. Nunnally, and M. Lewis, “Human Swarm
Interaction: An Experimental Study of Two Types of Interaction
with Foraging Swarms,” Journal of Human-Robot Interaction, vol. 2,
no. 2, Jun. 2013. [Online]. Available: http://dl.acm.org/citation.cfm?id=
3109714
[6] M. Lewis, H. Wang, S. Y. Chien, P. Velagapudi, P. Scerri, and K. Sycara,
“Choosing autonomy modes for multirobot search,” Human Factors,
vol. 52, no. 2, pp. 225–233, 2010.
[7] J. Y. C. Chen and M. J. Barnes, “Human–Agent Teaming for Multirobot
Control: A Review of Human Factors Issues,” IEEE Transactions on
Human-Machine Systems, vol. 44, no. 1, pp. 13–29, Feb. 2014.
[8] K. Allen, R. Bergin, and K. Pickar, “Exploring trust, group satisfaction,
and performance in geographically dispersed and co-located university
technology commercialization teams,” in VentureWell. Proceedings of
Open, the Annual Conference. National Collegiate Inventors &
Innovators Alliance, 2004, p. 201.
[9] S. E. McBride, W. A. Rogers, and A. D. Fisk, “Understanding the effect
of workload on automation use for younger and older adults,” Human
factors, vol. 53, no. 6, pp. 672–686, 2011.
[10] J. Y. Chen and M. J. Barnes, “Human–agent teaming for multirobot
control: A review of human factors issues,” IEEE Transactions on
Human-Machine Systems, vol. 44, no. 1, pp. 13–29, 2014.
[11] J. M. Riley and M. R. Endsley, “Situation awareness in hri with
collaborating remotely piloted vehicles,” in proceedings of the Human
Factors and Ergonomics Society Annual Meeting, vol. 49, no. 3. SAGE
Publications Sage CA: Los Angeles, CA, 2005, pp. 407–411.
[12] J. D. Lee, “Review of a pivotal human factors article:“humans and
automation: use, misuse, disuse, abuse”,” Human Factors, vol. 50, no. 3,
pp. 404–410, 2008.
[13] R. Parasuraman and V. Riley, “Humans and automation: Use, misuse,
disuse, abuse,” Human factors, vol. 39, no. 2, pp. 230–253, 1997.
[14] M. R. Endsley and E. O. Kiris, “The out-of-the-loop performance
problem and level of control in automation,” Human factors, vol. 37,
no. 2, pp. 381–394, 1995.
[15] J. Gouraud, A. Delorme, and B. Berberian, “Autopilot, mind wandering,
and the out of the loop performance problem,” Frontiers in neuroscience,
vol. 11, p. 541, 2017.
[16] M. Tomasello, Origins of human communication. MIT press, 2010.
[17] J. Patel, Y. Xu, and C. Pinciroli, “Mixed-granularity human-swarm inter-
action,” in Robotics and Automation (ICRA), 2019 IEEE International
Conference on. IEEE, 2019.
[18] J. Patel and C. Pinciroli, “Improving human performance using mixed
granularity of control in multi-human multi-robot interaction,” in IEEE
International Conference on Robot and Human Interactive Communica-
tion (Ro-Man 2020). IEEE Press, 2020.
[19] J. Patel, T. Ramaswamy, Z. Li, and C. Pinciroli, “Transparency in multi-
human multi-robot interaction,” arXiv preprint arXiv:2101.10495, 2021.
[20] M. R. Endsley, “From here to autonomy: lessons learned from human–
automation research,” Human factors, vol. 59, no. 1, pp. 5–27, 2017.
[21] T. Setter, A. Fouraker, H. Kawashima, and M. Egerstedt, “Haptic
Interactions With Multi-Robot Swarms Using Manipulability,” Journal
of Human-Robot Interaction, vol. 4, no. 1, p. 60, Jan. 2015. [Online].
Available: http://dl.acm.org/citation.cfm?id=3109839
[22] G. Kapellmann-Zafra, N. Salomons, A. Kolling, and R. Groß, “Human-
Robot Swarm Interaction with Limited Situational Awareness,” in
International Conference on Swarm Intelligence. Springer, 2016, pp.
125–136. [Online]. Available: http://link.springer.com/chapter/10.1007/
978-3-319-44427-7_11
[23] J. Nagi, A. Giusti, L. M. Gambardella, and G. A. Di Caro,
“Human-swarm interaction using spatial gestures,” in 2014 IEEE/RSJ
International Conference on Intelligent Robots and Systems. Chicago,
IL, USA: IEEE, Sep. 2014, pp. 3834–3841. [Online]. Available:
http://ieeexplore.ieee.org/document/6943101/
[24] J. Alonso-Mora, S. Haegeli Lohaus, P. Leemann, R. Siegwart,
and P. Beardsley, “Gesture based human - Multi-robot swarm
interaction and its application to an interactive display,” in 2015 IEEE
International Conference on Robotics and Automation (ICRA). Seattle,
WA, USA: IEEE, May 2015, pp. 5948–5953. [Online]. Available:
http://ieeexplore.ieee.org/document/7140033/
[25] B. Gromov, L. M. Gambardella, and G. A. Di Caro, “Wearable
multi-modal interface for human multi-robot interaction.” IEEE,
Oct. 2016, pp. 240–245. [Online]. Available: http://ieeexplore.ieee.org/
document/7784305/
[26] G. Podevijn, R. O’Grady, Y. S. G. Nashed, and M. Dorigo, “Gesturing
at Subswarms: Towards Direct Human Control of Robot Swarms,”
in Towards Autonomous Robotic Systems, A. Natraj, S. Cameron,
C. Melhuish, and M. Witkowski, Eds. Berlin, Heidelberg: Springer
Berlin Heidelberg, 2014, vol. 8069, pp. 390–403. [Online]. Available:
http://link.springer.com/10.1007/978-3-662-43645-5_41
[27] N. Ayanian, A. Spielberg, M. Arbesfeld, J. Strauss, and D. Rus,
“Controlling a team of robots with a single input,” in Robotics and
Automation (ICRA), 2014 IEEE International Conference on. IEEE,
2014, pp. 1755–1762. [Online]. Available: http://ieeexplore.ieee.org/
abstract/document/6907088/
[28] Y. Diaz-Mercado, S. G. Lee, and M. Egerstedt, “Distributed dynamic
density coverage for human-swarm interactions,” in American Control
Conference (ACC), 2015. IEEE, 2015, pp. 353–358.
[29] S. Bashyal and G. K. Venayagamoorthy, “Human swarm interaction
for radiation source search and localization,” in 2008 IEEE Swarm
Intelligence Symposium. IEEE, 2008, pp. 1–8.
[30] N. Mavridis, “A review of verbal and non-verbal human–robot interac-
tive communication,” Robotics and Autonomous Systems, vol. 63, pp.
22–35, 2015.
[31] S. Saunderson and G. Nejat, “How robots influence humans: A survey
of nonverbal communication in social human–robot interaction,” Inter-
national Journal of Social Robotics, vol. 11, no. 4, pp. 575–608, 2019.
[32] Y. Bisk, D. Yuret, and D. Marcu, “Natural language communication
with robots,” in Proceedings of the 2016 Conference of the North
American Chapter of the Association for Computational Linguistics:
Human Language Technologies, 2016, pp. 751–761.
[33] F. Sartorato, L. Przybylowski, and D. K. Sarko, “Improving therapeutic
outcomes in autism spectrum disorders: Enhancing social communi-
cation and sensory processing through the use of interactive robots,”
Journal of psychiatric research, vol. 90, pp. 1–11, 2017.
[34] Z. Liu, M. Wu, W. Cao, L. Chen, J. Xu, R. Zhang, M. Zhou, and J. Mao,
“A facial expression emotion recognition based human-robot interaction
system,” IEEE/CAA Journal of Automatica Sinica, 2017.
[35] H. Admoni and B. Scassellati, “Social eye gaze in human-robot interac-
tion: a review,” Journal of Human-Robot Interaction, vol. 6, no. 1, pp.
25–63, 2017.
[36] C. Breazeal, C. D. Kidd, A. L. Thomaz, G. Hoffman, and M. Berlin,
“Effects of nonverbal communication on efficiency and robustness in
human-robot teamwork,” in 2005 IEEE/RSJ international conference on
intelligent robots and systems. IEEE, 2005, pp. 708–713.
[37] S. G. Lakhmani, J. L. Wright, M. R. Schwartz, and D. Barber,
“Exploring the Effect of Communication Patterns and Transparency
on Performance in a Human-Robot Team,” Proceedings of the Human
Factors and Ergonomics Society Annual Meeting, vol. 63, no. 1, pp.
160–164, Nov. 2019. [Online]. Available: http://journals.sagepub.com/
doi/10.1177/1071181319631054
[38] H. Knight, R. Thielstrom, and R. Simmons, “Expressive path shape
(swagger): Simple features that illustrate a robot’s attitude toward its goal
in real time,” in Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ
International Conference on. IEEE, 2016, pp. 1475–1482. [Online].
Available: http://ieeexplore.ieee.org/abstract/document/7759240/
[39] B. Capelli, C. Secchi, and L. Sabattini, “Communication Through
Motion: Legibility of Multi-Robot Systems,” in 2019 International
Symposium on Multi-Robot and Multi-Agent Systems (MRS). New
Brunswick, NJ, USA: IEEE, Aug. 2019, pp. 126–132. [Online].
Available: https://ieeexplore.ieee.org/document/8901100/
[40] D. Claes and K. Tuyls, “Multi robot collision avoidance in a
shared workspace,” Autonomous Robots, vol. 42, no. 8, pp. 1749–
1770, Dec. 2018. [Online]. Available: http://link.springer.com/10.1007/
s10514-018-9726-5
[41] Y. Wang, L. R. Humphrey, Z. Liao, and H. Zheng, “Trust-based
Multi-Robot Symbolic Motion Planning with a Human-in-the-Loop,”
ACM Transactions on Interactive Intelligent Systems, vol. 8, no. 4,
pp. 1–33, Nov. 2018, arXiv: 1808.05120. [Online]. Available:
http://arxiv.org/abs/1808.05120
[42] A. Bajcsy, S. L. Herbert, D. Fridovich-Keil, J. F. Fisac, S. Deglurkar,
A. D. Dragan, and C. J. Tomlin, “A Scalable Framework For Real-Time
Multi-Robot, Multi-Human Collision Avoidance,” arXiv:1811.05929
[cs], Nov. 2018, arXiv: 1811.05929. [Online]. Available: http:
//arxiv.org/abs/1811.05929
[43] L. Zhang and R. Vaughan, “Optimal robot selection by gaze direction in
multi-human multi-robot interaction,” in 2016 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS). Daejeon,
South Korea: IEEE, Oct. 2016, pp. 5077–5083. [Online]. Available:
http://ieeexplore.ieee.org/document/7759745/
[44] Y. Che, A. M. Okamura, and D. Sadigh, “Efficient and trustworthy social
navigation via explicit and implicit robot–human communication,IEEE
Transactions on Robotics, 2020.
[45] P.-J. Lee, H. Wang, S.-Y. Chien, M. Lewis, P. Scerri, P. Velagapudi,
K. Sycara, and B. Kane, “Teams for Teams Performance in Multi-
Human/Multi-Robot Teams,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, p. 5, 2010.
[46] S. You and L. E. Robert Jr., “Curiosity vs. Control: Impacts of Training
on Performance of Teams Working with Robots,” in Proceedings of
the 19th ACM Conference on Computer Supported Cooperative Work
and Social Computing Companion - CSCW ’16 Companion. San
Francisco, California, USA: ACM Press, 2016, pp. 449–452. [Online].
Available: http://dl.acm.org/citation.cfm?doid=2818052.2869121
[47] C. Pinciroli, V. Trianni, R. O’Grady, G. Pini, A. Brutschy, M. Brambilla,
N. Mathews, E. Ferrante, G. Di Caro, F. Ducatelle, M. Birattari, L. M.
Gambardella, and M. Dorigo, “ARGoS: a modular, parallel, multi-engine
simulator for multi-robot systems,” Swarm Intelligence, vol. 6, no. 4, pp.
271–295, 2012.
[48] A. Bhaskara, M. Skinner, and S. Loft, “Agent Transparency:
A Review of Current Theory and Evidence,” IEEE Transactions
on Human-Machine Systems, pp. 1–10, 2020. [Online]. Available:
https://ieeexplore.ieee.org/document/8982042/
[49] R. M. Taylor, “Situational awareness rating technique (sart): The devel-
opment of a tool for aircrew systems design,” in Situational awareness.
Routledge, 1990, pp. 111–128.
[50] R. Likert, “A technique for the measurement of attitudes,”
Archives of Psychology, vol. 140, p. 55, 1932. [Online]. Available: https:
//ci.nii.ac.jp/naid/10024177101/en/
[51] S. G. Hart and L. E. Staveland, “Development of nasa-tlx (task load
index): Results of empirical and theoretical research,” in Advances in
psychology. Elsevier, 1988, vol. 52, pp. 139–183.
[52] A. Uggirala, A. K. Gramopadhye, B. J. Melloy, and J. E. Toler, “Mea-
surement of trust in complex and dynamic systems using a quantitative
approach,” International Journal of Industrial Ergonomics, vol. 34,
no. 3, pp. 175–186, 2004.
[53] M. Friedman, “The use of ranks to avoid the assumption of normality
implicit in the analysis of variance,” Journal of the american statistical
association, vol. 32, no. 200, pp. 675–701, 1937.
[54] D. Black, “Partial justification of the borda count,” Public Choice, pp.
1–15, 1976.