The Influence of Following Angle on Performance Metrics of a Human-Following Robot
Abstract— Robots that operate alongside people need to be able
to move in socially acceptable ways. As a step toward this goal,
we study how and under which circumstances the angle at which
a robot follows a person may affect the human experience and
robot tracking performance. In this paper, we aimed to assess
three following angles (0° angle, 30° angle, and 60° angle) under
two conditions: when the robot was carrying a valuable personal
item or not. Objective and subjective indicators of the quality of
following and participants' perceptions and preferences were
collected. Results indicated that the personal item manipulation
increased awareness of the quality of the following and the
following angles. Without the manipulation, participants were
indifferent to the behavior of the robot. Our following algorithm
was successful for tracking at a 0° and 30° angle, yet it must be
improved for wider angles. Further research is required to
obtain better understanding of following angle preferences for
varying environment and task conditions.
I. INTRODUCTION
As service robots take on an increasingly significant role
in society, they must be able to move in socially acceptable
ways [1]. Person-following is a fundamental part of such social
interactions, as there are many likely situations in which a
robot needs to follow a person; mainly when the human shows
the robot around, gives it procedural instructions, or when they
need to jointly perform activities, like carrying groceries.
Consequently, a robot's following behavior must be robust,
comprehensible and adhere to social expectations [2].
Research in person-following focused primarily on
algorithmic considerations for robots that follow people
directly from behind at zero degrees of separation [3]-[6].
However, when human beings accompany each other, they
often walk side by side [7] or in less constrained angles than
zero. Since predictability and comprehension of robot
movements can affect user comfort and safety [1], this
discrepancy may lead to suboptimal human-robot interactions
while walking. Several studies [1], [2], [8]-[12] have provided
insight into people's preferences of robotic spatial behavior;
however, few of them have focused on a person-following
scenario. One exception is Gockley, Forlizzi & Simmons [1],
who discovered that people felt a robot's following behavior
was most natural when it followed by moving directly towards
them, while disregarding their exact walking path.
A robot's angle of approach has been previously examined
in static studies revealing that robots appear less threatening
when approaching at an angle to a person's face than when
approaching from a frontal position [10], [12]. These findings
*All authors are from the Department of Industrial Engineering and
Management, Ben-Gurion University of the Negev
illustrate the importance of orientation to human-robot
interactions. The only study that we are aware of that has
attempted to assess the angle of following in a human-robot
walking interaction was Young et al. [11], who evaluated a
dog-leash interface for behind following (0° angle), front
following (180° angle), and an angled behind position (45° angle). Their
results indicated that the front behavior was generally
preferred. The least favorite following method was the 45°
angle: "participants reported that it was harder to see, and
tended to make them feel less safe and more agitated with the
robot" [11]. Surprisingly, the behind following did not receive
similar complaints. While these results may shed some light
on following angle preferences in a robotic dog-leash
interface, they cannot be generalized to all interactions with
human-following robots since one cannot isolate the effects of
the leash on the human-robot interaction.
We aim to expand upon the above studies and examine
whether and under which circumstances the angle at which a
robot follows a person may affect the human experience and
robot tracking performance. As a step toward this goal, three
initial following angles were assessed: 0◦ angle (following
from behind), 30◦ angle, and 60◦ angle (Fig. 1). These angles
were selected based on the maturity of our person-following
algorithm, which in preliminary tests could consistently handle
following of up to a 60◦ angle, but was not yet equipped to deal
with the complexities of side-by-side following (e.g. historical
tracking and obstacle avoidance [13]).
Each of the selected following angles provides unique
tactical advantages which could influence user preferences and
produce valuable insights. In 60° following, contrary to the 0°
and 30° conditions, the user can see the robot without turning
their torso. This may be preferable in circumstances in which
obtaining constant feedback on the whereabouts of the robot is
important, or when the user wishes to interact with the robot
while walking. The primary advantage of 0° following is that
*This research was supported by the Ministry of Science, Technology &
Space, Israel, Grant # 47897, "Follow me", the Helmsley Charitable Trust
through the Agricultural, Biological and Cognitive Robotics Center, and the
Rabbi W. Gunther Plaut Chair in Manufacturing Engineering, both at Ben-
Gurion University of the Negev.
Authors: Honig S. Shanee, Katz Dror, Oron-Gilad Tal, Edan Yael, Ben-Gurion University of the Negev
Figure 1. The robot's following angles: 0° (left), 30° (center), 60° (right)
25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), August 26-31, 2016, Columbia University, NY, USA
it is most compact in terms of space. The human-robot team
occupies only one walking lane instead of two, allowing more
space for other people to pass. This may be helpful in busy or
narrow environments, however could also be perceived as
threatening, negatively affecting user comfort. The 30°
following offers an intermediate solution: it takes less
horizontal space than the 60° following, but a quick glance to
the side reveals the robot's location.
Proxemic preferences in human-robot interactions are
known to be influenced by gender [14], personality [14], robot
appearance [15], negative attitudes [16], and type of
interaction [17]. In a person-following scenario, preliminary
research [18] has shown that walking efficiency, user comfort
and robot likeability can be affected by a robot's following
distance, acceleration value, environment (open hall vs.
corridor) and the inclusion of a secondary task. Thus,
following angle preferences are likely to change dynamically
based on contextual parameters.
In our experiment, the robot was introduced to participants
as their live-in personal assistant. Consequently, we assumed
that participants would perceive the robot more as a social
companion than as a tool, and would be interested in its
whereabouts and behavior during trials. Our testing
environment was open, void of other people, and wide enough
to comfortably accommodate 60° following. We therefore
hypothesized that in the context we were testing, the robot
following would be perceived as more comfortable as the
angle of following grew closer to a side-by-side formation.
Contrary to our expectations, preliminary trials with people
who performed a secondary task while walking indicated no
distinguishable difference among the three following angles.
These results lead us to question our assumption that
participants perceived the robot as a social companion rather
than as a tool. Further research indicated that a person's level
of involvement with an object, situation, or action affects the
extent and focus of attention and comprehension [19], that
people often do not direct conscious operations to mediating
artifacts [20], and that inclusion of a secondary task in a
person-following scenario has been known to reduce
engagement [18]. As a result, we theorized that the
indifference in our preliminary results may be due to lack of
reason to attend to the robot's activities. Whereas most human-
robot interactions encourage or require participants to actively
engage with the robot, person-following is a joint activity
which does not demand such engagement. Since involvement
is determined by the degree to which people perceive an
artifact as personally relevant [19], the study was revised to
assess effects of increased felt involvement by asking half of
the participants to place their personal wallet on the robot for
the duration of the study.
II. METHOD
A. Overview
Two conditions were compared for three different
following angles under the assumption that increased personal
relevance leads to an increase in felt involvement, resulting in
distinguishable preferences for following angle. In the first
condition, participants were asked to place their wallet on the
robot for the duration of the experiment, and in the second
condition, participants were not asked to do so. To imitate a
real world walking scenario, which is rarely constant or
without distraction, participants were asked to make two stops
at predefined locations and play a game on a smartphone as
they walked. The study took place in an office lobby at the
Center for Digital Innovation, in Be'er Sheba, Israel.
B. Hardware
To facilitate person following at different angles, a
Microsoft Kinect V2 was mounted on a pan mechanism and
connected by a rod to a Pioneer LX (Fig. 2). Connected to the
rod was an aluminum tray used to hold personal objects. The
Kinect was connected to an Asus laptop (Intel Core i7-4710HQ
processor) from which the person tracking and following
commands were executed in ROS [21]. The robot commands
were sent to the Pioneer LX's onboard computer using a TP-
LINK router with wireless speed up to 300 Mbps.
The Pioneer LX is equipped with an integrated on-board
computer with a 1.8 GHz Dual Core processor, and 2GB
DDR3 RAM. A built-in SICK S300 scanning laser range-
finder, mounted approximately 20 cm above the ground, was
used to detect nearby obstacles and stop the robot when an
object came within 50 cm of its center.
C. Person Tracking & Following Algorithms
OpenPTrack [22] was used to identify and track the
location of the participant. To optimize performance,
adjustments were made to its parameters based on preliminary
testing, including changing the confidence level used as a
threshold to identify people to 1.1, and changing the minimum
and maximum height of a person to 1.4m and 2m, respectively.
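The thresholds above can be sketched as a simple detection filter. This is an illustrative helper, not OpenPTrack's actual API; only the numeric thresholds come from the text:

```python
# Hypothetical filter mirroring the thresholds described above:
# confidence above 1.1 and person height between 1.4 m and 2.0 m.
def is_valid_person(confidence, height,
                    min_confidence=1.1,
                    min_height=1.4, max_height=2.0):
    """Return True if a detection passes the tracking thresholds."""
    return (confidence > min_confidence
            and min_height <= height <= max_height)
```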
In order to move the robot according to the location of the
person, the person-following algorithm described in [3] was
used. This algorithm selects the first detected person whose
parameters meet the defined height and confidence thresholds
and, without using a map of the environment, moves the robot
to a defined distance behind the person (0° following).
To facilitate person-following at different angles, two new
parameters were defined and added to the original algorithm.
The first, AngleErrorPan, represents the angle of the Pan in
radians from the center of the robot. The second,
AngleSmallError, represents the angle in radians between the
person and the center of the Kinect. During following, the pan
aims to keep the person in the center of the Kinect by
continuously measuring AngleSmallError and adjusting its
position accordingly.
Figure 2. A front view of the pan mechanism for the Kinect (labeled: Kinect V2, pan mechanism, Pioneer LX robot, laser range finder, aluminum tray)
The modified person-following algorithm used the angle
of the pan (AngleErrorPan) and the angle of the person
relative to the center of the Kinect (AngleSmallError) to
estimate the location of the person while walking (Fig. 3):
y_person = distance * sin(AngleSmallError + AngleErrorPan)
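The position estimate above can be expressed as a small helper. This is a sketch of the printed formula, not the authors' code; the function name is illustrative:

```python
import math

def person_lateral_offset(distance, angle_small_error, angle_error_pan):
    """Lateral offset (m) of the person from the robot's heading.

    distance:          measured robot-person distance (m)
    angle_small_error: person's angle from the Kinect's center (rad)
    angle_error_pan:   pan angle from the robot's center (rad)
    """
    return distance * math.sin(angle_small_error + angle_error_pan)
```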
In 0° following, the angular velocity of the robot was
proportional (gain KpAngle) to the angle of the person
(AngleError) from the center of the robot, and was therefore
zero when the person stood directly in front of the robot:
cmd_vel.angular.z = AngleError * KpAngle
Similarly, in 30° following from the left, the angular velocity
was zero when the person's position was
0.5236 radians (30°) from the center of the robot:
cmd_vel.angular.z = (AngleError + 0.5236) * KpAngle
The linear velocity of the robot at 0° and 30° following
changed in proportion (gain KpDistance) to the distance
between the robot and the subject. When the actual distance
(Distance) was larger than the target distance (Target), the
velocity increased until the robot reached the target following
distance or until the velocity reached a predefined limit. When
the actual distance was smaller than the target distance, the
velocity was set to zero until the robot arrived at the target
distance:
cmd_vel.linear.x = KpDistance * (Distance - Target)
To achieve 60° following from the right, the linear velocity
was adapted to equal zero when the person's position was
-1.0472 radians (-60°) from the center of the robot,
while the angular velocity was set to zero:
cmd_vel.linear.x = (AngleError - 1.0472) * KpDistance
This decision was made because changing the angular velocity
at such a sharp angle does not enable the robot to move closer
to the person: if the robot is at a 60 following angle and would
move forward, the following angle would become smaller,
causing the robot to turn away from the person in order to try
and correct its angle.
The robot's parameters for speed and following distance
were selected for optimal results in preliminary experiments.
The maximum angular and linear acceleration coefficients
were set to 0.5, the minimum distance between the robot and
the person was set to 1.2 meters, and the maximum speed of
the robot was set to 0.8 m/s. Higher speeds and acceleration
values caused the robot to slip on the laminate flooring.
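Taken together, the control laws and limits above can be sketched as one update step. This is a minimal reconstruction under stated assumptions: the gains KP_ANGLE and KP_DISTANCE are placeholder values (the paper reports acceleration limits and a speed cap, not the gains), and the sign conventions follow the equations as printed:

```python
KP_ANGLE = 1.0      # angular proportional gain (assumed, not reported)
KP_DISTANCE = 1.0   # linear proportional gain (assumed, not reported)
MAX_SPEED = 0.8     # m/s, maximum robot speed reported in the paper
TARGET = 1.2        # m, minimum robot-person distance reported in the paper

def follow_step(angle_error, distance, mode):
    """One control update: return (linear_x, angular_z).

    angle_error: person's angle from the robot's center (rad)
    distance:    measured robot-person distance (m)
    mode:        'behind' (0 deg), 'left30' (30 deg), or 'right60' (60 deg)
    """
    if mode == 'behind':
        angular = angle_error * KP_ANGLE
        linear = KP_DISTANCE * (distance - TARGET)
    elif mode == 'left30':
        angular = (angle_error + 0.5236) * KP_ANGLE
        linear = KP_DISTANCE * (distance - TARGET)
    elif mode == 'right60':
        # At 60 deg the angular velocity is held at zero and the
        # linear velocity is driven by the angular error instead.
        angular = 0.0
        linear = (angle_error - 1.0472) * KP_DISTANCE
    else:
        raise ValueError("unknown mode: %s" % mode)
    # Stop rather than reverse when closer than the target distance,
    # and cap the forward speed at the robot's limit.
    linear = max(0.0, min(linear, MAX_SPEED))
    return linear, angular
```

In a ROS node these two values would populate the `linear.x` and `angular.z` fields of the `cmd_vel` Twist message, as in the equations above.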
D. Participants
Twenty-five participants (8 females, 17 males), aged 18 to
31 years, were involved in the experiments. Ten of the
participants were Industrial Engineering students at Ben-
Gurion University of the Negev and were offered 1 bonus
point in a course for their participation. The remaining 14
participants were Hi-Tech employees that volunteered their
time without receiving any compensation.
E. Experimental Design
The experiment used a mixed between- and within-
subjects design. The wallet manipulation was the between-
subject variable: 13 participants were asked to place their
wallet on the robot for the duration of the study and 12
participants were not. The following angle was the within-
subject variable: each participant completed a straight
predetermined 20-meter walking path under three conditions
while being followed by the robot: (1) the robot was
programmed to follow directly behind (0◦ angle), (2) the robot
was programmed to follow at a 30◦ angle from the left, and (3)
the robot was programmed to follow at a 60◦ angle from the
right. The direction of approach alternated between 30° and
60° following due to environmental constraints: the room on
one side contained many reflective surfaces which frequently
caused false tracking. Changing the direction of approach
minimized such instances. The order of the following angle
trials was counterbalanced between participants.
F. Objective and Subjective Performance Measures
1) Distance and following angle. The distance in meters and
following angle in degrees were recorded by the robot.
2) Number of losses. The number of times the robot lost track
of the target person during each trial.
3) Number of Interventions. The number of times the
experimenter took control over the robot during the duration
of the trial. Interventions were classified into two types:
interventions due to robot safety and interventions due to loss.
Interventions due to safety resulted from the robot getting too
close to an obstacle or a wall. Interventions due to loss were
made to veer the robot back toward the participant when the
robot lost track of the person.
4) Subjective experience. Participants completed a post-
session questionnaire after each trial, and a final questionnaire
after completing all 3 trials (Fig. 4). Both questionnaires used
5-point Likert scales with 5 representing "Strongly agree" and
1 representing "Strongly disagree".
G. Procedures
Initially, participants filled in informed consent forms and
pre-test questionnaires that used the Technology Adoption
Propensity (TAP) index [23] to assess their level of experience
with technology. The robot was then introduced to the
participants as their live-in personal assistant who could be
used to carry personal belongings. Half of the participants
were asked at this point to place their wallet on the robot's tray.
All participants completed the same straight 20-meter walking
path for each experimental condition (0°, 30°, or 60°) while
playing a simple smartphone game. The game used is called
"aa" [24], and was chosen because it requires the participant's
attention in order to succeed. Participants were required to play
for the duration of each trial, aiming to complete as many
levels as possible.
Figure 3. Angles of the Kinect and the Pan
Participants were instructed to walk at their
natural walking pace, and stop at two predetermined locations
until the robot made a complete stop behind them. At the
beginning of each run, rqt_console [25] was activated in order
to record the robot's distance and angle from the participant.
Questionnaires were administered after each trial and at the
end of the experiment (Fig. 4).
Two researchers were present during trials. One was
responsible for the wellbeing and safety of participants,
walking a couple meters behind the robot with an Xbox
controller that could take charge of the robot in case of any
unforeseen issues. In case of an intervention, the robot was
manually turned to the direction of the participant until the
robot began tracking the person again. If the robot was
unsuccessful at doing so, the robot was manually controlled to
follow the participant for the remainder of the trial. The second
researcher collected, prepared, and organized the
questionnaires administered. During the completion of
questionnaires, one researcher was available to answer
questions, while the other researcher reset the robot's
parameters and location in preparation for the next condition.
H. Data analysis
All tests were designed as two-tailed and used a
significance level of 0.05. Data for objective measure analysis
of quality of following included: the number of times the robot
lost the participant, the recorded distance and following angle
of the robot, and the number of interventions due to safety and
losses. The following angle condition and the participants’
gender were the independent variables. The framework of
analysis was the General Linear Mixed Model (GLMM) where
participants were included as a random effect to account for
individual differences. Subjective measures analysis included
analysis of post-session and final questionnaires. Post-session
questionnaire items were also analyzed using GLMM.
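The GLMM analysis above, with participants as a random effect, can be sketched in Python with statsmodels. The data frame below is illustrative dummy data, not the study's measurements:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative dummy ratings: 8 participants x 3 following-angle conditions.
df = pd.DataFrame({
    "participant": sorted(list(range(1, 9)) * 3),
    "angle": ["behind", "left30", "right60"] * 8,
    "rating": [4, 3, 2, 5, 4, 3, 4, 4, 2, 3, 3, 2,
               5, 3, 3, 4, 2, 2, 4, 3, 3, 5, 4, 2],
})

# Mixed model: following angle as a fixed effect,
# with a random intercept per participant.
model = smf.mixedlm("rating ~ C(angle)", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```

Note that `mixedlm` fits a linear mixed model; for ordinal Likert responses a cumulative-link GLMM would be more faithful, but the random-intercept structure for participants is the same.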
III. RESULTS
A. Participants
The TAP index [23] indicated that the studied participants
were technologically oriented. Participants felt that technology
allows them to more easily do the things they want to do
(mean=4.4, SD=0.5), that new technology makes their lives
easier (mean=4.0, SD=0.7), and that they are good at figuring
out new high-tech products and services without help from
others (mean=4.2, SD=0.9). The majority of participants had
no prior experience with robots (16/24).
B. Quality of Following
Objective indicators used to assess the quality of following
are summarized in Table 1. While the mean following angles at
0° and 30° following were consistently close to our intended
angles (2.31° and 28.26° to the left, respectively), the
implementation of 60° right-following was unsuccessful
(mean angle = 26.11° to the right, SD=7.3). For brevity,
the 2.31° left following angle will be referred to as the
behind angle, the 28.26° left following angle as the behind
left angle, and the 26.11° right following angle as the behind
right angle.
TABLE 1. Cumulative result table of objective measures for walk quality (columns: actual angle per condition; std. dev. of the distance measurements the robot recorded from the participant; std. dev. of the following-angle measurements recorded during trials)
There was no significant difference across trials in terms of
following distance. The task was perceived as more stressful
during the behind left condition than in the behind right or
behind conditions (p=0.019). The number of times the robot
lost track of a participant (losses), the mean number of
interventions due to loss and the mean number of
interventions due to robot safety were highest in the behind
right condition, second highest in the behind condition, and
lowest in the behind left condition. However, these
differences were not statistically significant (p=0.206,
p=0.205, p=0.297, respectively). Despite the need for many
interventions, at the end of the experiment, the majority of
participants felt the robot was safe (mean=3.9, SD=0.6),
considerate (mean=3.8, SD=0.6), and friendly (mean=3.4,
SD=0.9) (Fig. 5).
C. Following Angle Preferences
Sixteen participants stated that they felt differences among
trials. Of those participants, the majority attributed the
difference to a variance in robot speed (14/16) and/or
following distance (8/16). Only 6/16 stated explicitly that the
following angle was changed between trials.
Post-trial Questionnaire
1. The task was stressful
2. The robot was stressful
3. My behavior was in direct response to the robot's behavior
4. The robot's behavior was in direct response to my behavior
5. The robot was considerate of my personal space
6. I walked independently of the robot's behavior
7. I felt comfortable with the speed of the robot
8. The robot moved too slowly
9. I walked naturally
10. I was satisfied with the way in which the robot followed me
11. I liked the robot
12. I felt safe with the distance of the robot
13. I adapted my walking speed to suit the speed of the robot
Final Questionnaire
1. I felt a difference between the sessions
2. In which session did you feel most comfortable? (choice of session #)
3. My impressions of the robot are: (1) friendly,
(2) intruding, (3) considerate, (4) safe, (5) frightening,
(6) irritating, (7) human-like, (8) calming, (9) indifferent
4. In your opinion, what was the difference between trials?
Figure 4. Questions administered to assess participant perceptions
The personal item (wallet) manipulation had a significant
impact on following angle preferences. Participants whose
wallet was on the robot during trials could differentiate
between the three conditions (10/13 participants, p=0.021),
whereas people who did not place their wallet on the
robot could not (11/12 participants, p=0.01). Moreover,
participants in the wallet manipulation had an observable
preference of following angles (7 preferred the behind angle
condition, 5 preferred the behind left angle condition, and 1
preferred the behind right condition), whereas participants
without the wallet manipulation were indifferent (Fig. 6).
Several indicators show that participants felt less
comfortable when the robot was carrying their personal wallet,
even though most of the statistical comparisons did not reach
significance (Fig. 7). First, participants who were asked to place
their wallet on the robot walked about 20 cm closer on average
to the robot, and felt less safe with the robot's distance from
them (with wallet: mean rating of 3.98, SE=0.15, without
wallet: mean rating of 4.33, SE=0.142, p=0.076). Second,
participants who were asked to place their wallet on the robot
rated it as less considerate of their personal space (with wallet:
mean rating of 4.38, SE=0.14, without wallet: mean rating of
4.38, SE=0.14, p=0.045). Third, when the wallet was present
on the robot, the robot was perceived to be more stressful (with
wallet: mean rating of 1.91, SE=0.18, without wallet: mean
rating of 1.53, SE=0.17, p=0.093). Lastly, participants with the
wallet manipulation were less comfortable with the robot's
speed than participants without the wallet manipulation (with
wallet: mean rating of 3.73, SE=0.15, without wallet: mean
rating of 4.11, SE=0.14, p=0.06).
IV. DISCUSSION
Overall, robot following without the personal item (wallet)
manipulation yielded a better subjective feeling among
participants. This was manifested in feeling safer and more
comfortable with the robot's distance and speed, and feeling
less stressed by the robot itself. While their overall experience
was more positive, participants in the "no-manipulation"
condition could not differentiate between the three following
angles. Without manipulation, the following angles did not
influence the users' preferences. In contrast, participants who
were asked to place their wallet on the robot not only
differentiated between following conditions, but also
developed a clear preference: the behind following was
preferred, followed by the behind left angle, and behind right
angle, respectively. The sharp differences between these two
conditions, coupled with indicators showing that participants
were less comfortable when the robot was carrying their
personal wallet, support the idea that increasing personal
relevance, and by extension felt involvement [19], increases
motivation to attend to the robot's activities and is a necessary
basis for establishing human preferences.
We had predicted that as the angle of
following grew closer to a side-by-side formation, the robot
following would be perceived as more comfortable. In
actuality, the behind angle was most preferred. Perhaps this
unexpected outcome could be explained by the objective
quality of following, which was best in the behind angle
condition and poorer in the behind left and behind right
conditions. Our conjecture is that reliable following is a more
dominant preference than convenient following, particularly
when the robot is carrying valuable cargo like a wallet. While
the behind left or behind right angles could have made it less
effortful for users to glance at their wallet, they were still able
to do so during the behind condition if they became nervous
about the state of their wallet.
Interestingly, even though there was no significant change
in following speed or distance between trials, participants
attributed the difference to these parameters. This may be
because human detection of absolute motion [26], acceleration
[27], and positional information [28] differ under peripheral as
opposed to foveal viewing conditions. In the behind left and
behind right angles the robot could be seen in peripheral vision
by turning one's head, whereas in the behind condition the
participants would have had to turn their entire torso toward
the robot, perhaps allowing them to use foveal vision.
This study has several attributes that limit the generality of
its results. First, the following algorithm limited the ability to
derive meaningful subjective accounts on our intended range
of angles. Although our method of following at an angle
worked relatively well for 0° and 30° following, it must be
improved to reliably support larger angles. We believe the 60°
following failed because we set the angular velocity to zero,
removing the robot's ability to turn. While our walking track
was perfectly straight and could theoretically support the
interaction, in reality participants did not walk in a perfectly
straight line, and the starting angle of the robot relative to the
participant did not always match 60°. If the robot did not walk
perfectly parallel to the participant at a 60° angle, it had no way
to correct its course: moving forward or waiting couldn't
change the robot's direction of movement, which was often
necessary in order to obtain the desired angle.
Figure 5. Cumulative result table of participants' impressions of the robot; values represent the number of people who gave each response.
Figure 6. Preferences as a factor of the personal item manipulation
Figure 7. Cumulative results table of post-trial questions (see Fig. 4)
To overcome this obstacle in future studies, the algorithm is currently being
expanded to support multi-directional following at an angle
using historical tracking. Second, due to constraints of our
experimental environment, the direction of approach was
alternated for the behind left and behind right following
angles. Research has shown that a robot's direction of approach
can influence a person's impressions [12], yet the impact of
this confounding variable could not be evaluated in our study
since the robot's performance was inconsistent between
angles. In future studies, we will use an environment that does
not contain reflective surfaces, allowing independent
evaluation of the influence of the direction of approach and the
angle of following in a person-following scenario. Lastly,
participants included a relatively small number of engineering
students and high-tech employees, who have significantly
more experience with technology than most. This makes it
difficult to generalize the results to the general public.
In addition to improving our person-following algorithm
so that it consistently follows people at angles of up to 180°,
future work will include exploring additional factors that can
affect following angle preferences, such as the person’s
personality, age and negative attitudes, robot appearance, and
the context of the task. Furthermore, while our current study
evaluated static following angles,
ongoing work aims to assess the effects of dynamic following
angles that change based on the context of the walk. The way
in which these various aspects affect perceived following
quality is currently unknown. Evaluating their impact on
human perceptions will be a necessary step toward designing
robots that can follow in a socially acceptable manner.
V. CONCLUSION
In this paper we evaluated different following angles and
found that when felt involvement is sufficient, they are
perceived differently by people. Understanding these
perceptions and how they influence spatial preferences is
important to developing robots capable of accompanying
people in comfortable and comprehensible ways. While many
important questions remain to be answered in order to develop
robots with proper spatial skills, this study takes an important
step toward achieving this goal.
REFERENCES
[1] R. Gockley, J. Forlizzi, and R. Simmons, "Natural Person
Following Behavior for Social Robots," Proc. HRI '07, pp. 17-24, 2007.
[2] H. Zender, P. Jensfelt, and G. J. M. Kruijff, “Human- and
situation-aware people following,” Proc. - IEEE Int. Work. Robot
Hum. Interact. Commun., pp. 11311136, 2007.
[3] G. Doisy, A. Jevtić, E. Lucet, and Y. Edan, “Adaptive person-
following algorithm based on depth images and mapping,” IROS
2012 Work. Robot Motion Plan., pp. 4348, 2012.
[4] S. Karakaya, G. Kucukyildiz, C. Toprak, and H. Ocak,
“Development of a human tracking indoor mobile robot
platform,” in Proceedings of the 16th International Conference on
Mechatronics - Mechatronika 2014, 2014, pp. 683687.
[5] E. J. Jung, J. H. Lee, B. J. Yi, J. Park, S. Yuta, and S. T. Noh,
“Development of a laser-range-finder-based human tracking and
control algorithm for a marathoner service robot,” IEEE/ASME
Trans. Mechatronics, vol. 19, no. 6, pp. 19631975, 2014.
[6] C. Granata and P. Biduad, “Interactive person following for social
robots: hybrid reasoning based on Fuzzy and Multiple-Objectives
Decision Making,” 11th Int. Conf. Climbing Walk. Robot. Support
Technol. Mob. Mach. CLAWAR’11, Feb. 2016, pp. 1126, 2011.
[7] M. Moussaïd, N. Perozo, S. Garnier, D. Helbing, and G.
Theraulaz, “The walking behaviour of pedestrian social groups
and its impact on crowd dynamics.,” PLoS One, vol. 5, no. 4, p.
e10047, Jan. 2010.
[8] P. Althaus, H. Ishiguro, T. Kanda, T. Miyashita, and H. I.
Christensen, “Navigation for human-robot interaction tasks,”
IEEE Int. Conf. Robot. Autom. 2004. Proceedings. ICRA ’04.
2004, pp. 18941900 Vol.2, 2004.
[9] E. Pacchierotti, H. I. Christensen, and P. Jensfelt, “Embodied
social interaction for service robots in hallway environments,”
Springer Tracts Adv. Robot., vol. 25. 2000, pp. 293304, 2006.
[10] M. L. Walters, K. Dautenhahn, S. Woods, K. L. Koay, R. Te
Boekhorst, and D. Lee, “Exploratory studies on social spaces
between humans and a mechanical-looking robot.,” Conn. Sci.,
vol. 18, no. 4, pp. 429439, 2006.
[11] J. E. Young, Y. Kamiyama, J. Reichenbach, T. Igarashi, and E.
Sharlin, “How to walk a robot: A dog-leash human-robot
interface,” Proc. - IEEE Int. Work. Robot Hum. Interact.
Commun., pp. 376382, 2011.
[12] K. Dautenhahn, M. Walters, S. Woods, K. L. Koay, C. L.
Nehaniv, C. Lane, E. A. Sisbot, R. Alami, and T. Siméon, “How
May I Serve You ? A Robot Companion Approaching a Seated
Person in a Helping Context,” 2006.
[13] L. Y. Morales Saiki, S. Satake, R. Huq, D. Glas, T. Kanda, and N.
Hagita, “How do people walk side-by-side?: using a
computational model of human behavior for a social robot,” Proc.
seventh Annu. ACM/IEEE Int. Conf. Human-Robot Interact., no.
February 2016, pp. 301308, 2012.
[14] D. S. Syrdal, K. Lee Koay, M. L. Walters, and K. Dautenhahn, “A
personalized robot companion?-The role of individual differences
on spatial preferences in HRI scenarios,” 16th IEEE Int. Symp.
Robot Hum. Interact. Commun., pp. 11431148, 2007.
[15] M. L. Walters, D. S. Syrdal, K. Dautenhahn, R. Te Boekhorst, and
K. L. Koay, “Avoiding the uncanny valley: Robot appearance,
personality and consistency of behavior in an attention-seeking
home scenario for a robot companion,” Auton. Robots, vol. 24,
no. 2, pp. 159178, 2008.
[16] T. Nomura, T. Shintani, K. Fujii, and K. Hokabe, “Experimental
investigation of relationships between anxiety, negative attitudes,
and allowable distance of robots,” Proc. 2nd IASTED Int. Conf.
Human-Computer Interact. HCI 2007, pp. 1318, 2007.
[17] L. K. Kheng, D. S. Syrdal, M. L. Walters, and K. Dautenhahn,
“Living with robots: Investigating the habituation effect in
participants’ preferences during a longitudinal human-robot
interaction study,” Proc. - IEEE Int. Work. Robot Hum. Interact.
Commun., pp. 564569, 2007.
[18] T. Oron-Gilad, Y. Edan, and V. Fleishman, “‘Follow me’:
proxemics and robot movement considerations in a person
following setup (in preparation).”
[19] R. L. Celsi and J. C. Olson, “the Role of Involvement in Attention
and Comprehension Processes,” J. Consum. Res., vol. 15, no. 2,
pp. 210224, 1988.
[20] O. W. Bertelsen and S. Bodker, “Activity Theory,” HCI Model.
Theor. Fram., pp. 291324, 2003.
[21] M. Quigley, K. Conley, B. Gerkey, J. FAust, T. Foote, J. Leibs, E.
Berger, R. Wheeler, and A. Mg, “ROS: an open-source Robot
Operating System,” Icra, vol. 3, no. Figure 1, p. 5, 2009.
[22] M. Munaro, A. Horn, R. Illum, J. Burke, and R. B. Rusu,
“OpenPTrack: People Tracking for Heterogeneous Networks of
Color-Depth Cameras,” pp. 1–13.
[23] M. Ratchford and M. Barnhart, “Development and validation of
the technology adoption propensity (TAP) index,” J. Bus. Res.,
vol. 65, no. 8, pp. 12091215, Aug. 2012.
[24] “aa,” Android Apps on Google Play. [Online]. Available:
[25] R. Sankar and D. Muddu, “Scholarship at UWindsor A Frontier
Based Multi-Robot Approach for Coverage of Unknown
Environments,” 2015.
[26] P. D. Tynan and R. Sekuler, “Motion processing in peripheral
vision: Reaction time and perceived velocity,” Vision Res., vol.
22, no. 1, pp. 6168, 1982.
[27] A. Traschütz, W. Zinke, and D. Wegener, “Speed change
detection in foveal and peripheral vision,” Vision Res., vol. 72,
pp. 113, 2012.
[28] R. F. Hess and D. Field, “Is the increased spatial uncertainty in
the normal periphery due to spatial undersampling or uncalibrated
disarray?,” Vision Res., vol. 33, no. 18, pp. 26632670, 1993.