Exploring Proxemics for Human-Drone Interaction
Alexander Yeh1, Photchara Ratsamee2, Kiyoshi Kiyokawa3, Yuki Uranishi2,
Tomohiro Mashita2, Haruo Takemura2, Morten Fjeld1, Mohammad Obaid4
1Department of CSE, Chalmers University of Technology, Gothenburg, Sweden
2Cybermedia Center, Osaka University, Osaka, Japan
3Nara Institute of Science and Technology, Nara, Japan
4Department of Information Technology, Uppsala University, Uppsala, Sweden
alex@yeh.nu
ABSTRACT
We present a human-centered designed social drone aimed at use in human crowd environments. Based on design studies and focus groups, we created a prototype of a social drone with a social shape, face, and voice for human interaction. We used the prototype for a proxemic study, comparing the distance from the drone that humans could comfortably accept with what they would require for a nonsocial drone. The social shaped design with a greeting voice added decreased the acceptable distance markedly, as did present or previous pet ownership, and being male. We also explored the proximity sphere around humans with a social shaped drone in a validation study varying lateral distance and height. Both increased lateral distance and the lower height of 1.2 m, compared with the higher height of 1.8 m, decreased the required comfortable distance as the drone approached.
ACM Classification Keywords
H.5.2. User Interfaces: User-centered design.
Author Keywords
Human-Drone Interaction; Social Drone; Proxemics.
INTRODUCTION
In the near future, robots of different types are expected to
populate urban environments with the purpose of supporting
human activity. While there is ongoing research addressing social ground robots [10, 12, 23, 25], such robots are still limited when it comes to perceiving, understanding, and interacting in a crowded environment [23, 27]. Today, drones are widely used for aerial video recording, first-person view racing, and military surveillance, but they are not yet considered something humans would socially interact with, rather something they control. In order for drones to approach humans, the latter's acceptance of the former needs to be better understood.
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from permissions@acm.org.
HAI ’17, October 17–20, 2017, Bielefeld, Germany
© 2017 ACM. ISBN 978-1-4503-5113-3/17/10...$15.00
DOI: https://doi.org/10.1145/3125739.3125773
Figure 1. Humans and a social drone.
As shown in Fig. 1, we assume three advantages drones could have over social ground robots:
- A drone can maneuver unobtrusively above the ground towards a target user without disturbing human movement in busy and densely crowded areas.
- From a drone's-eye view, a drone has a better capacity to recognize humans who need help. The drone's perception also benefits crowd tracking and robot path planning, whereas social ground robots suffer from occlusion.
- Humans can easily recognize a drone and interact with it from a far distance.
Based on these assumptions, we introduce the concept of a social drone, which we propose as a suitable answer to the question of what type of robot best fits a crowded human environment. We explore the possibility of using a drone as an agent
in human crowd environments by pioneering fundamental
components such as social drone design, human acceptability
and proximity between drones and humans. Our intention is
to decrease the acceptable distance between a social drone and
humans by conducting proxemic studies. This paper offers
a design study section, followed by a focus group section, a
prototyping section, and a two-part proxemic study section
(Fig. 2).
Figure 2. Human-centred design process: Design Study (20 participants; D01-D20), Focus Group (5 participants; F01-F05), Prototyping, and Proxemics Study (Part A: 16 participants, PA01-PA16; Part B: 6 participants, PB01-PB06).

RELATED WORKS
Human-Drone Interaction (HDI) is an emerging research area within the robotics community. For example, Obaid et al. [20] proposed a drone agent to help humans keep the environment
clean, by persuading a user to pick up trash, leading him/her
to the nearest trash bin and then communicating with him/her
when the job is done. Cauchard et al. [4] conducted an elicitation study on how to naturally interact with drones. Similarly, Obaid et al. [19] investigated user-defined gestural interactions to control a drone. Emotional responses to drones have also been studied: Cauchard et al. [5] report that several of their participants compared the drone to a pet, which entitles it to anthropomorphic status in that situation.
Most off-the-shelf drones do not promote close interaction, as their fast-rotating propellers are an immediate danger to humans. Some researchers have addressed the safety aspects of interacting with a drone, such as the picocopter [24] and the collision-resilient flying robot [2], where light drones with cages are made safe to coexist with humans. A small quadcopter inside a ball [17], used to manipulate speed and behavior for new sports interaction, has also been proposed as a safe way for humans to interact with drones. BitDrones [7] are drones that allow for many types of physical interaction between the user and the drone.
These features make drones a suitable subject to investigate as a social entity, although there is very little research on identifying the social properties of a social drone. However,
the Human-Robot Interaction (HRI) field has explored social
robots to a large extent in the past decade; one of the main
topics being the impact of social proxemics (inter-personal
distances) between robots and humans. Takayama et al. [26]
conducted research on a robot approaching humans, and humans approaching the robot. Kamide et al. [9] presented proxemic research in the HRI field, also comparing proxemic results between a real and a virtual reality (VR) humanoid. Obaid et
al. [21] looked at the influence of posture on the human-robot
proximity, while Mumm and Mutlu [16] investigated the influence of eye-gaze and likeability aspects on human-robot proximity.
Moreover, other researchers have taken a user-centered ap-
proach to identify robotic features or attributes such as the
work presented by Lee et al. [11], who compared cultural
differences in designs of future domestic robots created by
participants from Korea and the United States. Woods [28] examined children's perspectives, feelings, and attitudes towards
robots concluding with a discussion about design implications
for robots, and their use in the educational context. Obaid et
al. [18] presented a summary of robotic attributes extracted
from a study where interaction designers, children with robotic
knowledge, and children without robotic knowledge drew pic-
tures of robots.
In this paper, we follow the trend from previous HRI research
to investigate the impact a social drone would have on human-
drone proxemics. We first explored social drone attributes, then conducted two human-drone user studies.
DESIGN STUDY
In order to achieve a socially acceptable drone, we needed to identify the social requirements from its users. Thus, we conducted a survey to gather information from participants on what a social and friendly drone would look like. The outcomes were then summarized as key attributes in a graph.
Requirement Analysis
With a human-centered design [6] approach in mind, we wanted to understand what kind of shape and design would make humans more comfortable interacting with a drone. We discovered early that a drone itself was hard to approach due to the danger of the propellers and the loud noise. Woods [28] presented several categories and
attributes such as body shape, looks and likes, and facial fea-
tures. Obaid et al. [18] added more attributes (interaction,
size and characteristics). However, both of these works con-
cerned ground robots not drones. In order to design our study
and identify specific attributes for a social drone, we used the methods from Obaid et al. and Woods to conduct our own design study.
Drawing Session
To gain an understanding of how users envision a social drone,
and to allow them to elaborate on their preferred features, we
conducted drawing sessions. The session started by handing
out an A4 sheet of paper that had a silhouette illustration of
the DJI Phantom 3 Drone [13]. In addition, we showed on the
original drone where they could interact with it and gave some
basic instructions on how it operates. This was followed by a brief explanation of the context of the task, and the participants were asked to draw their vision of a social drone hovering in a crowded human environment. At the end, the participants were asked which drone size they preferred: smaller than, equal to, or bigger than the original.
In total, we had 20 participants (13 Female, 7 Male), where
the majority (14) were Japanese, with ages ranging from 18 to 31 years (M = 22.2, SD = 4.05). Most were students
of Osaka University, and most were not working within the
technological field (19 non-technical, 1 technical). Only one
participant had any prior experience with drones.
To categorize the outcome of the drawing session, we used
some of Woods’ categories such as body shape and facial
features [28]. Obaid et al. proposed some further categories
and sub-categories such as interaction, size, and characteristics
[18]. Following the design study, safety and extensions were
added (Fig. 3). For each attribute and sub-attribute, a counter
was incremented when it was observed in a drawing.
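The tallying step described above can be sketched as a simple counter over coded drawings. This is an illustrative sketch only; the coded drawings below are hypothetical, not the study data:

```python
from collections import Counter

# Each drawing is coded as a set of observed (attribute, sub-attribute) labels.
# These three drawings are hypothetical examples for illustration.
drawings = [
    {("SAFETY", "Physical"), ("SIZE", "Original"), ("BODY SHAPE", "Circular/Oval")},
    {("SAFETY", "Physical"), ("INTERACTION", "Audio"), ("CHARACTERISTICS", "Color")},
    {("SIZE", "Original"), ("FACIAL FEATURE", "Face"), ("BODY SHAPE", "Circular/Oval")},
]

# Increment a counter once per drawing in which each label is observed.
counts = Counter(label for drawing in drawings for label in drawing)

for (attribute, sub), n in counts.most_common():
    print(f"{attribute}/{sub}: {n}")
```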
Figure 3. Drone attributes were categorized into appearance, interaction, and safety. Appearance has five sub-attributes. Counts per (sub-)attribute across the 20 drawings:

SAFETY: Physical 15; Mental 8
INTERACTION: Audio 10; Screen 7; Light 4; Camera 4; Projector 1
SIZE: Original 12; Small 6; Big 2
CHARACTERISTICS: Color 9; Friendly 6; Cute 5; Detectable 4
EXTENSIONS: Arm/Pointer 4
FACIAL FEATURE: Face 5
BODY SHAPE: Circular/Oval 7; Rectangular/Square 2
Appearance
We identified that many of our participants (45%) suggested some sort of shape around the drone. They favored a circular/oval drone over a rectangular/square one, which is in line with Ohkura et al. [22], whose study showed that rounded objects and blue colors rated highly for cuteness. In terms of
facial feature, a number of participants (25%) drew pictures of
the drone with a face, and in some cases in combination with a
screen. An unexpected observation was that a few participants
(20%) wanted some sort of extension of the drone, like an arm,
in order for it to help them with various tasks. Many of the participants mentioned that they would want the drone to carry their bag; "I would like it to carry my bag" and "It would be nice if it could carry my shopping bag" were representative comments.
The characteristics of the drone are closely related to many of
the other attributes listed in the graph. However, we defined them as more of an overall "look and feel" of the drone.
For instance, colorful (45%) and cute were often mentioned in
the survey. Some participants reported that they thought the
drone should have some anthropomorphic attributes. Many
of these fell into the category of friendliness (30%) or cute
(25%), some were drawn and some were mentioned in the
survey.
The survey results show that the original drone size (60%) was preferred over the smaller (30%) and the bigger (10%) sizes. Some participants mentioned that it
might be dangerous if the drone did not make any sound or
was undetectable at a first glance as it might startle people.
Interaction
A number of screens (35%) were drawn, for navigation or in combination with use as a face, but one of the more important aspects of interaction was being able to speak to the drone or get information via audio. Some participants either drew
maps inside the screen or explained that they would like to
have a map as well for the social drone to guide them around
in the vicinity.
Safety
One of the concerns was related to safety issues when using
drones. In this work, we identified mentions and drawings of noise and unexpected drone movement as an attribute of mental safety (40%). Direct observations that the propellers could be dangerous or cause accidents were categorized as physical safety (75%). Safety issues mentioned were the noise of the drone, the way it looked, and how dangerous the propellers appeared to be. However, most participants did not mention the propellers or the noise they would generate at all,
which could perhaps be attributed to most of our participants
having little experience with drones and not knowing how it
would sound during flight. Some participants also raised their
concerns in the survey about cameras and surveillance, as they
did not want to be monitored.
Figure 4. Drawings from the design study, where participants were not given attributes (a), and from the focus group, where participants were given attributes and sub-attributes (b).
FOCUS GROUP
Using the results from the drawing session, we held small focus groups on two different occasions to find out how some of the most popular attributes could benefit a social drone design. Our participants were given approximately 20 minutes with some of the attributes to design a social drone. The two focus groups had 5 participants in total (all non-technical); all were students at Osaka University (3 male, 2 female), all Japanese, aged between 21 and 32 years (M = 23.6, SD = 4.78). On a scale from 1 to 5 of drone experience, only one participant answered 3; the rest had no drone experience.
The results from this phase were mainly used as inspiration for designing our social drone. From Fig. 4 we can see that the focus group's drawings were much more anthropomorphic than those from the first study. Safety was also taken into consideration, as most of the drawings had something to either cover the propellers or make them non-visible.
PROTOTYPING
Combining both the attributes chart from the survey and the
results from the focus group, we tried to develop a design that
would be social and accepted by humans.
Social Shape
From the results presented in the previous sections we tried
to design a shape around our drone that would give it a more
social and friendly feeling. Following our results presented in Fig. 3, inspiration from the focus groups (Fig. 4), and the study by Ohkura et al. [22], we designed a blue oval-shaped drone, as shown in Fig. 5(b). The social shape was also designed to function as a safety guard.
Face
Following Mori's uncanny valley hypothesis [15] and the previous studies, we wanted to develop the face so that the social drone would have the anthropomorphic character of a cartoon. Using an Android tablet, we could display a friendly face in a color similar to the social shape. The blue color of the face was chosen to match the social shape and keep the color consistent throughout the study, which also corresponds to the Ohkura et al. [22] study.
Voice
As several participants mentioned audio or voice as an important interaction channel for feeling the presence of the drone, we added a greeting voice using a text-to-speech application. Breazeal [1] proposed several synthesized speech styles for an anthropomorphic robot, where a relatively fast voice with a high mean pitch and wide pitch range was analyzed and categorized as a "happy" voice.
Nonsocial Shape
We also designed a nonsocial guard so that future participants would not get hurt by the drone. It was made as a plain rectangular box; see Fig. 5(c).
PROXEMICS STUDY
Apart from appearance design, the range of interaction between the social drone and humans has to be considered for smooth social interaction. Hall [8] proposed a model stating that the public space between two human beings starts at 7.6 meters; within 3.6 meters is the social space; within 1.2 meters is the personal space; and the intimate space is within 0.45 meters.
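As a quick reference, Hall's zones can be expressed as a small distance classifier. This is a sketch using the thresholds cited above; the function name is ours, not from the paper:

```python
def hall_zone(distance_m: float) -> str:
    """Classify an interpersonal distance (in metres) into Hall's proxemic zones."""
    if distance_m < 0.45:
        return "intimate"
    if distance_m < 1.2:
        return "personal"
    if distance_m < 3.6:
        return "social"
    return "public"

# The average social-drone stop distance reported later (1.06 m)
# falls inside the human personal zone.
print(hall_zone(1.06))
```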
Figure 5. Prototyping departed from an off-the-shelf drone (a) combined with the outcome of the design study and focus group (b). A nonsocial safety guard (c) and a social safety guard (d) were prototyped for the off-the-shelf drone.
To our knowledge, there have not been any proxemics-based
studies on drones. Proxemic studies in the HRI field have
previously been conducted with ground based robots [9, 26].
In other proximity-based HRI papers, which use unmanned ground vehicles [9], the distance between humans and robots varies depending on the robot's purpose and the kind of interaction required. In this section we describe our proxemic studies: setup, participants, and results.
Figure 6. Setup of proxemic study. In part A, participants stood 4.00 m away from the drone, which approached slowly at a height of 2.10 m (a). In part B, participants stood 4.00 m away from the drone, which approached slowly at a height of 1.20 m or 1.80 m with a lateral distance of 0.3 m or 0.6 m (b).
Figure 7. (a) Average distance at which the participants wanted the drone to stop. SV: social shaped drone approaching with greeting voice; SN: social shaped drone with no voice; NV: nonsocial drone with greeting voice; NN: nonsocial drone with no voice. (b) Average personal space by pet ownership. (c) Average personal space by gender.
Setup
As presented in Fig. 6(a) and Fig. 6(b), we set up a face-to-face confrontation between participant and drone along a single dimension. The drone used did not have a Vision Positioning System (VPS) and therefore would drift slightly and, on some occasions, move unpredictably. By adding zip-lines to the drone, we could make it fully controllable and greatly minimize the risk of collision with either its environment or the participant. Kamide et al.'s [9] extensive proxemic study was used as an inspiration while designing our own. A previous study has shown that people felt uncomfortable with a speed of 1 m/s [3], but were comfortable with all speeds slower than a normal human walking pace. Our mean velocity was 0.30 m/s.
For the social drone to approach and interact with humans, we placed the drone at a height of 2.1 m for the first experiment. In the second experiment we set two heights: 1.8 m and 1.2 m. We were interested in the difference between heights at which the drone flies at almost the same eye-height level as the participant.
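Because the drone flies at a fixed height and lateral offset, the straight-line distance from the participant's head to the drone differs from the measured horizontal stop distance. A sketch of this geometry; the 1.6 m default eye height is our assumption loosely based on the reported mean participant heights, not a value from the paper:

```python
import math

def drone_to_head_distance(horizontal_m: float, lateral_m: float,
                           drone_height_m: float, eye_height_m: float = 1.6) -> float:
    """Euclidean distance from a participant's head to the drone, given the
    horizontal stop distance, the lateral offset, and the flight height."""
    dz = drone_height_m - eye_height_m
    return math.sqrt(horizontal_m**2 + lateral_m**2 + dz**2)

# e.g. a drone stopping 1.14 m away at a height of 1.2 m with no lateral offset
d = drone_to_head_distance(1.14, 0.0, 1.2)
```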
Figure 8. The participant is approached by a drone with a nonsocial safety guard (a) and by a drone with a social safety guard (b).
Part A: Social Shape and Greeting Voice
Our aim was to find out whether our social drone could decrease the comfortable distance to humans compared to a nonsocial drone. In our first experiment we had a total of 16 participants, 14 with a technical and 2 with a non-technical background. Participants' mean height was 1.69 m (SD = 9.72 cm). On a scale of 1 to 5, the participants rated their drone experience on average 1.625 (SD = 0.885). Ten of the participants were or had been pet owners, while the other six had never had a pet. This part examined four alternative drone states (Table 1). The setup is presented in Fig. 6(a): participants stood 4.00 meters away from the drone, which approached them at a low velocity. We asked the participants to raise their hand and say "stop" when they no longer felt comfortable with the drone coming closer. The Balanced Latin Square method [14] (pp. 177-180) was used to order the conditions, as participants might get used to the drone by the end of the experiment.
Table 1. Drone states.
state shape audio
SV social drone greeting voice
SN social drone no voice
NV nonsocial drone greeting voice
NN nonsocial drone no voice
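The balanced Latin square ordering for the four drone states can be generated with the standard construction for an even number of conditions. This is a sketch; MacKenzie [14] describes the method, and the state labels follow Table 1:

```python
def balanced_latin_square(n: int) -> list[list[int]]:
    """Return n presentation orders; for even n, every condition immediately
    precedes every other condition exactly once across the rows."""
    rows = []
    for i in range(n):
        row = [i]
        for j in range(1, n):
            if j % 2 == 1:
                row.append((i + (j + 1) // 2) % n)  # step forward
            else:
                row.append((i - j // 2) % n)        # step backward
        rows.append(row)
    return rows

states = ["SV", "SN", "NV", "NN"]
for r, row in enumerate(balanced_latin_square(4)):
    print(f"order {r + 1}:", [states[k] for k in row])
# With 16 participants, each of the 4 orders can be assigned to 4 participants.
```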
Part B: Height and Lateral Distance
The aim of this part of the experiment was to explore the proxemic sphere around a human when a drone passes by. The drone had six initial positions, and only the social shaped drone without a greeting voice was used. The setup is presented in Fig. 6(b) and Table 2. This part comprised six participants, all students with a technical background. Participant mean height was 1.70 m (SD = 5.27 cm). On a scale of 1 to 5, the participants rated their drone experience on average 1.833 (SD = 1.169). Two participants had current or past experience of pet ownership.
Results
Results from proxemics studies parts A and B are presented in this section. Note that M represents the mean and SD the standard deviation.
Part A: Social Shape and Greeting Voice
As presented in Fig. 7(a), we found a difference in the mean personal-space values between human and social drone. The average distance with the social shape was M_SV = 1.06 m (SD_SV = 0.61) and M_SN = 1.14 m (SD_SN = 0.57), while with the nonsocial shape it was M_NV = 1.33 m (SD_NV = 0.55) and M_NN = 1.38 m (SD_NN = 0.61). On average, the personal space of the social drone with greeting voice is reduced by 30% when compared to the nonsocial drone without greeting voice.

Table 2. Drone positions.
pos lateral distance height
A 0.0 m 1.8 m
B 0.3 m 1.8 m
C 0.6 m 1.8 m
D 0.0 m 1.2 m
E 0.3 m 1.2 m
F 0.6 m 1.2 m
We also analyzed the data by comparing the results between pet owners and non-pet owners. Across all drone states, the average personal space of pet owners was M = 1.13 m, while for non-pet owners it was M = 1.39 m; see Fig. 7(b). Although the number of participants who had never owned a pet is small, the numbers do indicate that having a pet allows a drone to come closer to the user. Finally, the average personal space across all drone states for females (M = 1.50 m) was greater than that for males (M = 1.10 m); see Fig. 7(c).
Part B: Height and Lateral Distance
Comparing the two height levels (1.20 m and 1.80 m) in Fig. 9(a) and Fig. 9(b), social drones flying at a height of 1.20 m had shorter average distances (M_D = 1.14 m, SD_D = 0.46; M_E = 1.02 m, SD_E = 0.61; M_F = 0.95 m, SD_F = 0.51) than those at a height of 1.80 m (M_A = 1.35 m, SD_A = 0.43; M_B = 1.38 m, SD_B = 0.48; M_C = 1.27 m, SD_C = 0.56). In terms of change in lateral distance at the height of 1.20 m, we can clearly see that increased lateral distance gradually decreased the distance between the social drone and the human (M_D > M_E > M_F).
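The per-condition summaries (M and SD) above can be computed from raw stop distances with standard sample statistics. A minimal sketch; the distances below are hypothetical, not the study data:

```python
from statistics import mean, stdev

# Hypothetical stop distances (m) for two drone positions, keyed as in Table 2.
stops = {
    "D": [1.60, 1.10, 0.95, 1.20, 0.80, 1.19],  # height 1.2 m, lateral 0.0 m
    "F": [1.40, 0.90, 0.70, 1.10, 0.60, 1.00],  # height 1.2 m, lateral 0.6 m
}

# Report the sample mean and standard deviation per position.
for pos, xs in stops.items():
    print(f"{pos}: M = {mean(xs):.2f} m; SD = {stdev(xs):.2f}")
```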
DISCUSSION
From our surveys, we found that safety regarding drones is a major concern for participants. However, they still prefer a medium-sized social drone due to visibility. We discovered that the noise of the drone added to the mental stress of any human in its vicinity. To our knowledge, silent drone technology has yet to emerge, so we decided not to take on the challenge of decreasing the drone's noise. We hypothesize that noise cancellation could further improve the social drone's acceptance within a human's personal space.
Based on current technology, flying a drone indoors is still highly unstable when weight is added; even with a small additional weight (e.g., a styrofoam cup), the drone behaves somewhat unpredictably. Because we would not be able to run a consistent experiment operating a free-flying drone indoors, we decided to connect zip lines so that our experiments and studies could be reproduced. There were also concerns that the nonsocial safety guard itself would affect perception of the drone, as it would look safer than a drone flying without one. One participant said that "having the drone inside a protective cage feels safer than the drone flying without". Another participant mentioned after the proximity study: "I concentrated more on the face of the drone and therefore did not think so much about the propellers". Nevertheless, the social drone performed significantly better in closing the distance than the nonsocial drone.

Figure 9. Part B: Height and lateral distance with the height set at 1.20 m (a) or at 1.80 m (b).
Finally, in comparison with the human personal space (within 1.2 m) proposed by Hall [8], the average personal space between the social drone and a human was smaller than that between human and human. This result shows positive signs for having social drones in human environments.
CONCLUSION AND FUTURE WORK
We have presented a social drone design, aimed at use in human crowd environments, together with a proxemic study. Our experiment was preliminary in nature, with the factors evaluated together. Future work will include more extensive experiments on each factor to establish how strongly each component helps reduce personal space. A study concerning how color features could affect proximity between humans and a social drone or ground robot will also be conducted. The authors hope that this paper will contribute design and proximity insights to several ongoing studies and to further development of various social agents interacting with humans.
ACKNOWLEDGEMENTS
This work was partially funded by the Gadelius scholarship
from the Sweden-Japan Foundation and supported in part
by the programs of the Grant-in-Aid for Challenging Ex-
ploratory Research No. 16K12501. We are also grateful
to Philippa Beckman, Adam Dunford, Mafalda Samuelsson
Gamboa, Philip Tham, Velko Vechev, and Osman Malik for
proofreading.
REFERENCES
1. Cynthia Breazeal. 2001. Emotive qualities in robot
speech. In Proceedings of the 2001 IEEE/RSJ
International Conference on Intelligent Robots and
Systems, Vol. 3. IEEE, 1388–1394.
2.
Adrien Briod, Przemyslaw Kornatowski, Jean-Christophe
Zufferey, and Dario Floreano. 2014. A Collision-resilient
Flying Robot. Journal of Field Robotics 31, 4 (2014),
496–509.
3. John Travis Butler and Arvin Agah. 2001. Psychological
effects of behavior patterns of a mobile personal robot.
Autonomous Robots 10, 2 (2001), 185–202.
4. Jessica R. Cauchard, Jane L. E, Kevin Y. Zhai, and
James A. Landay. 2015. Drone & Me: An Exploration
into Natural Human-drone Interaction. In Proceedings of
the 2015 ACM International Joint Conference on
Pervasive and Ubiquitous Computing (UbiComp ’15).
ACM, New York, NY, USA, 361–365.
5. Jessica Rebecca Cauchard, Kevin Y. Zhai, Marco
Spadafora, and James A. Landay. 2016. Emotion
Encoding in Human-Drone Interaction. In The Eleventh
ACM/IEEE International Conference on Human Robot
Interaction (HRI ’16). IEEE Press, 263–270.
6.
Mike Cooley. 2000. Human-centered design. Information
design (2000), 59–81.
7. Antonio Gomes, Calvin Rubens, Sean Braley, and Roel
Vertegaal. 2016. BitDrones: Towards Using 3D
Nanocopter Displays As Interactive Self-Levitating
Programmable Matter. In Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems
(CHI ’16). ACM, New York, NY, USA, 770–780.
8. Edward T. Hall. 1966. The Hidden Dimension. Vol. 6.
Doubleday. 113–127 pages.
9.
Hiroko Kamide, Yasushi Mae, Tomohito Takubo, Kenichi
Ohara, and Tatsuo Arai. 2014. Direct comparison of
psychological evaluation between virtual and real
humanoids: Personal space and subjective impressions.
International Journal of Human-Computer Studies 72, 5
(2014), 451–459.
10. Chi-Pang Lam, Chen-Tun Chou, Kuo-Hung Chiang, and
Li-Chen Fu. 2011. Human-Centered Robot Navigation -
Towards a Harmoniously Human-Robot Coexisting
Environment. IEEE Transactions on Robotics 27 (2011),
99–112.
11. Hee Rin Lee, JaYoung Sung, Selma Šabanović, and Joenghye Han. 2012. Cultural design of domestic robots:
A study of user expectations in Korea and the United
States. In Proceedings of the 21st IEEE International
Symposium on Robot and Human Interactive
Communication (2012 IEEE RO-MAN). IEEE, 803–808.
12. Min Kyung Lee, Jodi Forlizzi, Paul E Rybski, Frederick
Crabbe, Wayne Chung, Josh Finkle, Eric Glaser, and Sara
Kiesler. 2009. The snackbot: documenting the design of a
robot for long-term human-robot interaction. In
Proceedings of the 4th ACM/IEEE International
Conference on Human-Robot Interaction (HRI). IEEE,
7–14.
13. SZ DJI Technology Co. Ltd. 2016. DJI Phantom 3 Standard. (2016). http://www.dji.com/phantom-3-standard [Online; accessed 19-July-2016].
14.
I. Scott MacKenzie. 2013. Human-Computer Interaction:
An Empirical Research Perspective (1st ed.). Morgan
Kaufmann Publishers Inc., San Francisco, CA, USA.
15. Masahiro Mori, Karl F MacDorman, and Norri Kageki.
2012. The uncanny valley [from the field]. IEEE Robotics
& Automation Magazine 19, 2 (2012), 98–100.
16. Jonathan Mumm and Bilge Mutlu. 2011. Human-robot
Proxemics: Physical and Psychological Distancing in
Human-robot Interaction. In Proceedings of the 6th
International Conference on Human-robot Interaction
(HRI ’11). ACM, New York, NY, USA, 331–338.
17. Kei Nitta, Keita Higuchi, and Jun Rekimoto. 2014.
HoverBall: Augmented Sports with a Flying Ball. In
Proceedings of the 5th Augmented Human International
Conference (AH ’14). ACM, New York, NY, USA,
Article 13, 4 pages.
18. Mohammad Obaid, Wolmet Barendregt, Patricia
Alves-Oliveira, Ana Paiva, and Morten Fjeld. 2015.
Designing Robotic Teaching Assistants: Interaction
Design Student’s and Children’s Views. In Proceedings
of the 7th International Conference on Social Robotics.
Springer International Publishing, 502–511.
19. Mohammad Obaid, Felix Kistler, Markus Häring, René
Bühling, and Elisabeth André. 2014. A Framework for
User-Defined Body Gestures to Control a Humanoid
Robot. International Journal of Social Robotics 6, 3
(2014), 383–396.
20. Mohammad Obaid, Omar Mubin, Christina Anne
Basedow, A. Ayça Ünlüer, Matz Johansson Bergström,
and Morten Fjeld. 2015. A Drone Agent to Support a
Clean Environment. In Proceedings of the 3rd
International Conference on Human-Agent Interaction
(HAI ’15). ACM, New York, NY, USA, 55–61.
21. Mohammad Obaid, Eduardo B Sandoval, Jakub
Złotowski, Elena Moltchanova, Christina A Basedow,
and Christoph Bartneck. 2016. Stop! That is close
enough. How body postures influence human-robot
proximity. In Proceedings of the 25th IEEE International
Symposium on Robot and Human Interactive
Communication (RO-MAN). IEEE, 354–361.
22. Michiko Ohkura, Tsuyoshi Komatsu, and Tetsuro Aoto.
2014. Kawaii Rules: Increasing Affective Value of
Industrial Products. Industrial Applications of Affective
Engineering (2014), 97–110.
23. Photchara Ratsamee, Yasushi Mae, Kazuto Kamiyama,
Mitsuhiro Horade, Masaru Kojima, and Tatsuo Arai.
2015. Social interactive robot navigation based on human
intention analysis from face orientation and human path
prediction. ROBOMECH Journal 2, 1 (2015), 1.
24. Paul Robinette, Alan R Wagner, and Ayanna M Howard.
2014. Assessment of robot guidance modalities
conveying instructions to humans in emergency situations.
In The 23rd IEEE International Symposium on Robot and
Human Interactive Communication. IEEE, 1043–1049.
25. Emrah Akin Sisbot, Luis F. Marin-Urias, Rachid Alami,
and Thierry Simeon. 2007. A human aware mobile robot
motion planner. IEEE Transactions on Robotics 23, 5
(2007), 874–883.
26. Leila Takayama and Caroline Pantofaru. 2009. Influences
on Proxemic Behaviors in Human-robot Interaction. In
Proceedings of the 2009 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS’09).
IEEE Press, Piscataway, NJ, USA, 5495–5502.
27. Pete Trautman, Jeremy Ma, Richard M Murray, and
Andreas Krause. 2015. Robot navigation in dense human
crowds: Statistical models and experimental studies of
human–robot cooperation. The International Journal of
Robotics Research 34, 3 (2015), 335–356.
28. Sarah Woods. 2006. Exploring the design space of robots:
Children’s perspectives. Interacting with Computers 18, 6
(2006), 1390–1418.