Flyables: Haptic Input Devices for Virtual Reality using Quadcopters
Jonas Auda
University of Duisburg-Essen
Essen, Germany
Nils Verheyen
University of Duisburg-Essen
Essen, Germany
Sven Mayer
LMU Munich
Munich, Germany
Stefan Schneegass
University of Duisburg-Essen
Essen, Germany
Figure 1: Left: A user piloting an aircraft in VR. The user has a slider in his right hand to control the speed. With the joystick
in his left hand, the aircraft can be steered sideways. Right: A user is rotating an object in VR using a knob.
ABSTRACT
Virtual Reality (VR) has made its way into everyday life. While VR delivers an ever-increasing level of immersion, controls and their haptics are still limited. Current VR headsets come with dedicated controllers that are used to control every virtual interface element. However, the controller input mostly differs from the virtual interface. This reduces immersion. To provide more realistic input, we present Flyables, a toolkit that provides matching haptics for virtual user interface elements using quadcopters. We took five common virtual UI elements and built their physical counterparts. We attached them to quadcopters to deliver on-demand haptic feedback. In a user study, we compared Flyables to controller-based VR input. While controllers still outperform Flyables in terms of precision and task completion time, we found that Flyables present a more natural and playful way to interact with VR environments. Based on the results from the study, we outline research challenges that could improve interaction with Flyables in the future.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
VRST '21, December 8–10, 2021, Osaka, Japan
© 2021 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9092-7/21/12...$15.00
CCS CONCEPTS
• Human-centered computing → Human computer interaction (HCI); Haptic devices.

KEYWORDS
Flyables; Virtual Reality; Haptics; Drones; Quadcopter; Toolkit
ACM Reference Format:
Jonas Auda, Nils Verheyen, Sven Mayer, and Stefan Schneegass. 2021. Fly-
ables: Haptic Input Devices for Virtual Reality using Quadcopters. In 27th
ACM Symposium on Virtual Reality Software and Technology (VRST ’21),
December 8–10, 2021, Osaka, Japan. ACM, New York, NY, USA, 11 pages.
1 INTRODUCTION
Current virtual reality (VR) systems provide immersive virtual experiences with high-quality visual and auditory stimuli. Designers can use such environments to present endless virtual worlds with myriads of interactive objects. However, the interaction capabilities are limited, as the most popular devices for manipulating virtual objects are controllers that the user has to carry. While controllers provide great input capabilities for VR, the output capabilities are still limited. The haptic feedback offered by controllers cannot simulate the variety of textures and form factors of virtual objects. Thus, researchers are already investigating possible ways to overcome these limited capabilities [ ]. Drones have shown great potential to act as flying user interfaces (UIs) [ ] or can assist their users autonomously [ ]. In VR, plenty of research has focused on employing drones as physical proxies for virtual
objects [ ]. Here, drones can act as an ungrounded physical proxy for a simulated virtual object [ ]. Therefore, they can be equipped with haptic props and textures to mimic the haptics of virtual objects that are perceived or manipulated by VR users.
To utilize drones that deliver well-known haptic UI elements for VR environments that not only provide matching haptic feedback but also input capabilities, we present the Flyables toolkit. The toolkit controls a set of drones equipped with customized 3D-printed UI elements. These elements serve as physical proxies for virtual UI elements with which VR users can interact. This works as follows: As soon as a virtual UI element is visible in VR, a quadcopter equipped with a matching physical UI element – which we call a Flyable – is steered to the location where a VR user expects to touch or grab it (see Figure 1). During our design process, we developed five 3D-printed UI elements derived from classical input devices: a button, a knob, a joystick, a slider, and a 3D mouse. This enables users to experience haptic feedback that matches the shape of the virtual UI element. Additionally, the Flyable acts as an input device, fostering a similar experience as using a UI element in the real world (e.g., a real button, joystick, or slider). Moreover, Flyables have the advantage over VR controllers that the user does not need to carry them all the time, which leaves their hands free. In the future, this could enable a more natural gestural interaction [19, 21, 34].
We conducted an explorative user study with 12 participants to compare the Flyables toolkit to state-of-the-art VR controllers. Specifically, we designed four different VR scenarios to showcase the functionality of Flyables. These scenarios could be controlled using Flyables or standard VR controllers. We gathered data on performance, usability, and physical movement, as well as qualitative feedback using post-study interviews. Although the Flyables toolkit does not outperform standard VR controllers in terms of precision and task completion time in its current state, it can enrich virtual UI elements with appropriate haptic feedback and induce greater body movement. The contribution of this work is threefold: (1) We provide the Flyables toolkit as open-source software together with the 3D models of our five UI elements. (2) We compared Flyables to VR controllers. The results highlight the strengths, weaknesses, and future challenges regarding the toolkit. (3) We outline possible research challenges for improving the Flyables toolkit. These include how Flyables can be used to provide additional force feedback or can be designed to be repurposed automatically.
2 RELATED WORK
Traditional VR applications provide haptic feedback through controllers (e.g., by applying vibration to the user's hands). To overcome the limitations of current controllers, drones acting as haptic proxies for virtual objects have become a popular research topic. Knierim et al. [18] showed how to use drones as physical counterparts to virtual entities. They designed a scenario in which a bumblebee attacks a user in VR. In reality, a drone stings the user with a small stick. They ensured user safety by using a drone that cannot harm the user, as it was not powerful enough to pose any risk of injury. Hoppe et al. [ ] showed that drones providing haptics for virtual objects resulted in a greater sense of presence in VR. Abtahi et al. [2] later introduced safe-to-touch drones. In a virtual shopping scenario, they evaluated different styles of haptics provided by such a drone. For example, the drone could be equipped with textiles to mimic the texture of virtual garments. Further, the drone could position itself in the room and be picked up by the user to provide haptic feedback. A user in VR could reach out for the drone to pick up virtual garments. Through a preliminary study, they showed that their participants successfully interacted with the drone while shopping in VR.
Abdullah et al. [ ] used drones to simulate the weight and stiffness of virtual objects. A drone was used to apply a downward force matching the weight of a virtual object that a VR user was holding. In contrast, stiffness could be simulated with an upward force. Another approach to enhancing VR experiences with drones uses their inherent properties. Yamaguchi et al. [ ] investigated using the airflow from a drone to stabilize a paper hanging from it in order to provide haptics in VR. They showed that the haptic feedback was effective for supporting mid-air drawing. Tsykunov et al. [ ] proposed a string-based approach to interacting with a drone in VR. Users can pull on a string attached to the drone to interact. Through the string, users experience feedback.
Using specic elements of a drone (e.g., the propellers) to provide
haptic feedback has also previously been investigated. Heo et al.
] created a handheld device that can provide haptic feedback.
Six propellers are used to accelerate the device in any direction.
, haptics of dierent elements can be simulated by it. For
example, when a user places a stick in owing water in VR, the
device provides the matching force feedback to mimic the resistance
of the water. Further, when the user travels to another planet in
VR, gravitational forces can be rendered dierently through the
device. Participants in a preliminary study reported being more
immersed in the
experience when using the device. Je et al. [
presented a wearable device that provides force feedback to virtual
weapons used in
games. Through propellers, this device can
apply force to the wrist of the user. A study showed that the system
could increase the enjoyment of
games. A similar approach to
apply forces in
was introduced by Sasaki et al. [
]. Through
propellers attached to a rod, the device applies forces on its user.
The previously mentioned approaches have in common that drones are used to create haptics, either to enhance the VR experience or to create a touchable 3D UI in reality that supports known input metaphors (e.g., touch or drag). In this work, we introduce a toolkit for VR that uses interaction metaphors materialized via 3D-printed haptic props mounted on quadcopters. In contrast to previous work, such as [ ], the Flyables toolkit aims to provide well-known input elements for arbitrary VR experiences. The goal of Flyables is to mimic haptic feedback as accurately as possible and to provide generic input capabilities like controllers do, but without requiring the user to constantly have their hands occupied. With further advancements in fabrication, we might be able to create such props within a matter of minutes in the near future [ ]. Then, such 3D-printed structures can provide haptic feedback for virtual objects when they are navigated to the right place at the right time using quadcopters.
3 FLYABLES
With Shneiderman's eight golden rules [ ] in mind, the Flyables toolkit provides a consistent set of input devices across arbitrary VR
(a) Button (b) Knob (c) Joystick (d) Slider (e) 3D Mouse
Figure 2: The five Flyables. Each Flyable consists of a 3D-printed haptic interface element mounted on a quadcopter and a corresponding virtual representation in VR. The quadcopter is equipped with markers for optical tracking.
scenarios: a button, knob, joystick, slider, and 3D mouse (see Figure 2). In the following, we describe the design process of the five input devices. Further, we introduce the Flyables control system and explain how it recognizes input from the flying UI elements.
3.1 Design Process
Our design process for creating Flyables involved multiple stages. We started with the goal of designing physical haptic counterparts for possible virtual UI elements. However, at this stage of the process, we did not yet know how the physical objects would look, nor which virtual UI elements to support.
We started our design process by gathering a large number of interactive items. We looked not only at on-screen elements from graphical user interfaces (GUIs), but also at everyday physical objects. During our process, both virtual and physical objects served as inspiration for the next step. The virtual UI elements helped us to understand what type of UI elements we use daily and how they look and react in the virtual domain. The physical character of the objects helped us to design appropriate counterparts for the virtual UI elements. The goal was for people to immediately feel comfortable when using them.
We started off with a wide range of physical objects (e.g., crossbar latches, volume knobs, and stove control knobs) and virtual objects (e.g., buttons, sliders, and drop-down menus). We narrowed down our search to five interactive elements that can be directly manipulated (e.g., translated or re-oriented) in a specific way: a button, knob, joystick, slider, and 3D mouse. Each element serves a particular purpose. The button can be used for discrete input events. The knob enables rotary input in one dimension, while the joystick offers three-dimensional rotation (yaw, pitch, roll). The slider can be adjusted along one dimension. Finally, the 3D mouse enables 3D translation. After extracting the basic interactions, our next step was to design the virtual representations of the input devices as well as their physical forms. Here, we began by choosing real-world objects to serve as templates for the virtual and physical representations. For the virtual representations, we wanted them to have an overall coherent "look and feel" and to be noticeable, but not to distract from the VR experience. The button was derived from a traditional "kill switch", the knob from volume control knobs, the joystick from a manual gear stick, the slider from an industrial machine, and the 3D mouse from a free-floating ball like a balloon.
This gave us an overall "look and feel" for our Flyables. With a first version of Flyables, we tested their dimensions and ability to fly. For each Flyable, we tested whether the drone together with the attachment could lift off on its own and stabilize itself in the air. Over a number of iterations, we remodeled the Flyables to improve their flying capabilities. At the same time, we tested them in VR to see if they would meet our expectations. During this process, we asked people from our institution with a design background for informal feedback. After weeks of prototyping, remodeling, and redesigning, we arrived at our five Flyables (see Figure 2).
Button. The button (see Figure 2a) allows the user to trigger discrete events. As soon as the user touches the button, the toolkit triggers an input event. At the same time, the physical button allows the user to feel the matching haptic feedback.
Knob. The knob (see Figure 2b) can be rotated by the user to adjust a specific value. A visual marker on the top of the knob indicates its orientation. The knob is located on top of a round base to communicate its affordance (i.e., turning left or right). Its physical counterpart mounted on a quadcopter allows the user to feel the round structure of the knob. When the physical knob is turned, the rotation of the quadcopter is applied to objects or values that should be manipulated in VR.
Joystick. The joystick (see Figure 2c) provides a means of input for yaw, pitch, and roll (3 DOF). It consists of a base and a spherical part at the top. The values for yaw, pitch, and roll are measured in degrees and can be applied to any virtual object in VR.
Slider. The slider (see Figure 2d) can be used to specify a value within a specific range. It can be moved in the 3D virtual environment, but only the translation along one specific axis is considered for changing the target value. Arrows at the base of the slider indicate the directions in which the slider can be moved to adjust this value.
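The single-axis behavior described above amounts to projecting the tracked position of the quadcopter onto the slider's configured axis. The following is an illustrative sketch of that mapping, not code from the released toolkit; the function and parameter names are our own:

```python
import numpy as np

def slider_value(copter_pos, axis_start, axis_end):
    """Map a tracked quadcopter position to a normalized slider value.

    The position is projected onto the slider's axis (from axis_start to
    axis_end) and clamped to [0, 1]; translation perpendicular to the
    axis does not affect the value.
    """
    axis = np.asarray(axis_end, float) - np.asarray(axis_start, float)
    offset = np.asarray(copter_pos, float) - np.asarray(axis_start, float)
    t = np.dot(offset, axis) / np.dot(axis, axis)  # scalar projection
    return float(np.clip(t, 0.0, 1.0))
```

For example, a Flyable hovering halfway along an axis from (0, 0, 0) to (1, 0, 0) yields 0.5, regardless of how far it drifts sideways.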
3D Mouse. The 3D mouse (see Figure 2e) allows the user to translate objects in 3D space, cf. [ ]. If an object is linked to the 3D mouse, the user can translate it by grabbing the 3D mouse and moving it around. It can be used to position objects without directly touching them. Objects in VR often have no physical representation, so the 3D mouse can act as a proxy, enabling haptic feedback. Further, as the object is not directly held by the user, the virtual representation of the hand does not occlude the object. This means that the 3D mouse can be used to move distant objects.
3.2 Toolkit
The Flyables toolkit consists of a set of quadcopters with haptic attachments and a control application that interfaces with an optical tracking system and the VR application. Based on the position and orientation of a UI element in VR, our toolkit steers a quadcopter carrying the physical counterpart of the UI element to the physical location where a user would expect
Figure 3: (A) The participants controlled a crane with the joystick and the button. (B) A car could be rotated using the knob or its doors could be opened using the button. (C) The participants compared molecules by moving them with the 3D mouse and rotating them with the knob. (D) The participants steered an aircraft with the joystick and controlled its speed with the slider.
the haptic feedback (see Figure 4). Users can touch and hold the physical object. While in VR, they see a virtual representation of their hands and the virtual input device.
The Flyables toolkit uses proportional-integral-derivative (PID) controllers to steer the quadcopters. The PID controllers constantly track the target location of the virtual UI element and the physical position of the quadcopter. They then use this data to calculate the commands necessary to steer the quadcopter to the location of the virtual element in 3D tracking space. Tracking the position can be accomplished by various means, such as optical marker tracking, indoor localization systems, or even through tracked components of modern VR systems (e.g., a VIVE Tracker) [ ]. The PID controllers can be tuned to the desired flying behavior, e.g., desired acceleration, maximum velocity, or spatial precision, similar to [ ]. The steering is executed by the control application without human intervention.
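A per-axis PID step of the kind described above can be sketched as follows. This is a minimal illustration of the control idea, not the toolkit's actual implementation; the class name, gains, and update signature are our own:

```python
from dataclasses import dataclass

@dataclass
class PID:
    """One PID controller per axis (x, y, z) steers a quadcopter
    toward the tracked position of its virtual UI element."""
    kp: float
    ki: float
    kd: float
    integral: float = 0.0
    prev_error: float = 0.0

    def update(self, target: float, actual: float, dt: float) -> float:
        """Return a steering command from the current position error."""
        error = target - actual
        self.integral += error * dt                      # accumulated error
        derivative = (error - self.prev_error) / dt      # error rate
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In practice, one such controller per axis would run at the tracker's frame rate, and the gains would be tuned per drone model to trade acceleration against spatial precision, as the paragraph above describes.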
We open-sourced the Flyables toolkit together with the model files of the 3D-printed quadcopter attachments. We included the control application that steers the quadcopters and provided a Unity 3D plugin to integrate Flyables into arbitrary VR scenarios. We included a showcase application for Unity 3D that uses the plugin to interface with Flyables. Further, we published instructions for integrating Flyables into other applications or game engines. We also provided guidelines and instructions on how to integrate other drones into the toolkit. This will enable other researchers and designers to build upon the presented research.
4 USER STUDY
To evaluate the Flyables toolkit, we conducted a user study with 12 participants. We developed four different VR scenes; each scene contained a task to be completed using either Flyables or VR controllers.
4.1 Apparatus
The four dierent
scenes, which we will now refer to as Scenes,
made up the rst independent variable (see Figure 3). The second in-
dependent variable was Input, which was either Flyables or Oculus
Rift controllers. In each Scene, we integrated two dierent Flyables.
We counterbalanced the order of Input and Scene using a Latin
Square design. We deployed Flyables and the Oculus
system in a
area that was tracked by an OptiTrack 13W system. To de-
ploy the physical
elements, we attached the dierent 3D-printed
elements to o-the-shelf quadcopters (i.e., the Parrot Mambo).
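Counterbalancing with a Latin square, as used for Input and Scene above, can be generated programmatically. The following sketch uses the common balanced ("Williams-style") construction; it is an illustration of the design, not code from the study materials, and the function name is our own:

```python
def balanced_latin_square_row(n, participant):
    """Return the condition order (0..n-1) for one participant.

    For even n, every condition precedes every other condition equally
    often across n participants; for odd n, the reversed row is used
    for odd-numbered participants to restore that balance.
    """
    row, j, h = [], 0, 0
    for i in range(n):
        if i < 2 or i % 2 != 0:
            val = j          # take the next value from the front
            j += 1
        else:
            val = n - h - 1  # interleave values from the back
            h += 1
        row.append((val + participant) % n)  # rotate per participant
    if n % 2 != 0 and participant % 2 != 0:
        row.reverse()
    return row
```

With four Scenes and 12 participants, each of the four distinct orders would simply be assigned to three participants.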
4.2 Virtual Reality Scenes
We created our four VR scenes in Unity3D. In each scenario, we recorded the task completion time and logged the user's movement.
Remote-Controlled Crane. In this scene, the participants control a crane to stow away three rocks (see Figure 3A). The crane could be rotated sideways by tilting the joystick. The crane arm was controlled by pressing the button: pressing the button once made the arm move downwards, while pressing it again stopped it. A third press made the arm move upwards, after which the sequence started again from the beginning. The arm stopped when it hit a rock, and the rock was then attached to the arm. The task was finished when all rocks had been brought to the destination area. The scenario could also be controlled using the Oculus controllers. Here, the joystick of the right controller was used to turn the crane. The trigger button on the left controller was used to move the arm up and down.
Car Showroom. In the Car Showroom scene, the participants could use the Knob to rotate a car (see Figure 3B). The button could be pressed to open or close the car doors. The participants had to find three price tags that were attached around and inside the car. We instructed the participants to verbally indicate when they had found all three price tags. The car could also be turned using the Oculus controllers. Here, the joystick of the right controller turned the car. The trigger button on the left controller could be used to open or close the doors.
Figure 4: An HMD user reaching out for a Flyable. The user’s
hands are detected via a Leap Motion (attached to the user’s
HMD). The quadcopter is tracked by an OptiTrack system.
After grabbing the Flyable, the user can use it to control ele-
ments in VR. We aligned the coordinate systems of the HMD,
the Leap Motion, and the OptiTrack system to allow users
natural interaction using their hands.
Molecule Comparison. In this scene, the participants had to compare a specific molecule (i.e., Thalidomide [ ]) to four other molecules (see Figure 3C). Two of the other molecules were the same and two were mirrored. The Knob could be used to rotate the molecule, while the 3D Mouse could be used to translate the molecule in 3D space. To complete the task, the participants had to approach the four molecules in the room and compare them to the molecule attached to the 3D Mouse. We recorded the answers and the time taken to fulfill the task. To move the molecule with the Oculus controllers, the participants held down the trigger of the right controller and then moved the controller to translate the molecule. The joystick of the left controller could be used to rotate the molecule.
Aircraft Piloting. In this scene, the participants steered an aircraft by using the Joystick to steer the aircraft sideways and the Slider to control its speed. The participants sat on a chair in the middle of the tracking space. After 30 seconds, five targets popped up at the same altitude (cf. Figure 3D). The participants' task was to hit all the targets. To steer the aircraft with the Oculus controllers, both joysticks were used. The left joystick was used to steer the aircraft sideways, and the other was used to adjust its speed.
4.3 Measurements
As measurements, we use task completion time (TCT) and movement per task. Here, TCT is the time the participants actually worked on the task, excluding setup time and breaks. Movement is the distance the participants moved during the task, which we use as a way to measure physical engagement.
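The movement measure described above can be computed from the tracked head positions as a path length. A minimal sketch, assuming one position sample per tracker frame (the function name is our own, not from the study software):

```python
import numpy as np

def path_length(positions):
    """Total distance traveled: the sum of Euclidean distances
    between successive tracked head positions (in meters)."""
    pts = np.asarray(positions, dtype=float)
    if len(pts) < 2:
        return 0.0
    # Per-frame displacement vectors, then their norms, summed.
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())
```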
We chose the following questionnaires to obtain a comprehensive understanding of the impact of Flyables on users. Specifically, we used the AttrakDiff questionnaire [ ] for the overall user experience and the System Usability Scale (SUS) [ ] for overall usability. We also added five 7-point Likert-scale questions on the following properties: Realism, Hardness, Naturalness, Expected Location, and Future Use. In addition, we assessed simulator sickness via the Simulator Sickness Questionnaire (SSQ) [ ]. Finally, we used the Presence Questionnaire (PQ) [35] to measure presence in VR.
4.4 Procedure
After welcoming each participant, we explained the purpose of the study and answered any questions they had before having them sign an informed consent form and fill out a demographics form. Next, we introduced them to our study and the Flyables toolkit. We explained the general procedure and showed them the quadcopters equipped with the haptic UI elements. As we used off-the-shelf indoor consumer quadcopters with low power, we ensured, as in Knierim et al. [ ], that the interaction with them would be risk-free and would not cause injuries. To further ensure the safety of the participants, experimenters were constantly in proximity to disable the quadcopters at any time. After the introduction, the participants were seated in the middle of our tracking space. Then they entered VR, interacted with the scene, and exited to fill out a questionnaire. At the end of the study, we asked the participants to fill out the AttrakDiff, PQ, and SSQ questionnaires.

(a) TCT (b) Movement
Figure 5: (a) Average TCT per condition in seconds. (b) Average head movement per condition in meters.
4.5 Participants
We recruited our participants through our university mailing list. We invited 12 participants to our lab (5 female, 7 male, 0 other). Our participants were aged between 17 and 32 (M = …, SD = …). All participants self-identified as right-handed. Nine participants had used VR before: 2 daily, 1 once a week, and 6 once a month. Two participants owned a VR headset.
5 RESULTS
For the evaluation, we performed a quantitative analysis of the collected objective and subjective data. For the non-parametric data, we applied the Aligned Rank Transform (ART) using the ARTool toolkit and applied paired-sample t-tests with Tukey correction, as suggested by Wobbrock et al. [ ]. For all other ANOVAs, we used paired t-tests with Bonferroni correction.
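ARTool itself is an R package, but the Bonferroni-corrected paired t-tests mentioned in the last sentence can be sketched in Python with SciPy. This is an illustration of the correction, not the study's analysis script; the function name and data layout are our own:

```python
from scipy import stats

def paired_tests_bonferroni(pairs, alpha=0.05):
    """Run one paired t-test per measure with a Bonferroni correction.

    `pairs` maps a measure name to (condition_a, condition_b) score
    lists from the same participants; a result counts as significant
    only if p < alpha / number_of_tests.
    """
    m = len(pairs)
    results = {}
    for name, (a, b) in pairs.items():
        t, p = stats.ttest_rel(a, b)  # paired-sample t-test
        results[name] = {"t": t, "p": p, "significant": p < alpha / m}
    return results
```

The correction simply divides the significance threshold by the number of tests, so running more comparisons demands stronger evidence from each one.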
5.1 Task Completion Time
As the normality assumption of the task completion time (TCT) was violated (p < .001), we performed a non-parametric equivalent of a two-way repeated-measures analysis of variance (RM-ANOVA). We determined whether Input × Scene significantly influence the TCT, revealing a significant effect of Input (F1,77 = …, p < .001) and Scene (F3,77 = …, p < .001). Moreover, we found a significant interaction effect for Input × Scene (F3,77 = …, p < .001). Thus, Controllers (M = … sec, SD = 12 sec) were faster than Flyables (M = 50 sec, SD = 26 sec) (see Figure 5a).
5.2 Body Movement
We conducted a two-way RM-ANOVA on the aligned-rank-transformed data, as the normality assumption was violated (p < .001), to determine whether Input × Scene significantly influence the amount of head movement. The analysis revealed a significant effect of Input and Scene (F1,77 = …, p < .001 and F7,77 = …, p < .001, respectively). We also found a significant interaction effect for Input × Scene (F3,77 = …, p < .001). Thus, participants moved less when using Controllers (M = …, SD = .62) than when using Flyables (M = …, SD = 1.23) (see Figure 5b).
5.3 System Usability Scale (SUS)
We conducted a two-way RM-ANOVA (normality assumption violated: p < .001) to determine whether Input × Scene significantly influence the SUS score [ ]. The analysis revealed a significant effect of Input: F1,11 = …, p < .001. However, we could not find a statistically significant influence of Scene (F3,33 = …, p > .236). Moreover, we found no statistically significant interaction effect for Input × Scene (F3,33 = …, p > .210). Thus, using Controllers (M = …, SD = 12) was rated better than using Flyables (M = 64.1, SD = 22) (see Figure 6c).
5.4 Simulator Sickness Questionnaire (SSQ)
For the SSQ [ ], we conducted a Wilcoxon signed-rank test (normality assumption violated: p < .001), which did not show a statistically significant influence of Input on nausea (p > …). Thus, nausea was similar between conditions, with M = …, SD = .16 for Controllers and M = …, SD = .15 for Flyables (see Figure 6a). Furthermore, a second Wilcoxon signed-rank test (normality assumption violated: p < .001) did not show a significant influence of Input on oculomotor symptoms (p > .076). Thus, oculomotor scores were similar between conditions, with M = …, SD = .40 for Controllers and M = 1.49, SD = .45 for Flyables (see Figure 6a).
5.5 AttrakDiff
Since the normality assumption (p > .05) for a paired Student's t-test was met, we performed one on each subscale to investigate the influence of Input on PQ (pragmatic quality), HQI (hedonic quality – identification), HQS (hedonic quality – stimulation), and ATT (attractiveness). Our analysis revealed significant differences for HQI, HQS, and ATT (p < …, p < .043, and p < .018, respectively). However, we could not find significant differences for PQ (p > .513) (see Figure 6b).
5.6 Presence Questionnaire
We conducted the Presence Questionnaire [ ] to evaluate the users' experience in the VR environment. The results show that the controllers reached higher scores overall. However, for Quality of interface and Haptics, Flyables scored higher (see Figure 7b). We performed an additional seven Wilcoxon signed-rank tests (normality assumption violated: p < .05), which showed that Possibility to act and Self-evaluation of performance are significantly different (p < … and p < .005, respectively). For the others, the analyses did not reveal statistically significant differences (p > .05).

(a) SSQ (b) AttrakDiff (c) SUS
Figure 6: Average scores for the Simulator Sickness Questionnaire (SSQ) (a) and AttrakDiff (b) questionnaires. Error bars represent the standard error. (c) Average SUS scores.

(a) Additional Questions (b) Presence
Figure 7: (a) Average scores for the Additional Questions. (b) Average scores of the presence questionnaire categories. R = Realism, PtA = Possibility to act, QoI = Quality of interface, PtE = Possibility to examine, SEoP = Self-evaluation of performance.
5.7 Additional Questions
We performed an additional five Wilcoxon signed-rank tests (normality assumption violated: p < .05) to examine the influence of Input on Realism, Hardness, Naturalness, Expected Location, and Future Use. We could only show significant differences for Hardness and Future Use (p < … and p < .022, respectively); for all others, p > .05 (see Figure 7a). For the Molecule Comparison task, all participants solved the molecule comparison correctly when using Flyables, whereas only 10 out of the 12 participants solved it correctly using the controllers.
5.8 Interviews
We conducted semi-structured interviews to obtain qualitative feedback from our participants. We combined all interviews from the study sessions for analysis. We transcribed and translated the interviews into English literally, without summarizing or transcribing phonetically [ ]. Finally, we employed a simplified version of qualitative coding with affinity diagramming [ ] for the interview analysis.
5.8.1 Pro-Flyables Feedback. In general, seven participants enjoyed
using the Flyables to fulll the tasks (P1, P3 - P6, P9, P10). As P4 put
it, "you can move around like you would do in everyday life". P10 said
that, for solving tasks, Flyables are more enjoyable. Moreover, the
two main positive comments we received about using Flyables were
that a) the mapping between the
action and the physical action
were in sync, and b) that the haptic feedback from the physical
element made them feel more immersed in
. Four participants
(P3, P4, P9, P10) enjoyed that the mapping of Flyables was in sync
with the physical attachments. P10 noted that the mapping of the
functionality to the controllers is often arbitrary. Here, P10 sees a
benet in using Flyables, as they communicate their functionality.
Six participants (P3 - P7, P9) liked that the physical objects felt
like the virtual ones. Here, we received praise for the realism that
Flyables provided. P5 stated, "I had the feeling of being more inside with the drones," and P6 said, "I liked the attachments and their haptics."

Flyables VRST '21, December 8–10, 2021, Osaka, Japan

Also, P3 said that "from a haptics point of view it was definitely better than the controllers," while P5 pointed out that the haptics could not be achieved by the controllers. Lastly, P7 stated that "[...] the drones might be more intuitive for people not used to controllers," and added that the movement with Flyables is more natural than with the controllers.
5.8.2 Pro-Controller Feedback. In contrast to the comments we
got on the positives of using Flyables, we also got positive feedback
on the use of controllers. Six participants (P2, P4, P6 - P8, P11)
stated that controllers are well-known to them and are therefore
easy to use. P6 said that "the controllers were better because [...] they
are well-known." P11 concluded that the controllers are easier to
use because they are well-known, but that Flyables also worked
"surprisingly well." P10 stated that using the controllers "is clearly
easier, but therefore also more boring." Two participants (P1, P12)
argued against using Flyables. P1 explained that it was exhausting
to grab Flyables, so the controllers were easier to operate. P12
generally preferred the controllers over Flyables because the control
was easier and more intuitive. P12 also pointed out that one did not
have to think about the usage: "I preferred the controllers in every
scenario. It was easier and more intuitive because I did not have to
think about it. While using the drones, I had to look for where they
were all the time. I had to watch to avoid colliding with them."
5.8.3 Real World Use-Cases. Six participants (P2, P8 - P12) liked
the idea of using Flyables for games. As P10 put it, "It was fun! It
was exciting because it was challenging!" Eight participants (P2, P4,
P5, P7 - P12) suggested using such a system for training purposes
or simulations, such as surgery training (P5), pilot training (P4),
or training for setting up chemical experiments (P11). Supporting design tasks such as CAD or 3D modeling was also suggested (P9).
5.8.4 Improvement Suggestions. Two participants wanted more
ways to interact with Flyables. Suggestions included being able to
touch Flyables from all sides (P5) or double-tap the button (P12), as
well as having Flyables that can find their way to the user's hand
autonomously (P1). One participant added that future systems could
have safety measures for roommates, pets, and house plants (P6).
5.8.5 Scenario Feedback. For comparing molecules, five participants
liked Flyables (P1, P2, P7 - P9). Being able to hold things in the hand
was perceived positively by P2 while doing the molecule compari-
son: "The drones were better for the molecule thing because one had
to turn and move around while holding the molecule. It was more
haptic, which I liked." P7 stated: "I found it more intuitive. Using
the controllers was monotonous." P10 liked the way the molecule
was rotated via Flyables, but at the same time had efficiency concerns. P9 stated: "I tend to the controllers [...] but for investigating
objects and moving them around, the drones also work very well."
Four participants disliked Flyables during the molecule comparison
(P3 - P5, P7). Three participants had no preference for Flyables or
the controllers (P1, P8, P11). P11 explained: "Both are quite similar.
The controllers are faster [...]. Moving objects with the 3D Mouse and
rotating them worked well with both the controllers and the drones."
From eight participants, we got feedback that Flyables worked
well for the car showroom (P1 - P4, P7, P9 - P11). Here, P4 said:
"The motion was relatively easy. I could do it quite well by using the
drones." P7 reported a better spatial feeling for the car showroom
while using Flyables to rotate the car, but mentioned that the button
could not be pressed very hard because the drone would crash. P9
commented that "[...] if the task is to investigate an object, the drones
work, [as] it feels like I have the object in my hand."
In the remote-controlled crane scenario, two participants (P10,
P11) liked Flyables for controlling the crane. P11 said that one could
properly control the crane with Flyables. P6 and P10 noticed the
joysticks’ resistance: "The joystick is cool because the drone generates
a force against my motion and one pushes against that. That is really
cool!" (P10). P10 added that a joystick for turning the crane is well-
known, while the mapping of the functionality to the controllers is
quite arbitrary. Still, P10 said they would prefer the controllers in
terms of input precision and interaction time. Others experienced
diculties and therefore preferred the controllers (P2, P3, P6, P7).
In the aircraft piloting scenario, we found that all participants
who themselves own a joystick liked Flyables (P4, P7, P9, P10). P4
liked how the aircraft was steered in the piloting scenario, but
at the same time appreciated the precision of the controllers. P4
said: "Compared to the controllers it is more realistic! In reality, you
also have a thrust lever." Flyables were also disliked by three par-
ticipants (P2, P8, P11). P5 expressed that steering the aircraft was
very complex and that the different types of motion were especially challenging (i.e., tilting the joystick from left to right while simultaneously moving the slider back and forth). This was explained as:
"I found the drones very bad for steering the aircraft. I had to move
around a lot and I had to hold on to the drones all the time. However,
with the controllers I could rest my hands" (P2).
6 DISCUSSION
We implemented four different VR scenes using five different Flyables (i.e., quadcopters) that carry physical UI elements to control virtual objects and provide matching haptic feedback. We provided five different UI elements (i.e., a button, a knob, a joystick, a slider, and a 3D mouse). Through our exploration, we uncovered several strengths and weaknesses of Flyables. This enables us to guide the future development and investigation of Flyables.
6.1 Flyable Handling
We observed a significantly higher TCT in the VR scenarios when Flyables were used instead of the controllers. This ranks Flyables as worse than controllers for interaction in VR. Further, we observed
that in general, the participants rated the drones as "hard to use." Par-
ticipants reported that controllers were easier to operate. In general,
users are familiar with controllers, as they are a mature technology.
This is a true weakness of the current Flyables toolkit. Independent
of the toolkit itself, the performance of Flyables in our study may
be aected by the drone model that we chose for the evaluation.
Larger, more stable drones might enable better interaction.
6.2 Body Movement
We observed an increase in physical movement when Flyables were used in contrast to the controllers. Participants mentioned that interacting with Flyables was tiring. However, in specific circumstances,
such body movement may be desired. While the participants argued
that this is a negative aspect of Flyables, it might also provide a
benet. Research on exertion games [
] underlined the positive
Figure 8: Showcases of the Flyables toolkit. Here, Flyables are used in different scenarios to show their applicability: for flying (A), for instance using a thrust lever attachment (B), or in a crane scenario (C + D).
aspects that physical activity can provide to the user. In addition, six
participants explicitly mentioned games as a potential use case. Par-
ticipants enjoyed using Flyables as controls because of the matching
haptic feedback and the communication of functionality through
their design (e.g., using a joystick to control an aircraft). This high-
lights that, for the gaming context, Flyables could be a step towards
serving various control elements to players. This might be improved
by having drones specifically designed with more precise input capabilities, which is an important step for users to engage with a game [7]. Also, special control algorithms could provide active
and scenario-dependent force feedback. Together with powerful
drones, that could lead to a more sophisticated VR experience.
6.3 Usability, UX, & Simulator Sickness
In terms of usability, controllers outperformed Flyables in every
scenario. This is also reflected in the AttrakDiff results, however only in terms of the hedonic quality (stimulation) and attractiveness. The pragmatic quality and hedonic quality (identification)
are similar between Flyables and controllers. We argue that this
might be due to the long task completion time when using Flyables,
but we also argue that the largest factor for reduced usability is
the unfamiliarity with using Flyables for interaction. We further
support our argumentation with the qualitative feedback from the
participants, which indicates that the controllers were easier to use.
This allows us to contend that, over time, users could become familiar with Flyables. Thus, we believe that in the long term Flyables could provide an alternative means of interaction in VR. Yet, only
a long-term investigation could yield such results. Finally, we observed no significant differences in simulator sickness between Flyables and standard VR controllers. We can claim that Flyables most likely do not contribute to simulator sickness any more than controllers.
6.4 Immersion & Presence
Participants reported feeling more inside VR when using Flyables, and thus felt immersed. Brown and Cairns [7] divided immersion
into three levels: engagement, engrossment, and full immersion.
Becoming immersed in a game means transitioning from engagement to engrossment to full immersion. Usability and control problems might hinder users from engaging with a game. While Flyables overall helped participants to feel more immersed, we think that our scenes, and especially our tasks, were not constructed to fit
the gaming context. We suggest investigating Flyables in playful
scenarios to uncover their suitability for different game genres.
For presence, the controllers received a higher score than Flyables
in general. However, Flyables scored higher in terms of Quality of
Interface and Haptics. Moreover, feedback from the participants
conrmed that they liked the drone attachment’s haptics. Being
able to feel what they saw in
was especially appreciated by
the participants. Again, we argue that the users’ lack of familiarity
with Flyables rendered the results lower on average. When we
questioned them in detail, however, we could unveil the positive
aspects, which have the potential to provide greater immersion.
While the Flyables toolkit is not yet ready to be used in an arbitrary VR scenario, this initial evaluation points to directions for future investigation. Weaknesses of Flyables (e.g., precision) could be addressed to cover a wider range of applications. Technical improvements of quadcopters might also support more use cases.
6.5 Limitations
We acknowledge the following limitations of our work. First, for the evaluation, we used consumer drones that were not specifically designed for interaction with humans. Custom drones that are designed to be equipped with the Flyables UI elements may perform differently with regard to stability or precision. The Flyables toolkit allows the maximum tilt angle to be configured individually for each drone to realize different flight characteristics. In our evaluation, we limited the maximum tilt angle for a drone to move in any direction to 10°. This allowed us to fly precisely in our tracking space. In addition, this limited the speed of the drone to further ensure
safety. Second, we compared Flyables to state-of-the-art controllers
that have improved in recent years. These devices had been used
by the majority of our participants before. Participants were used
to this type of input device and were thus able to solve the tasks
more easily. It remains unclear how participants would perform
after gaining similar experience with Flyables. Third, drones could crash when they were hit too strongly by the participants. This might subtly influence the participants in a negative way. Flyables could benefit from drones that recover quickly from crashes. We outline how to tackle this in our future research challenges. Finally, we must point out that interacting with drones can be dangerous. In the current version of Flyables, we did not include additional blade guards that cover the propellers from above. During our evaluation, the experimenters reduced the injury risk by constantly observing the drones and disarming them in case of an emergency. In a future version of Flyables, we plan to integrate safety measures such as cages [2] or deformable propellers [26].
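The per-drone tilt limiting described above can be sketched as a simple clamp on the commanded attitude. The names and the configuration shape below are hypothetical and only illustrate the idea; they are not the toolkit's actual API.

```python
# Hypothetical sketch of a per-drone tilt limit; names are illustrative only.
TILT_LIMITS_DEG = {"button-drone": 10.0, "joystick-drone": 10.0}  # per-drone config

def clamp_tilt(drone_id, roll_deg, pitch_deg, limits=TILT_LIMITS_DEG):
    """Clamp commanded roll/pitch so a drone never exceeds its configured tilt."""
    max_tilt = limits[drone_id]
    def clamp(v):
        return max(-max_tilt, min(max_tilt, v))
    return clamp(roll_deg), clamp(pitch_deg)
```

Capping tilt this way indirectly caps horizontal acceleration, which is what bounds a quadcopter's speed inside a small tracking volume.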
7 FUTURE RESEARCH CHALLENGES
We envision the Flyables toolkit more as a starting point for prototyping novel drone-based interactions than as a framework that supports out-of-the-box flying UI elements. We think that developers, designers, and researchers could use the toolkit to create drone-enhanced interaction in VR without the technical challenges of drone control and integration. Therefore, we introduce challenges that could be the subject of future research endeavors to improve the Flyables toolkit and widen its applicability.
Force Feedback and Anchoring in the Air. Similar to previous approaches [ ], a new type of specially designed drone could be integrated into the Flyables toolkit to provide force feedback that matches the given VR scenario. Especially because drones are not anchored to the environment, rendering realistic counter-forces is challenging. For example, the thrust lever or joystick of an aircraft has a mechanical resistance. The pilot needs to overcome this resistance while operating the aircraft. To mimic these haptic properties, we envision that Flyables could integrate further matching haptic elements (see Figure 8B) into our aircraft scenario (see Figure 8A). Through specially designed drones, the matching force feedback could be generated by accelerating horizontally without tilting, similar to accelerating up and down to render weight and stiffness [1]. We envision a drone with additional horizontally mounted rotors. This would enable the drone to induce forces sideways while
using the vertical rotors to maintain height and orientation. Besides that, future drones could use the resistance of the air to apply forces to the interacting VR user by adjusting their surface size to render resistance and inertia [39]. Further, we envision that a specifically designed PID controller could enhance the haptic sensation of
counter-forces. Such a controller could take over control of a Flyable when the system detects that specific counter-forces must be applied (e.g., when the VR user grabs a thrust lever). While this was
out of scope for the current version of Flyables, we envision that future research could investigate this direction. We are confident that such research could lead to improvements in the overall idea of Flyables, as future drones evolve rapidly due to the mass market. To foster such research, we included a detailed document on how to integrate any kind of remote-controlled drone or quadcopter with Flyables with little technical effort.
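A minimal sketch of the kind of position-holding PID loop envisioned above, which would push back against the user's hand: the gains and the scenario are illustrative assumptions, and the toolkit itself does not ship this controller.

```python
# Illustrative PID controller sketch; gains are untuned placeholder values.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """One control step: return an actuation effort toward the setpoint."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: hold a drone at a target x-position while the user pushes against it.
pid = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.02)
thrust = pid.update(setpoint=0.5, measured=0.45)
```

A scenario-dependent variant could swap gain sets at runtime, e.g., stiffer gains while a thrust lever is grabbed to render stronger mechanical resistance.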
Autonomic Reuse of Flyables. To provide haptics for myriads of objects in VR, Flyables could be reusable, similar to haptic retargeting [4]. Here, one haptic prop is used for multiple virtual objects. One Flyable could also be used for multiple virtual objects as long as it is present at the position where the user expects the haptic feedback. We imagine using machine learning algorithms to predict the future position of a Flyable with regard to where it is most likely needed. Future research could investigate the suitability of different prediction approaches.
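As a baseline against which learned prediction approaches could be compared, even a linear extrapolation of the hand trajectory toward the nearest virtual UI element could pre-position a Flyable. This sketch and its function names are hypothetical, not part of the toolkit.

```python
# Hypothetical baseline: extrapolate the hand and fly toward the likeliest target.
def extrapolate(p0, p1, dt, horizon):
    """Predict the hand position `horizon` seconds ahead from two samples dt apart."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    vz = (p1[2] - p0[2]) / dt
    return (p1[0] + vx * horizon, p1[1] + vy * horizon, p1[2] + vz * horizon)

def next_target(predicted_hand, ui_positions):
    """Choose the virtual UI element the hand is most likely heading toward."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(ui_positions, key=lambda p: dist2(p, predicted_hand))
```

A learned model would replace both steps with a single trajectory predictor, but this baseline makes the evaluation target concrete.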
A major drawback of using drones for haptic feedback is that
drones crash easily. For example, a Flyable could crash when the
user hits the button too hard (see Figure 8C). The button press
event would still be valid to the system, but the drone with the
physical button would not be available for interaction. We envision
that future drones could automatically recover from such crashes
without the user noticing. Drones could be designed to restart after
they crash, similar to the Parrot Rolling Spider. Such drones can
simply roll over and restart. We envision that drones specifically
designed to automatically restart and get back in position would
enable a more reliable and enjoyable VR experience, as the user
would not need to handle the drones carefully. Thus, future research
could investigate how to hide the fact that a drone crashed from
the user while preserving the narrative of the VR experience.
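The envisioned recover-and-reposition behavior could be organized as a small state machine. This sketch is illustrative; the sensor flags and state names are assumptions, not part of any existing drone firmware.

```python
# Hypothetical recovery state machine for a crashed Flyable.
IDLE, FLYING, CRASHED, RECOVERING = "idle", "flying", "crashed", "recovering"

def step(state, upright, at_target):
    """Advance the recovery state machine from two boolean sensor flags."""
    if state == FLYING and not upright:
        return CRASHED      # e.g., knocked over by a hard button press
    if state == CRASHED:
        return RECOVERING   # roll over and re-arm, like the Parrot Rolling Spider
    if state == RECOVERING and upright and at_target:
        return FLYING       # back in place; interaction can resume
    return state
```

Hiding the CRASHED and RECOVERING states from the VR rendering is what would preserve the narrative while the drone gets back into position.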
Novel Interface Elements. Besides the existing five UI elements and the previously envisioned thrust lever (see Figure 8B), we imagine new interface elements that can be integrated into Flyables to support more use cases in VR. To support narratives in games
or enhance realism in, for example, interior design experiences, a
pull string to turn on a lamp, open a garage gate, or honk a truck
horn could be mounted to a drone, similar to the work of Tsykunov
and Tsetserukou [33]. To support more specific elements, such as
a door handle, future research could investigate the suitability of
drones that are tilted by the user. Here, proper force feedback and
anchoring could be the keys to providing a realistic experience.
Further Use Cases. Modern VR HMDs can track the hands of
their users, but controllers are still needed or even desired for some
interactions. Here, Flyables could fill the gap by providing controller
devices when they are required without breaking the immersive
experience. Users could quickly switch between haptic UI elements brought to them by a drone and free-hand interaction. This would
allow the use of bare hands for gestures (for example, in multi-user
scenarios such as collaboration [ ]) as well as the ability to
switch quickly to haptic device input.
8 CONCLUSION
We designed, implemented, and evaluated Flyables, a toolkit that uses quadcopters to deliver physical input devices to a VR user. The current toolkit consists of five UI elements (a button, a knob, a joystick, a slider, and a 3D mouse) that resemble fundamental interaction patterns of today's user interfaces. The results of our
study show that Flyables can introduce an exciting, realistic, and
fun way to interact with virtual content. Participants felt more
immersed in the VR environment when using Flyables, appreciated
the haptics of Flyables, and stated that, compared to controllers,
Flyables communicate their functionality through their aordance.
However, state-of-the-art controllers still outperform Flyables in
terms of input precision and task completion time.
To further improve the open-source Flyables toolkit, we extracted research challenges. These challenges include additional force feedback through specially designed drones, approaches to reuse a limited set of drones for multiple virtual objects, and the creation and exploration of novel UI elements and interaction opportunities. Addressing these challenges can help to promote Flyables as an alternative to controllers in a variety of VR scenarios. Such scenarios could benefit from a richer haptic experience and the communication of functionality through well-known input devices. We also aim to further develop the toolkit to enable researchers and practitioners to explore how Flyables can serve as physical UI elements in VR applications.
REFERENCES
[1] Muhammad Abdullah, Minji Kim, Waseem Hassan, Yoshihiro Kuroda, and Seokhee Jeon. 2018. HapticDrone: An encountered-type kinesthetic haptic interface with controllable force feedback: Example of stiffness and weight rendering. In 2018 IEEE Haptics Symposium (HAPTICS '18). IEEE, Piscataway, NJ, USA, 334–.
[2] Parastoo Abtahi, Benoit Landry, Jackie (Junrui) Yang, Marco Pavone, Sean Follmer, and James A. Landay. 2019. Beyond The Force: Using Quadcopters to Appropriate Objects and the Environment for Haptics in Virtual Reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI '19). Association for Computing Machinery, New York, NY, USA, Article 359, 13 pages.
[3] Harshit Agrawal, Sang-won Leigh, and Pattie Maes. 2015. L'evolved: Autonomous and Ubiquitous Utilities as Smart Agents. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (Osaka, Japan) (UbiComp/ISWC '15 Adjunct). Association for Computing Machinery, New York, NY, USA, 293–296.
[4] Mahdi Azmandian, Mark Hancock, Hrvoje Benko, Eyal Ofek, and Andrew D. Wilson. 2016. Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI '16). Association for Computing Machinery, New York, NY, USA, 1968–1979.
[5] Ann Blandford, Dominic Furniss, and Stephann Makri. 2016. Qualitative HCI Research: Going Behind the Scenes. Morgan & Claypool Publishers, Williston, VT, USA. 1–115 pages.
[6] John Brooke et al. 1996. SUS: A quick and dirty usability scale. Usability Evaluation in Industry 189, 194 (1996), 4–7.
[7] Emily Brown and Paul Cairns. 2004. A Grounded Investigation of Game Immersion. In CHI '04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria) (CHI EA '04). Association for Computing Machinery, New York, NY, USA, 1297–1300.
[8] Markus Funk. 2018. Human-Drone Interaction: Let's Get Ready for Flying User Interfaces! Interactions 25, 3 (April 2018), 78–81.
[9] Antonio Gomes, Calvin Rubens, Sean Braley, and Roel Vertegaal. 2016. BitDrones: Towards Using 3D Nanocopter Displays As Interactive Self-Levitating Programmable Matter. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI '16). Association for Computing Machinery, New York, NY, USA, 770–780.
[10] Gunnar Harboe and Elaine M. Huang. 2015. Real-World Affinity Diagramming Practices: Bridging the Paper-Digital Gap. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI '15). Association for Computing Machinery, New York, NY, USA, 95–104.
[11] Marc Hassenzahl, Michael Burmester, and Franz Koller. 2003. AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität. Vieweg+Teubner Verlag, Wiesbaden, 187–196.
[12] Seongkook Heo, Christina Chung, Geehyuk Lee, and Daniel Wigdor. 2018. Thor's Hammer: An Ungrounded Force Feedback Device Utilizing Propeller-Induced Propulsive Force. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal, QC, Canada) (CHI '18). Association for Computing Machinery, New York, NY, USA, Article 525, 11 pages.
[13] Matthias Hoppe, Marinus Burger, Albrecht Schmidt, and Thomas Kosch. 2019. DronOS: A Flexible Open-Source Prototyping Framework for Interactive Drone Routines. In Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia (Pisa, Italy) (MUM '19). Association for Computing Machinery, New York, NY, USA, Article 15, 7 pages.
[14] Matthias Hoppe, Pascal Knierim, Thomas Kosch, Markus Funk, Lauren Futami, Stefan Schneegass, Niels Henze, Albrecht Schmidt, and Tonja Machulla. 2018. VRHapticDrones: Providing Haptics in Virtual Reality Through Quadcopters. In Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia (Cairo, Egypt) (MUM 2018). Association for Computing Machinery, New York, NY, USA, 7–18.
[15] Seungwoo Je, Hyelip Lee, Myung Jin Kim, and Andrea Bianchi. 2018. Windblaster: A Wearable Propeller-based Prototype That Provides Ungrounded Force-feedback. In ACM SIGGRAPH 2018 Emerging Technologies (Vancouver, British Columbia, Canada) (SIGGRAPH '18). Association for Computing Machinery, New York, NY, USA, Article 23, 2 pages.
[16] Robert S. Kennedy, Norman E. Lane, Kevin S. Berbaum, and Michael G. Lilienthal. 1993. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. The International Journal of Aviation Psychology 3, 3 (1993), 203–220.
[17] Pascal Knierim, Thomas Kosch, Alexander Achberger, and Markus Funk. 2018. Flyables: Exploring 3D Interaction Spaces for Levitating Tangibles. In Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction (Stockholm, Sweden) (TEI '18). Association for Computing Machinery, New York, NY, USA, 329–336.
[18] Pascal Knierim, Thomas Kosch, Valentin Schwind, Markus Funk, Francisco Kiss, Stefan Schneegass, and Niels Henze. 2017. Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality Through Quadcopters. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI EA '17). Association for Computing Machinery, New York, NY, USA, 433–436.
[19] Sven Mayer, Valentin Schwind, Robin Schweigert, and Niels Henze. 2018. The Effect of Offset Correction and Cursor on Mid-Air Pointing in Real and Virtual Environments. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal, QC, Canada) (CHI '18). Association for Computing Machinery, New York, NY, USA, Article 653, 13 pages.
[20] Víctor Rodrigo Mercado, Maud Marchal, and Anatole Lécuyer. 2021. "Haptics On-Demand": A Survey on Encountered-Type Haptic Displays. IEEE Transactions on Haptics 14, 3 (2021), 449–464.
[21] Mark R. Mine. 1995. Virtual Environment Interaction Techniques. Technical Report.
[22] Catarina Mota. 2011. The Rise of Personal Fabrication. In Proceedings of the 8th ACM Conference on Creativity and Cognition (Atlanta, Georgia, USA) (C&C '11). Association for Computing Machinery, New York, NY, USA, 279–288.
[23] Florian 'Floyd' Mueller, Martin R. Gibbs, and Frank Vetere. 2008. Taxonomy of Exertion Games. In Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat (Cairns, Australia) (OZCHI '08). Association for Computing Machinery, New York, NY, USA, 263–266.
[24] Stefanie Mueller, Sangha Im, Serafima Gurevich, Alexander Teibrich, Lisa Pfisterer, François Guimbretière, and Patrick Baudisch. 2014. WirePrint: 3D Printed Previews for Fast Prototyping. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (Honolulu, Hawaii, USA) (UIST '14). Association for Computing Machinery, New York, NY, USA, 273–.
[25] Jun Murayama, Laroussi Bougrila, YanLin Luo, Katsuhito Akahane, Shoichi Hasegawa, Béat Hirsbrunner, and Makoto Sato. 2004. SPIDAR G&G: A two-handed haptic interface for bimanual VR interaction. In Proceedings of EuroHaptics 2004. Munich, Germany, 138–146.
[26] Dinh Quang Nguyen, Giuseppe Loianno, and Van Anh Ho. 2020. Towards Design of a Deformable Propeller for Drone Safety. In 3rd IEEE International Conference on Soft Robotics (RoboSoft '20). IEEE, Piscataway, NJ, USA, 464–469.
[27] Huaishu Peng, Jimmy Briggs, Cheng-Yao Wang, Kevin Guo, Joseph Kider, Stefanie Mueller, Patrick Baudisch, and François Guimbretière. 2018. RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal, QC, Canada) (CHI '18). Association for Computing Machinery, New York, NY, USA, Article 579, 12 pages.
[28] Thammathip Piumsomboon, Youngho Lee, Gun Lee, and Mark Billinghurst. 2017. CoVAR: A Collaborative Virtual and Augmented Reality System for Remote Collaboration. In SIGGRAPH Asia 2017 Emerging Technologies (Bangkok, Thailand) (SA '17). Association for Computing Machinery, New York, NY, USA, Article 3, 2 pages.
[29] Tomoya Sasaki, Richard Sahala Hartanto, Kao-Hua Liu, Keitarou Tsuchiya, Atsushi Hiyama, and Masahiko Inami. 2018. Leviopole: Mid-Air Haptic Interactions Using Multirotor. In ACM SIGGRAPH 2018 Emerging Technologies (Vancouver, British Columbia, Canada) (SIGGRAPH '18). Association for Computing Machinery, New York, NY, USA, Article 12, 2 pages.
[30] Ben Shneiderman. 1998. Designing the User Interface – Strategies for Effective Human-Computer Interaction (3rd ed.). Addison-Wesley Longman, Boston, MA, USA.
[31] Etsuko Tokunaga, Takeshi Yamamoto, Emi Ito, and Norio Shibata. 2018. Understanding the Thalidomide Chirality in Biological Processes by the Self-disproportionation of Enantiomers. Scientific Reports 8, 1 (2018), 17131.
[32] Evgeny Tsykunov, Roman Ibrahimov, Derek Vasquez, and Dzmitry Tsetserukou. 2019. SlingDrone: Mixed Reality System for Pointing and Interaction Using a Single Drone. In 25th ACM Symposium on Virtual Reality Software and Technology (Parramatta, NSW, Australia) (VRST '19). Association for Computing Machinery, New York, NY, USA, Article 39, 5 pages.
[33] Evgeny Tsykunov and Dzmitry Tsetserukou. 2019. WiredSwarm: High Resolution Haptic Feedback Provided by a Swarm of Drones to the User's Fingers for VR Interaction. In 25th ACM Symposium on Virtual Reality Software and Technology (Parramatta, NSW, Australia) (VRST '19). Association for Computing Machinery, New York, NY, USA, Article 102, 2 pages.
[34] Daniel Vogel and Ravin Balakrishnan. 2005. Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (Seattle, WA, USA) (UIST '05). Association for Computing Machinery, New York, NY, USA.
[35] Bob G. Witmer and Michael J. Singer. 1998. Measuring Presence in Virtual Environments: A Presence Questionnaire. Presence: Teleoperators and Virtual Environments 7, 3 (June 1998), 225–240.
[36] Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins. 2011. The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only Anova Procedures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vancouver, BC, Canada) (CHI '11). Association for Computing Machinery, New York, NY, USA, 143–146.
[37] Haijun Xia, Sebastian Herscher, Ken Perlin, and Daniel Wigdor. 2018. Spacetime: Enabling Fluid Individual and Collaborative Editing in Virtual Reality. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (Berlin, Germany) (UIST '18). Association for Computing Machinery, New York, NY, USA, 853–866.
[38] Kotaro Yamaguchi, Ginga Kato, Yoshihiro Kuroda, Kiyoshi Kiyokawa, and Haruo Takemura. 2016. A Non-grounded and Encountered-type Haptic Display Using a Drone. In Proceedings of the 2016 Symposium on Spatial User Interaction (Tokyo, Japan) (SUI '16). Association for Computing Machinery, New York, NY, USA.
[39] André Zenner and Antonio Krüger. 2019. Drag:On: A Virtual Reality Controller Providing Haptic Feedback Based on Drag and Weight Shift. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI '19). Association for Computing Machinery, New York, NY, USA, 1–12.
[40] Yang Zhang, Wolf Kienzle, Yanjun Ma, Shiu S. Ng, Hrvoje Benko, and Chris Harrison. 2019. ActiTouch: Robust Touch Detection for On-Skin AR/VR Interfaces. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (New Orleans, LA, USA) (UIST '19). Association for Computing Machinery, New York, NY, USA, 1151–1159.
... Another approach is to use robots to create haptic experiences (for example, with stationary robot arms [1] or actuated mobile robots [19]). In more recent work, researchers also investigated the use of drones flying around the user to provide haptics [2,23]. An example of this is Flyables, which uses 3D-printed user interface components mounted to quadcopters that fly to specific locations to offer haptics for their virtual counterparts [2]. ...
... In more recent work, researchers also investigated the use of drones flying around the user to provide haptics [2,23]. An example of this is Flyables, which uses 3D-printed user interface components mounted to quadcopters that fly to specific locations to offer haptics for their virtual counterparts [2]. However, it remains a challenge to use these approaches in AR as, in this case, users still perceive their physical environment (i.e., users would see the humans and robots used to create the haptics). ...
... (1) As haptic input requires the additional step of first linking the virtual counterpart, we expect it to be slower. (2) We hypothesize that haptic input will result in fewer input errors than MRTK input because we directly track the input and not the user's hands (translated to input), which could introduce additional noise. (3) We predict that haptic input results in a better user experience because it contributes additional haptic sensations and is potentially familiar to the participants. ...
Augmented Reality (AR) technology enables users to superpose virtual content onto their environments. However, interacting with virtual content while mobile often requires users to perform interactions in mid-air, resulting in a lack of haptic feedback. Hence, in this work, we present the ARm Haptics system, which is worn on the user's forearm and provides 3D-printed input modules, each representing well-known interaction components such as buttons, sliders, and rotary knobs. These modules can be changed quickly, thus allowing users to adapt them to their current use case. After an iterative development of our system, which involved a focus group with HCI researchers, we conducted a user study to compare the ARm Haptics system to hand-tracking-based interaction in mid-air (baseline). Our findings show that using our system results in significantly lower error rates for slider and rotary input. Moreover, use of the ARm Haptics system results in significantly higher pragmatic quality and lower effort, frustration, and physical demand. Following our findings, we discuss opportunities for haptics worn on the forearm.
... Step 1 pairs each FLS with its assigned illumination point and their distance, and computes the total distance travelled if the FLS flies to a charging station and a replacement FLS is deployed by the dispatcher closest to that point. If the total distance is smaller than the direct distance, the FLS is scheduled to fly back to a charging station. Otherwise, Step 2 computes the flight path from the FLS to its point and adds this path to the plan. ...
... Finally, a swarm of FLSs may implement encounter-type haptic interactions [35] by generating force back against a user touch [1,2,6,21,26]. This will enable a user to see virtual objects as illuminations without wearing glasses and to touch them without wearing gloves [19]. ...
... Distance may be replaced with the amount of energy required or the flight time to consider different objectives. ...
This paper presents techniques to display 3D illuminations using Flying Light Specks, FLSs. Each FLS is a miniature (hundreds of micrometers) sized drone with one or more light sources to generate different colors and textures with adjustable brightness. It is network enabled with a processor and local storage. Synchronized swarms of cooperating FLSs render illumination of virtual objects in a pre-specified 3D volume, an FLS display. We present techniques to display both static and motion illuminations. Our display techniques consider the limited flight time of an FLS on a fully charged battery and the duration of time to charge the FLS battery. Moreover, our techniques assume failure of FLSs is the norm rather than an exception. We present a hardware and a software architecture for an FLS-display along with a family of techniques to compute flight paths of FLSs for illuminations. With motion illuminations, one technique (ICF) minimizes the overall distance traveled by the FLSs significantly when compared with the other techniques.
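The charging decision described in the excerpts above can be read as a simple distance comparison: retire an FLS to a charger only when the combined detour (FLS to charger, plus a replacement from the nearest dispatcher to the target) is shorter than the FLS's direct flight path. The sketch below is a hypothetical reading in Python; the names (`should_recharge`, `Point`) and the threshold semantics are assumptions, not the paper's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float

def dist(a: Point, b: Point) -> float:
    """Euclidean distance between two 3D points."""
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))

def should_recharge(fls: Point, charger: Point, dispatcher: Point,
                    target: Point, direct_path_len: float) -> bool:
    # Detour cost: the depleted FLS flies to a charging station while a
    # replacement FLS is deployed from the nearest dispatcher to the target.
    detour = dist(fls, charger) + dist(dispatcher, target)
    # Recharge only if the detour beats flying the direct path.
    return detour < direct_path_len
```

As the excerpt notes, the distance metric could be swapped for energy or flight time without changing the structure of the decision.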
Mid-air haptics allow bare-hand tactile stimulation; however, they have a constrained workspace, making them unsuitable for room-scale haptics. We present a novel approach to rendering mid-air haptic sensations in a large rendering volume by turning a static array into a dynamic array that follows the user's hand. We used a 6-DOF robot to drive a haptic ultrasound array over a large 3D space. Our system enables rendering room-scale mid-air experiences while preserving bare-hand interaction, thus providing tangibility for virtual environments. To evaluate our approach, we performed three evaluations. First, we performed a technical system evaluation, showcasing the feasibility of such a system. Next, we conducted three psychophysical experiments, showing that, with high likelihood, the motion does not affect the user's perception. Lastly, we explored seven use cases that showcase our system's potential in a user study. We discuss challenges and opportunities in how large-scale mid-air haptics can contribute toward room-scale haptic feedback. Thus, with our system, we contribute to general haptic mid-air feedback on a large scale.
Conference Paper
We present DronOS, a rapid prototyping framework that can track, control, and automate drone routines. Previous research in the domain of Human-Drone Interaction relied on hardware or proprietary vendor-dependent libraries that had to be programmed exclusively for specific use cases. This forces users to stick with one drone manufacturer or model and limits their ability to transfer drone control logic to other drones. To overcome these issues, our framework uses low-cost off-the-shelf hardware and applies to a variety of readily available or self-crafted drones. To assess the usability of DronOS, we evaluate three drone programming modes: Unity Scripting, Vive Scripting, and Vive Realtime. We find that Vive Scripting required the least subjective workload when programming drone routines, while Unity Scripting yielded the highest accuracy and Vive Realtime the shortest task completion time. We anticipate requirements for drone prototyping frameworks that target novice and expert users as operators.
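To illustrate what a scripted drone routine in the spirit of DronOS's Unity Scripting mode might look like, here is a hypothetical Python sketch; the `Drone` class and its methods are stand-ins for illustration, not the framework's real (Unity/C#) API.

```python
class Drone:
    """Stand-in for a tracked drone; records every commanded waypoint."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.log = []

    def fly_to(self, target):
        # In a real framework this would stream setpoints to the
        # flight controller; here we just teleport and log.
        self.position = target
        self.log.append(target)

def run_routine(drone, waypoints):
    """Visit each waypoint in order and return the flown path."""
    for wp in waypoints:
        drone.fly_to(wp)
    return drone.log
```

The point of such a framework is that the same routine could be replayed on any drone model that the tracking and control layer supports.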
Conference Paper
We propose a concept of a novel interaction strategy for providing rich haptic feedback in Virtual Reality (VR), in which each of the user's fingers is connected to a micro-quadrotor by a wire. The described technology represents the first flying wearable haptic interface. The solution can potentially deliver high-resolution force feedback to each finger during fine motor interaction in VR. The tips of the tethers are connected to the centers of the quadcopters' undersides. This increases flight stability and strengthens the interaction forces, which allows smaller drones to be used.
Conference Paper
We propose SlingDrone, a novel Mixed Reality interaction paradigm that utilizes a micro-quadrotor as both a pointing controller and an interactive robot with a slingshot motion type. The drone attempts to hover at a given position while the human pulls it in the desired direction using a hand grip and a leash. Based on the displacement, a virtual trajectory is defined. To allow for intuitive and simple control, we use virtual reality (VR) technology to trace the path of the drone based on the displacement input. The user receives force feedback propagated through the leash. Force feedback from SlingDrone, coupled with the visualized trajectory in VR, creates an intuitive and user-friendly pointing device. When the drone is released, it follows the trajectory that was shown in VR. An onboard payload (e.g., a magnetic gripper) can perform various scenarios for real interaction with the surroundings, e.g., manipulation or sensing. Unlike the HTC Vive controller, SlingDrone does not require handheld devices, so it can be used as a standalone pointing technology in VR.
Conference Paper
Quadcopters have been used as hovering encountered-type haptic devices in virtual reality. We suggest that quadcopters can facilitate rich haptic interactions beyond force feedback by appropriating physical objects and the environment. We present HoverHaptics, an autonomous safe-to-touch quadcopter and its integration with a virtual shopping experience. HoverHaptics highlights three affordances of quadcopters that enable these rich haptic interactions: (1) dynamic positioning of passive haptics, (2) texture mapping, and (3) animating passive props. We identify inherent challenges of hovering encountered-type haptic devices, such as their limited speed, inadequate control accuracy, and safety concerns. We then detail our approach for tackling these challenges, including the use of display techniques, visuo-haptic illusions, and collision avoidance. We conclude by describing a preliminary study (n = 9) to better understand the subjective user experience when interacting with a quadcopter in virtual reality using these techniques.
Conference Paper
We present VRHapticDrones, a system utilizing quadcopters as levitating haptic feedback proxy. A touchable surface is attached to the side of the quadcopters to provide unintrusive, flexible, and programmable haptic feedback in virtual reality. Since the users' sense of presence in virtual reality is a crucial factor for the overall user experience, our system simulates haptic feedback of virtual objects. Quadcopters are dynamically positioned to provide haptic feedback relative to the physical interaction space of the user. In a first user study, we demonstrate that haptic feedback provided by VRHapticDrones significantly increases users' sense of presence compared to vibrotactile controllers and interactions without additional haptic feedback. In a second user study, we explored the quality of induced feedback regarding the expected feeling of different objects. Results show that VRHapticDrones is best suited to simulate objects that are expected to feel either light-weight or have yielding surfaces. With VRHapticDrones we contribute a solution to provide unintrusive and flexible feedback as well as insights for future VR haptic feedback systems.
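One way to picture the dynamic positioning that the VRHapticDrones abstract describes: offset the quadcopter's hover setpoint along the virtual surface normal so that the touchable plate mounted on its side coincides with the virtual surface. The sketch below is purely illustrative; the function name and the `plate_offset` parameter are assumptions, not the system's actual control code.

```python
def drone_setpoint(surface_point, surface_normal, plate_offset):
    """Return a hover centre such that a plate mounted `plate_offset`
    metres from the drone's centre lies on the virtual surface."""
    # Normalise the surface normal so the offset has a metric meaning.
    n = sum(c * c for c in surface_normal) ** 0.5
    unit = [c / n for c in surface_normal]
    # Back the drone off along the normal by the plate offset.
    return tuple(p + plate_offset * u for p, u in zip(surface_point, unit))
```

A real system would feed such setpoints to the quadcopter's position controller continuously as the user's hand approaches the virtual object.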
Encountered-Type Haptic Displays (ETHDs) provide haptic feedback by positioning a tangible surface for the user to encounter. This permits users to freely elicit haptic feedback from a surface during a virtual simulation. ETHDs differ from most current haptic devices, which rely on an actuator always in contact with the user. This survey paper describes and analyzes the different research efforts carried out in this field. In addition, this review analyzes the ETHD literature concerning definitions, history, hardware, the haptic perception processes involved, interactions, and applications. The paper proposes a formal definition of ETHDs, a taxonomy for classifying hardware types, and an analysis of the haptic feedback used in the literature. Taken together, this survey intends to encourage future work in the ETHD field.
Conference Paper
Drones have brought many benefits to our lives, and their use is growing at a rapid rate. Many countries have drone flight restriction rules; however, the safety of drone operators and bystanders, and the protection of drones against damage, require improvement. Here, we propose a novel design of deformable propellers inspired by dragonfly wings. The structure of these propellers includes a flexible segment similar to the nodus on a dragonfly wing. This flexible segment can bend, twist, and even fold upon collision, absorbing the impact force and protecting the propeller from damage. Part of the leading edge of the propeller consists of a pliable silicone rubber surface that absorbs impact forces and reduces blade sharpness. The propeller, which is approximately 10 inches long, can generate a thrust force of nearly 1.3 N at a maximum velocity of about 3200 rpm. Results of blade sharpness tests showed that the deformable propeller was safer than a rigid propeller. After deformation upon collision, the propeller returns to its original form and works normally within 0.4 seconds.
Conference Paper
Contemporary AR/VR systems use in-air gestures or handheld controllers for interactivity. This overlooks the skin as a convenient surface for tactile, touch-driven interactions, which are generally more accurate and comfortable than free-space interactions. In response, we developed ActiTouch, a new electrical method that enables precise on-skin touch segmentation by using the body as an RF waveguide. We combine this method with computer vision, enabling a system with both high tracking precision and robust touch detection. Our system requires no cumbersome instrumentation of the fingers or hands, needing only a single wristband (e.g., a smartwatch) and sensors integrated into an AR/VR headset. We quantify the accuracy of our approach through a user study and demonstrate how it can enable touchscreen-like interactions on the skin.
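The combination of electrical sensing and computer vision that ActiTouch describes can be summarized as a conjunction of two detectors: the vision pipeline localizes the fingertip, while the body-as-waveguide RF signal confirms actual skin contact. Below is a hypothetical fusion rule; the thresholds and names are invented for illustration and are not from the paper.

```python
def on_skin_touch(rf_amplitude: float, fingertip_to_skin_mm: float,
                  rf_threshold: float = 0.5,
                  dist_threshold_mm: float = 10.0) -> bool:
    # Vision tells us *where* the finger is; the electrical channel gives
    # a robust yes/no for contact. Register a touch only when both agree,
    # which suppresses false positives from hovering fingers.
    return (rf_amplitude >= rf_threshold
            and fingertip_to_skin_mm <= dist_threshold_mm)
```

The conjunction is what yields both high tracking precision (from vision) and robust touch detection (from the RF channel).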
Conference Paper
Standard controllers for virtual reality (VR) lack sophisticated means to convey a realistic, kinesthetic impression of size, resistance, or inertia. We present the concept and implementation of Drag:on, an ungrounded shape-changing VR controller that provides dynamic passive haptic feedback based on drag, i.e., air resistance, and weight shift. Drag:on leverages the airflow occurring at the controller during interaction. By dynamically adjusting its surface area, the controller changes the drag and rotational inertia felt by the user. In a user study, we found that Drag:on can provide distinguishable levels of haptic feedback. Our prototype increases the haptic realism in VR compared to standard controllers and, when rotated or swung, improves the perception of virtual resistance. By this, Drag:on provides haptic feedback suitable for rendering different virtual mechanical resistances, virtual gas streams, and virtual objects differing in scale, material, and fill state.