A Non-grounded and Encountered-type Haptic Display
Using a Drone
Kotaro Yamaguchi, Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan (yamaguchi.koutarou@lab.ime.cmc.osaka-u.ac.jp)
Ginga Kato, Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan (katou.ginga@lab.ime.cmc.osaka-u.ac.jp)
Yoshihiro Kuroda, Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan (ykuroda@bpe.es.osaka-u.ac.jp)
Kiyoshi Kiyokawa, Cybermedia Center, Osaka University, 1-32 Machikaneyama, Toyonaka, Osaka 560-0043, Japan (kiyo@ime.cmc.osaka-u.ac.jp)
Haruo Takemura, Cybermedia Center, Osaka University, 1-32 Machikaneyama, Toyonaka, Osaka 560-0043, Japan (takemura@ime.cmc.osaka-u.ac.jp)
ABSTRACT
Encountered-type haptic displays recreate realistic haptic sensations by producing physical surfaces on demand for a user to explore directly with his or her bare hands. However, conventional encountered-type devices are fixed in the environment, so their working volume is limited. To address this limitation, we investigate the potential of an unmanned aerial vehicle (drone) as a flying motion base for a non-grounded encountered-type haptic device. As a lightweight end-effector, we use a piece of paper hung from the drone to represent the reaction force. Though the paper is limp, its shape is held stable by the strong airflow induced by the drone itself. We conducted two experiments to evaluate the prototype system. The first experiment evaluates the reaction force presentation by measuring the contact pressure between the user and the end-effector. The second evaluates the usefulness of the system through a user study in which participants were asked to draw a straight line on a virtual wall represented by the device.
Keywords
virtual reality; haptic display; encountered-type device; non-
grounded-type; unmanned aerial vehicle
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full cita-
tion on the first page. Copyrights for components of this work owned by others than
ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or re-
publish, to post on servers or to redistribute to lists, requires prior specific permission
and/or a fee. Request permissions from permissions@acm.org.
SUI ’16, October 15-16, 2016, Tokyo, Japan
© 2016 ACM. ISBN 978-1-4503-4068-7/16/10...$15.00
DOI: http://dx.doi.org/10.1145/2983310.2985746
Figure 1: An example of haptic feedback by our device in (a) the real and (b) virtual environments.
1. INTRODUCTION
Haptic feedback systems can be classified into three types according to their mechanical grounding configuration: the wearing type, the grip type, and the target type. Wearing-type haptic devices represent reaction force by attaching the device to the body of the user. Grip-type haptic devices perform as a haptic display by having the user
grip a tool-shaped device such as a pen. Target-type haptic devices represent the target surface by changing the shape, material properties, and/or location of the display itself. Wearing-type and grip-type haptic devices may cause a sense of incongruity in presence and a burden on the user, because these devices are always in contact with the user's body. With target-type haptic devices, on the other hand, the user rarely feels such a burden. However, it is difficult to present a virtual object with an arbitrary shape, material, and position in the real world.
One approach to realizing a target-type haptic device is called the encountered type. Encountered-type haptic displays present haptic feedback by moving the end-effector to the surface of the virtual object that the user is about to touch. Because the devices themselves move, encountered-type haptic devices have the advantage that the user feels no contact force unless he or she actually touches a virtual object. However, they have the disadvantage of a limited work space, because conventional encountered-type haptic devices need to be grounded.
We intend to develop a novel encountered-type haptic device that does not limit the work space. In this study, we propose attaching a lightweight flat object to an unmanned aerial vehicle (drone) and a force generation mechanism that uses the drone's own airflow, which together realize a non-grounded encountered-type haptic display. Figure 1 shows an example of haptic feedback by our haptic device in the real environment (RE, (a)) and a virtual environment (VE, (b)).
2. RELATED WORK
In this section, we introduce conventional grounded-type haptic displays and some important studies, as well as a work-support device using a drone. When a haptic device applies force to a user to represent the sensation of an external force, a reaction force is also generated. Most conventional haptic devices are grounded to the environment to resist this reaction force from the user. However, grounded haptic displays have the disadvantage of a limited work space due to the fixed position of the device. Therefore, in recent years, numerous non-grounded haptic devices have been developed [1][2]. Rekimoto developed Traxion, which represents a virtual force by controlling an asymmetric waveform of vibration with an electromagnetic coil supported by leaf springs [3]. Traxion measures 7.5 mm × 35.0 mm × 5.0 mm and weighs 5.2 g. Yano et al. developed a device that performs non-grounded torque display using the gyro effect [4]. When a force that changes the rotation axis is applied to a disk rotating at high speed, a force perpendicular to the applied force is generated about the rotation axis. These devices can display a virtual force or an actual torque within a large work space. However, they cannot display actual translational forces.
Even when there is no contact with a virtual object, most haptic devices other than encountered-type devices cannot avoid displaying forces, because a part of the haptic display, e.g., an end-effector, is grasped or worn by the user. In contrast, an encountered-type haptic device realizes an ideal non-contact state, because no force arises unless the user touches a virtual object [5][6]. When the user is about to touch the object, the device moves toward the position of the virtual object surface in a forestalling manner so that a force is displayed to the user upon actual touch. Yokokohji et al. developed a trajectory planning system for an encountered-type haptic device to represent multiple virtual objects in three-dimensional space [7][8]. Yafune et al. proposed displaying a couple of virtual objects at the same time using an encountered-type haptic device [9]. The above-mentioned conventional encountered-type haptic devices are grounded in space; therefore, their work space is limited.
A drone is an unmanned aerial vehicle that can fly and move under remote control. It is attractive not only for aerial photography and transport, but also as an interaction medium with humans. Some devices using a drone have been developed to support a user's work. Agrawal et al. developed a drone to assist a person's daily life in a room [10]. A table attached to the top surface of the drone supports writing text, and the drone's bottom surface controls the illumination in a room. Weise et al. used a commercial head-mounted display (HMD), the Oculus Rift, to enable the user to control a drone remotely from a first-person view [11]. The user can watch the remote scene captured by the camera attached to the drone through the HMD, and can operate the drone using a controller. These studies focus on the positioning and the operability of the drone.
We focus on developing an encountered-type haptic display using a drone. The non-grounded mechanism may display only a limited force in magnitude due to the lack of fixation. However, we address this issue by taking advantage of the drone's own strong airflow to represent the reaction force from a virtual object, such as the virtual creature illustrated in Fig. 1.
3. DRONE-BASED NON-GROUNDED HAPTIC DISPLAY
A lightweight flat object is attached to a drone as an end-effector, as shown in Fig. 2(a). The reaction force is displayed parallel to the ground when a user touches the end-effector with a grasping object, as shown in Fig. 2(b). The strong airflow induced by the drone itself makes it possible to display force even with a lightweight and flexible object, e.g., a sheet of paper. The user can then feel the contact from the virtual object.
When the user is about to touch the virtual wall, the proposed haptic display moves toward the corresponding position in a forestalling manner and displays the reaction force to the user upon actual touch. In our system, for safety, the user touches a virtual object with a grasping object (see Fig. 1(a) and Fig. 3). With a safer drone design, it would be possible to touch the end-effector directly. The system requires the position and orientation information of the haptic display and of the grasping object. In the current implementation, a motion capture system is used to acquire this information and to synchronize their positions in the VE. The process flow is as follows:
1. Acquire the positions of the drone and the grasping
object from a motion capture system.
2. Update their positions in VE.
3. Estimate the contact position on the virtual wall where the virtual grasping object might touch.
4. Move the drone to the corresponding estimated position in the RE. The user receives the reaction force when the collision between the grasping object and the end-effector of the drone occurs.
Figure 2: Design and principle of our haptic display: (a) an end-effector attached to a drone; (b) force display using the drone's own airflow.
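The four steps of the process flow can be sketched as a simple per-frame control loop. The following is a minimal illustration, not the actual implementation: the function names, the wall modeled as a plane at a fixed x-coordinate, and the proportional velocity command are all assumptions made for the sketch.

```python
import numpy as np

def estimate_contact_point(tip_pos, wall_x):
    """Step 3: estimate where the grasping-object tip would touch the
    virtual wall, modeled here as the plane x = wall_x. The estimated
    contact point keeps the tip's y and z coordinates."""
    return np.array([wall_x, tip_pos[1], tip_pos[2]])

def control_step(drone_pos, tip_pos, wall_x, gain=0.5):
    """One frame of steps 1-4: given the mocap positions of the drone and
    the grasping-object tip, estimate the contact position and return a
    proportional velocity command that moves the drone toward it."""
    target = estimate_contact_point(tip_pos, wall_x)
    command = gain * (target - drone_pos)  # move toward the target in RE
    return target, command
```

In the real system the positions would come from the motion capture stream and the command would be sent to the drone's flight controller each frame.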
The AR.Drone 2.0 by Parrot is used as the drone in the proposed device. Unity 4 by Unity Technologies is used to construct the VE and to control the drone. Stereo images of the VE are sent to the HMD, an Oculus Rift DK1 by Oculus VR, through the Oculus Unity Integration. The positions of the drone and the grasping object are measured by the motion capture system OptiTrack with Motive by NaturalPoint. The measured positions are broadcast to the local host and transmitted to Unity. The system controls the drone position using a Unity project, RiftDrone [11].
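The data path, with the motion capture software broadcasting rigid-body positions on the local host for the control software to consume, could look roughly like the listener below. This is a hedged sketch: the plain-text "id,x,y,z" UDP packet format is purely hypothetical (the real system streams OptiTrack data into Unity via RiftDrone), and the port number is arbitrary.

```python
import socket

def parse_packet(data: bytes):
    """Parse a hypothetical 'id,x,y,z' UDP packet into (id, (x, y, z))."""
    body_id, x, y, z = data.decode().strip().split(",")
    return body_id, (float(x), float(y), float(z))

def listen(port=5005, n_packets=1):
    """Receive n_packets position updates broadcast on the local host and
    return the latest position per rigid body."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    positions = {}
    for _ in range(n_packets):
        data, _addr = sock.recvfrom(1024)
        body_id, pos = parse_packet(data)
        positions[body_id] = pos
    sock.close()
    return positions
```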
4. MEASUREMENT
We investigated whether or not the proposed haptic device can produce a force perceptible to a user. Specifically, the effect of the drone's airflow is examined, because the proposed mechanism uses the airflow to display the reaction force.
The triaxial force sensor MFS20-025 by Nippon Liniax was used to measure the force. Its maximum measurable force is 50 N. The obtained signal was amplified 20 times with an inverting amplification circuit, because the measured force is less than 1.0 N. The amplified signal was converted to a digital signal and recorded on a PC. The force sensor was attached to the grasping object. The drone was controlled at a fixed position, and the experimenter (one of the authors) pushed the end-effector of the proposed haptic device with the grasping object. The pushing distance was about 100 mm in depth from the initial surface position of the end-effector. The force perpendicular to the surface of the end-effector was measured at 100 Hz for a short period. The force was measured with and without the presence of the airflow. In the latter case, the drone was not flying
Table 1: Average and standard deviation of the measured forces with and without airflow
airflow   average (N)    standard deviation (N)
ON        1.18 × 10^-1   3.6 × 10^-2
OFF       3.6 × 10^-2    1.3 × 10^-2
Figure 3: Experimental setup.
and placed on a tripod. We performed the pushing task 10
times in each condition.
For calibration, the average of 100 consecutive samples was set as the baseline before each trial. A moving-average filter with a span of 5 samples was applied to the measured data. Table 1 shows the average forces over the 10 trials. The result shows that the reaction force produced by the airflow induced by the proposed haptic device is 0.118 ± 0.036 N, which is weak but well perceptible. In contrast, the measured force without the airflow is 0.036 ± 0.012 N, which is too weak to perceive. A one-tailed t-test found that the force with the airflow was significantly greater than the force without it (t(11) = 6.80, p < 0.001).
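The calibration and filtering steps described above, a baseline taken from 100 samples before each trial followed by a 5-sample moving average, amount to the following. This is a sketch of the described processing, not the authors' actual analysis code, and the function names are our own.

```python
import numpy as np

def calibrate(trace, baseline_n=100):
    """Subtract the baseline, taken as the mean of the first baseline_n
    samples recorded before the trial, from the rest of the trace."""
    baseline = np.mean(trace[:baseline_n])
    return trace[baseline_n:] - baseline

def moving_average(x, span=5):
    """Moving-average filter with the paper's span of 5 samples; the
    'valid' mode drops the edge samples that lack a full window."""
    kernel = np.ones(span) / span
    return np.convolve(x, kernel, mode="valid")
```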
5. SUBJECTIVE STUDY
5.1 Experimental conditions
We investigated whether or not a user can stably draw a straight line in midair using the proposed haptic display. Four participants (all right-handed males; mean age 24.0, SD 2.16) were recruited to take part in the experiment. They wore the HMD and held the grasping object. They were requested to move the grasping object following a marker along a horizontal straight line, 1 m in length, drawn on a virtual wall presented about 1 m in front of the user. The marker moved at a constant speed (10 cm/s) from left to right or from right to left. In a session, they repeated this line-drawing task 10 times for each direction in randomized order, and they performed the session twice, with and without the proposed haptic display, in randomized order. The position and orientation of the grasping object were reflected at 60 frames per second in the VE, and its tip position was used to draw a line on the virtual wall. These positions were recorded 10 times per second for analysis.
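As a rough sketch of the task geometry, the marker motion and the logged deviation measure could be modeled as follows. The function names and the parameterization of the wall as the plane x = wall_x are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def marker_position(t, direction="right", speed=0.10, length=1.0):
    """Marker position (m) along the 1 m line at time t (s), moving at
    10 cm/s; 'right' runs from 0 to 1 m, 'left' runs the reverse."""
    s = min(speed * t, length)
    return s if direction == "right" else length - s

def wall_deviation(tip_positions, wall_x=0.0):
    """Deviation of the logged tip positions (rows of x, y, z) from the
    virtual wall plane x = wall_x, the measure analyzed in Sec. 5.2."""
    return np.abs(np.asarray(tip_positions)[:, 0] - wall_x)
```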
Figure 4: Average and standard error of the deviation of the tip of the grasping object from the surface of the virtual wall; rightward lines with (a) and without (b) the haptic display, leftward lines with (c) and without (d) the haptic display. (**: p < 0.01, ***: p < 0.001)
5.2 Results and Discussions
Figure 4 shows the average and standard error of the deviation of the tip of the grasping object from the surface of the virtual wall for each of the four combinations of conditions. A two-way ANOVA found main effects of both the haptic display condition (F(1, 15996) = 215.18, p < 0.001) and the line direction (F(1, 15996) = 143.58, p < 0.001). A post-hoc analysis using Bonferroni correction revealed that the haptic display was helpful in drawing a straight line in midair regardless of the direction (rightward, p < 0.001; leftward, p < 0.01). It also found that rightward lines were far more accurate than leftward ones (p < 0.001) when the haptic display was used. This is presumably because drawing a rightward line is more natural given our writing skills and the fact that all participants were right-handed, so the physical support of the tool tip on the virtual wall was more helpful. Drawing a leftward line was probably difficult even with the physical support. It is also worth noting that the precision of positioning the drone was insufficient, which may also have contributed to the amount of deviation when the haptic display was used.
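The Bonferroni-corrected post-hoc comparisons reported above can be illustrated with a small helper. This is a generic sketch of the correction itself, not the authors' analysis script, and the example p-values below are made up.

```python
import numpy as np

def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for a family of pairwise comparisons: scale
    each p-value by the number of comparisons (capped at 1.0) and test
    the corrected values against alpha."""
    p = np.asarray(p_values, dtype=float)
    corrected = np.minimum(p * len(p_values), 1.0)
    return corrected, corrected < alpha
```

For three comparisons, a raw p-value must fall below alpha/3 ≈ 0.0167 to remain significant after correction.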
6. CONCLUSIONS
In this study, we investigated the potential of an unmanned aerial vehicle (drone) as a flying motion base for a non-grounded encountered-type haptic device. We proposed a force generation mechanism that uses a sheet of paper as a lightweight end-effector together with the airflow of the drone itself. The force measurement results showed that a perceptible force was successfully presented with the drone's airflow by the proposed method, and that the force was increased by the airflow with statistical significance. We confirmed the effectiveness of our proposed device and force generation mechanism as an encountered-type haptic display. The results of the subjective line-drawing study suggested that the haptic display was effective in supporting drawing a line in midair. Future work includes improving the drone's position control for more stable and accurate haptic feedback, and conducting further user studies involving more complicated drawing tasks. Self-localization of the drone using visual simultaneous localization and mapping (SLAM) methods will also be investigated, which would eliminate the need for a dedicated tracking system.
Acknowledgments
This work was supported by JSPS KAKENHI Grant Num-
bers JP15K12082 and JP15K12083.
7. REFERENCES
[1] S. Kamuro, K. Minamizawa, N. Kawakami, and S. Tachi. An
Ungrounded Pen-Shaped Kinesthetic Display: Device
construction and Applications. In Proceedings of the IEEE
World Haptics, pages 557–562, 2011.
[2] M. Hirose, K. Hirota, T. Ogi, N. Kakehi, M. Saito, and
M. Nakashige. Hapticgear: the Development of a Wearable
Force Display System for Immersive Projection Displays. In
Proceedings of the IEEE Virtual Reality Conference 2001,
pages 123–129, 2001.
[3] J. Rekimoto. Traxion: a Tactile Interaction Device with
Virtual Force Sensation. In Proceedings of the ACM
SIGGRAPH 2014 Emerging Technologies, page 25, 2014.
[4] H. Yano, M. Yoshie, and H. Iwata. Development of a
Non-Grounded Haptic Interface Using the Gyro Effect. In
Proceedings of the 11th Symposium on Haptic Interfaces for
Virtual Environment and Teleoperator Systems (HAPTICS
’03), pages 32–39, 2003.
[5] T. Furukawa, K. Inoue, T. Takubo, and T. Arai.
Encountered-type Visual Haptic Display Using Flexible Sheet.
In Proceedings of the IEEE International Conference on
Robotics and Automation 2007, pages 479–484, 2007.
[6] S. Nakagawara, H. Kajimoto, N. Kawakami, S. Tachi, and
I. Kawabuchi. An Encounter-type Multi-Fingered Master Hand
Using Circuitous Joints. In Proceedings of the IEEE
International Conference on Robotics and Automation 2005,
pages 2667–2672, 2005.
[7] Y. Yokokohji, N. Muramori, Y. Sato, and T. Yoshikawa.
Designing an Encountered-type Haptic Display for Multiple
Fingertip Contacts based on the Observation of Human
Grasping Behaviors. In Proceedings of the 12th International
Symposium on Haptic Interfaces for Virtual Environment
and Teleoperator Systems 2004 (HAPTICS ’04), pages 66–73,
March 2004.
[8] Y. Yokokohji, J. Kinoshita, and T. Yoshikawa. Path Planning
for Encountered-type Haptic Devices That Render Multiple
Objects in 3D Space. In Proceedings of the IEEE Virtual
Reality 2001, pages 271–278, 2001.
[9] M. Yafune and Y. Yokokohji. Haptically Rendering Different
Switches Arranged on a Virtual Control Panel by Using an
Encountered-type Haptic Device. In Proceedings of the IEEE
World Haptics Conference (WHC) 2011, pages 551–556, June
2011.
[10] H. Agrawal, S. Leigh, and P. Maes. L’evolved: Autonomous
and Ubiquitous Utilities as Smart Agents. In Proceedings of
the ACM UbiComp 2015, pages 487–491, 2015.
[11] M. Weise. RiftDrone.
https://github.com/scopus777/RiftDrone. accessed 1 July
2016.