Three-in-one: Levitation, Parametric
Audio, and Mid-Air Haptic Feedback
Gözel Shakeri
Euan Freeman
Glasgow Interactive Systems Section,
University of Glasgow
g.shakeri.1@research.gla.ac.uk
euan.freeman@glasgow.ac.uk
William Frier
Michele Iodice
Benjamin Long
Orestis Georgiou
Ultrahaptics Ltd.
first.last@ultrahaptics.com
Carl Andersson
Chalmers University of Technology
carl.andersson@chalmers.se
ABSTRACT
Ultrasound enables new types of human-computer interfaces, ranging from auditory and haptic
displays to levitation (visual). We demonstrate these capabilities with an ultrasonic phased array that
allows users to interactively manipulate levitating objects with mid-air hand gestures whilst also
receiving auditory feedback via highly directional parametric audio, and haptic feedback via focused
ultrasound onto their bare hands. Therefore, this demo presents the first ever ultrasound rig which
conveys information to three different sensory channels and levitates small objects simultaneously.
CCS CONCEPTS
• Human-centered computing → Haptic devices; Auditory feedback; Gestural input.
Figure 1: A portable and self-contained arrangement of ultrasonic transducers held together by laser-cut perspex and 3D-printed parts. This rig is used to demonstrate levitation, parametric audio and mid-air haptic feedback simultaneously, and can receive user input through a Leap Motion controller.
KEYWORDS
Levitation; Ultrasound; Gestural controllers; Interface design

CHI'19 Extended Abstracts, May 4–9, 2019, Glasgow, Scotland, UK
© 2019 Copyright held by the owner/author(s).
This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI'19 Extended Abstracts), May 4–9, 2019, Glasgow, Scotland, UK, https://doi.org/10.1145/3290607.3313264.
INTRODUCTION
Since the advent of computers, scientists and Sci-Fi enthusiasts have envisioned the fusion of the virtual and physical worlds in an "Ultimate Display" [7]. This display would ideally be "a room within which the computer can control the existence of matter", shaping and reshaping matter to help us understand the shapes of objects, enable see-through and grasp-through objects, and provide multi-sensory feedback to enhance the experience. One possible approach towards such a display is facilitated by ultrasound. Ultrasound enables levitation of multiple different particles (e.g. polystyrene beads, liquid drops), which are computer-manipulated to display different shapes in mid-air [5]. Further, using similar ultrasonic phased arrays, highly directional and steerable auditory feedback (i.e. parametric audio) [6] as well as haptic feedback [1] can be generated.
DEMO CONTRIBUTION
This demo paper presents, for the first time, a multimodal interaction (visual, auditory, and tactile) in which the user controls in-air levitating particles with hand gestures. Specifically, the ultrasonic levitation rig shown in Figure 1 provides directional auditory feedback during the gesture interaction and tactile feedback to enhance the sense of agency and improve the user experience, enabling a much broader range of applications than was previously possible.
Figure 2: A levitating bead traced a heart-shaped path along a user-defined hand gesture. The images of a video were added together to produce this LeviPainting.
Figure 3: A levitating bead traced a letter-A-shaped path along a user-defined hand gesture. The images of a video were added together to produce this LeviPainting.
BACKGROUND
Ultrasound can be used for a multitude of interactive applications and is becoming more accessible to
designers and researchers through projects like Ultraino [4] and companies like Ultrahaptics Ltd.
Levitation. Acoustic waves can levitate particles of a wide range of sizes and materials [2]. There are many ways of achieving this, including the generation of acoustic standing waves such that particles can 'sit' on the nodes of the waves, or acoustic traps which apply radiation forces to particles, enabling them to levitate. The latter requires an electronically controlled phased array of ultrasonic transducers, which allows for more stable and advanced manipulation of the levitated particles [4].
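As a rough illustration of this principle (not the actual control code of Ultraino or the Ultrahaptics devices), the sketch below computes per-transducer phases that focus a 40 kHz field at a point, then adds a π phase offset to half of the elements to form a simple twin-trap-like pattern. The array layout, element pitch and focal height are assumed values.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
FREQUENCY = 40e3         # Hz, typical airborne ultrasound carrier
K = 2 * np.pi * FREQUENCY / SPEED_OF_SOUND  # wavenumber

def focus_phases(positions, focal_point):
    """Phase (rad) each transducer must emit so its wave arrives in
    phase at `focal_point`; positions and focal point are in metres."""
    distances = np.linalg.norm(positions - focal_point, axis=1)
    return (-K * distances) % (2 * np.pi)

# Hypothetical 9 x 14 grid of transducers, ~10.5 mm pitch, in the z = 0 plane.
xs, ys = np.meshgrid(np.arange(14) * 0.0105, np.arange(9) * 0.0105)
positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])

# Focus 8 cm above the centre of the array.
centre = positions.mean(axis=0) + np.array([0.0, 0.0, 0.08])
phases = focus_phases(positions, centre)

# A simple trap can be formed by adding a phase "signature", e.g. shifting
# half of the elements by pi (a twin-trap-like configuration).
signature = np.where(positions[:, 0] < centre[0], 0.0, np.pi)
trap_phases = (phases + signature) % (2 * np.pi)
```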
Parametric Audio. This is achieved by appropriately pre-distorting and modulating an audio signal onto an ultrasonic carrier [6]. Propagation in air causes demodulation of the compound signal, which 'spills out' as audible sound along the ultrasound beam. It is possible to electronically steer the ultrasonic beam, and thus the audible signal, in a desired direction.
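The sketch below illustrates only the modulation step, under stated assumptions: a 1 kHz test tone is amplitude-modulated onto a 40 kHz carrier with a square-root pre-distortion of the envelope (a common simplification of Berktay's far-field solution, which in full also involves a double time integration). The sample rate, modulation depth and function names are illustrative, not part of any real parametric-audio API.

```python
import numpy as np

FS = 192_000         # Hz, sample rate high enough to represent a 40 kHz carrier
CARRIER_FREQ = 40e3  # Hz, ultrasonic carrier
MOD_DEPTH = 0.8      # modulation index

def parametric_am(audio, fs=FS, carrier_freq=CARRIER_FREQ, depth=MOD_DEPTH):
    """Amplitude-modulate `audio` (values in [-1, 1]) onto an ultrasonic carrier.
    Nonlinear propagation in air demodulates the envelope into audible sound."""
    t = np.arange(len(audio)) / fs
    envelope = 1.0 + depth * audio
    # Square-root pre-distortion: the demodulated pressure roughly follows
    # the square of the envelope, so this reduces audible distortion.
    predistorted = np.sqrt(np.clip(envelope, 0.0, None))
    return predistorted * np.sin(2 * np.pi * carrier_freq * t)

# Example: a 1 kHz test tone, one second long.
t = np.arange(FS) / FS
audible = 0.5 * np.sin(2 * np.pi * 1_000 * t)
drive_signal = parametric_am(audible)
```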
Ultrasonic Haptics. Focused sound can also exert pressure against the skin, enabling non-contact haptic feedback [1, 3]. This static pressure is too weak to be perceived, but it feels like vibration when the amplitude is modulated at a frequency within the range of vibrotactile perception (e.g., turning it on and off at 200 Hz), or when the focus is moved along a lateral or closed path at high speed.
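As a minimal sketch of these two modulation strategies (again, not vendor code), the snippet below generates a 200 Hz on/off amplitude envelope for a fixed focal point, and alternatively a circular focal-point trajectory for spatiotemporal modulation. The update rate, focus height, path radius and speed are assumed values.

```python
import numpy as np

UPDATE_RATE = 10_000   # Hz, focal-point update rate (assumed)
AM_FREQ = 200.0        # Hz, within the range of vibrotactile perception

def am_envelope(duration, am_freq=AM_FREQ, rate=UPDATE_RATE):
    """On/off amplitude envelope that gates a static focal point at 200 Hz."""
    t = np.arange(int(duration * rate)) / rate
    return (np.sin(2 * np.pi * am_freq * t) > 0).astype(float)

def circular_path(duration, radius=0.01, speed=5.0, rate=UPDATE_RATE):
    """Focal-point positions tracing a closed circular path at `speed` m/s
    (spatiotemporal modulation); amplitude stays constant."""
    t = np.arange(int(duration * rate)) / rate
    angle = (speed / radius) * t            # angular position along the circle
    x = radius * np.cos(angle)
    y = radius * np.sin(angle)
    z = np.full_like(t, 0.2)                # 20 cm above the array (assumed)
    return np.column_stack([x, y, z])
```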
Multimodal. There have been numerous example use cases of the above technologies in HCI, most often using 40 kHz ultrasound. Recent advances in software and hardware have allowed combinations of the three, typically in pairs, to be demonstrated simultaneously. Here, we present an example of all three technologies working in parallel to create a multimodal interactive experience.
DEMO SET-UP
Two Ultrahaptics UHEV2 devices, each comprising 9 × 28 ultrasonic transducers, have been placed in a sandwich arrangement with transducers facing each other to create the desired acoustic fields and cancel any undesirable opposing pressure forces (Figure 1). Everything is held together by a laser-cut perspex and 3D-printed frame (L 39 × D 18 × H 31 cm) that includes additional USB-powered fans for cooling and has adjustable height. The two transducer boards are spaced about 16.5 cm from one another and are cabled together such that the top and bottom boards are synchronized, which is necessary for stable levitation. The enclosed volume of about L 28.8 × D 9.1 × H 16.5 cm is what we will refer to as the levitation space.
Figure 4: The ultrasonic rig is divided into different parts to support the three functionalities. The right half is the levitation area, where small objects can be manipulated in space via acoustic traps (orange). The top left transducer array is used to project a beam of parametric audio (red) onto the table such that it is reflected towards the user. The bottom left transducer array produces the haptic feedback (blue). The user's hand gestures above the Leap Motion device (in front of the rig) and receives feedback about the interaction on the index finger.
We track the user's gestures with a Leap Motion Controller (www.leapmotion.com). The path of the index finger executed inside the Leap Motion's interaction area is translated into the levitation space, and the input signal is further smoothed by a moving-average filter. The sides of the rig are fitted with small platforms made of acoustically transparent material (Saati Acoustex B003HYD), enabling easy loading of the to-be-levitated polystyrene beads (approx. 2 mm).
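A minimal sketch of this input pipeline, under assumptions, is shown below: a linear mapping from a hypothetical Leap Motion interaction box into the levitation space, followed by a moving-average filter. The coordinate bounds, window size and variable names are illustrative, not the rig's actual values.

```python
import numpy as np
from collections import deque

# Assumed bounds (metres): Leap Motion interaction box and the usable
# levitation volume (roughly half of the L 28.8 x D 9.1 x H 16.5 cm space).
LEAP_MIN = np.array([-0.10, 0.10, -0.05])
LEAP_MAX = np.array([ 0.10, 0.30,  0.05])
LEVI_MIN = np.array([ 0.00, 0.00,  0.00])
LEVI_MAX = np.array([ 0.14, 0.09,  0.165])

def leap_to_levitation(p):
    """Linearly map a finger position from Leap coordinates into the
    levitation space, clamping to the valid volume."""
    t = (np.asarray(p) - LEAP_MIN) / (LEAP_MAX - LEAP_MIN)
    t = np.clip(t, 0.0, 1.0)
    return LEVI_MIN + t * (LEVI_MAX - LEVI_MIN)

class MovingAverage:
    """Smooths the tracked finger path with a simple moving-average filter."""
    def __init__(self, window=8):
        self.samples = deque(maxlen=window)

    def update(self, p):
        self.samples.append(np.asarray(p, dtype=float))
        return np.mean(self.samples, axis=0)

smoother = MovingAverage(window=8)
# For each Leap frame: trap_position = smoother.update(leap_to_levitation(tip))
```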
To achieve simultaneous levitation, parametric audio and haptic feedback, we dedicate different parts of the rig to particular purposes (Figure 4). Namely, while the right half (top and bottom) of the rig is dedicated to levitation (effectively halving the levitation space), the top left is used for parametric audio, and the bottom left for haptic feedback. The top left 9 × 14 transducer array projects a beam of parametric audio downwards at an angle of 30 degrees onto the table on which the rig stands. The beam then reflects and can be heard by the user if they stand at the right height and position such that their head is in the reflected parametric audio beam. Meanwhile, the bottom left 9 × 14 transducer array focuses ultrasound onto the user's hand, which is tracked by the Leap controller. In this way, we minimize interference and corruption between the different acoustic fields. The left and right functions can easily be swapped dynamically.
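For illustration, the sketch below shows the textbook way a planar phased array tilts its beam off-axis by applying a linear phase gradient across its elements. The 30-degree angle comes from the set-up above, while the element pitch and single-axis layout are simplifying assumptions; this is not the device's actual API.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
FREQUENCY = 40e3         # Hz
K = 2 * np.pi * FREQUENCY / SPEED_OF_SOUND  # wavenumber

def steering_phases(element_x, angle_deg):
    """Per-element phase delays (rad) that tilt a planar array's beam by
    `angle_deg` from the array normal, along the x axis."""
    angle = np.radians(angle_deg)
    return (-K * element_x * np.sin(angle)) % (2 * np.pi)

# Hypothetical 14-column sub-array with ~10.5 mm element pitch.
element_x = np.arange(14) * 0.0105
phases = steering_phases(element_x, 30.0)   # tilt the beam by 30 degrees
```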
INTERACTIVE DEMO
To exemplify the capabilities aorded by the multimodal levitation rig described previously, we present
an interactive function we call LeviPaint that uses a levitating particle as a paint brush in air. To
accomplish this, the levitating bead follows a user-defined path, and if this path is captured through
a long exposure image, a LeviPainting is created (see for example Figures 2 and 3).
In order to create a LeviPainting, users are guided through the three phases of the planned interaction. Phase 1 consists of the user drawing a shape in mid-air using their index finger above the Leap Motion hand tracker. When the hand enters the interaction area, the rig produces a short high-pitched beep, informing the user that their hand is being tracked and that the system is ready for input. On hearing that tone, the user starts drawing and the system records their motion. Moreover, the rig projects haptic feedback onto the user's index finger. Once the user's hand exits the interaction region and the Leap loses the hand, the system progresses to phase 2. In phase 2, the rig transports the bead from the loading platform towards the centre of the levitation space, and then replays the recorded motion with the bead tracing out the same path as the user's hand. The bead motion is then traced back in reverse and the bead is dropped. During the levitation flight, an audio track is also played from the top left part of the rig, but is perceived as if coming from under the table due to the reflection described previously. A DSLR camera is used to take a long exposure photograph (approx. 30 seconds) of the moving levitating particle against a dark background to capture its trail. Finally, in phase 3 the LeviPainting is produced and displayed on a nearby LCD screen. The whole interaction takes about one minute and requires about one more minute to reset for the next user.
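The sketch below summarizes this three-phase flow as simple Python pseudocode; the `tracker`, `rig`, `camera` and `screen` objects and all of their method names are hypothetical placeholders standing in for whatever the rig's control software actually exposes.

```python
def levipaint_session(tracker, rig, camera, screen):
    """One LeviPaint interaction, following the three phases described above.
    All objects and methods are placeholder abstractions, not a real API."""
    # Phase 1: record the gesture while giving audio and haptic feedback.
    path = []
    tracker.wait_for_hand()
    rig.play_beep()                              # signal that tracking has started
    while tracker.hand_present():
        tip = tracker.index_finger_position()
        path.append(tip)
        rig.focus_haptics_at(tip)                # tactile feedback on the finger

    # Phase 2: replay the recorded path with the levitating bead.
    rig.pick_up_bead()
    camera.start_long_exposure(seconds=30)
    rig.play_audio_track()
    rig.move_bead_along(path)                    # forward ...
    rig.move_bead_along(list(reversed(path)))    # ... then back in reverse
    rig.drop_bead()
    image = camera.finish_exposure()

    # Phase 3: show the resulting LeviPainting.
    screen.show(image)
```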
CONCLUSION
Ultrasound offers a wide range of multimodal (audio-visual-haptic) opportunities for information displays. Not only can it levitate and manipulate small objects in real-time to draw different shapes in mid-air, but it can also present haptic and audio feedback to the user through the same ultrasonic hardware apparatus. Our demo presents the first ever compact and self-contained ultrasonic rig capable of delivering these three technologies (levitation, mid-air haptics, parametric audio) simultaneously, and it can therefore encourage broad discussion about new HCI application areas. For instance: how can we scale this technology up to room-sized deployments; what kind of immersive applications are possible; can we communicate science in a new way; and can we create new and exciting art installations?
ACKNOWLEDGEMENTS
This research is funded by the European Union’s Horizon 2020 research and innovation programme
(#737087).
REFERENCES
[1] T. Carter, S. A. Seah, B. Long, B. Drinkwater, and S. Subramanian. 2013. UltraHaptics: Multi-Point Mid-Air Haptic Feedback for Touch Surfaces. UIST (2013), 505–514. https://doi.org/10.1145/2501988.2502018
[2] E. Freeman, J. Williamson, P. Kourtelos, and S. Brewster. 2018. Levitating Object Displays with Interactive Voxels. In PerDis '18. ACM Press, Article 15. https://doi.org/10.1145/3205873.3205878
[3] T. Hoshi, M. Takahashi, T. Iwamoto, and H. Shinoda. 2010. Noncontact Tactile Display Based on Radiation Pressure of Airborne Ultrasound. IEEE Transactions on Haptics 3, 3 (July 2010), 155–165. https://doi.org/10.1109/TOH.2010.4
[4] A. Marzo, T. Corke, and B. W. Drinkwater. 2018. Ultraino: An Open Phased-Array System for Narrowband Airborne Ultrasound Transmission. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control (2018). https://doi.org/10.1109/TUFFC.2017.2769399
[5] Y. Ochiai, T. Hoshi, and J. Rekimoto. 2014. Pixie Dust: Graphics Generated by Levitated and Animated Objects in Computational Acoustic-potential Field. ACM Trans. Graph. 33, 4, Article 85 (July 2014), 13 pages. https://doi.org/10.1145/2601097.2601118
[6] F. J. Pompei. 1995. Sound From Ultrasound: The Parametric Array as an Audible Sound Source. Technical Report. http://sound.media.mit.edu/%7Ebv/
[7] I. Sutherland. 1965. The Ultimate Display. In Proceedings of the IFIP Congress 65, Vol. 2. 506–508.