WatchThru: Expanding Smartwatch Displays with Mid-air
Visuals and Wrist-worn Augmented Reality
Dirk Wenig1,2, Johannes Schöning2, Alex Olwal3, Mathias Oben4, Rainer Malaka1,2
1Digital Media Lab (TZI);
2Computer Science Faculty
University of Bremen
{dwenig, malaka}@tzi.de,
schoening@uni-bremen.de
3Interaction Lab
Google, Inc.
olwal@google.com
4Computer Science Faculty
Hasselt University
mathias.oben@student.uhasselt.be
ABSTRACT
We introduce WatchThru, an interactive method for extended
wrist-worn display on commercially-available smartwatches.
To address the limited visual and interaction space, WatchThru
expands the device into 3D through a transparent display. This
enables novel interactions that leverage and extend smartwatch
glanceability. We describe three novel interaction techniques,
Pop-up Visuals, Second Perspective, and Peek-through, and
discuss how they can complement interaction on current devices.
We also describe two types of prototypes that helped us
explore standalone interactions as well as proof-of-concept
AR interfaces using our platform.
ACM Classification Keywords
H.5.1 Multimedia Information Systems: Artificial, augmented,
and virtual realities; H.5.2 User Interfaces: Graphical user
interfaces, Input devices and strategies; I.3.6 Methodology
and Techniques: Interaction techniques
Author Keywords
Smartwatches; Micro-Interaction; Wearable Devices
INTRODUCTION
Smartwatches are designed to support micro-interactions.
Their small screens enable compact form factors, but result in
a limited visual and interaction space. While most work on ad-
dressing these limitations explores additional input techniques
or modalities, this work focuses on the output challenges that
arise from the small screen sizes of current devices, typically
1.5–1.8″ (10–25% of the size of typical 5″ smartphone screens).
WatchThru extends the output capabilities of current smart-
watches into 3D with an additional 1.8″ transparent screen
that complements the main touchscreen with graphics that can
be displayed floating in mid-air (see Figure 1). WatchThru
can show different information on the two screens, based on
the user’s viewing direction. Additionally, by registering the
display with the world (e.g., with an external tracking system
or camera-based tracking on the watch), WatchThru enables
lightweight, wrist-worn augmented reality (AR) capabilities.

Figure 1: WatchThru is a lightweight and easy-to-build display
extension for existing smartwatches.
Our contributions with WatchThru are the following:
• The concept, design, and implementation of a smartwatch
that is extended with a transparent display to increase the
output capacity with mid-air visuals and wrist-worn AR.
• An exploration of novel interaction techniques enabled by
our concept, showing how they can improve future
interactions with smartwatches.
• Two prototypes, with instructions for replicating them, to
enable others to extend our WatchThru concept. The first
demonstrates the capabilities of a self-contained smartwatch;
the second uses external tracking for a proof-of-concept
implementation of techniques that we anticipate will become
available with the miniaturization of current inside-out
tracking technology.
RELATED WORK
Fitzmaurice introduced the concept of spatially-aware dis-
plays [4], which use motion sensing to enable viewports, or
peepholes [21], into a virtual information space. WatchThru
leverages recent advancements, where wearable devices em-
bed a variety of sensors that can track device motion and
orientation. WatchThru is also inspired by early mobile AR
concepts like NaviCam from Rekimoto and Nagao [16], as
well as wearable magic lens interfaces [2]. This work builds
on an extensive body of research into mobile AR [17, 12], but
focuses on the capabilities enabled by a wrist-mounted optical
see-through display.
More recently, researchers have investigated wrist-mounted
displays and displays around the user’s body [3]. Pohl et
al. [15] used LEDs mounted at the bottom of a watch for
notifications through the user’s skin. Grubert et al. [6]
investigated interaction with multiple displays; in one variant,
an HMD combined with a smartwatch. Doppio [18] is a smart-
watch with two screens; one of them is reconfigurable to allow
tangible interaction.
Peripheral displays have been explored with multiple displays
in Facet [10] and as edge notifications on recent smartphones
(e.g., Samsung Galaxy S Edge). Skin Buttons [9] use direct
projection to extend the display, whereas ScatterWatch uses
indirect skin illumination [15]. The WatchThru display
configuration is similar to the Sonic Flashlight [19, 20], which used
a half-silvered mirror to combine an ultrasound image with
the view of the patient’s body.
WATCHTHRU INTERACTIONS
In this section we illustrate the interaction possibilities that
arise from WatchThru. As WatchThru is focused on output,
the interactions rely on known input techniques.
Pop-up Visuals: Extended Display for Mid-Air Graphics
Our concept of Pop-up Visuals enables information that
visually extends out from the watch face. For notifications and
incoming calls, the advantage is that users do not necessarily
need to lift their hand and twist their wrist to look at the
information on their smartwatch. Because of the additional
WatchThru screen, users can notice notifications out of the
corner of their eye while keeping their arms in a relaxed
position, e.g., hanging at their sides.
When the user is directly looking at the screen, the WatchThru
prototype naturally allows for more complex information, such
as numbers (e.g., remaining minutes until the next bus), short
text or symbols. Figure 1 (top, right) shows a conceptual app
using the WatchThru screen to display static and animated
notifications.
Second Perspective: Alternative Presentation
The Second Perspective enables users to quickly access
alternative views on the WatchThru display. The main advantage
is that additional information, e.g., navigational directions,
becomes implicitly accessible to the user. Figure 2 shows a
prototypical app with a 2D map on the main screen and an
oriented 3D arrow on the WatchThru screen. It allows the user
to navigate along a route on the main screen and, using the
built-in compass, shows the current direction on the WatchThru
screen. Users simply orient their wrist to view the screen.
When combined with pop-up visuals on the WatchThru screen,
the main screen could display contextual information, e.g.,
about incoming calls. During a video call displayed on the
WatchThru screen, the main screen could show the text chat
window for sharing links and media.
Figure 2: Second Perspective navigation with a 2D map (left)
and directional arrows (right).
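The arrow logic in this app reduces to a single rotation: the difference between the bearing to the next waypoint and the watch's current compass heading. The following Python sketch illustrates that math; the helper names are ours for illustration, and the actual prototype is an Android Wear app:

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees) from the current
    position to the next waypoint, clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def arrow_rotation(waypoint_bearing_deg, compass_heading_deg):
    """Yaw applied to the 3D arrow so it points at the target:
    the bearing to the target minus the direction the watch
    (and thus the WatchThru screen) currently faces."""
    return (waypoint_bearing_deg - compass_heading_deg) % 360.0

# Example: target due east of the user, user facing north ->
# the arrow is rotated ~90 degrees and points to the right.
print(arrow_rotation(bearing_to(53.08, 8.80, 53.08, 8.85), 0.0))
```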
The second perspective could also be used to display
independent information. In the mobile context, the user often has to
perform multiple tasks at the same time. For example, with
the second perspective, the WatchThru screen could display
navigational directions while the main screen is used for noti-
fications. In addition, the transparent screen has the advantage
that it enables combining additional information on the
WatchThru screen with the real world: e.g., the arrow shown
on the WatchThru display can be aligned with its real-world
target.
Peek-through: Optical See-through Wrist-worn AR
The see-through display makes it possible to augment objects
by overlaying registered graphics using the device. We call
the interaction Peek-through. With peek-through, a navigation
arrow on the WatchThru screen could be fully integrated in
3D into the environment. In contrast to most AR displays, this
device does not obstruct the user’s face, nor does it require the
user to bring it out and hold it (like a smartphone). It therefore has
interesting potential as a wearable, unobtrusive, and always-
accessible AR device on the user’s wrist.
For peek-through, we prototypically implemented several
applications for different scenarios. Figure 3 (c, d) shows a smart
home scenario where objects in a room, e.g., power sockets,
are augmented with information shown on the WatchThru
screen. In Figure 3 (e), a learning scenario, a virtual globe
is presented in the middle of a room. It could allow school
children to explore the earth, including the orbiting moon,
from different perspectives. Additionally, WatchThru could
help children and adults learn board games, such as chess.
With peek-through, the board could be overlaid with game
strategies—not to cheat, but to help the player grow.
Figure 3 (f) shows an assembly scenario; WatchThru shows the
user how to connect electronic components. Similarly, Watch-
Thru could help technicians with maintenance, e.g. to repair
elevators. WatchThru with a foldable display (see section on
Limitations) could be a lightweight and unobtrusive add-on to
(smart)watches that technicians usually wear and would not
require an additional device. Additionally, similar to the Sonic
Flashlight [19, 20], WatchThru could be used in a medical
context to overlay the patient’s body with information such
as ultrasound imagery. Furthermore, we see WatchThru as
having potential as an entertainment device, which could be
used for location-based games with AR, such as the recently
popular Pokémon GO [13].
IMPLEMENTATION OF PROTOTYPES
We developed two interactive prototypes by extending com-
mercial smartwatches with additional display and tracking
capabilities. The first was designed to support Pop-up Visuals
and Second Perspective with minimal modification, while the
other allowed us to explore Peek-through wrist-worn AR
experiences with 3D position and orientation tracking.
Prototype 1: Pop-up Visuals and Second Perspective
For the first WatchThru prototype screen, we used a special
acrylic glass (3 mm thick) with one reflective side and one
transparent side, developed to enable projections on
see-through materials¹. The screen, including a mount, was
fabricated with a laser cutter and bent into the right
configuration while hot (∼135 °C). Given that only one side of the
acrylic glass is laminated with the reflective layer, the image
quality is barely affected when viewing through the WatchThru
screen.
We built several versions for different Android/Android Wear
smartwatches, e.g., the LG G Watch and the Samsung Gear
Live (Figure 1, bottom). Modifying the design for other
smartwatches usually only requires scaling the CAD drawings so
that the WatchThru screen has the same size as the main screen
of the smartwatch and, if necessary, bending the mounting part
so that it fits the back of the smartwatch.
For correct display on the WatchThru screen, graphics need
to be mirrored on the main screen. For the second perspective,
we used the built-in inertial sensors to infer whether the user
is looking at the WatchThru screen or at the main screen. In
our proof-of-concept implementation, we activate WatchThru
content when the smartwatch is held parallel to the ground
and display main screen content otherwise. Using the built-in
digital compass, we display orientation-sensitive information
on the second screen.
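A minimal sketch of this switching logic, assuming a gravity (low-pass-filtered accelerometer) vector in the device frame with the z axis pointing out of the main screen, as on Android; the helper names and threshold handling are illustrative, not our exact Android Wear code:

```python
import math

SWITCH_ANGLE_DEG = 12.5  # threshold from our preliminary user feedback

def watch_tilt_deg(gx, gy, gz):
    """Angle between the main screen's normal (device z axis) and
    vertical; 0 deg means the watch face is parallel to the ground."""
    g = math.sqrt(gx * gx + gy * gy + gz * gz)
    return math.degrees(math.acos(max(-1.0, min(1.0, gz / g))))

def active_content(gx, gy, gz):
    """Select which content to render. WatchThru content is drawn
    mirrored on the main screen so that its reflection reads
    correctly on the transparent screen."""
    if watch_tilt_deg(gx, gy, gz) <= SWITCH_ANGLE_DEG:
        return "watchthru (mirrored)"
    return "main"

# Watch held flat (gravity along +z): show WatchThru content.
print(active_content(0.0, 0.0, 9.81))
```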
Prototype 2: Peek-through AR
While current smartwatches are equipped with a great variety
of motion sensors, they are not yet able to track absolute
position. Therefore, to explore the peek-through interactions,
we developed a second prototype that was tracked with an
external tracking system.
We used a 12-camera OptiTrack System by NaturalPoint to
accurately track position and orientation of the smartwatch
in our lab using infrared illumination. Five retro-reflective
markers were attached to the smartwatch and to the user’s head
(using a cap with attached markers) to track their pose in the
room (Figure 3 a,b). The tracking system continuously broadcasts
UDP packets with the position and orientation of the markers at
200 Hz. Packets are sent directly to the smartwatch via WiFi to
reduce latency and are processed by a Unity3D application
running on the device (Simvalley Mobile AW-414 Go). The
field of view (FOV) is calculated and dynamically updated
using the distance between the user’s eye(s) and the watch.

¹http://www.holocube.eu

Figure 3: The Peek-through prototype uses external cameras
to track retroreflective markers on (a) the device and (b) the user,
which enabled the several implemented AR scenarios (c–f):
(c) a wall power socket (d) with registered visuals, (e) a virtual
globe, and (f) electronics assembly.
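The FOV update described above is basic trigonometry: the virtual camera's field of view must match the angle that the physical WatchThru screen subtends at the tracked eye position. A sketch under the assumption that tracker poses arrive as 3D positions in meters; the screen-height constant is our estimate, not a measured value:

```python
import math

SCREEN_HEIGHT_M = 0.032  # side of a square 1.8" (diagonal) screen; assumed

def dynamic_fov_deg(eye_pos, watch_pos):
    """Vertical FOV (degrees) for the on-watch virtual camera so that
    rendered graphics subtend the same angle as the physical
    WatchThru screen seen from the tracked eye position."""
    dx, dy, dz = (e - w for e, w in zip(eye_pos, watch_pos))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    return 2.0 * math.degrees(math.atan2(SCREEN_HEIGHT_M / 2.0, dist))

# Example: eye 35 cm from the watch -> roughly a 5.2 deg vertical FOV.
print(dynamic_fov_deg((0.0, 0.35, 0.0), (0.0, 0.0, 0.0)))
```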
Preliminary User Feedback
While developing the prototypes we continuously explored the
WatchThru interactions in informal user tests with computer
science students (representing early adopters) not involved in
the project. For the second perspective interaction, the view
switching angle of 12.5° was determined in an experiment
where participants experienced different angles of 45°, 22.5°,
and 12.5°. The results indicate that when looking at the
WatchThru screen, participants hold the device parallel to the
ground to see through the screen, such that switching to the
first perspective at a flat angle is best suited. In general, participants
commented that the WatchThru screen works well and is easy
to recognize except under strong illumination. However, some
of them criticized that the main screen can be distracting when
looking at the WatchThru screen.
LIMITATIONS AND FUTURE WORK
World-registered graphics
For AR experiences, absolute and accurate 6DOF tracking is
required. While techniques like PTAM [8], KinectFusion [7],
and Google’s Project Tango [5] could be used, they require
more computational resources and power than are currently
available in wrist-worn form factors. As these techniques
have moved from desktop workstations to tablets and
smartphones, we expect future availability in smaller devices.
Figure 4: In the future for WatchThru, we envision a
transparent screen that can be folded out when needed.
Tracking user’s perspective
Given that the display is not co-located with the user’s eyes,
any AR experience needs to account for the user’s viewpoint.
This makes WatchThru more challenging to calibrate than
video see-through systems, where camera, tracking, and display
are combined. For peek-through AR, tracking of the device and
the user’s head is not yet solved in a practical way. In addition
to perspective tracking with on-device sensing, tracking the
user’s head to calculate the field of view could be realized with
front-facing cameras, similar to Amazon’s Fire Phone [1].
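One standard recipe for such viewpoint-dependent rendering is a head-coupled, off-axis (generalized) perspective projection, recomputed each frame from the tracked eye position and the screen's corner positions. The sketch below follows Kooima's well-known formulation; it illustrates the approach and is not our Unity3D implementation:

```python
import numpy as np

def off_axis_projection(pa, pb, pc, eye, near, far):
    """Head-coupled projection for a tracked screen (Kooima's
    'generalized perspective projection'): pa, pb, pc are the
    screen's lower-left, lower-right, and upper-left corners in
    tracker coordinates; eye is the tracked eye position.
    Returns a 4x4 OpenGL-style projection * view matrix."""
    pa, pb, pc, eye = (np.asarray(p, dtype=float) for p in (pa, pb, pc, eye))
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal
    va, vb, vc = pa - eye, pb - eye, pc - eye         # eye -> corners
    d = -np.dot(va, vn)                               # eye-screen distance
    l = np.dot(vr, va) * near / d                     # frustum extents
    r = np.dot(vr, vb) * near / d                     # at the near plane
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                  [0, 2*near/(t-b), (t+b)/(t-b), 0],
                  [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0, 0, -1, 0]])
    R = np.eye(4); R[0, :3], R[1, :3], R[2, :3] = vr, vu, vn  # into screen space
    T = np.eye(4); T[:3, 3] = -eye                            # eye to origin
    return P @ R @ T
```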
Form factor
We chose the current transparent screen as a first step to im-
plement the WatchThru concept, which can also be easily
replicated by other researchers to explore the concept further.
We are already working on iterations of the technical prototype
to address some shortcomings. The current prototype requires
the secondary screen to extend from the device at a 45° angle.
While other display configurations are possible, e.g., using
holographic optical elements, transparent LCDs, or transparent
OLEDs, it may still be impractical or undesirable to have
display components that extend out. As an alternative, we could
also envision an extendable screen with a mechanical folding
mechanism (see Figure 4) such that the WatchThru screen
could be pulled out (manually or automatically) when needed.
A thin touch-friendly WatchThru screen, or increased projected
capacitance sensitivity for touch through the WatchThru
screen, could allow the user to interact with the smartwatch in
a traditional way when the WatchThru screen is not expanded.
Dual content—one display source
Our current implementations are based on a passive optical
combiner, which has the advantages of simplicity, ease of
fabrication, low cost, optical see-through qualities, and ease
of integration. It works under the assumption that the user
focuses on either the main or WatchThru screen, and that
the device can adequately detect the user’s focus to switch
contents on the screen in a sufficiently responsive manner.
Our prototype showed the feasibility of this idea, but also the
time-sharing limitations when both displays are visible. Future
implementations could address this with active translucent
secondary displays, which would trade some simplicity for the
ability to display simultaneous content on both screens.
Interaction requires both hands
While smartwatches do not occupy the hands when not in use,
interacting with them requires both hands, in contrast to most
current head-worn displays (e.g., Google Glass).
Usage in complex lighting conditions
As with other optical see-through displays, the display will
have limited capability to function under strong illumination.
This could be addressed with wavelength-selective and
directionally selective materials, such as holographic diffusers
and optical elements that only reflect light from a specific
wavelength and incident angle, similar to ASTOR [14].
Focal planes
The semi-transparent display creates a virtual image rotated
90° from the wrist. This makes it challenging to fuse overlaid
graphics with distant real-world objects. Additional optics
could allow placing the virtual image at infinity for objects
that are more than a few meters away. We are also interested
in close-up use cases, as well as AR overlays onto paper
documents and handheld equipment and tools.
User input techniques
The goal of WatchThru is to expand smartwatch displays, with
focus on output. Therefore, in the presented use cases we used
common techniques (e.g., touch and peepholes). To enable
interaction with the WatchThru screen, we could use IR-based
edge-lit touch-sensing (e.g., ZeroTouch [11]) or add a touch
layer on top of the reflective layer (with added complexity).
In the future, we will also explore input alternatives such as
mid-air and around-device interaction.
CONCLUSIONS
We introduced WatchThru, a novel interactive method for
wrist-worn display using smartwatches. Motivated by the small
displays of smartwatches and the limited interaction space on
the device, WatchThru extends the visualization space into the
third dimension, beyond the flat watch screen, and extends the
range of interaction possibilities. We have shown how such a
transparent WatchThru screen could allow for Pop-up Visuals
for ambient notifications, Second Perspectives that extend the
display configuration, and for Peek-through AR scenarios.
Pop-up Visuals extend the passive character of the watch into an
ambient notification device. We plan to do deeper explorations
into additional interaction techniques, application scenarios,
and user-dependent factors that are relevant for the usability
and user experience of WatchThru. In the Second Perspec-
tive mode, turning the wrist allows switching between the
displays to leverage tilt-based interaction with smartwatches,
to complement gestures, buttons and on-screen widgets. With
Peek-through AR, we show that WatchThru has the potential
to act as an always-available, wrist-worn AR device.
We are currently working on a self-contained version, to allow
peek-through interactions in non-instrumented environments,
and on prototypes that allow different viewports. Thus, our
minimal addition to a smartwatch opens up a whole spectrum
of new interactive capabilities for wearable devices.
ACKNOWLEDGMENTS
This work is partially funded by the Volkswagen Foundation
through a Lichtenberg professorship and by a Google Faculty
Research Award.
REFERENCES
1. Amazon. 2017. Fire Phone. (3 January 2017). Retrieved
January 3, 2017 from https://amazon.com/Fire-Phone/.
2. Eric A. Bier, Maureen C. Stone, Ken Pier, William
Buxton, and Tony D. DeRose. 1993. Toolglass and Magic
Lenses: The See-through Interface. In Proceedings of the
20th Annual Conference on Computer Graphics and
Interactive Techniques (SIGGRAPH ’93). ACM, New
York, NY, USA, 73–80. DOI:
http://dx.doi.org/10.1145/166117.166126
3. Jesse Burstyn, Paul Strohmeier, and Roel Vertegaal. 2015.
DisplaySkin: Exploring Pose-Aware Displays on a
Flexible Electrophoretic Wristband. In Proceedings of the
9th International Conference on Tangible, Embedded,
and Embodied Interaction (TEI ’15). ACM, New York,
NY, USA, 165–172. DOI:
http://dx.doi.org/10.1145/2677199.2680596
4. George W. Fitzmaurice. 1993. Situated Information
Spaces and Spatially Aware Palmtop Computers.
Commun. ACM 36, 7 (July 1993), 39–49. DOI:
http://dx.doi.org/10.1145/159544.159566
5. Google. 2017. Tango. (3 January 2017). Retrieved
January 3, 2017 from https://get.google.com/tango/.
6. Jens Grubert, Matthias Heinisch, Aaron Quigley, and
Dieter Schmalstieg. 2015. MultiFi: Multi Fidelity
Interaction with Displays On and Around the Body. In
Proceedings of the 33rd Annual ACM Conference on
Human Factors in Computing Systems (CHI ’15). ACM,
New York, NY, USA, 3933–3942. DOI:
http://dx.doi.org/10.1145/2702123.2702331
7. Shahram Izadi, David Kim, Otmar Hilliges, David
Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie
Shotton, Steve Hodges, Dustin Freeman, Andrew
Davison, and Andrew Fitzgibbon. 2011. KinectFusion:
Real-time 3D Reconstruction and Interaction Using a
Moving Depth Camera. In Proceedings of the 24th
Annual ACM Symposium on User Interface Software and
Technology (UIST ’11). ACM, New York, NY, USA,
559–568. DOI:
http://dx.doi.org/10.1145/2047196.2047270
8. Georg Klein and David Murray. 2007. Parallel Tracking
and Mapping for Small AR Workspaces. In Proceedings
of the 6th IEEE/ACM International Symposium on Mixed
and Augmented Reality (ISMAR ’07). IEEE, Washington,
DC, USA, 225–234. DOI:
http://dx.doi.org/10.1109/ISMAR.2007.4538852
9. Gierad Laput, Robert Xiao, Xiang ’Anthony’ Chen,
Scott E. Hudson, and Chris Harrison. 2014. Skin Buttons:
Cheap, Small, Low-powered and Clickable Fixed-icon
Laser Projectors. In Proceedings of the 27th Annual ACM
Symposium on User Interface Software and Technology
(UIST ’14). ACM, New York, NY, USA, 389–394. DOI:
http://dx.doi.org/10.1145/2642918.2647356
10. Kent Lyons, David Nguyen, Daniel Ashbrook, and Sean
White. 2012. Facet: A Multi-segment Wrist Worn System.
In Proceedings of the 25th Annual ACM Symposium on
User Interface Software and Technology (UIST ’12).
ACM, New York, NY, USA, 123–130. DOI:
http://dx.doi.org/10.1145/2380116.2380134
11. Jon Moeller and Andruid Kerne. 2012. ZeroTouch: An
Optical Multi-touch and Free-air Interaction Architecture.
In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI ’12). ACM, New
York, NY, USA, 2165–2174. DOI:
http://dx.doi.org/10.1145/2207676.2208368
12. Alessandro Mulloni, Hartmut Seichter, and Dieter
Schmalstieg. 2011. Handheld Augmented Reality Indoor
Navigation with Activity-based Instructions. In
Proceedings of the 13th International Conference on
Human Computer Interaction with Mobile Devices and
Services (MobileHCI ’11). ACM, New York, NY, USA,
211–220. DOI:
http://dx.doi.org/10.1145/2037373.2037406
13. Niantic. 2017. Pokémon GO. (3 January 2017). Retrieved
January 3, 2017 from https://pokemongo.com/.
14. Alex Olwal, Christoffer Lindfors, Jonny Gustafsson,
Torsten Kjellberg, and Lars Mattsson. 2005. ASTOR: An
Autostereoscopic Optical See-through Augmented
Reality System. In Proceedings of the 4th IEEE/ACM
International Symposium on Mixed and Augmented
Reality (ISMAR ’05). IEEE, Washington, DC, USA,
24–27. DOI:http://dx.doi.org/10.1109/ISMAR.2005.15
15. Henning Pohl, Justyna Medrek, and Michael Rohs. 2016.
ScatterWatch: Subtle Notifications via Indirect
Illumination Scattered in the Skin. In Proceedings of the
18th International Conference on Human-Computer
Interaction with Mobile Devices and Services
(MobileHCI ’16). ACM, New York, NY, USA, 7–16.
DOI:http://dx.doi.org/10.1145/2935334.2935351
16. Jun Rekimoto and Katashi Nagao. 1995. The World
Through the Computer: Computer Augmented Interaction
with Real World Environments. In Proceedings of the 8th
Annual ACM Symposium on User Interface and Software
Technology (UIST ’95). ACM, New York, NY, USA,
29–36. DOI:http://dx.doi.org/10.1145/215585.215639
17. Dieter Schmalstieg and Daniel Wagner. 2007.
Experiences with Handheld Augmented Reality. In
Proceedings of the 6th IEEE/ACM International
Symposium on Mixed and Augmented Reality (ISMAR
’07). IEEE, Washington, DC, USA, 3–18. DOI:
http://dx.doi.org/10.1109/ISMAR.2007.4538819
18. Teddy Seyed, Xing-Dong Yang, and Daniel Vogel. 2016.
Doppio: A Reconfigurable Dual-Face Smartwatch for
Tangible Interaction. In Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems
(CHI ’16). ACM, New York, NY, USA, 4675–4686. DOI:
http://dx.doi.org/10.1145/2858036.2858256
19. Damion Shelton, George Stetten, and Wilson Chang.
2002. Ultrasound Visualization with the Sonic Flashlight.
In ACM SIGGRAPH 2002 Conference Abstracts and
Applications (SIGGRAPH ’02). ACM, New York, NY,
USA, 82–82. DOI:
http://dx.doi.org/10.1145/1242073.1242117
20. Georg Stetten, Vikram Chib, Daniel Hildebrand, and
Jeannette Bursee. 2001. Real Time Tomographic
Reflection: Phantoms for Calibration and Biopsy. In
Proceedings of the IEEE/ACM International Symposium
on Augmented Reality (ISAR ’01). 11–19. DOI:
http://dx.doi.org/10.1109/ISAR.2001.970511
21. Ka-Ping Yee. 2003. Peephole Displays: Pen Interaction
on Spatially Aware Handheld Computers. In Proceedings
of the SIGCHI Conference on Human Factors in
Computing Systems (CHI ’03). ACM, New York, NY,
USA, 1–8. DOI:
http://dx.doi.org/10.1145/642611.642613