WatchThru: Expanding Smartwatch Displays with Mid-air
Visuals and Wrist-worn Augmented Reality
Dirk Wenig¹,²  Johannes Schöning²  Alex Olwal³  Mathias Oben⁴  Rainer Malaka¹,²
¹Digital Media Lab (TZI), ²Computer Science Faculty, University of Bremen
{dwenig, malaka}@tzi.de, schoening@uni-bremen.de
³Interaction Lab, Google, Inc.
olwal@google.com
⁴Computer Science Faculty, Hasselt University
mathias.oben@student.uhasselt.be
ABSTRACT
We introduce WatchThru, an interactive method for extended
wrist-worn display on commercially-available smartwatches.
To address the limited visual and interaction space, WatchThru
expands the device into 3D through a transparent display. This
enables novel interactions that leverage and extend smartwatch
glanceability. We describe three novel interaction techniques,
Pop-up Visuals, Second Perspective and Peek-through, and discuss
how they can complement interaction on current devices.
We also describe two types of prototypes that helped us to
explore standalone interactions, as well as proof-of-concept
AR interfaces using our platform.
ACM Classification Keywords
H.5.1 Multimedia Information Systems: Artificial, augmented,
and virtual realities; H.5.2 User Interfaces: Graphical user
interfaces, Input devices and strategies; I.3.6 Methodology
and Techniques: Interaction techniques
Author Keywords
Smartwatches; Micro-Interaction; Wearable Devices
INTRODUCTION
Smartwatches are designed to support micro-interactions.
Their small screens enable compact form factors, but result in
a limited visual and interaction space. While most work on
addressing these limitations explores additional input techniques
or modalities, this work focuses on the output challenges that
arise from small screen sizes of currently 1.5–1.8″ (10–25%
the size of typical 5″ smartphone screens).
WatchThru extends the output capabilities of current smartwatches
into 3D with an additional 1.8″ transparent screen
that complements the main touchscreen with graphics that can
be displayed floating in mid-air (see Figure 1). WatchThru
can show different information on the two screens, based on
can show different information on the two screens, based on
CHI 2017, May 6-11, 2017, Denver, CO, USA.
ACM ISBN 978-1-4503-4655-9/17/05.
http://dx.doi.org/10.1145/3025453.3025852
Figure 1: WatchThru is a lightweight and easy-to-build display
extension for existing smartwatches.
the user’s viewing direction. Additionally, by registering the
display with the world (e.g., with an external tracking system
or camera-based tracking on the watch), WatchThru enables
lightweight, wrist-worn augmented reality (AR) capabilities.
Our contributions with WatchThru are the following:
• The concept, design and implementation of a smartwatch
that is extended with a transparent display to increase the
output capacity with mid-air visuals and wrist-worn AR.
• An exploration of novel interaction techniques enabled by
our introduced concept, showing how they can improve
future interactions with smartwatches.
• Two prototypes, with instructions for replicating them, to
enable others to extend our WatchThru concept. The first
demonstrates capabilities with a self-contained smartwatch;
the second uses external tracking for a proof-of-concept
implementation of techniques that we anticipate will become
available with the miniaturization of current inside-out
tracking technology.
RELATED WORK
Fitzmaurice introduced the concept of spatially-aware displays [4],
which use motion sensing to enable viewports, or
peepholes [21], into a virtual information space. WatchThru
leverages recent advancements, where wearable devices embed
a variety of sensors that can track device motion and
orientation. WatchThru is also inspired by early mobile AR
concepts like NaviCam from Rekimoto and Nagao [16], as
well as wearable magic lens interfaces [2]. This work builds
on an extensive body of research into mobile AR [17, 12], but
focuses on the capabilities enabled by a wrist-mounted optical
see-through display.
More recently, researchers have investigated wrist-mounted
displays and displays around the user's body [3]. Pohl et
al. [15] used LEDs mounted at the bottom of a watch for
notifications through the user's skin. Grubert et al. [6] investigated
interaction with multiple displays, in one variant combining
an HMD with a smartwatch. Doppio [18] is a smartwatch
with two screens, one of which is reconfigurable to allow
tangible interaction.
Peripheral displays have been explored with multiple displays
in Facet [10] and as edge notifications on recent smartphones
(e.g., Samsung Galaxy S Edge). Skin Buttons [9] use direct
projection to extend the display, whereas ScatterWatch uses
indirect skin illumination [15]. The WatchThru display configuration
is similar to the Sonic Flashlight [19, 20], which used
a half-silvered mirror to combine an ultrasound image with
the view of the patient's body.
WATCHTHRU INTERACTIONS
In this section we illustrate the interaction possibilities that
arise from WatchThru. As WatchThru is focused on output,
the interactions rely on known input techniques.
Pop-up Visuals: Extended Display for Mid-Air Graphics
Our concept of Pop-up Visuals enables information that visually
extends out from the watch face. For notifications and
incoming calls, the advantage is that users do not necessarily
need to lift their hand and twist their wrist to look at
the information on their smartwatch. Because of the additional
WatchThru screen, users notice them out of the corner of
their eyes while keeping their arms in a relaxed position, e.g.,
arms hanging down.
When the user is directly looking at the screen, the WatchThru
prototype naturally allows for more complex information, such
as numbers (e.g., remaining minutes until the next bus), short
text or symbols. Figure 1 (top, right) shows a conceptual app
using the WatchThru screen to display static and animated
notifications.
Second Perspective: Alternative Presentation
The Second Perspective enables users to quickly access alternative
views on the WatchThru display. The main advantage
is that additional information, e.g., navigational directions,
becomes implicitly accessible to the user. Figure 2 shows a
prototypical app with a 2D map on the main screen and an
oriented 3D arrow on the WatchThru screen. It allows the user
to navigate along a route on the main screen and, using the
built-in compass, shows the current direction on the WatchThru
screen. Users simply orient their wrist to view the screen.
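
For illustration, the sketch below derives the arrow's rotation from the compass azimuth using Android's standard sensor APIs. The paper does not publish its code, so the function names and plumbing here are our own assumptions; a minimal sketch of the underlying geometry.

import android.hardware.SensorManager

// Azimuth (degrees clockwise from north) from a TYPE_ROTATION_VECTOR reading.
fun azimuthDeg(rotationVector: FloatArray): Float {
    val rotation = FloatArray(9)
    val orientation = FloatArray(3)
    SensorManager.getRotationMatrixFromVector(rotation, rotationVector)
    SensorManager.getOrientation(rotation, orientation)
    return Math.toDegrees(orientation[0].toDouble()).toFloat()
}

// Rotate the 3D arrow so it keeps pointing along the route bearing,
// regardless of how the wrist is currently oriented.
fun arrowRotationDeg(routeBearingDeg: Float, deviceAzimuthDeg: Float): Float =
    ((routeBearingDeg - deviceAzimuthDeg) % 360f + 360f) % 360f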
When combined with Pop-up Visuals on the WatchThru screen,
the main screen could display contextual information, e.g.,
about incoming calls. During a video call displayed
on the WatchThru screen, the main screen could show
the text chat window for sharing links and media.
Figure 2: Second Perspective navigation with a 2D map (left)
and directional arrows (right).
The second perspective could also be used to display independent
information. In the mobile context, the user often has to
perform multiple tasks at the same time. For example, with
the second perspective, the WatchThru screen could display
navigational directions while the main screen is used for
notifications. In addition, the transparent screen has the advantage
that it enables combining additional information on the
WatchThru screen with the real world. This means that,
e.g., the arrow shown on the WatchThru display is aligned
with the real-world target.
Peek-through: Optical See-through Wrist-worn AR
The see-through display makes it possible to augment objects
by overlaying registered graphics using the device. We call
the interaction Peek-through. With peek-through, a navigation
arrow on the WatchThru screen could be fully integrated in
3D into the environment. In contrast to most AR displays, this
device does not obstruct the user's face, nor does it require the
user to bring it out and hold it (like a smartphone). It therefore has
interesting potential as a wearable, unobtrusive and always-accessible
AR device on the user's wrist.
For peek-through, we implemented several prototype applications
for different scenarios. Figure 3 (c, d) shows a smart
home scenario where objects in a room, e.g., power sockets,
are augmented with information shown on the WatchThru
screen. Figure 3 (e) shows a learning scenario in which a virtual globe
is presented in the middle of a room. It could allow school
children to explore the earth, including the orbiting moon,
from different perspectives. Additionally, WatchThru could
help children and adults learn board games, such as chess.
With peek-through, the board could be overlaid with game
strategies, not to cheat, but to help the player grow.
Figure 3 (f) shows an assembly scenario in which WatchThru shows the
user how to connect electronic components. Similarly, WatchThru
could help technicians with maintenance, e.g., to repair
elevators. With a foldable display (see the section on
Limitations), WatchThru could be a lightweight and unobtrusive add-on to
the (smart)watches that technicians usually wear, and would not
require an additional device. Additionally, similar to the Sonic
Flashlight [19, 20], WatchThru could be used in a medical
context to overlay the patient's body with information such
as ultrasound imagery. Furthermore, we see WatchThru having
potential as an entertainment device, which could be
used for location-based AR games, such as the recently
popular Pokémon GO [13].
IMPLEMENTATION OF PROTOTYPES
We developed two interactive prototypes by extending commercial
smartwatches with additional display and tracking
capabilities. The first was designed to support Pop-up Visuals
and Second Perspective with minimal modification, while the
other allowed us to explore Peek-through wrist-worn AR
experiences with 3D position and orientation tracking.
Prototype 1: Pop-up Visuals and Second Perspective
For the first WatchThru prototype screen, we used a special
acrylic glass (3 mm thick) with one reflective side and
one transparent side, developed to enable projections on
see-through materials (http://www.holocube.eu). The screen,
including a mount, was fabricated with a laser cutter and bent
into the right configuration while hot (∼135 °C). Given that
only one side of the acrylic glass is laminated with the reflective
layer, the image quality is barely affected when viewing
through the WatchThru screen.
We built several versions for different Android/Android Wear
smartwatches, e.g., the LG G Watch and the Samsung Gear
Live (Figure 1, bottom). Modifying the design for other smartwatches
usually only requires scaling the CAD drawings so
that the WatchThru screen has the same size as the main screen
of the smartwatch and, if necessary, bending the mounting part
so that it fits the back of the smartwatch.
For correct display on the WatchThru screen, graphics need
to be mirrored on the main screen. For the second perspective,
we used the built-in inertial sensors to infer whether the user
is looking at the WatchThru screen or at the main screen. In
our proof-of-concept implementation, we activate WatchThru
content when the smartwatch is held parallel to the ground
and display main screen content otherwise. Using the built-in
digital compass, we display orientation-sensitive information
on the second screen.
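
A minimal Kotlin sketch of this switching logic follows, assuming a standard Android Wear SensorManager setup. The paper does not include source code, so the class, the mirroring helper, and the use of the 12.5° angle (reported later, in the Preliminary User Feedback section) are illustrative assumptions.

import android.graphics.Canvas
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.acos
import kotlin.math.sqrt

// Decide which content to render based on wrist tilt: WatchThru content
// when the watch is held roughly parallel to the ground, main-screen
// content otherwise. Class name and threshold are ours, not the paper's.
class ViewSwitcher(private val sensorManager: SensorManager) : SensorEventListener {

    var showWatchThruContent = false
        private set

    fun start() {
        val gravity = sensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY)
        sensorManager.registerListener(this, gravity, SensorManager.SENSOR_DELAY_UI)
    }

    override fun onSensorChanged(event: SensorEvent) {
        val (x, y, z) = event.values
        val g = sqrt(x * x + y * y + z * z)
        // 0° when the watch face points straight up (parallel to the ground).
        val tiltDeg = Math.toDegrees(acos((z / g).toDouble()))
        showWatchThruContent = tiltDeg < 12.5
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}

// Content reflected by the combiner must be pre-mirrored on the main screen;
// which axis to flip depends on how the combiner is mounted.
fun drawMirrored(canvas: Canvas, content: (Canvas) -> Unit) {
    canvas.save()
    canvas.scale(1f, -1f, canvas.width / 2f, canvas.height / 2f)
    content(canvas)
    canvas.restore()
}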
Prototype 2: Peek-through AR
While current smartwatches are equipped with a great variety
of motion sensors, they are not yet sophisticated enough to
track absolute position. Therefore, to explore the peek-through
interactions, we developed a second prototype that was tracked
with an external tracking system.
We used a 12-camera OptiTrack system by NaturalPoint to
accurately track the position and orientation of the smartwatch
in our lab using infrared illumination. Five retro-reflective
markers were attached to the smartwatch and to the user's head
(using a cap with attached markers) to track their pose in the
room (Figure 3 a, b). The tracking system continuously broadcasts
UDP packets with the position and orientation of the markers at
200 Hz. Packets are sent directly to the smartwatch via WiFi to
reduce latency and processed by a Unity3D application running
on the device (Simvalley Mobile AW-414 Go). The field of view
(FOV) is calculated and dynamically updated using the distance
between the user's eye(s) and the watch.
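
The actual prototype runs this logic inside Unity3D, whose code the paper does not reproduce; the Kotlin sketch below only illustrates the data path under stated assumptions: read a pose packet off the socket and update the field of view from the eye-to-watch distance. The packet layout, port number and screen size are all hypothetical.

import java.net.DatagramPacket
import java.net.DatagramSocket
import java.nio.ByteBuffer
import java.nio.ByteOrder
import kotlin.math.atan
import kotlin.math.sqrt

data class Vec3(val x: Float, val y: Float, val z: Float)

fun ByteBuffer.readVec3() = Vec3(float, float, float)

fun distance(a: Vec3, b: Vec3): Float {
    val dx = a.x - b.x; val dy = a.y - b.y; val dz = a.z - b.z
    return sqrt(dx * dx + dy * dy + dz * dz)
}

// Vertical angle that the virtual image subtends at the eye; the renderer's
// FOV is set to this value on every packet. screenHeightM is a hypothetical
// physical image height in meters.
fun fovDegrees(eye: Vec3, watch: Vec3, screenHeightM: Float = 0.03f): Double =
    Math.toDegrees(2.0 * atan((screenHeightM / 2f / distance(eye, watch)).toDouble()))

fun main() {
    val socket = DatagramSocket(5005)   // port is an assumption
    val buf = ByteArray(24)             // head xyz + watch xyz as 6 floats
    while (true) {
        socket.receive(DatagramPacket(buf, buf.size))
        val bb = ByteBuffer.wrap(buf).order(ByteOrder.LITTLE_ENDIAN)
        val head = bb.readVec3()
        val watch = bb.readVec3()
        println("FOV: %.1f deg".format(fovDegrees(head, watch)))
    }
}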
Figure 3: The Peek-through prototype uses external cameras
to track retroreflective markers on the (a) device and (b) user,
which enabled the several implemented AR scenarios (c–f):
(c) a wall power socket (d) with registered visuals, (e) a virtual
globe, and (f) electronics assembly.
Preliminary User Feedback
While developing the prototypes, we continuously explored the
WatchThru interactions in informal user tests with computer
science students (representing early adopters) not involved in
the project. For the second perspective interaction, the view-switching
angle of 12.5° was determined in an experiment
where participants experienced different angles of 45°, 22.5°,
and 12.5°. The results indicate that when looking at the WatchThru
screen, participants hold the device parallel to the ground
to see through the screen, such that switching to the first perspective
at a flat angle is best suited. In general, participants
commented that the WatchThru screen works well and is easy
to recognize, except under strong illumination. However, some
of them criticized that the main screen can be distracting when
looking at the WatchThru screen.
LIMITATIONS AND FUTURE WORK
World-registered graphics
For AR experiences, absolute and accurate 6DOF tracking is
required. While techniques like PTAM [8], KinectFusion [7],
and Google's Project Tango [5] could be used, they currently
require more computational resources and power than are
available in wrist-worn form factors. As these techniques
have moved from desktop workstations to tablets and
smartphones, we expect future availability in smaller devices.
Figure 4: For future WatchThru versions, we envision a transparent
screen that can be folded out when needed.
Tracking user’s perspective
Given that the display is not co-located with the user's eyes,
any AR experience needs to account for the user's viewpoint.
This makes WatchThru more challenging to calibrate than
video see-through systems, where camera, tracking and display
are combined. For peek-through AR, tracking of the device and
the user's head is not yet solved in a practical way. In addition
to perspective tracking with on-device sensing, tracking the
user's head to calculate the field of view could be realized with
front-facing cameras, similar to Amazon's Fire Phone [1].
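
Viewpoint-dependent rendering of this kind is typically realized with an off-axis (asymmetric) frustum computed from the tracked eye position relative to the screen plane. The sketch below shows the standard construction used for head-coupled displays; it is not the paper's implementation, and all names and units are our own.

// Off-axis frustum for a tracked eye, in the screen's local frame:
// x right, y up, screen plane at z = 0, eye at z = eyeZ > 0, screen
// extents ±halfW and ±halfH in meters.
data class Frustum(val left: Float, val right: Float,
                   val bottom: Float, val top: Float,
                   val near: Float, val far: Float)

fun offAxisFrustum(eyeX: Float, eyeY: Float, eyeZ: Float,
                   halfW: Float, halfH: Float,
                   near: Float = 0.01f, far: Float = 100f): Frustum {
    val s = near / eyeZ  // project the screen edges onto the near plane
    return Frustum(
        left = (-halfW - eyeX) * s,
        right = (halfW - eyeX) * s,
        bottom = (-halfH - eyeY) * s,
        top = (halfH - eyeY) * s,
        near = near, far = far
    )
}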
Form factor
We chose the current transparent screen as a first step to implement
the WatchThru concept, one that can also be easily
replicated by other researchers to explore the concept further.
We are already working on iterations of the technical prototype
to address some shortcomings. The current prototype requires
the secondary screen to extend from the device at a 45° angle.
While other display configurations are possible, e.g., using
holographic optical elements, transparent LCDs or transparent
OLEDs, display components that extend out may still be
impractical or undesirable. As an alternative, we could
also envision an extendable screen with a mechanical folding
mechanism (see Figure 4) such that the WatchThru screen
could be pulled out (manually or automatically) when needed.
A thin touch-friendly WatchThru screen, or increased projected-capacitance
sensitivity for touch through the WatchThru
screen, could allow the user to interact with the smartwatch in
a traditional way when the WatchThru screen is not expanded.
Dual content—one display source
Our current implementations are based on a passive optical
combiner, which has the advantages of simplicity, ease of
fabrication, low cost, optical see-through qualities and ease
of integration. It works under the assumption that the user
focuses on either the main or the WatchThru screen, and that
the device can adequately detect the user's focus to switch
contents on the screen in a sufficiently responsive manner.
Our prototype showed the feasibility of this idea, but also the
time-sharing limitations when both displays are visible. Future
implementations could address this with active translucent
secondary displays, which would trade some simplicity for the
ability to display simultaneous content on both screens.
Interaction requires both hands
While smartwatches do not occupy the hands when not in use,
interacting with them requires both hands, in contrast to most
current head-worn displays (e.g., Google Glass).
Usage in complex lighting conditions
As with other optical see-through displays, the display will
have limited capability to function under strong illumination.
This could be addressed with wavelength-selective and
directionally selective materials, such as holographic diffusers
and optical elements that only reflect light from a specific
wavelength and incident angle, similar to ASTOR [14].
Focal planes
The semi-transparent display creates a virtual image rotated
90° from the wrist. This makes it challenging to fuse overlaid
graphics with distant real-world objects. Additional optics
could allow placing the virtual image at infinity, for objects
that are further than a few meters away. We are also interested
in close-up use cases as well as AR overlays onto paper
documents and handheld equipment and tools.
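
As a brief illustration of the optics argument (our addition, using the standard thin-lens relation rather than anything from the paper), placing the display at the focal plane of a collimating lens pushes the virtual image toward infinity:

% Thin-lens relation; as the display approaches the focal plane (d_o -> f),
% the image distance diverges, i.e., the virtual image appears collimated
% and fuses more easily with distant real-world objects.
\[
  \frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}
  \qquad\Longrightarrow\qquad
  d_o \to f \;\Rightarrow\; |d_i| \to \infty
\]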
User input techniques
The goal of WatchThru is to expand smartwatch displays, with
a focus on output. Therefore, in the presented use cases we used
common techniques (e.g., touch and peepholes). To enable
interaction with the WatchThru screen, we could use IR-based
edge-lit touch sensing (e.g., ZeroTouch [11]) or add a touch
layer on top of the reflective layer (with added complexity).
In the future, we will also explore input alternatives such as
mid-air and around-device interaction.
CONCLUSIONS
We introduced WatchThru, a novel interactive method for
wrist-worn display using smartwatches. To address the small
display of smartwatches and the limited interaction space on
the device, WatchThru extends the visualization space into the
third dimension, beyond the flat watch screen, and extends the
range of interaction possibilities. We have shown how such a
transparent WatchThru screen could allow for Pop-up Visuals
for ambient notifications, Second Perspectives that extend the
display configuration, and Peek-through AR scenarios.
Pop-up Visuals extend the watch's passive character toward an
ambient notification device. We plan deeper explorations
into additional interaction techniques, application scenarios,
and user-dependent factors that are relevant for the usability
and user experience of WatchThru. In Second Perspective
mode, turning the wrist switches between the displays,
leveraging tilt-based interaction with smartwatches
to complement gestures, buttons and on-screen widgets. With
Peek-through AR, we show that WatchThru has the potential
to act as an always-available, wrist-worn AR device.
We are currently working on a self-contained version to allow
peek-through interactions in non-instrumented environments,
as well as prototypes that allow different viewports. Thus, our
minimal addition to a smartwatch opens up a whole spectrum
of new interactive capabilities for wearable devices.
ACKNOWLEDGMENTS
This work is partially funded by the Volkswagen Foundation
through a Lichtenberg professorship and by a Google Faculty
Research Award.
REFERENCES
1. Amazon. 2017. Fire Phone. (3 January 2017). Retrieved
January 3, 2017 from https://amazon.com/Fire-Phone/.
2. Eric A. Bier, Maureen C. Stone, Ken Pier, William
Buxton, and Tony D. DeRose. 1993. Toolglass and Magic
Lenses: The See-through Interface. In Proceedings of the
20th Annual Conference on Computer Graphics and
Interactive Techniques (SIGGRAPH ’93). ACM, New
York, NY, USA, 73–80. DOI:
http://dx.doi.org/10.1145/166117.166126
3. Jesse Burstyn, Paul Strohmeier, and Roel Vertegaal. 2015.
DisplaySkin: Exploring Pose-Aware Displays on a
Flexible Electrophoretic Wristband. In Proceedings of the
9th International Conference on Tangible, Embedded,
and Embodied Interaction (TEI ’15). ACM, New York,
NY, USA, 165–172. DOI:
http://dx.doi.org/10.1145/2677199.2680596
4. George W. Fitzmaurice. 1993. Situated Information
Spaces and Spatially Aware Palmtop Computers.
Commun. ACM 36, 7 (July 1993), 39–49. DOI:
http://dx.doi.org/10.1145/159544.159566
5. Google. 2017. Tango. (3 January 2017). Retrieved
January 3, 2017 from https://get.google.com/tango/.
6. Jens Grubert, Matthias Heinisch, Aaron Quigley, and
Dieter Schmalstieg. 2015. MultiFi: Multi Fidelity
Interaction with Displays On and Around the Body. In
Proceedings of the 33rd Annual ACM Conference on
Human Factors in Computing Systems (CHI ’15). ACM,
New York, NY, USA, 3933–3942. DOI:
http://dx.doi.org/10.1145/2702123.2702331
7. Shahram Izadi, David Kim, Otmar Hilliges, David
Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie
Shotton, Steve Hodges, Dustin Freeman, Andrew
Davison, and Andrew Fitzgibbon. 2011. KinectFusion:
Real-time 3D Reconstruction and Interaction Using a
Moving Depth Camera. In Proceedings of the 24th
Annual ACM Symposium on User Interface Software and
Technology (UIST ’11). ACM, New York, NY, USA,
559–568. DOI:
http://dx.doi.org/10.1145/2047196.2047270
8. Georg Klein and David Murray. 2007. Parallel Tracking
and Mapping for Small AR Workspaces. In Proceedings
of the 6th IEEE/ACM International Symposium on Mixed
and Augmented Reality (ISMAR ’07). IEEE, Washington,
DC, USA, 225–234. DOI:
http://dx.doi.org/10.1109/ISMAR.2007.4538852
9. Gierad Laput, Robert Xiao, Xiang ’Anthony’ Chen,
Scott E. Hudson, and Chris Harrison. 2014. Skin Buttons:
Cheap, Small, Low-powered and Clickable Fixed-icon
Laser Projectors. In Proceedings of the 27th Annual ACM
Symposium on User Interface Software and Technology
(UIST ’14). ACM, New York, NY, USA, 389–394. DOI:
http://dx.doi.org/10.1145/2642918.2647356
10. Kent Lyons, David Nguyen, Daniel Ashbrook, and Sean
White. 2012. Facet: A Multi-segment Wrist Worn System.
In Proceedings of the 25th Annual ACM Symposium on
User Interface Software and Technology (UIST ’12).
ACM, New York, NY, USA, 123–130. DOI:
http://dx.doi.org/10.1145/2380116.2380134
11. Jon Moeller and Andruid Kerne. 2012. ZeroTouch: An
Optical Multi-touch and Free-air Interaction Architecture.
In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI ’12). ACM, New
York, NY, USA, 2165–2174. DOI:
http://dx.doi.org/10.1145/2207676.2208368
12. Alessandro Mulloni, Hartmut Seichter, and Dieter
Schmalstieg. 2011. Handheld Augmented Reality Indoor
Navigation with Activity-based Instructions. In
Proceedings of the 13th International Conference on
Human Computer Interaction with Mobile Devices and
Services (MobileHCI ’11). ACM, New York, NY, USA,
211–220. DOI:
http://dx.doi.org/10.1145/2037373.2037406
13. Niantic. 2017. Pokémon GO. (3 January 2017). Retrieved
January 3, 2017 from https://pokemongo.com/.
14. Alex Olwal, Christoffer Lindfors, Jonny Gustafsson,
Torsten Kjellberg, and Lars Mattsson. 2005. ASTOR: An
Autostereoscopic Optical See-through Augmented
Reality System. In Proceedings of the 4th IEEE/ACM
International Symposium on Mixed and Augmented
Reality (ISMAR ’05). IEEE, Washington, DC, USA,
24–27. DOI:http://dx.doi.org/10.1109/ISMAR.2005.15
15. Henning Pohl, Justyna Medrek, and Michael Rohs. 2016.
ScatterWatch: Subtle Notifications via Indirect
Illumination Scattered in the Skin. In Proceedings of the
18th International Conference on Human-Computer
Interaction with Mobile Devices and Services
(MobileHCI ’16). ACM, New York, NY, USA, 7–16.
DOI:http://dx.doi.org/10.1145/2935334.2935351
16. Jun Rekimoto and Katashi Nagao. 1995. The World
Through the Computer: Computer Augmented Interaction
with Real World Environments. In Proceedings of the 8th
Annual ACM Symposium on User Interface and Software
Technology (UIST ’95). ACM, New York, NY, USA,
29–36. DOI:http://dx.doi.org/10.1145/215585.215639
17. Dieter Schmalstieg and Daniel Wagner. 2007.
Experiences with Handheld Augmented Reality. In
Proceedings of the 6th IEEE/ACM International
Symposium on Mixed and Augmented Reality (ISMAR
’07). IEEE, Washington, DC, USA, 3–18. DOI:
http://dx.doi.org/10.1109/ISMAR.2007.4538819
18. Teddy Seyed, Xing-Dong Yang, and Daniel Vogel. 2016.
Doppio: A Reconfigurable Dual-Face Smartwatch for
Tangible Interaction. In Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems
(CHI ’16). ACM, New York, NY, USA, 4675–4686.
DOI:
http://dx.doi.org/10.1145/2858036.2858256
19. Damion Shelton, George Stetten, and Wilson Chang.
2002. Ultrasound Visualization with the Sonic Flashlight.
In ACM SIGGRAPH 2002 Conference Abstracts and
Applications (SIGGRAPH ’02). ACM, New York, NY,
USA, 82–82. DOI:
http://dx.doi.org/10.1145/1242073.1242117
20. Georg Stetten, Vikram Chib, Daniel Hildebrand, and
Jeannette Bursee. 2001. Real Time Tomographic
Reflection: Phantoms for Calibration and Biopsy. In
Proceedings of the IEEE/ACM International Symposium
on Augmented Reality (ISAR ’01). 11–19. DOI:
http://dx.doi.org/10.1109/ISAR.2001.970511
21. Ka-Ping Yee. 2003. Peephole Displays: Pen Interaction
on Spatially Aware Handheld Computers. In Proceedings
of the SIGCHI Conference on Human Factors in
Computing Systems (CHI ’03). ACM, New York, NY,
USA, 1–8. DOI:
http://dx.doi.org/10.1145/642611.642613