FlexCase: Enhancing Mobile Interaction with a
Flexible Sensing and Display Cover
Christian Rendl1, David Kim2, Patrick Parzer1, Sean Fanello2, Martin Zirkl3, Gregor Scheipl3,
Michael Haller1, Shahram Izadi2
1Media Interaction Lab, University of Applied Sciences Upper Austria
2Microsoft Research, Redmond, WA, USA
3Institute of Surface Technologies and Photonics, Joanneum Research
Figure 1. FlexCase is a novel flip cover for smartphones, which combines a flexible sensor and e-paper display to enhance interaction with existing
mobile phones. The cover can be used in a variety of configurations and it augments phone’s existing input and output capabilities with a variety of new
interaction techniques, such as bimanual and back-of-device interaction or the incorporation of different grips for mode-switching.
ABSTRACT
FlexCase is a novel flip cover for smartphones, which brings
flexible input and output capabilities to existing mobile phones.
It combines an e-paper display with a pressure- and bend-
sensitive input sensor to augment the capabilities of a phone.
Due to the form factor, FlexCase can be easily transformed
into several different configurations, each with different inter-
action possibilities. We can use FlexCase to perform a variety
of touch, pressure, grip and bend gestures in a natural manner,
much like interacting with a sheet of paper. The secondary
e-paper display can act as a mechanism for providing user
feedback and persisting content from the main display. In
this paper, we explore the rich design space of FlexCase and
present a number of different interaction techniques. Beyond this,
we highlight how touch and flex sensing can be combined
to support a novel type of gestures, which we call Grip &
Bend. We also describe the underlying technology and ges-
ture sensing algorithms. Numerous applications apply the
interaction techniques in real-world examples, including en-
hanced e-paper reading and interaction, a new copy-and-paste
metaphor, high degree of freedom 3D and 2D manipulation,
and the ability to transfer content and support input between
displays in a natural and flexible manner.
Author Keywords
Flexible, input, output, sensor, deformation, grip detection
ACM Classification Keywords
H.5.2. [Information interfaces and presentation]: User Inter-
faces: Input Devices and Strategies, Interaction Styles.
INTRODUCTION
The smartphone has revolutionized computing, making inter-
acting with digital information as simple as reaching into your
pocket. Although the hardware continually improves, many of
the interactions are still reduced to touching and sliding our
fingers on a rigid piece of glass.
To remove these bounds on 2D input on the front of the display
and to avoid occluding (often valuable) pixels on screen, re-
searchers have explored interaction on the back [6, 53], above
[29], or around the device [9]. Further, researchers have looked
at using flexible sensors [41] or even the combination of flexi-
ble display and input [15, 19, 22, 27, 28, 47, 48] to bring more
paper-like physical interactions to mobile devices.
Another trend extends the output space of smartphones by us-
ing electronic paper (e-paper) displays in conjunction with the
phone’s regular screen [5, 35, 38, 55]. These e-paper displays
are very power-efficient as they only need to be powered while
the screen content is being updated, and exhibit a paper-like
reading experience, which can reduce eye strain.
However, none of the aforementioned systems support rich
flexible and gestural interactions between the two screens or
make use of the combined extended screen space.
In this paper, we present FlexCase, a flexible interactive flip
cover that augments the input and output capabilities of smart-
phones with novel paper-like and gestural interactions. As
shown in Figure 1, a cover housing both our novel sensor and
e-paper display is attached to the phone without compromising
the form factor. We utilize a custom-made flexible touch, pres-
sure and bend sensor, combined with a flexible e-paper display
that turns the phone cover into a versatile input and output
device. The cover provides interactive mechanisms to move
content between displays. It allows the secondary display to
be used as a peripheral surface for input and output. It also
enables rich physical paper-like interactions with the cover
using touch, pressure and various flex gestures. The ability
to simultaneously detect touch and complex deformation of
the surface allows for gestures, which we call Grip & Bend,
that provide a variety of novel paper-like interactions. We
explore the design space for FlexCase in detail, demonstrating
a variety of new applications.
While we consider the novel form factor and the resulting
interaction techniques as the main contributions of this paper,
we also present technical novelty with a sensor that combines
multiple modes of input. The flexible input sensor is built on
a touch and pressure-sensitive printed piezoelectric film [40,
56], which recently demonstrated its ability to sense complex
deformations [41]. In contrast to this prior work that utilizes
a sparse arrangement of sensors around the periphery, we
use a matrix-like sensor layout to sense touch, pressure, and
complex deformations with only a single sensor. Summarizing,
the main contributions of this paper are:
- A concept of a novel flip cover form factor that augments existing mobile interaction with flexible output and input, including touch and bend input.
- An exploration of the design space for our system with its reconfigurable multi-display and multi-input form factor.
- Novel interaction techniques, which arise from the combination of touch, pressure, grip, and bend sensing.
- A custom input sensor, which enables bi-directional bending and touch on both sides, and a new machine-learning algorithm for leveraging grip sensing for richer bend input.
- A set of application scenarios demonstrating novel interaction possibilities with our proposed design.
We will first discuss the design rationale for the FlexCase
form factor and its resulting design space. We will then in-
troduce the hardware architecture, the sensing principles and
the algorithms of our system. Further, we will present a com-
pelling set of novel application scenarios arising from our
proposed design and its input modalities. Finally, we will
conclude with a discussion, limitations and future work.
RELATED WORK
Our work relates to many areas of HCI research, including
work on flexible displays and mobile interaction. In this sec-
tion, we will introduce the most important work for each field,
starting with a review of devices combining traditional touch-
screens and e-paper displays. Then we discuss work dealing
with flexible display interaction and interactions with multiple
displays. Finally, we discuss possibilities of extending input
around and behind the device.
Combination of Touchscreen & E-Paper Display
Nook [5] was one of the first commercial e-readers to combine
an electronic paper display with an additional smaller LCD
touchscreen. The secondary display was mainly
used for more dynamic interactions such as animated menus.
More recent products, like Yotaphone [55], InkCase [35], or
PopSlate [38], use e-paper displays as a secondary, more
power-efficient ambient display. Although these products
show compelling scenarios, the secondary display is not used
as an additional input device nor as direct extension of the
main display, thus both screens are mostly used in isolation.
In contrast, Dementyev et al. [14] showed wirelessly powered
display tags, which can serve as mobile phone companion
displays. However, they were not attached to a smartphone.
Flexible Display Interaction
Bendable interfaces, such as ShapeTape [13], Gummi [42],
PaperPhone [28], or MorePhone [18] demonstrate many novel
interaction scenarios based on mobile devices with flexible
sensing or flexible input and output capabilities.
ShapeTape is a long, thin tape subdivided into a series of
fiber-optic bend sensors, each detecting bend and twist.
Balakrishnan et al. [3] demonstrated modeling 3D curves
with this technology. PaperPhone [28] was one of the first
prototypes that coupled flexible input and output into one
self-contained device. PaperTab [49] is a self-contained elec-
tronic reader with a flexible touchscreen and two bi-directional
bend sensors for input. In contrast, the Kinetic Phone [27]
demonstrates a full-color bendable phone, providing both gen-
tle twist and bend gestures. In a follow-up study Ahmaniemi
et al. [1] demonstrated the utility of bidirectional continuous
bend gestures. More recently, FlexView [8] augmented pre-
dominant touch input on flexible devices with a set of bending
interaction techniques for navigation in 3D (along the Z-axis).
Researchers have also used external devices, generally in the
form of cameras and projectors, to prototype flexible interactions.
Examples include high-fidelity input [47], foldable [25, 30],
and rollable displays [26]. These, however, are limited to a
non-self-contained form factor, requiring heavyweight input
and output equipment to be placed in the environment.
Multi-Display Interaction
Both Hinckley et al. [20] and Chen et al. [10] have shown
multi-display interaction in the context of handheld tablet
devices. However, neither of them utilized bending as an additional
input metaphor. Recently, Duet [11] explored new interaction
possibilities on mobile phones by coupling a mobile phone
with a smartwatch. The watch acts as an active element in the
interaction, where its orientation and acceleration are used to
control the interaction on the mobile phone.
Several researchers explored paper-like interaction with flex-
ible interactive displays. Holman et al. [22] presented a
desktop-windowing environment, which projects desktop win-
dows on flexible sheets of paper. By tracking their motions,
simple paper sheets become fully interactive. DisplayStacks
[16] allows the physical stacking of e-paper displays. These
displays are tracked relative to each other, enabling cross-
device interaction for moving content or displaying contex-
tual information. More recently, PaperFold [19] presents a
configurable multi-device concept based on e-paper displays.
The detachable displays can be arranged into a wide variety of
different device configurations. Although PaperFold shows
multi-display interaction for e-paper based mobile devices,
our work explores the use of novel flexible input and output
combined with a regular phone.
Input Around and Behind the Device
One common problem is that interactions on handheld devices
are limited due to the small form factor. Researchers have sys-
tematically explored the space around and behind the device
for further input possibilities.
SideSight [9], for example, uses optical sensors embedded
into the side of a device to track multi-touch around the device.
Besides interactions around the periphery of the device, our
flip cover form factor also enables input behind the device. Lu-
cidTouch [53] introduced the idea of a tablet-sized see-through
display, where some interactions are performed at the back of
the device. A user study showed that many users prefer this
technique over touching the front due to reduced occlusion and
higher precision. Baudisch and Chu took this concept further
and explored the possibility of back-of-device interaction on
very small displays [6]. More recently, Löchtefeld et al. [32]
explored the combination of front- and back-of-device interaction
with one hand. An evaluation showed that although it is
slower compared to traditional front-of-device input, it allows
for accurate input. Finally, the work in [46] explores the 3D space
behind the smartphone, leveraging the existing camera. Unlike prior
work, FlexCase allows for continuous flexible input behind
the display, in combination with touch. This adds tangible and
physical qualities to these types of back-of-device interactions.
Utilizing Grips and Bimanual Input on Mobile Devices
Another interesting approach for extending input is to utilize
grips, e.g. for implicit mode switching. This idea was shown
for mobile phones [12, 17, 54], multi-touch pens [45] and
mice [51]. Recently, Ansara and Girouard [2] proposed the
idea of overloading bend gestures with different modes by
pressing pressure-sensitive keys with the non-dominant hand.
In contrast, FlexCase extracts grip information from how a
user is touching the device before bending and utilizes this
information to assign different meanings to bend gestures.
Prior work has also shown that desktop-based bimanual in-
teraction techniques increase both performance and accuracy
[4, 24]. More recently, this idea was also applied to handheld
devices [34, 52] and flexible devices [8]. Wagner et al. [52]
used touches of the thumb or fingers of the grasping hand
for implicit mode-switching while Burstyn et al. [8] used the
thumb of the grasping hand for scrolling on flexible devices.
FlexCase tries to take the idea of bimanual interaction fur-
ther, and combines touch interaction with the one hand with
continuous bend input with the other hand.
DESIGN RATIONALE
As highlighted, our work overlaps with many areas of interac-
tion research, and this is because of the richness, flexibility,
and diverse capabilities that our flip cover configuration af-
fords. In this section, we give insights on the design process
leading to this kind of form factor.
There is clearly compelling work in the area of flexible sensors
and displays, with a large focus on mobile interaction. The
Figure 2. The FlexCase form factor. The usage of a flip cover-like form
factor allows for different configurations with only one additional device.
majority of this work has, however, looked at replacing the
smartphone with a flexible display or leveraging an e-paper dis-
play as a peripheral display with limited interaction. Our work
preserves the interactive qualities of existing smartphones and
augments their capabilities with additional input and output,
in the form of a highly interactive, bendable flip cover.
The following high level design goals underpinned the devel-
opment of FlexCase:
- Compactness: Preserve the compact form factor of current mobile devices while enhancing interaction.
- Versatility: Allow the device to adapt to various usage scenarios through simple adjustment of the form factor.
- Expressiveness: Enable richer input through touch and other direct physical manipulations.
- Mobility and Portability: The flip cover should be an unobtrusive companion of the smartphone, which should not significantly decrease battery life or increase weight.
In order to fulfill the specified design goals, we evaluated a
number of different form factors. We experimented with fixed
displays on the backside of the smartphone (as other e-paper
products do) as well as displays that were attachable to the
side of the smartphone. Detachability is an interesting concept
since it increases the ability to reuse the flexible display for
different smartphones and it allows even more different form
factors. However, early user tests revealed that this also raises
a number of issues, particularly mounting and subsequent
stability issues. We therefore chose to focus on integrating the
flexible display into an existing phone accessory rather than
requiring two separate devices. By integrating an additional
flexible device into a flip cover (see Figure 2), various physical
configurations become possible through folding, enabling the
smart cover to be used as a versatile input and output device
for existing mobile phones (cf. Compactness & Versatility).
By further embedding a thin-film sensor into the device, we
can enable flexible and continuous input for existing phones.
In choosing our configuration, we developed a new iteration
of a printed piezoelectric sensor film that combines shape
deformation [41] with touch and pressure [40, 56]. A major
advantage of this technology based on PyzoFlex sensors [23]
is that we are able to achieve both touch [40] and complex
bend sensing [41] with one single sensor (cf. Expressiveness).
In contrast to multiple sensor layers, the advantage of a single
layer is that it greatly reduces the rigidity of the layer stack
and therefore improves flexibility for bend interaction.
Figure 3. The final layout was evaluated with early mockups. For er-
gonomic reasons, given the hand position and hand mobility, we decided
to arrange the display at the top left side of the flip cover (Right).
For output, we decided to focus on slower-framerate but ubiquitous
e-paper displays rather than emerging OLED displays.
Whilst work on flexible OLEDs is incredibly promising, the
technology is still not broadly available due to ongoing research on improv-
ing durability and robustness [31, 36]. Furthermore, e-paper
displays have already begun to appear on mainstream phone
products, have demonstrated long lifetime and power effi-
ciency, and also provide an interesting challenge of combining
higher framerate input with slower output. For instance, the
main phone display has the advantage of being able to render
high-fidelity colors with a high framerate, but the secondary
e-paper display offers good readability while preserving power
and reducing eye strain (cf. Mobility and Portability). Instead
of merely using the e-paper display for displaying ‘offline’
and ambient content, the flip cover form factor gives us the
possibility to actively use the second screen in conjunction
with the main display of the phone for interactive scenarios to
combine the best of the two worlds. For example, the phone
screen can be used to give instant visual feedback for contin-
uous flex interactions, while the flexible e-paper output can
actively guide and control the users’ interactions.
Flexible displays suffer from inflexible parts, such as the lam-
ination with the e-paper display’s rigid driving electronics,
which usually requires one edge to be rigid. In our case, it
is the short display edge, as this affected the flexibility least.
Furthermore, we decided to detach the long edge from the
flip cover’s bond to additionally increase flexibility and to
allow rich deformations (otherwise only one corner would be
available for flex interactions). The final form factor including
some design considerations can be found in Figure 3.
DESIGN SPACE
In this section, we explore the different physical configurations
and interactive modalities the flip cover form factor affords.
Configurations
FlexCase enables the following configurations (cf. Figure 4):
Book. This configuration provides extended screen space, al-
lowing specific content to be offloaded to the second screen.
Since the e-paper display has a lower resolution and a lower
refresh rate, it makes sense to offload static content to the
secondary display, providing instant access to otherwise hid-
den menu items and reducing clutter on the main screen. The
secondary display can be used for displaying persistent infor-
mation without battery drain but can also act as a high-fidelity
input device for continuous interaction.
Laptop. By rotating the display into a landscape configuration,
the interactive display can be seen as a keyboard-like input
Figure 4. The design space derives from the combination of various dif-
ferent physical configurations and interactive modalities. The e-paper
display can be used next to (Book), below (Laptop), behind (Backside),
and above the smartphone (Closed).
Figure 5. From an interaction point of view, the device allows for touch
and pressure interaction, swipe gestures, recognition of grips, and high
fidelity bending.
device. For specific input, we can also turn the secondary
display into a versatile direct input device with dynamically
changeable visual feedback.
Backside. By flipping the cover behind the main screen, we
support a configuration that allows for tap and slide gestures
behind the mobile phone’s screen. Beyond this, the interactive
display enables continuous bend input for the main display.
The affordance of having a flexible, interactive display behind
the device helps to reduce existing problems of today’s mobile
phones (e.g. fat-finger problem [44]) but also gives the pos-
sibility to interact with, for example, zoomable or browsable
user interfaces in a more natural way.
Closed. When closed, the backside of the flip cover still pro-
vides interaction possibilities. From an interaction point of
view, this configuration can be used for very explicit interac-
tions, e.g. copying content between two screens or putting the
phone into silent mode.
Input Modalities
The sensor foil within the FlexCase flip cover can detect
pressure and touch location, allowing simple tapping, sliding
or rubbing gestures, as well as detecting how the device is
being gripped. This is combined with the ability to reconstruct
continuous bending of the surface (cf. Figure 5).
Touch & Pressure. Users can perform taps on the interactive
display. Furthermore, the piezoelectric input sensor gives
the possibility to detect the strength of a certain tap, both
continuously or for distinguishing different pressure levels.
Beyond this, the input sensor can detect whether the user is
tapping on the display or on the back of the display. Users can
also perform basic swipe gestures in four directions.
Gripping. Given the matrix-like layout of the input sensor,
the sensor can detect where, how, and how strongly a user is
gripping the device. This becomes an important modality for
input and is described later.
Bending. Users are able to perform complex deformations
in both directions. The strength of bending gestures can be
used for continuous operations (e.g. zooming) but also for
triggering specific actions (e.g. corner bend for page turning).
Output Modality
The interactive e-paper display also provides the possibility to
enhance interactions with appropriate feedback. This can
be used to either show persistent information without battery
drain, or can accompany input with changeable output to guide
the users’ interactions (e.g. changeable soft keyboard layout).
Compatibility of configurations and modalities
Although the form factor offers a wide variety of different con-
figurations and modalities, not every modality is compatible
and useful in each configuration. Table 1 shows a detailed
correlation of configurations and modalities.
             Front Touch  Back Touch  Pressure  Swipes  Gripping  Bending  Output
Closed            -            +          +         +        o         o        -
Backside          +            -          o         +        +         +        +
Laptop            +            -          +         +        -         o        +
Book              +            o          o         o        +         +        +
Table 1. Designers should bear in mind that not every combination of
configurations and modalities is compatible and useful.
For instance, it is inconvenient to use different grip gestures
when the interactive e-paper display rests on a table in laptop
mode, as it lacks the affordance of grasping in this configura-
tion. In contrast, given the flexible cover, it is also important
to consider that holding the device in mid-air would make some
interactions, such as accurate pressure input, impractical. When
creating and designing new interactions, it is important to keep
this compatibility of configurations and modalities in mind.
INTERACTION TECHNIQUES
In the previous section, we described various device configu-
rations that FlexCase can be used in and we also described a
number of different interaction modalities, such as swiping,
gripping, bending and touch. As can be seen in Table 1,
each device configuration affords a certain set of interaction
modalities better than other device configurations. There is no
one form factor that could serve all usage scenarios equally
well. However, a unique feature of FlexCase is that it allows
the user to quickly combine and switch between a number of
different configurations and modalities. In this section, we
will describe the interactive modalities in more detail. Later
in the paper, these will be showcased in the context of fully
working application scenarios.
Force-sensitive & Dual-sided Touch
The sensor capabilities can be utilized to enrich traditional
touch interaction. The addition of pressure information along-
side touch can be used for quick actions, e.g. for switching
between upper- and lowercase typing [33]. The touch sen-
sitivity on the backside can be further utilized for indirect
manipulations, e.g. Rubbing by sliding over the backside of
the cover for pinning content.
Continuous Bend Input
Given the flexibility of the input sensor, the flip cover can act
as a novel input device for existing smartphones. Existing
smartphones can be enhanced with continuous bend input,
which could be used for all sorts of linear navigational tasks
in existing phone applications, e.g. zooming a map [1]. The
smartphone display provides instant feedback, which is neces-
sary to ensure smooth user interactions.
These types of bend gestures can naturally map to 3D navi-
gation tasks [8], bringing these capabilities to existing (rigid)
smartphones. Beyond this, FlexCase can map paper-like ac-
tions to the digital world, e.g. making a Dog Ear by bending
the top right corner to bookmark a page.
Back-of-device & Background Interaction
When the interactive and flexible display is folded back to the
rear of the smartphone, it allows for back-of-device [6, 53]
interactions, all without requiring any other modifications to
the phone or additional sensors. This can be used for one-
dimensional navigation tasks, such as list scrolling, with the
benefit of not occluding the screen while browsing. Another
possibility is to use discrete gestures at the backside to perform
background tasks without needing to change the foreground
app (e.g. performing swipes to switch music). The combi-
nation of bend gestures with back-of-device interaction can
enable the user to change or manipulate UI layers or enable
zoomable user interfaces [37], just by bending the cover.
Bi-manual Interaction
The possibility to have two screens next to each other also
enables bi-manual interaction using the dominant and non-
dominant hand simultaneously. Researchers have shown the
usefulness of such an interaction both for speed and accuracy
[4, 24]. For instance, users can use the thumb of the grasping
hand to pan on the smartphone while performing continuous
bend gestures with the other hand. Bending affords an easy
way for changing a linear parameter [8] and greatly reduces
the clutching issue [43] and screen occlusions due to pinching.
Interaction Across Screens
The output capability of FlexCase creates the possibility of
sharing interactions, UI, and content between displays. This
can be, for example, utilized to transfer static or temporary
content from the main screen to the e-paper display and vice
versa. One example is a clipboard, where the secondary dis-
play can reduce the clutter on the main screen while keeping
the user informed about what is stored on the clipboard.
Grip & Bend
The touch and bending fidelity of our input sensor allows for a
novel gesture where the user first grips the flexible display in
different ways prior to bending in order to perform different
actions. The e-paper display can act as a mechanism to preview
the gesture or provide feedback to the user before
bending begins. This type of gesture is only feasible given the
touch and bending fidelity of our sensor, where different grips
can be detected robustly and combined with different types
of bending deformations. This allows for a single edge of the
display to be used for different gestures simply by adjusting
Figure 6. The layer stack of FlexCase consists of (I) an electrophoretic
display on top, (II) a piezoelectric input sensor and (III) a self-adhesive
protection layer behind (left). The input sensor layout consists of a 3 × 5
matrix of piezoelectric sensors, which is capable of sensing high-fidelity
bending, pressure-sensitive touch and grip detection (right).
the user’s grasp. Researchers have shown the usefulness of
incorporating grasping for input on styluses and tablets [17,
21, 45], which is also enabled in FlexCase by combining grip
detection with bend gestures to support rich mobile interactive
possibilities.
The initial grip (comprising touch and pressure information)
can be used to overload bend gestures with different mean-
ings, such as mode switching, depending on which initial grip
gesture was used. This concept is particularly interesting for
flexible devices with a smaller form factor, where the variety
of possible bend gestures is limited due to physical constraints.
IMPLEMENTATION DETAILS
The hardware implementation of the FlexCase prototype con-
sists of a flexible, electrophoretic display and a piezoelectric
input sensor on its backside (cf. Figure 6).
Display
We use an electrophoretic ink display (EPD) in our FlexCase
prototype. These displays suffer from slow screen update rates;
however, their power consumption is low, since energy is only
consumed when refreshing the displayed content. For FlexCase, we
chose a flexible, monochrome 4 inch electrophoretic display
manufactured by FlexEnable. The active area of the display is
87 × 54 mm and it provides a resolution of 400 × 240 pixels,
which corresponds to a pixel density of 85 ppi. The display
features 16 gray levels and performs a complete full-screen
refresh in <900 ms.
Sensor
For input, we used a low-resolution 3 × 5 matrix of printed
piezoelectric sensors (cf. Figure 6). When deformed, a piezo-
electric sensor creates a surface charge, which correlates with
the applied mechanical deformation. This sensing principle
has been used for touch and pressure input [40] as well as
complex bi-directional bend [41]. FlexCase brings these capa-
bilities together, in one single sensor. In contrast to multiple
sensor layers or technologies, this is particularly beneficial for
reducing the rigidity of the layer stack.
During our experiments with deformation of piezoelectric
sensors, we found that pressure induced by touch creates small,
local deformations triggering one or a few adjacent sensors (cf.
Figure 7, left). In contrast, bending the sensor usually results
in global deformations, triggering a greater number of sensors
across a larger surface area (cf. Figure 7, right). Given this fact
Figure 7. FlexCase utilizes one single sensor for both touch (left) & bend
(right) sensing. The distinction is based upon a simple function incorpo-
rating the number of active sensors and the standard deviation across
all readings.
and in contrast to prior work [41], we were able to use a sensor
matrix for both touch & bend sensing without any additional
optimization of the layout. Touch and bend can easily be
distinguished based on a simple function that incorporates the
number of active sensors and the standard deviation across all
readings:

\theta = N \cdot \sqrt{\frac{\sum_{k=1}^{n} (s_k - \mu)^2}{n}} \qquad (1)

where N is the number of activated sensors, s the set of all
integrated sensor signals, n the total number of sensors, and
µ the population mean. The smaller the value of θ, the lower
the variance and the fewer active sensors, indicating a touch.
A high value indicates a large variance with a high number of
activated sensors, which corresponds to a bend gesture. The
threshold for a robust distinction between touch and bend can
then be defined empirically.
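As an illustration, the following minimal Python sketch (not the authors' implementation; the activation and decision thresholds are invented placeholders that would be tuned empirically) computes Equation (1) for one frame of integrated sensor readings:

```python
import numpy as np

def classify_deformation(s, activation_threshold=0.05, theta_threshold=1.5):
    """Distinguish touch from bend via Equation (1).

    s: the n integrated sensor signals of one frame (here n = 15).
    Both threshold values are illustrative placeholders.
    """
    s = np.asarray(s, dtype=float)
    N = int(np.sum(np.abs(s) > activation_threshold))  # number of activated sensors
    theta = N * np.std(s)  # N times the population standard deviation of all readings
    return "bend" if theta > theta_threshold else "touch"
```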
Grip Detection
In the interaction techniques section we introduced a new
concept called Grip & Bend. In the following, we will explain
the learning-based algorithm we used for detecting different
grips in detail, to aid reproducibility.
Figure 8 shows the temporal signature for grip and bend. Note
the strong peak across all sensors when bending is performed.
At a high level, our algorithm detects that a bending gesture
has started by identifying these peaks, and then backtracks
to recognize the grip, by analyzing a sliding window across
previous sensor readings. Given the range of different ges-
tures we wished to support, and to better deal with changes
in the temporal sequence due to different execution speed of
users, we decided to use a learning-based algorithm to achieve
high recognition accuracy (as opposed to heuristic-based ap-
proaches). Note that our algorithm incorporates both location
of the grip as well as polarity (touched side).
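A rough sketch of this two-stage pipeline, under the assumption of a fixed onset threshold and a simple ring buffer (both invented for illustration; the grip classifier itself is derived below), could look as follows:

```python
from collections import deque
import numpy as np

WINDOW = 50           # T = 50 buffered observations for grip classification
PEAK_THRESHOLD = 0.8  # illustrative bend-onset threshold, tuned empirically

history = deque(maxlen=WINDOW)  # sliding window of 15-channel sensor frames

def on_frame(frame, classify_grip):
    """Buffer each incoming frame; when a strong peak signals bend onset,
    backtrack over the buffered window to recognize the preceding grip."""
    history.append(np.asarray(frame, dtype=float))
    if len(history) == WINDOW and np.max(np.abs(history[-1])) > PEAK_THRESHOLD:
        X = np.stack(history, axis=1)  # X in R^{15 x T}
        return classify_grip(X)        # grip label inferred from pre-bend window
    return None
```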
We assume that we have to infer the grip label y from a short
sequence X ∈ R^{15×T}, where T = 50 is the number of
observations (in ms) used for the grip classification.
Figure 8. The chart highlights the two signal phases of Grip & Bend. If
a bend gesture is recognized, the grip gets evaluated based on the last set
of observed sensor signals. Colors denote each of the sensors.
Single frames x_t ∈ R^{15} were too noisy to be used directly
as feature descriptors; instead, we use an average pooling
operator to aggregate frames of P descriptors. Consequently,
our feature vector f_t ∈ R^{15} for frame t becomes:

f_t = \frac{1}{P} \sum_{p=t-P}^{t} x_p \qquad (2)
Given the set of tuples \{(f_t, y_t)\}_{t=1}^{N}, we want to learn the
class model W ∈ R^{15×n} minimizing the following objective
function:

E = \frac{1}{N} \sum_{t=1}^{N} L(W, f_t, y_t) + R(W), \qquad (3)
which represents the sum of a loss function L(\cdot) and a
regularization term R(\cdot) that gives a tradeoff between the
accuracy and the complexity of the model. Since we want to detect
grip gestures with a certain probability, our loss function and
regularizer are defined using a standard softmax model [7], and
the optimization is carried out using Stochastic Gradient Descent
(SGD), which could also be employed for online learning of
new grip gestures.
The learned model W is only able to describe linear dependencies
in the data; however, grip gestures can share non-linear
relations, so a better classifier must be used. Non-linear
classifiers are usually expensive from a computational viewpoint,
thus we use Random Features to approximate a non-linear
kernel [50, 39]. The idea of random features consists of using
a non-linear mapping \Phi(\Omega, f) to transform a non-linearly
separable problem into a linearly separable one. After this
mapping is computed, simple linear classifiers can solve the
problem efficiently. Following [39] we define:

\Phi(\Omega, f) = \frac{1}{F} \left[ \exp(i\omega_1 f), \ldots, \exp(i\omega_F f) \right], \qquad (4)

where the parameters \Omega ∈ R^{F×15} (F = 128, the total number
of random features) are sampled from a random distribution.
Finally, the objective function

E = \frac{1}{N} \sum_{t=1}^{N} L\left(W, \Phi(\Omega, f_t), y_t\right) + \lambda \|W\|^2

is optimized using SGD.
Given the current observations X ∈ R^{15×T}, to recognize the
grip y we first compute the descriptors f_t with t = P, ..., T.
These T − P descriptors are then mapped into a non-linear
space using the random features \Omega as described before.
Finally, we compute the label \hat{y} using the probability scores
as follows:

\hat{y} = \arg\max_i \sum_t \frac{\exp(\Phi(f_t, \Omega)\, w_i)}{C_t}, \qquad (5)

where C_t = \sum_i \exp(\Phi(f_t, \Omega)\, w_i) is a normalization constant.
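To make the classification path concrete, here is a compact Python sketch assuming trained weights W are available. A real-valued random Fourier feature variant (cosine/sine pairs) is substituted for the complex exponentials of Equation (4), and P, F, and the feature scaling are illustrative choices, not the authors' exact settings:

```python
import numpy as np

P, F, N_SENSORS = 10, 128, 15

rng = np.random.default_rng(0)
Omega = rng.normal(size=(F, N_SENSORS))  # random projection, Omega in R^{F x 15}

def pool(X, t):
    """Equation (2): average-pool the frames ending at time t."""
    return X[:, t - P:t + 1].sum(axis=1) / P

def phi(f):
    """Equation (4), real-valued variant: random Fourier features."""
    z = Omega @ f
    return np.concatenate([np.cos(z), np.sin(z)]) / np.sqrt(F)

def predict(X, W):
    """Equation (5): accumulate normalized softmax scores over t = P..T."""
    T = X.shape[1]
    scores = np.zeros(W.shape[1])
    for t in range(P, T):
        logits = phi(pool(X, t)) @ W  # W in R^{2F x n_classes}
        p = np.exp(logits - logits.max())
        scores += p / p.sum()         # division by the normalization constant C_t
    return int(np.argmax(scores))
```

Training W would then amount to minimizing the softmax objective of Equation (3), applied to the feature-mapped descriptors, with SGD.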
Technical Evaluation
To evaluate the precision of our grip detection, we
performed an elicitation study with the primary goal of extracting
common grip gestures that are both ergonomically feasible
and comfortable to users. The goal of the study was not to find
the most natural grips without any constraints, but a distinctive
set of common grips, which can be used to train and test our
learning-based algorithm.
We asked eight unpaid participants aged between 25-36 years
(M = 27.5) from a local academic research lab (two female;
all right-handed) to grip several regions of the device as nat-
urally as possible. The task was performed in two different
configurations, namely Book and Backside configurations, for
four obvious regions of interest (top left corner, top right corner,
side edge and whole display), resulting from the physical
limitations of the flip cover. The grips were video-taped from
two perspectives and later transcribed for analysis.
Results revealed that although the available area for gripping is
limited, participants found a large variety of different gestures
(on average 10.1 different grips per condition, SD = 3.79).
To extract a distinctive set of grips from this large number
of candidates, we decided to nominate the two grips with the
highest number of occurrences and diversity for each condition
(see Figure 9).
Figure 9. The resulting subset of grip gestures of the elicitation study
was used to evaluate our learning-based grip detection algorithm. The
Acc values indicate accuracy and FN + FP the misclassifications and
false positives per class with non-linear feature mapping.
The extracted set of grips was then used for evaluating our
learning-based algorithm. The n = 8 grip gestures for each
configuration were performed by different participants, resulting
in a total of 50 examples per condition. We evaluated
the algorithm on 10 new sequences of 25 gestures. For the
Book set of grips, the linear method achieved 70% accuracy
on the test set, while the non-linear feature mapping approach
reached 92.5% accuracy on the same test set. For the Backside
grips we reached 69% with the linear classifier, whereas the
random feature method achieved 89% accuracy. Figure
9 shows the accuracy and false positives/negatives for each
condition in detail. These results clearly confirm the benefit of
using non-linear classifiers, and in particular of the random
features approach used in our system.
Interoperability
Both the input sensor and the display are connected to
external driver electronics, which offer a custom-made
communication API via TCP/IP for smartphone applications.
On the input side, an application receives raw input sensor data
as well as pre-processed gesture information such as tap, grip,
and bend gestures. For displaying content onto the secondary
display, an application can use the display API to either render
parts of the phone UI or upload images to the e-paper display.
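The wire protocol itself is custom-made and not detailed here; purely for illustration, a hypothetical phone-side client consuming such a gesture stream (host, port, and the newline-delimited JSON format are all invented assumptions) could be structured as follows:

```python
import json
import socket

# Hypothetical endpoint and message format; the actual FlexCase
# driver protocol is custom-made and not documented here.
DRIVER_HOST, DRIVER_PORT = "192.168.0.42", 9000

def handle_gesture(event):
    print("gesture:", event)  # application-specific handling

def listen_for_gestures():
    with socket.create_connection((DRIVER_HOST, DRIVER_PORT)) as sock:
        buffer = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                event = json.loads(line)  # e.g. {"type": "grip", "label": 3}
                if event.get("type") in ("tap", "grip", "bend"):
                    handle_gesture(event)
```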
Power consumption
In the current configuration, our sensor electronics consumes
81.3 mW at a refresh rate of 100 Hz. The typical update
energy for refreshing the EPD display is 206 mJ (power
consumption of the microprocessor excluded).
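For context, one full-screen EPD refresh (206 mJ) therefore costs roughly as much energy as about 2.5 s of continuous sensing (206 mJ / 81.3 mW ≈ 2.5 s).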
APPLICATIONS
In this section and supplementary video, we present a number
of novel application scenarios that show the benefits of our
interactive flip cover for enhancing interaction with traditional
mobile phones. The demonstrated applications try to combine
the high-level interaction techniques, which were presented
in this paper, in convincing real-world scenarios. Beyond
scenarios where each of the two screens is used in isolation
(e.g. the e-paper display for displaying persistent data such as a
boarding pass), we were more interested in applications where
the two screens actively support and enrich each other. Further,
the system is shown in different physical configurations.
Document Reading
While e-paper displays are optimized for reading with low
eye-strain and outdoor conditions, LCDs are well-suited for
displaying dynamic media. For document readers, the benefits
of the two worlds can be combined by using the secondary
display for text and the smartphone screen for media content
such as videos or animations (cf. Figure 10).
Grip & Bend. In order to switch to the next or previous page,
the user can bend the top left corner forward or backward. To
quickly browse through a greater number of pages, the user
can - like with a book - bend the entire left edge of the cover.
The LCD shows a live preview to overcome the slow update
rate of e-paper displays and thus ensure a fluid interaction.
Bi-Manual Interaction. While flipping through the virtual
pages, the user can tap onto the desired page on the LCD to
select and to display it on the e-paper display.
Interaction Across Screens. If rich media is linked to the
displayed page on the e-paper display, the resolution and color
fidelity of the LCD display can be utilized to display it directly
on the smartphone screen instead. For videos, another Grip
& Bend gesture can be used to continuously navigate within
a video figure, avoiding the use of small touch-based widgets
on the main screen.
Figure 10. Document Reader. Left: The user can flip through the pages
of a document on the LCD by bending the flexible display. Middle: A
page gets selected by touch on the LCD and gets pushed over to the
secondary e-paper display. Right: The fidelity of the LCD screen can be
utilized to show rich media linked in text pages on the e-paper display.
Content Transfer
We also see a lot of potential in simplifying tasks on the main
screen by using the two screens in unison, for instance for
copy & paste tasks.
Interactions Across Screens. The secondary screen can be
used as a visual clipboard where snapshots and texts can be
easily copied and pasted (cf. Figure 11).
Bi-Manual Interaction. The clipboard is rendered on the e-
paper display and contains several slots where information
can be stored. By simultaneously tapping an information asset
on the main screen and a clipboard slot, a user can copy the
information to the clipboard. To paste data, a user first double
taps a clipboard item and then taps on the target location on
the main screen. This not only provides visual feedback, it
also allows for multiple clipboard entries to be stored unlike
standard copy and paste mechanisms.
Figure 11. Clipboard. Left: A recommender app is temporarily opened.
Middle: Content in the recommender app is copied onto the secondary
screen by simultaneously tapping the content and the secondary screen.
Right: Text and image are pasted into an e-mail app.
Maps
Another possible application area is maps (cf. Figure 12).
Continuous Bend Input. Here, we utilize the possibility to
vary the amount of bending in both directions for smooth and
natural manipulation of linear parameters, such as zoom factor,
rotation angle and tilt angles. This type of high degree-of-
freedom 3D navigation is often challenging to perform on a
touchscreen without numerous overloaded gestures. With our
proposed method, these interactions can be performed using
different Grip & Bend gestures.
Grip & Bend. The user can zoom the map in and out by
gripping and bending the whole display backward or forward,
or can change the rotation angle by bending the left corner, or
changing the tilt of the map by bending the right corner.
Interaction Across Screens. Additionally, the map allows for
easy transfer of information to the secondary display, which
is used as an extended visual clipboard. In our example, the
user is able to store routes from the map onto the secondary
display for quick previews of the stored routes.
Dual-Sided Touch. To store a route, the user can close the flip
cover to put the two displays above each other and can, like
with traditional blueprint paper, perform a rubbing gesture on
the backside of the secondary display to transfer the content
between the displays.
Figure 12. Maps. Different Grip & Bend gestures can be used to contin-
uously control zoom (Left), tilt (Middle) and rotation (Right) in a map.
Photos
Another mode of operation is to use the secondary screen
mainly for visualizing icons and widgets, which provides ex-
tended space on the primary display. This allows users to have
faster access to the phone functions by not having to navigate
through clutter. This can be applied for instance to media
players or camera applications, where the user often wants to
see the full image of the phone display (cf. Figure 13).
Continuous Bend Input. Various camera parameters can be
selected, and the bending angle is used to input linear settings
(e.g. zoom, aperture, exposure).
Figure 13. Camera. Capture settings are visualized on the secondary
screen, leaving the primary display free of clutter. Functions, such as
flash settings (Left), can be activated via touch or linear parameters,
such as aperture (Middle) or exposure (Right), can be selected and con-
trolled with different grips.
Grip & Bend. Grip & Bend gestures can be used, for example,
to control different functions. By bending the corner of the
display with a single finger, the user can control the zoom
level, while different grips for the corners can accordingly
adjust aperture or brightness settings. Speed can be controlled
by varying the amount of bending.
Pressure Input & Gaming
In Laptop mode, the secondary display can also be used as a
customizable, pressure-sensitive keyboard or gamepad.
Force-sensitive Touch. In our example, we use this configuration
for PIN input, where the user uses a combination of touch
and pressure to enter a PIN (cf. Figure 14, left). This
could also be extended to support pressure sensitive keyboard
input for text entry [33].
Continuous Bend Input. This configuration is also useful in
gaming scenarios, where the secondary display can be used
as a novel input controller. Besides the possibility to show
gamepad-like touch controls on the display, bending could be
used as an additional input modality to enable more degrees
of freedom (cf. Figure 14, right). In jump-and-run games, bend-
ing could be mapped to the jump of the character, while the
magnitude of bending defines the height of a jump, allowing
other buttons to be used for shooting or other actions.
Figure 14. Left: Input of a combination of digits and pressure informa-
tion for increasing security in public areas. Right: Showing a flexible
gamepad layout with touch-enabled buttons and bend sensing.
Background Interaction
Backside mode is particularly interesting for performing back-
ground tasks, which are not related to the foreground appli-
cation (e.g. controlling a music player). For example, a user
can use swipe gestures on the back of the device for changing
to the previous or next song, while bending the corners either
decreases or increases the music player's volume. This allows
these settings to be set without even switching the application
or interacting with the phone screen (cf. Figure 15).
Figure 15. Left: A swipe gesture on the back of the device plays the next
song in a music player running in the background. Right: The volume
is adjusted with a corner Grip & Bend gesture.
Navigation
Another possibility of the Backside mode is adding a richer,
more physical way for navigating through content on rigid
displays or zoomable user interfaces [37].
Back-of-device Interaction & Continuous Bend Input. The
backside configuration allows the user to literally push back
the user interface to gain an overview of running applications
(cf. Figure 16, left). A similar approach uses bending for
enhancing digital interaction with metaphors known from the
real world, such as digital flip-book animations. Although
the main display is completely rigid and provides only touch
input, FlexCase adds a physical interaction layer which is very
similar to an analog flip-book (cf. Figure 16, right).
Figure 16. Left: Zooming out of the user interface by bending the cover.
Right: Flipbook animation on the LCD with back-of-device bend input.
DISCUSSION & LIMITATIONS
In this paper, we have focused on a new mobile interaction con-
cept called FlexCase. Although we gained promising feedback
from informal evaluation sessions with FlexCase, a quantitative
comparison was out of scope for this paper, where we focus
on the interactive capabilities of our system. However, this
remains an important area to address in future work.
From an interaction point of view, we see novelty in incorpo-
rating grips for overloading continuous bend gestures and see
this concept as a good avenue to further extend the interaction
space when bend gestures are physically limited due to the
compact mobile form factor. However, one issue of the Grip &
Bend technique is that the mapping of different grips to their
related digital task can be unintuitive. Ideally, the grips would
show a strong physical correlation to their related digital task,
but would also allow more abstract mappings to be memorized
by the users, especially if different regions are incorporated
(e.g. zoom, tilt, rotate in the map scenario). Moreover, the
e-paper output gives the possibility to guide those interactions,
e.g. by displaying mappings. However, we acknowledge that
this requires the user to learn mappings. One possible improve-
ment of the technique could be to let the user assign grips to
different actions to improve memorability.
If the cover is interactive at all times, one problem could
be false activations. An additional protective cover for the
flexible display could decrease unwanted interactions, but may
also decrease the flexibility of the display. Another solution
could involve optimizing and activating touch & bend sensing
for specific use cases. For example, if an application supports
slide gestures on the backside, the cover should ignore all other
input such as taps, grips, or bends. While false activations
were not a big issue in our preliminary user studies, it is worth
exploring this topic more systematically in future work.
The current prototype was wired to an external driver box for
rapid prototyping and evaluation of the different interaction
concepts. While this was sufficient for this paper, mobility
is still a crucial part of this interaction concept. Further, the
mechanical mounting of the e-paper driver electronics limits
the use of one side of the flip cover. This could be addressed in
the future by embedding the electronics directly between the
FlexCase cover and phone (much like the spine of a book). One
shortcoming of the FlexCase form factor is the flexible bond,
which requires some fixation with the hand to hold the device
in a certain configuration. This could be solved by using a
shape-retaining material in the bond, which is easily bendable
but keeps the device in the intended form.
Our e-paper display clearly suffers from a low refresh rate.
However, as demonstrated in our application scenarios, we see
how the combination of LCD and e-paper display can be used
to exploit the best of both worlds. Therefore, while flexible
OLED displays will of course provide additional value and
different possibilities for FlexCase, we already see a great
deal of potential in the inclusion of the e-paper display, making
it more than just an interim solution until fast full-color
flexible displays become bistable, robust and low-power.
In the previous section and accompanying video, we have
hopefully demonstrated numerous compelling applications
and interaction techniques for FlexCase. The richness of the
design space is illustrated by the large body of application and
interactive scenarios, and we feel this is just the beginning,
and our hope is that practitioners and researchers will develop
these scenarios further.
CONCLUSIONS AND FUTURE WORK
In this paper, we presented a novel flip cover concept for smart-
phones, which combines a purpose-built touch, pressure and
flex sensor with an e-paper display. It is designed specifically
to enable a range of reconfigurable uses and modalities, where
our novel input and output surface augments the standard
touch and display capabilities on the phone. Users can use
our system to perform a variety of touch, pressure, grip and
bend gestures in a natural manner, much like interacting with
a sheet of paper, without occluding the main screen.
The secondary e-paper display can act as a mechanism for
providing user feedback and persisting content from the main
display. We provided insights about the design process for the
form factor as well as the interaction techniques and systemat-
ically explored the design space, resulting from the different
configurations and modalities. We have demonstrated many
interactive and application capabilities, and highlighted how
touch and flex sensing can be combined in a novel way, e.g.
the Grip & Bend technique. We feel this ability to combine
touch and bend interaction in one single interaction is com-
pelling and novel. The range of applications already shown
suggests a rich design space for researchers and practitioners
to explore in the future.
For future work, we are currently working on a fully mobile version
and plan to experiment with different materials in order to
further improve the form factor, e.g. a shape-retaining bond.
On the interaction side, we plan to conduct quantitative and
qualitative evaluations to highlight the performance and benefits of
our system compared to more traditional forms of interaction.
ACKNOWLEDGEMENTS
We acknowledge Florian Perteneder, Andreas Tschepp and
Barbara Stadlober for their invaluable input. The research
leading to these results has received funding from the Euro-
pean Union, Seventh Framework Programme FP7/2007-2013
under grant agreement No. 611104.
REFERENCES
1. Teemu T. Ahmaniemi, Johan Kildal, and Merja Haveri.
2014. What is a Device Bend Gesture Really Good for?
In Proceedings of the 32nd Annual ACM Conference on
Human Factors in Computing Systems (CHI ’14). ACM,
New York, NY, USA, 3503–3512.
http://doi.acm.org/10.1145/2556288.2557306
2. Rufino Ansara and Audrey Girouard. 2014. Augmenting
Bend Gestures with Pressure Zones on Flexible Displays.
In Proceedings of the 16th International Conference on
Human-Computer Interaction with Mobile Devices & Services
(MobileHCI '14). ACM, New York, NY, USA,
531–536.
http://doi.acm.org/10.1145/2628363.2634228
3. Ravin Balakrishnan, George Fitzmaurice, Gordon
Kurtenbach, and Karan Singh. 1999. Exploring
Interactive Curve and Surface Manipulation Using a
Bend and Twist Sensitive Input Strip. In Proceedings of
the 1999 Symposium on Interactive 3D Graphics (I3D
’99). ACM, New York, NY, USA, 111–118.
http://doi.acm.org/10.1145/300523.300536
4. Ravin Balakrishnan and Ken Hinckley. 1999. The Role of
Kinesthetic Reference Frames in Two-handed Input
Performance. In Proceedings of the 12th Annual ACM
Symposium on User Interface Software and Technology
(UIST ’99). ACM, New York, NY, USA, 171–178.
http://doi.acm.org/10.1145/320719.322599
5. Barnes and Noble International LLC. 2015.
http://www.nook.com.
6. Patrick Baudisch and Gerry Chu. 2009. Back-of-device
Interaction Allows Creating Very Small Touch Devices.
In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI ’09). ACM, New
York, NY, USA, 1923–1932.
http://doi.acm.org/10.1145/1518701.1518995
7. Christopher M. Bishop. 2006. Pattern Recognition and
Machine Learning (Information Science and Statistics).
Springer-Verlag New York, Inc.
8. Jesse Burstyn, Amartya Banerjee, and Roel Vertegaal.
2013. FlexView: An Evaluation of Depth Navigation on
Deformable Mobile Devices. In Proceedings of the 7th
International Conference on Tangible, Embedded and
Embodied Interaction (TEI ’13). ACM, New York, NY,
USA, 193–200.
http://doi.acm.org/10.1145/2460625.2460655
9. Alex Butler, Shahram Izadi, and Steve Hodges. 2008.
SideSight: Multi-”Touch” Interaction Around Small
Devices. In Proceedings of the 21st Annual ACM
Symposium on User Interface Software and Technology
(UIST ’08). ACM, New York, NY, USA, 201–204.
http://doi.acm.org/10.1145/1449715.1449746
10. Nicholas Chen, Francois Guimbretiere, and Abigail
Sellen. 2012. Designing a Multi-slate Reading
Environment to Support Active Reading Activities. ACM
Trans. Comput.-Hum. Interact. 19, 3, Article 18 (Oct.
2012), 35 pages.
http://doi.acm.org/10.1145/2362364.2362366
11. Xiang ‘Anthony’ Chen, Tovi Grossman, Daniel J.
Wigdor, and George Fitzmaurice. 2014. Duet: Exploring
Joint Interactions on a Smart Phone and a Smart Watch.
In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI ’14). ACM, New
York, NY, USA, 159–168.
http://doi.acm.org/10.1145/2556288.2556955
12. Lung-Pan Cheng, Meng Han Lee, Che-Yang Wu, Fang-I
Hsiao, et al. 2013. iRotate Grasp: Automatic Screen
Rotation Based on Grasp of Mobile Devices. In
Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI ’13). ACM, New
York, NY, USA, 3051–3054.
http://doi.acm.org/10.1145/2470654.2481424
13. Lee A Danisch, Kevin Englehart, and Andrew Trivett.
1999. Spatially continuous six-degrees-of-freedom
position and orientation sensor. In Photonics East (ISAM,
VVDC, IEMB). International Society for Optics and
Photonics, 48–56.
14. Artem Dementyev, Jeremy Gummeson, Derek Thrasher,
Aaron Parks, Deepak Ganesan, Joshua R. Smith, and
Alanson P. Sample. 2013. Wirelessly Powered Bistable
Display Tags. In Proceedings of the 2013 ACM
International Joint Conference on Pervasive and
Ubiquitous Computing (UbiComp ’13). ACM, New York, NY, USA,
383–386.
http://doi.acm.org/10.1145/2493432.2493516
15. David T. Gallant, Andrew G. Seniuk, and Roel Vertegaal.
2008. Towards More Paper-like Input: Flexible Input
Devices for Foldable Interaction Styles. In Proceedings of
the 21st Annual ACM Symposium on User Interface
Software and Technology (UIST ’08). ACM, New York,
NY, USA, 283–286.
http://doi.acm.org/10.1145/1449715.1449762
16. Audrey Girouard, Aneesh Tarun, and Roel Vertegaal.
2012. DisplayStacks: Interaction Techniques for Stacks
of Flexible Thin-film Displays. In Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems (CHI ’12). ACM, New York, NY, USA,
2431–2440.
http://doi.acm.org/10.1145/2207676.2208406
17. Mayank Goel, Jacob Wobbrock, and Shwetak Patel. 2012.
GripSense: Using Built-in Sensors to Detect Hand
Posture and Pressure on Commodity Mobile Phones. In
Proceedings of the 25th Annual ACM Symposium on User
Interface Software and Technology (UIST ’12). ACM,
New York, NY, USA, 545–554.
http://doi.acm.org/10.1145/2380116.2380184
18. Antonio Gomes, Andrea Nesbitt, and Roel Vertegaal.
2013. MorePhone: A Study of Actuated Shape
Deformations for Flexible Thin-film Smartphone
Notifications. In Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems (CHI ’13).
ACM, New York, NY, USA, 583–592.
http://doi.acm.org/10.1145/2470654.2470737
19. Antonio Gomes and Roel Vertegaal. 2015. PaperFold:
Evaluating Shape Changes for Viewport Transformations
in Foldable Thin-Film Display Devices. In Proceedings
of the Ninth International Conference on Tangible,
Embedded, and Embodied Interaction (TEI ’15). ACM,
New York, NY, USA, 153–160.
http://doi.acm.org/10.1145/2677199.2680572
20. Ken Hinckley, Morgan Dixon, Raman Sarin, François
Guimbretière, and Ravin Balakrishnan. 2009. Codex: A
Dual Screen Tablet Computer. In Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems (CHI ’09). ACM, New York, NY, USA,
1933–1942.
http://doi.acm.org/10.1145/1518701.1518996
21. Ken Hinckley, Michel Pahud, Hrvoje Benko, Pourang
Irani, et al. 2014. Sensing Techniques for Tablet+Stylus
Interaction. In Proceedings of the 27th Annual ACM
Symposium on User Interface Software and Technology
(UIST ’14). ACM, New York, NY, USA, 605–614.
http://doi.acm.org/10.1145/2642918.2647379
22. David Holman, Roel Vertegaal, Mark Altosaar, Nikolaus
Troje, and Derek Johns. 2005. Paper Windows:
Interaction Techniques for Digital Paper. In Proceedings
of the SIGCHI Conference on Human Factors in
Computing Systems (CHI ’05). ACM, New York, NY,
USA, 591–599.
http://doi.acm.org/10.1145/1054972.1055054
23. Joanneum Research Forschungsgesellschaft mbH. 2016.
http://www.joanneum.at/en/materials/research-areas/pyzoflexr.html. (2016).
24. Paul Kabbash, William Buxton, and Abigail Sellen. 1994.
Two-handed Input in a Compound Task. In Conference
Companion on Human Factors in Computing Systems
(CHI ’94). ACM, New York, NY, USA, 230–.
http://doi.acm.org/10.1145/259963.260425
25. Mohammadreza Khalilbeigi, Roman Lissermann,
Wolfgang Kleine, and Jürgen Steimle. 2012. FoldMe:
Interacting with Double-sided Foldable Displays. In
Proceedings of the Sixth International Conference on
Tangible, Embedded and Embodied Interaction (TEI ’12).
ACM, New York, NY, USA, 33–40.
http://doi.acm.org/10.1145/2148131.2148142
26. Mohammadreza Khalilbeigi, Roman Lissermann, Max
Mühlhäuser, and Jürgen Steimle. 2011. Xpaaand:
Interaction Techniques for Rollable Displays. In
Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI ’11). ACM, New
York, NY, USA, 2729–2732.
http://doi.acm.org/10.1145/1978942.1979344
27. Johan Kildal, Susanna Paasovaara, and Viljakaisa
Aaltonen. 2012. Kinetic Device: Designing Interactions
with a Deformable Mobile Interface. In CHI ’12 Extended
Abstracts on Human Factors in Computing Systems (CHI
EA ’12). ACM, New York, NY, USA, 1871–1876.
http://doi.acm.org/10.1145/2212776.2223721
28. Byron Lahey, Audrey Girouard, Winslow Burleson, and
Roel Vertegaal. 2011. PaperPhone: Understanding the
Use of Bend Gestures in Mobile Devices with Flexible
Electronic Paper Displays. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’11). ACM, New York, NY, USA, 1303–1312.
http://doi.acm.org/10.1145/1978942.1979136
29. Mathieu Le Goc, Stuart Taylor, Shahram Izadi, and Cem
Keskin. 2014. A Low-cost Transparent Electric Field
Sensor for 3D Interaction on Mobile Devices. In
Proceedings of the 32nd Annual ACM Conference on
Human Factors in Computing Systems (CHI ’14). ACM,
New York, NY, USA, 3167–3170.
http://doi.acm.org/10.1145/2556288.2557331
30. Johnny C. Lee, Scott E. Hudson, and Edward Tse. 2008.
Foldable Interactive Displays. In Proceedings of the 21st
Annual ACM Symposium on User Interface Software and
Technology (UIST ’08). ACM, New York, NY, USA,
287–290.
http://doi.acm.org/10.1145/1449715.1449763
31. Flora M. Li, Sandeep Unnikrishnan, Peter van de Weijer,
Ferdie van Assche, et al. 2013. Flexible Barrier
Technology for Enabling Rollable AMOLED Displays
and Upscaling Flexible OLED Lighting. SID Symposium
Digest of Technical Papers 44, 1 (2013), 199–202.
http://dx.doi.org/10.1002/j.2168-0159.2013.tb06178.x
32. Markus Löchtefeld, Christoph Hirtz, and Sven Gehring.
2013. Evaluation of Hybrid Front- and Back-of-device
Interaction on Mobile Devices. In Proceedings of the
12th International Conference on Mobile and Ubiquitous
Multimedia (MUM ’13). ACM, New York, NY, USA,
Article 17, 4 pages.
http://doi.acm.org/10.1145/2541831.2541865
33. David C. McCallum, Edward Mak, Pourang Irani, and
Sriram Subramanian. 2009. PressureText: Pressure Input
for Mobile Phone Text Entry. In CHI ’09 Extended
Abstracts on Human Factors in Computing Systems (CHI
EA ’09). ACM, New York, NY, USA, 4519–4524.
http://doi.acm.org/10.1145/1520340.1520693
34. Ross McLachlan and Stephen Brewster. 2013. Can You
Handle It?: Bimanual Techniques for Browsing Media
Collections on Touchscreen Tablets. In CHI ’13 Extended
Abstracts on Human Factors in Computing Systems (CHI
EA ’13). ACM, New York, NY, USA, 3095–3098.
http://doi.acm.org/10.1145/2468356.2479619
35. Oaxis Inc. 2015. http://www.inkcase.com. (2015).
36. Jin-Seong Park, Heeyeop Chae, Ho Kyoon Chung, and
Sang In Lee. 2011. Thin film encapsulation for flexible
AM-OLED: a review. Semiconductor Science and
Technology 26, 3 (2011), 034001.
http://stacks.iop.org/0268-1242/26/i=3/a=034001
37. Ken Perlin and David Fox. 1993. Pad: An Alternative
Approach to the Computer Interface. In Proceedings of
the 20th Annual Conference on Computer Graphics and
Interactive Techniques (SIGGRAPH ’93). ACM, New
York, NY, USA, 57–64.
http://doi.acm.org/10.1145/166117.166125
38. popSLATE Media, Inc. 2015. http://www.popslate.com.
(2015).
39. Ali Rahimi and Benjamin Recht. 2007. Random Features
for Large-scale Kernel Machines. In Advances in Neural
Information Processing Systems. 1177–1184.
40. Christian Rendl, Patrick Greindl, Michael Haller, Martin
Zirkl, et al. 2012. PyzoFlex: Printed Piezoelectric
Pressure Sensing Foil. In Proceedings of the 25th Annual
ACM Symposium on User Interface Software and
Technology (UIST ’12). ACM, New York, NY, USA,
509–518.
http://doi.acm.org/10.1145/2380116.2380180
41. Christian Rendl, David Kim, Sean Fanello, Patrick Parzer,
et al. 2014. FlexSense: A Transparent Self-sensing
Deformable Surface. In Proceedings of the 27th Annual
ACM Symposium on User Interface Software and
Technology (UIST ’14). ACM, New York, NY, USA,
129–138.
http://doi.acm.org/10.1145/2642918.2647405
42. Carsten Schwesig, Ivan Poupyrev, and Eijiro Mori. 2004.
Gummi: A Bendable Computer. In Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems (CHI ’04). ACM, New York, NY, USA, 263–270.
http://doi.acm.org/10.1145/985692.985726
43. Andrew Sears and Ben Shneiderman. 1991. High
Precision Touchscreens: Design Strategies and
Comparisons with a Mouse. Int. J. Man-Mach. Stud. 34, 4
(April 1991), 593–613.
http://dx.doi.org/10.1016/0020-7373(91)90037-8
44. Katie A. Siek, Yvonne Rogers, and Kay H. Connelly.
2005. Fat Finger Worries: How Older and Younger Users
Physically Interact with PDAs. In Human-Computer
Interaction - INTERACT 2005, Maria Francesca Costabile
and Fabio Paternò (Eds.). Lecture Notes in Computer
Science, Vol. 3585. Springer Berlin Heidelberg, 267–280.
http://dx.doi.org/10.1007/11555261_24
45. Hyunyoung Song, Hrvoje Benko, François Guimbretière,
Shahram Izadi, et al. 2011. Grips and Gestures on a
Multi-touch Pen. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’11). ACM, New York, NY, USA, 1323–1332.
http://doi.acm.org/10.1145/1978942.1979138
46. Jie Song, Gábor Sörös, Fabrizio Pece, Sean Ryan Fanello,
et al. 2014. In-air Gestures Around Unmodified Mobile
Devices. In Proceedings of the 27th Annual ACM
Symposium on User Interface Software and Technology
(UIST ’14). ACM, New York, NY, USA, 319–329.
http://doi.acm.org/10.1145/2642918.2647373
47. Jürgen Steimle, Andreas Jordt, and Pattie Maes. 2013.
Flexpad: A Highly Flexible Handheld Display. In CHI
’13 Extended Abstracts on Human Factors in Computing
Systems (CHI EA ’13). ACM, New York, NY, USA,
2873–2874.
http://doi.acm.org/10.1145/2468356.2479555
48. Taichi Tajika, Tomoko Yonezawa, and Noriaki Mitsunaga.
2008. Intuitive Page-turning Interface of e-Books on
Flexible e-Paper Based on User Studies. In Proceedings
of the 16th ACM International Conference on Multimedia
(MM ’08). ACM, New York, NY, USA, 793–796.
http://doi.acm.org/10.1145/1459359.1459489
49. Aneesh P. Tarun, Peng Wang, Audrey Girouard, Paul
Strohmeier, et al. 2013. PaperTab: An Electronic Paper
Computer with Multiple Large Flexible Electrophoretic
Displays. In CHI ’13 Extended Abstracts on Human
Factors in Computing Systems (CHI EA ’13). ACM, New
York, NY, USA, 3131–3134.
http://doi.acm.org/10.1145/2468356.2479628
50. Andrea Vedaldi and Andrew Zisserman. 2012. Efficient
Additive Kernels via Explicit Feature Maps. IEEE
Transactions on Pattern Analysis and Machine
Intelligence 34, 3 (March 2012), 480–492.
51. Nicolas Villar, Shahram Izadi, Dan Rosenfeld, Hrvoje
Benko, et al. 2009. Mouse 2.0: Multi-touch Meets the
Mouse. In Proceedings of the 22nd Annual ACM
Symposium on User Interface Software and Technology
(UIST ’09). ACM, New York, NY, USA, 33–42.
http://doi.acm.org/10.1145/1622176.1622184
52. Julie Wagner, Stéphane Huot, and Wendy Mackay. 2012.
BiTouch and BiPad: Designing Bimanual Interaction for
Hand-held Tablets. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’12). ACM, New York, NY, USA, 2317–2326.
http://doi.acm.org/10.1145/2207676.2208391
53. Daniel Wigdor, Clifton Forlines, Patrick Baudisch, John
Barnwell, and Chia Shen. 2007. Lucid Touch: A
See-through Mobile Device. In Proceedings of the 20th
Annual ACM Symposium on User Interface Software and
Technology (UIST ’07). ACM, New York, NY, USA,
269–278.
http://doi.acm.org/10.1145/1294211.1294259
54. Raphael Wimmer and Sebastian Boring. 2009.
HandSense: Discriminating Different Ways of Grasping
and Holding a Tangible User Interface. In Proceedings of
the 3rd International Conference on Tangible and
Embedded Interaction (TEI ’09). ACM, New York, NY,
USA, 359–362.
http://doi.acm.org/10.1145/1517664.1517736
55. Yota Devices. 2015. http://yotaphone.com. (2015).
56. Martin Zirkl, Anurak Sawatdee, Uta Helbig, Markus
Krause, et al. 2011. An All-Printed Ferroelectric Active
Matrix Sensor Network Based on Only Five Functional
Materials Forming a Touchless Control Interface.
Advanced Materials 23, 18 (2011), 2069–2074.
http://dx.doi.org/10.1002/adma.201100054