FlexSense: A Transparent Self-Sensing Deformable Surface
Christian Rendl1, David Kim2, Sean Fanello2, Patrick Parzer1, Christoph Rhemann2, Jonathan Taylor2, Martin Zirkl3, Gregor Scheipl3, Thomas Rothländer3, Michael Haller1, Shahram Izadi2
1Media Interaction Lab, University of Applied Sciences Upper Austria
2Microsoft Research
3Institute of Surface Technologies and Photonics, Joanneum Research
Figure 1: FlexSense is a fully flexible, transparent, thin-film surface comprising sparse printed piezoelectric sensors (left). Our main contribution is a new set of algorithms that takes these sparse sensor measurements (center) and reconstructs the dense 3D shape of the device and complex deformations in real time (right).
ABSTRACT
We present FlexSense, a new thin-film, transparent sensing surface based on printed piezoelectric sensors, which can reconstruct complex deformations without the need for any external sensing, such as cameras. FlexSense provides a fully self-contained setup which improves mobility and is not affected by occlusions. Using only a sparse set of sensors printed on the periphery of the surface substrate, we devise two new algorithms that fully reconstruct the complex deformations of the sheet from these sparse measurements alone. An evaluation shows that both proposed algorithms are capable of reconstructing complex deformations accurately. We demonstrate how FlexSense can be used for a variety of 2.5D interactions, including as a transparent cover for tablets where bending can be performed alongside touch to enable magic lens style effects, layered input, and mode switching, as well as the ability to use our device as a high degree-of-freedom input controller for gaming and beyond.
Author Keywords
Flexible, transparent, sensor, deformation, reconstruction.
INTRODUCTION
There has been considerable interest in the area of flexible or
deformable input/output (IO) digital surfaces, especially with
recent advances in nano-technology, such as flexible transis-
tors, eInk & OLED displays, as well as printed sensors. The
promise of such devices is making digital interaction as simple
as interacting with a sheet of paper. By bending, rolling or
flexing areas of the device, a variety of interactions can be
enabled, in a very physical and tangible manner.
Whilst the vision of flexible IO devices has existed for some
time, there have been few self-contained devices that enable
rich continuous user input. Researchers have either created
devices with limited discrete bending gestures, or prototyped
interaction techniques using external sensors, typically camera-
based vision systems. Whilst demonstrating compelling results
and applications for bending-based interactions, the systems
suffer from practical and interactive limitations. For example,
the bend sensors used in [7, 17] are limited to simple bending
of device edges, rather than the complex deformations one
expects when interacting naturally with a sheet of paper. In
contrast, vision-based systems, e.g. [31], are not self-contained,
are more costly and bulky, and can suffer from occlusions,
particularly when the hand is interacting with the surface.
In this paper, we present FlexSense, a transparent thin input
surface that is capable of precisely reconstructing complex and
continuous deformations, without any external sensing infras-
tructure (see Figure 1). We build on prior work with printed
piezoelectric sensors (previously used for touch, gesture and
pressure sensing [23, 41]). Our new design uses only a sparse
set of piezoelectric sensors printed on the periphery of the sur-
face substrate. A novel set of algorithms fully reconstruct the
surface geometry and detailed deformations being performed,
purely by interpreting these sparse sensor measurements. This
allows an entirely self-contained setup, free of external vision-
based sensors and their inherent limitations. Such a device can
be used for a variety of applications, including a transparent
cover for tablets supporting complex 2.5D deformations for
enhanced visualization, mode switching and input, alongside
touch; or as a high degree-of-freedom (DoF) input controller.
In summary, our contributions are as follows:
- We present a new sensor layout based on the prior PyzoFlex system [23], specifically for sensing precise and continuous surface deformations. This previous work demonstrated the use of printed piezoelectric sensors for touch and pressure sensing. In contrast, we present the idea of using these sensors to enable rich, bidirectional bending interactions. This includes the design of a new layout, associated sensor and driver electronics specifically for this purpose.
- Our main contribution is a pair of algorithms that take measurements from the sparse piezoelectric sensors and accurately reconstruct the 3D shape of the surface. This reconstructed shape can be used to detect a wide range of flex gestures. The complexity of the deformations afforded by our reconstruction algorithms has yet to be seen with ‘self-sensing’ (i.e. self-contained) devices, and would typically require external cameras and infrastructure.
- We train and evaluate the different algorithms using a ground-truth multi-camera rig, and discuss the trade-offs between implementation complexity and reconstruction accuracy. We also compare to a single-camera baseline.
- Finally, we demonstrate new interaction techniques and applications afforded by such a novel sensor design and reconstruction algorithms, in particular as a cover for tablets where IO is coupled, and as a high-DoF input controller.
RELATED WORK
There is a broad range of work on flexible sensing and displays. [31] distinguishes this work into two categories. The first are
external input devices where the sensor is used to control a
remote display and UI, and output is typically non-flexible.
The second are deformable handheld devices, where input and output are coupled, and the display can also be deformable. We
extend this categorization further, by identifying differences in
sensing approaches. Self-sensing systems contain onboard sen-
sors on the device, which can be used to directly estimate the
deformation (e.g. devices based on embedded bend sensors).
External sensing systems use sensors embedded in the envi-
ronment rather than the device to estimate deformations (e.g.
camera-based deformation systems). In this section we explore
prior work on bendable and deformable surfaces, based on this
broad taxonomy.
Perhaps the first example of a deformable external input de-
vice that supported self-sensing is the work by Balakrishnan et al. [1], which used the ShapeTape sensor [3] and a NURBS (non-uniform rational basis spline) representation for 3D modeling. ShapeTape is a thin, long rubber tape subdivided into a series of fiber optic bend sensors, each detecting bend and twist (inspired by the sensors used in data gloves, e.g. [40]).
This showed the potential for exploiting bimanual input for
modeling 3D curves. [12] demonstrated the use of piezoelectric thin films to produce a device similar to ShapeTape. [2]
use external sensing in the form of camera and vision tech-
niques to reconstruct complex curves from long passive pieces
of wire. The form-factor of these devices make them ideal for
3D curve drawing, but more complex 2D and 2.5D flexing and
bending interactions are less natural.
The first conceptual deformable handheld device was the Gummi system [28]. The project was motivated by innovations in flexible displays and transistors. The prototype device could be considered self-sensing but used bend sensors placed behind a TFT display and a 2D trackpad at the rear. Only small discrete bending gestures were supported, but the work demonstrated some of the interactive possibilities that bending interfaces afford. [32] use a similar prototype setup to Gummi for flicking through pages in an e-reader. Whilst IO is coupled in these systems, the display remains rigid.
PaperPhone was one of the first devices that coupled flexible IO into a single self-contained device [17]. A flexible PCB housed five bend sensors, and a kNN-based classifier was used to detect discrete bend gestures. A user study proposed a classification scheme that categorized bend gestures by location (top corner, side, or bottom corner) and direction (up or down). [36] also use a flexible PCB but with a different bend sensor layout, to extend this classification scheme to include bend size and angle. The Kinetic Phone by Nokia [15] demonstrates a full color bendable phone, a manifestation of the early Gummi vision. Using this device, Kildal et al. [3] explore bending and twisting, and propose design guidelines for deformable devices. PaperTab [33] is a self-contained electronic reader with two bidirectional FlexPoint sensors for input. The system demonstrates a progression of the digital desk concept [9, 38], which is a clear motivation for flexible, paper-like devices, but for the first time uses self-contained IO, as opposed to projectors and cameras.
Other self-sensing systems look purely at input sensing. Bookisheet used two bend sensors on the back of acrylic sheets to create both a dual and single sheet flexible input device [37]. Bend gestures were coarsely categorized into four discrete classes, and mapped to turning pages in eBooks. FlexRemote [21] consists of 16 flex sensors on the periphery of a thin acrylic sheet which can recognize eight deformation gestures for remote input. Twend [8] uses eight fiber optic bend sensors embedded into a thicker substrate to detect 18 unique bends. All these systems can be thought of as self-sensing, allowing for a great deal of mobility and compactness, and avoiding occlusions. These systems are typically not focused on accurate 3D reconstructions of surface shape, and most detect discrete bend gestures. This is partly because of the complexity of mapping from raw sensor readings to precise continuous deformations. An exploratory study by Lee et al. [20] captures data from users deforming a variety of non-digital substrates including paper, plastic, and stretchable fabrics, demonstrating the richness and complexity of the deformations afforded. These types of interactions can be difficult to capture purely using discrete gestures.
To enable such types of reconstructions, systems have employed external sensors, generally in the form of cameras. [6] use retro-reflective tags embedded into a deformable substrate to support flexing and folding gestures. [18] use the Vicon motion tracking system to reconstruct sparse 3D points and fit a mesh for simple 3D modeling. [16] use a magnetic tracker and spherical projection onto a deforming sheet of diffuse acrylic. [39] use a combination of an external WiiMote sensor and projector, coupled with onboard pressure and bend sensors, to create a hybrid deformable projected display. [31] use a Kinect and projector for advanced manipulations of a sheet of paper with coupled output. A 25 × 25 vertex plane is deformed to fit the observed Kinect data. Eight distinct poses can be linearly combined using a weighted interpolation scheme to form more complex shapes. Further, the system analyses the Kinect dot pattern to disambiguate the user’s hand from the deformation surface.
Related to this area is work on projector-vision based foldable [19, 13], rollable [14] and shape-changing displays [25]. Another related field is the shape sensing of malleable surfaces and materials. In Jamming User Interfaces [5], for example, the authors use two different techniques, namely structured light and electric field sensing, for deriving shape information. DeforMe [22] projects images realistically onto deformed surfaces by tracking the deformation with an invisible infrared-based dot pattern on the material. In contrast, PhotoelasticTouch [26] detects changing optical properties of a transparent deformable material, and Digital Foam [29] introduces a deformable device with embedded pressure sensors for 3D modeling.
Whilst interesting areas of research, our work is firmly focused on reconstructing the 3D shape and continuous deformations of a thin transparent surface. PrintSense demonstrates capacitive touch and proximity sensing on a flexible substrate, and allows for bend sensing using transmit and receive electrodes. Flexible touch and pressure sensors have been demonstrated, either using transparent piezoelectric sensors [23] or opaque IFSR sensors [24]. Murata have developed a high-transparency organic piezoelectric film for bend sensing on mobile devices¹, although details are currently limited.
Our work is inspired by and builds on these previous systems.
We support rich, continuous and accurate 3D reconstructions
of deformations of the surface, allowing high DoF input. Our
system is fully self-sensing, enabling a compact, mobile form-
factor, but without limiting the range of deformations. Our
sensor is semi-transparent, allowing for unique capabilities,
such as allowing placement at the front of a display rather than
embedded in the back. Our sensor therefore supports both use as an external input device and close coupling with a display to enable novel application scenarios.
DESIGNING FLEXSENSE
Our work builds on the flexible, transparent PyzoFlex sensor [23, 41]. In this section we provide a brief introduction to this work as it relates to using these sensors for bending. We refer the reader to [23] for further technical details regarding the underlying piezoelectric sensor. Whereas in this prior work the sensors were used for pressure sensing, our main goal in this paper is to exploit these sensors for precise 3D reconstruction of surface shape and bending to facilitate deformation-based interactions. This requires a rethinking of the existing sensor layout, as detailed in this section.
Piezoelectric bend sensors
Piezoelectric sensors work as follows: deformations of a piezo element cause a change in the surface charge density of the material, resulting in a charge appearing between the electrodes. The amplitude and frequency of the signal are directly proportional to the applied mechanical stress. Since piezoelectricity reacts to mechanical stress, continuous bending is a well-suited application for such sensors. We propose an entirely printed, semi-transparent, and bidirectional bend sensor based on different functional inks (see Figure 2). The active sensor material is formed by a poled copolymer P(VDF-TrFE), which shows a large piezoelectric coefficient (32 pC/N) and can be printed as a 5 µm thick transparent layer [41]. The screen printing process in general ensures low-cost and simple fabrication without cleanroom requirements or evaporation.
All components of the sensor have relatively good transparency values with low distracting absorbency (85%). Note that the current sensors are optimized to provide a good trade-off between signal strength and transparency. If even higher transparency (>90%) is desired, metal nanowire inks can be used as electrode material instead of PEDOT:PSS. For reading sensor measurements, printed, non-transparent conductive silver wires are used. In order to limit occlusions caused by silver
1http://murata.com/new/news_release/2011/0921
[Figure 2 layer labels: plastic substrate (PET); shared bottom electrode (PEDOT:PSS); active ferroelectric material (P(VDF-TrFE)); top electrode (PEDOT:PSS / Carbon)]
Figure 2: Our piezoelectric bend sensors consist of different, entirely printable functional inks. To reduce the number of (non-transparent) conductive silver lines, all bend sensors printed on one substrate share one bottom electrode.
Figure 3: Layout of original PyzoFlex sensor (left) and our
deformable sensor (right).
lines, all sensors on one substrate share one bottom electrode,
which is separately wired to the driver electronics. The conduc-
tive traces can be placed freely across the surface, depending
on application scenarios e.g. on the periphery or more central.
Sensor Layout
Multiple sensors must be combined in order to reconstruct the full deformations of the surface. We chose an A4-like form factor because this matches commercially available flexible displays as well as common tablet screens, but is also big enough to be used as a standalone input sensor.
Figure 3 (left) shows the original PyzoFlex sensor layout. As this layout was specifically designed for touch and pressure sensing, a dense grid of 16 × 8 sensors was employed in an active matrix arrangement. However, for bend sensing and shape reconstruction, this design is non-optimal. First, there is a large amount of sensor redundancy across the film. Second, the active matrix requires a large part of the outer film to be used for the printed silver wires. Finally, the active matrix scheme can suffer from ambiguities when many sensor measurements must be read at once, making such an arrangement problematic when bending (for more details see [23]).
For FlexSense, we instead sought to use a sparse set of sensors, placed in a more meaningful arrangement to optimize
for a wide variety of continuous and smooth deformations,
including extreme bends. The final design of our layout is
shown in Figure 3 (right). In the remainder of this section, we
provide more rationale for this sensor layout.
Existing Layouts in Related Work
From related work, we identified the following most essential requirements for our sensor layout: in order to track complex deformations, an optimal bend sensor layout should capture properties such as location (where the bend happens), direction (upwards or downwards), size (involved surface area), angle (sharp or round), speed, and duration of bend actions [36].
To infer an optimal layout, we investigated arrangements used in prior work [7, 17, 36] in terms of the presented requirements and their sensor alignment. We recreated those layouts using
Figure 4: Paper prototyping to evaluate designs further (left). Our final layout (top right) and final printed clear surface (bottom right) strike an optimal trade-off between a limited number of sensors and strong deformation sensing capabilities.
off-the-shelf bend sensor strips (as used in the papers) to better understand their strengths and weaknesses. We then combined different concepts from existing sensor configurations into one solution which is capable of tracking the identified requirements, and therefore complex deformations.
Our Layout
To create and evaluate several different sensor patterns, we used paper and Scotch tape for fast prototyping (see Figure 4, left). This enabled us to quickly evaluate different designs “hands-on”. The flexibility of paper and tape allowed us to bend the prototypes in all directions, revealing the pros and cons of each pattern, and a ruler was used to follow bend directions/orientations and to identify spots not covered by any sensors.
The final layout arranges 16 sensors in an outer ring of the sheet (see Figure 4, right). Since we use a plastic foil as substrate, it is not possible to deform the sheet only in the center. Therefore, every deformation changes the shape of the edges, which means that by tracking the edges accurately nearly every shape deformation can be reconstructed. Another strength of this layout is that each sensor overlaps with one or multiple other sensors, providing additional information as to where a bend actually happens (as proposed in [36]). For example, the corner areas, which are particularly interesting for bend interaction, are covered by three sensors: one at a 45° angle and two at right angles. From this arrangement it is possible to accurately detect how the corner is bent (bend angle) as well as where the bend happens (bend position). Due to the permanent overlap of sensors, nearly every deformation actuates multiple sensors, which enables the detection of a wide variety of possible deformations. These complex patterns of sensor actuation allow for the reconstruction of complex and continuous deformations.
Driver Electronics
As mentioned earlier, every piezoelectric sensor creates a surface charge which correlates to the applied deformation. Since the total number of sensors in our layout is small, we can connect each individually to the driver board using conductive silver ink. This removes the issues associated with the active matrix described earlier. Each sensor is connected to an LMC6482 CMOS rail-to-rail amplifier. These are placed on a small PCB that is connected to the foil. Before amplification, the signals run through a low pass filter, which protects against electrostatic discharge. After amplification, a second low pass filter serves as an anti-aliasing filter. Each signal is measured by a MAX127 12-bit data acquisition system, which sends the data via a two-wire serial interface to a microcontroller board (Atmel SAM3X8E).
Signal Processing
The electric charges generated in a piezoelectric sensor decay with a time constant determined by the dielectric constant, the internal resistance of the active material, and the input impedance of the readout electronics. Because of this, the raw signal of a piezoelectric sensor is not usable for absolute measurements. However, once the parameters of the exponential discharge of the piezoelectric sensors are known, it is possible to predict the signal progression over time. Every deformation applied to the sensor will cause a deviation from this predicted signal, which is directly proportional to the applied mechanical stress. Integrating over these deviations yields absolute sensor measurements, which directly correlate with the strength of the applied deformations [23]. Note there is a trade-off with this approach, as integrated errors can persist over time and lead to sensor drift. This can be eliminated with a heuristic which resets the signal, e.g. when there is no active interaction on the sensor. In the next section, we describe two reconstruction algorithms, one of which relies on the integrated signal and one of which bypasses the described issues by working directly from the raw signal.
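The integration scheme above can be sketched as follows. The one-parameter discharge model, decay factor, and reset threshold here are illustrative assumptions, not the paper's calibrated parameters:

```python
import numpy as np

def integrate_sensor(raw, decay=0.98, rest_eps=0.01, rest_frames=30):
    """Integrate deviations of a piezo signal from its predicted
    exponential discharge, resetting after prolonged inactivity to
    limit drift. All parameter values are illustrative."""
    predicted = 0.0          # predicted discharge of the previous charge
    integrated = 0.0         # absolute deformation estimate
    idle = 0                 # consecutive frames with no activity
    out = []
    for sample in raw:
        deviation = sample - predicted       # deformation-induced change
        integrated += deviation              # proportional to applied stress
        predicted = sample * decay           # next-frame discharge prediction
        # drift-reset heuristic: zero the integral after `rest_frames`
        # frames with no active interaction on the sensor
        idle = idle + 1 if abs(deviation) < rest_eps else 0
        if idle >= rest_frames:
            integrated = 0.0
        out.append(integrated)
    return np.array(out)
```

A real deployment would fit the discharge parameters per sensor, as the decay constant depends on the material and readout electronics described above.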
RECONSTRUCTING FLEXSENSE
Now that we have described the FlexSense hardware and sen-
sor layout, we turn our attention to how these sparse (raw or
integrated) sensor measurements can be used to accurately
recover the 3D shape of the surface during deformation.
Main Pipeline
Reconstructing the full 3D surface shape from a sparse set of
sensor measurements is clearly a challenging task. Each sensor
reading from our foil is an amplified voltage measurement,
and somehow we need to map these combined values to a
real-world reconstruction of the surface. In this section we
present two data-driven algorithms that tackle this problem.
Both of our methods are first trained using pairs of sensor
measurements and ground truth 3D shape measurements of
the foil. This pre-processing training phase is what enables
our algorithms to infer the shape of the foil from the sparse
sensor data at runtime.
To collect ground truth measurements of the shape of the foil
together with corresponding sensor measurements, we follow
the approach illustrated in Figure 5. We print an array of
markers on a sheet of paper covering our sensor foil. Then
we use a custom-built multi-camera rig (described later) to
track the 3D position of the markers with high accuracy. We
leverage multiple cameras in order to track as many markers
as possible despite occlusions due to the deforming foil and
interacting hands. To estimate the positions of the remaining
occluded markers (shown in red in Figure 5 and 7) we exploit
the prior knowledge that the foil is locally rigid to extrapolate
the reconstructed surface (described later).
We use this sequence of resulting ground truth shapes along with the corresponding sparse sensor measurements to train our two algorithms. In the upper left part of Figure 6 we show the training for our linear interpolation based approach. It clusters the training data into K = 30 common shapes. The mean shape of each cluster is called a blendshape and is stored together with its averaged corresponding sensor measurements. At runtime (see Figure 6, bottom left) the sensor data is used to retrieve the k = 7 blendshapes which best match the input sensor data (based on a distance metric described later). The
[Figure 5 pipeline stages: flexible sensor with fiducial markers → marker detection + sparse 3D point triangulation (two stereo rigs, left and right cameras) → As-Rigid-As-Possible based mesh fitting → sequence of N meshes, paired with a sequence of N × 16 sensor measurements (raw + integrated signal)]
Figure 5: Processing pipeline for ground truth capture. We place fiducial markers on the front and back side of the foil and record their 3D positions using two stereo camera rigs, together with the recorded sensor measurements (see text for more details).
final estimated shape is a weighted average of the retrieved blendshapes. This scheme is similar to the approach of [31] and its main benefit is that it is simple to implement and can yield compelling results. However, a shortcoming of this approach is that it does not generalize well to large sensor signal variations. In practice, the measured sensor signal depends on many variables such as bending speed, temperature, and whether the sensor is being grasped. In certain scenarios, this can result in inaccuracies and accumulation of error over time (see detailed discussion later).
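The runtime lookup of the linear interpolation approach can be sketched as follows; the function name and the inverse-distance weighting are our assumptions, standing in for the paper's distance metric, which is described later:

```python
import numpy as np

def blend_shapes(signal, blend_signals, blend_meshes, k=7):
    """Weighted linear interpolation over the k nearest blendshapes.
    signal:        (F,) integrated sensor measurement (F = 16)
    blend_signals: (K, F) mean sensor data of each cluster (K = 30)
    blend_meshes:  (K, N, 3) mean shape of each cluster (N vertices)"""
    d = np.linalg.norm(blend_signals - signal, axis=1)  # distance to each cluster
    nearest = np.argsort(d)[:k]                         # k best-matching blendshapes
    w = 1.0 / (d[nearest] + 1e-6)                       # inverse-distance weights (assumed)
    w /= w.sum()
    # estimated shape: weighted average of the retrieved blendshapes
    return np.einsum('k,kij->ij', w, blend_meshes[nearest])
```

The training side simply runs K-means over the ground-truth meshes and stores, per cluster, the mean shape and the mean of the associated sensor vectors.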
To tackle these challenges we propose to use a more sophisticated machine learning algorithm. We train this method using the raw sensor data (see Figure 6, top right) to learn a (non-linear) mapping function from the raw sensor measurements to the mesh model. At runtime (see Figure 6, bottom right) it can efficiently infer the geometry given the learned mapping function and the sparse sensor data. This method generalizes well to unseen variations in the sensor signal and hence outperforms the linear interpolation approach. However, it is more complex to implement. As shown later, there is value in each approach and we describe both in detail in the next section.
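As a rough illustration of what such a learned mapping looks like (this is a generic stand-in, not the paper's actual learning method), one could lift the 16 raw sensor values into a nonlinear feature space and fit a ridge regressor to the vectorized meshes:

```python
import numpy as np

rng = np.random.default_rng(0)
F, N, D = 16, 96, 256                 # sensors, vertices, feature dimension

W = rng.normal(size=(F, D))           # random projection, fixed at training time
b = rng.uniform(0, 2 * np.pi, D)

def features(x):
    """Lift raw sensor vectors (..., F) into a nonlinear feature space
    (random Fourier features; an illustrative choice)."""
    return np.cos(x @ W + b)

def train(X, V, reg=1e-3):
    """Fit a linear map from features to vectorized meshes.
    X: (J, F) raw signals; V: (J, 3N) ground-truth vertex vectors."""
    Phi = features(X)                              # (J, D)
    A = Phi.T @ Phi + reg * np.eye(D)              # ridge normal equations
    return np.linalg.solve(A, Phi.T @ V)           # (D, 3N) mapping

def predict(M, x):
    """Infer vertex positions (N, 3) from one raw measurement (F,)."""
    return (features(x) @ M).reshape(N, 3)
```

The point of the nonlinearity is exactly the generalization argument above: a purely linear map from raw signals to geometry cannot absorb variations due to bending speed, temperature, or grasping.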
A tale of two algorithms
In this section we provide a deeper formulation of our two
algorithms to aid replication.
Preliminaries
Raw sensor measurements are represented as a vector $x \in \mathbb{R}^F$ and integrated measurements as $z \in \mathbb{R}^F$ ($F = 16$ for our sensor foil). Our goal is then to estimate the 3D "vertex" positions $V = \{v_1, \ldots, v_N\} \subseteq \mathbb{R}^3$ of $N = 96$ canonical locations arranged in a $12 \times 8$ grid on the sheet. When the sheet is at rest, these vertices take on their known positions $\bar{V} = \{\bar{v}_1, \ldots, \bar{v}_N\} \subseteq \mathbb{R}^3$.
In the following, it will sometimes be helpful to vectorize the vertex positions $V = \{v_n\}_{n=1}^N$ into a column vector $\mathbf{V} \in \mathbb{R}^{3N}$ that we wish to estimate given the raw signal $x \in \mathbb{R}^F$ or integrated signal $z \in \mathbb{R}^F$. Further, when we are required to deal with a set of $J$ instances of these variables, we will often label these variables as $V^j = \{v^j_n\}_{n=1}^N$, $\mathbf{V}^j$ and $x^j$. In the case of a temporal sequence of length $T$ we will instead use $t$ to index these variable instances. We will, however, often leave off these indices when speaking of a single instance of these variables or when doing so makes equations more clear.
Ground Truth Capture
As mentioned, we take a data-driven approach for reconstructing the deformations of FlexSense and therefore require training data in order to correlate the sensor measurements of the flexible sensor with the 3D vertex positions. In this section, we therefore explain how to obtain ground truth for a variety of deformations.

In order to generate data of this form, we print an array of $N = 12 \times 8$ fiducial markers onto the front and back side of the flexible sensor foil so as to coincide with the $N$ vertex positions we wish to estimate (see Figure 5, top left). Although it is possible to directly estimate the position (and even orientation) of each marker with a single camera [11], this estimation is unreliable when the markers are considerably deformed, as in our scenario. We therefore create a calibrated stereo rig (i.e., two cameras with known relative position and orientation) that we label $A$. In rig $A$, we are able to detect some subset $\mathcal{C}_A \subseteq \{1, \ldots, N\}$ of the $N$ markers in both of the rig's cameras. For such a marker $n \in \mathcal{C}_A$, we use triangulation to estimate the position $\hat{v}^A_n \in \mathbb{R}^3$ of vertex $n$.
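Triangulating a marker from a calibrated stereo pair can be sketched with the standard linear (DLT) method; the projection-matrix setup here is generic, not the paper's specific calibration:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one marker seen in two calibrated
    cameras. P1, P2: 3x4 projection matrices; u1, u2: pixel coordinates.
    Returns the 3D point minimizing the algebraic reprojection error."""
    A = np.stack([
        u1[0] * P1[2] - P1[0],   # each observed coordinate contributes
        u1[1] * P1[2] - P1[1],   # one homogeneous linear constraint
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize
```

With rig B's estimates rotated into rig A's frame, the per-marker estimates can then be merged as in equation (2).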
Note that due to occlusions and strong deformations, often only a small number of the marker positions can be estimated with stereo rig $A$. To increase the number of vertex positions that we can estimate, we use a second stereo rig that we label $B$ to obtain estimates for a second set of markers $\mathcal{C}_B \subseteq \{1, \ldots, N\}$. The set of indices with estimations from both rigs, $\mathcal{C}_A \cap \mathcal{C}_B$, defines two input point clouds in one-to-one correspondence. We obtain the optimal rigid transformation by using the Kabsch algorithm [10] to minimize

$$E_{\text{rigid}}(R, \tau) = \sum_{n \in \mathcal{C}_A \cap \mathcal{C}_B} \| \hat{v}^A_n - (R \hat{v}^B_n + \tau) \|^2 \quad (1)$$

where $R \in SO(3)$ is a rotation matrix and $\tau \in \mathbb{R}^3$ is a translation. Using this transformation, we can obtain a set $\mathcal{C} = \mathcal{C}_A \cup \mathcal{C}_B$ in a common coordinate frame. To do this, for $n \in \mathcal{C}$, we estimate the corresponding vertex as

$$\hat{v}_n = \begin{cases} \tfrac{1}{2}(\hat{v}^A_n + R \hat{v}^B_n + \tau) & n \in \mathcal{C}_A \cap \mathcal{C}_B \\ \hat{v}^A_n & n \in \mathcal{C} - \mathcal{C}_B \\ R \hat{v}^B_n + \tau & n \in \mathcal{C} - \mathcal{C}_A \end{cases} \quad (2)$$
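The Kabsch step minimizing (1) can be sketched as a generic implementation, including the usual determinant guard against reflections:

```python
import numpy as np

def kabsch(a, b):
    """Rotation R and translation t minimizing sum ||a_n - (R b_n + t)||^2
    over corresponding point clouds a, b of shape (M, 3)."""
    ca, cb = a.mean(axis=0), b.mean(axis=0)        # centroids
    H = (b - cb).T @ (a - ca)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ca - R @ cb
    return R, t
```

Here `a` would hold rig A's estimates and `b` rig B's, restricted to the commonly visible markers; the returned transform then maps all of rig B's estimates into rig A's frame for the merge in (2).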
As-Rigid-As-Possible Refinement
Unfortunately, we are generally unable to estimate the locations of all $N$ points even with two stereo rigs. Therefore, we exploit our prior knowledge that the sheet is locally rigid to extrapolate the reconstructed surface and provide estimates for the remaining points. To do this, we utilize the as-rigid-as-possible (ARAP) regularizer [30], which measures how locally non-rigid a point cloud with a neighborhood structure is. Here the local rigidity for point $n$ is measured in relation to the other points in a neighborhood $\mathcal{N}(n) \subseteq \{1, \ldots, N\}$, which in our case is simply the neighbors of $n$ in the $12 \times 8$ grid. The ARAP measure of deformation with respect to the rest configuration $\bar{V}$ is

$$E_{\text{ARAP}}(V) = \sum_{n=1}^{N} \min_{R} \sum_{n' \in \mathcal{N}(n)} \| (\bar{v}_{n'} - \bar{v}_n) - R (v_{n'} - v_n) \|^2$$
[Figure 6 panels. Left, weighted linear interpolation — training: K-means clustering of ground-truth meshes into K blendshapes (mean shape + mean sensor data of each cluster, a K × 16 feature vector); test: kNN retrieval from the integrated sensor measurement and a weighted average of blendshapes. Right, learning-based continuous regression — training: learning a mapping function from raw sensor data + ground-truth meshes to geometry; test: retrieving geometry from the raw sensor measurement via the learned mapping function.]
Figure 6: We present two reconstruction algorithms, both with different trade-offs. On the left: A weighted linear interpolation
scheme using kNN. On the right: A machine learning based approach (see text for more details).
Figure 7: Ground truth capture. Markers are detected in each camera image (green and orange dots), and triangulated as a 3D point across each stereo pair (green dots in images and mesh). An ARAP refinement step regularizes for unseen or occluded vertices (red dots on mesh).
where $R \in SO(3)$. This energy will be low for the smooth deformations that we expect (see Figure 4), as a rigid rotation can be used to approximate the local deformation of each neighborhood. When this is impossible (e.g. sharp creases), however, the energy will be high. We therefore formulate the following energy

$$E(V) = \sum_{n \in \mathcal{C}} \| \hat{v}_n - v_n \|^2 + \lambda_{\mathrm{ARAP}} E_{\mathrm{ARAP}}(V) \quad (3)$$

defined over the vertex positions $V$. By minimizing this energy, we seek a set of vertex positions $V$ that approximately match the estimates $\{\hat{v}_n : n \in \mathcal{C}\}$ while not substantially deforming (in the ARAP sense) from the rest configuration.
To minimize (3), we follow the lead of [34] and introduce latent rotation variables $\mathcal{R} = \{R_n\}_{n=1}^{N} \subseteq SO(3)$, one per point, in place of the per-point minimizations in (3). This allows one to define

$$E'_{\mathrm{ARAP}}(V, \mathcal{R}) = \sum_{n=1}^{N} \sum_{n' \in \mathcal{N}(n)} \big\| (\bar{v}_{n'} - \bar{v}_n) - R_n (v_{n'} - v_n) \big\|^2$$

with the property that $E_{\mathrm{ARAP}}(V) = \min_{\mathcal{R}} E'_{\mathrm{ARAP}}(V, \mathcal{R})$. Plugging this into (3), we get that $E(V) = \min_{\mathcal{R}} E'(V, \mathcal{R})$ where

$$E'(V, \mathcal{R}) = \sum_{n \in \mathcal{C}} \| \hat{v}_n - v_n \|^2 + \lambda_{\mathrm{ARAP}} E'_{\mathrm{ARAP}}(V, \mathcal{R}). \quad (4)$$
We thus minimize $E'(V, \mathcal{R})$ using Levenberg-Marquardt². We initialize this procedure by setting $v_n = \bar{v}_n$ and $R_n = I_3$ for each $n \in \{1, \dots, N\}$. Finally, we parameterize the rotations using an axis-angle representation to enforce the constraint that each $R_n$ remains a rotation matrix. In practice, this leads to a robust and real-time method for recovering the full mesh from a sparse set of vertices (see Figure 7).
2https://code.google.com/p/ceres-solver/
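The minimization of Eq. (4) can be sketched as follows. This is a simplified stand-in, not the paper's Ceres-based implementation: SciPy's `least_squares` plays the role of the Levenberg-Marquardt solver, and `fit_mesh` is a hypothetical helper name. Each vertex carries a latent rotation stored as an axis-angle vector, exactly as described above:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def fit_mesh(v_hat, C, V_rest, nbrs, lam=1.0):
    """Minimize Eq. (4): a data term on the observed vertices C plus the
    ARAP term with one latent rotation per vertex. Rotations are
    parameterized as axis-angle vectors so each R_n stays in SO(3)."""
    N = V_rest.shape[0]

    def residuals(x):
        V = x[:3 * N].reshape(N, 3)
        Rs = Rotation.from_rotvec(x[3 * N:].reshape(N, 3)).as_matrix()
        res = [(V[C] - v_hat).ravel()]            # data term over C
        for n in range(N):
            for m in nbrs[n]:                     # ARAP term (E'_ARAP)
                r = (V_rest[m] - V_rest[n]) - Rs[n] @ (V[m] - V[n])
                res.append(np.sqrt(lam) * r)
        return np.concatenate(res)

    # Initialize at the rest pose with identity rotations, as in the text.
    x0 = np.concatenate([V_rest.ravel(), np.zeros(3 * N)])
    x = least_squares(residuals, x0).x
    return x[:3 * N].reshape(N, 3)
```

Unobserved vertices (those not in `C`) are pulled into place purely by the ARAP term, which is exactly the extrapolation behavior the refinement step relies on.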
Dataset Construction
We have shown how to capture ground truth vertex positions $V$ to associate with a signal $x \in \mathbb{R}^F$. As we are capturing temporal sequences, in addition to the raw signal $x^t$ and vertex positions $V^t$ at time $t$, we will also record the integrated signal $z^t \in \mathbb{R}^F$ up to time $t$ and the instantaneous vertex displacements $D^t = \{d^t_1, \dots, d^t_N\} \subseteq \mathbb{R}^3$. The vertex displacement for vertex $n$ is simply $v^t_n - v^{t-1}_n$. By capturing this information for a wide variety of sequences, we obtain a large fixed dataset $\{(x_j, z_j, V_j, D_j)\}_{j=1}^{J}$ that we will use for training.
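Assembling these per-frame tuples is straightforward; the sketch below (a hypothetical helper, not the authors' code) shows the integration and differencing steps for one recorded sequence:

```python
import numpy as np

def build_dataset(raw, meshes):
    """raw: (T, F) raw sensor frames; meshes: (T, N, 3) ground-truth vertices.
    Returns one tuple (x_t, z_t, V_t, D_t) per frame t >= 1: raw signal,
    integrated signal up to t, vertices, and displacements v^t_n - v^{t-1}_n."""
    z = np.cumsum(raw, axis=0)                 # integrated signal z^t
    D = np.diff(meshes, axis=0)                # instantaneous displacements
    return [(raw[t], z[t], meshes[t], D[t - 1]) for t in range(1, len(raw))]
```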
Learning to Infer Shape and Deformation
In this section, we describe how to leverage this training set to predict the positions $V$ of the vertices. In particular, as the rest pose $\bar{V}$ is easily detected when $x \approx 0$, we assume that we have seen $t$ subsequent time steps and attempt to predict $V^t$. To estimate the 3D points $V^t$ at time $t$, we propose two different approaches, both data-driven. The first uses a distance metric in the sensor input space (i.e. voltage measurements) to interpolate a set of key deformation modes. The second uses machine learning techniques to directly regress the 3D vertex displacements.
Nearest Neighbor Lookup and Linear Interpolation
Our first method assumes that the vertex position vector $V \in \mathbb{R}^{3N}$ is a linear combination of a set of $K$ blendshapes or deformation modes, which we denote as $\{B_1, \dots, B_K\} \subseteq \mathbb{R}^{3N}$. That is,

$$V = \sum_{k=1}^{K} \alpha_k B_k \quad (5)$$

where $\alpha_k$ defines the weight of the $k$'th deformation mode. To find these $K$ deformations we run K-means on our training set to extract $K$ modes that appear to cover all the common configurations we expect to use. For each $k$, we average the integrated signal
$$\bar{z}_k = \frac{1}{|\zeta_k|} \sum_{j \in \zeta_k} z_j \quad (6)$$

where $\zeta_k$ is the set of integrated signals in the training set whose corresponding vertex positions were assigned to mode $k$ in K-means. At runtime we see a new integrated signal $z^t$ and compute $\alpha_k$ as

$$\alpha_k = \left( 1 - \frac{|\bar{z}_k - z^t| \, \frac{F}{f_k}}{\sum_j |\bar{z}_j - z^t| \, \frac{F}{f_j}} \right)^{\beta}, \quad (7)$$

where $\frac{F}{f_k}$ weights the distances based on the number of sensors involved during the bending gesture. In particular, $F$ is the total number of sensors attached to the sheet and $f_k$ is the number of activated sensors of mode $k$, i.e. those whose absolute values are greater than $\theta = 200$; $\beta$ is a regularization term to ensure smoothness during the transition between different configurations. We then plug these weights into (5) to reconstruct the positions $V$.
Learning-based Continuous Regression
The basic method proposed in the previous section assumes that all possible poses can be generated through a linear combination of $K$ modes. Also, recall that the signal $z$ comes from an integration process that could lead to drift effects over time. Unfortunately, these properties can lead to inaccuracies in the final reconstruction.

To address these issues, we consider regression-based methods that directly estimate the vertex displacements $D$ using the raw signal $x \in \mathbb{R}^F$. In particular, we seek to leverage our large training set to directly learn a mapping

$$f_{in} : x \mapsto d_{in}, \quad n = 1, \dots, N, \quad i = 1, 2, 3 \quad (8)$$

from the raw signal $x$ to the displacement $d_{in}$ in coordinate $i$ of vertex $n$. For coordinate $i$ of vertex $n$, we extract from our training set the following set of tuples $\{(x_j, d^j_{in})\}_{j=1}^{J}$ of size $J$. We use this to learn a function $f_{in}$ that minimizes the empirical risk

$$\frac{1}{J} \sum_{j=1}^{J} L\big(f_{in}(x_j), d^j_{in}\big) + R(f), \quad (9)$$

where $L(\cdot)$ is the loss function and $R(\cdot)$ is a regularization term that gives a tradeoff between the accuracy and the complexity of the model.
where
L()
is the loss function and
R()
is a regularization
term that gives a tradeoff between the accuracy and the com-
plexity of the model.
Linear Model. Since we want to predict the positions of all $N$ vertices with real-time performance, we use a linear regression model: $f_{in}(x) = x^\top w_{in}$, with $w_{in} \in \mathbb{R}^F$. Indeed, with a high sampling rate of the signal, it is reasonable to assume that the relation between the input and output can be approximated by such a linear function. Let $X = [x_1, \dots, x_J]^\top \in \mathbb{R}^{J \times F}$ be the matrix of all $J$ examples, and let the ground-truth vertex displacements be $Y = [D_1^\top, \dots, D_J^\top] \in \mathbb{R}^{J \times 3N}$, where $D_j$ is just the set $\{d^j_{in} : n \in \{1, \dots, N\},\; i \in \{1, 2, 3\}\}$ vectorized into a column vector. This allows us to rewrite (9) for all $N$ vertices simultaneously as

$$W^\star = \arg\min_{W} \| Y - XW \|_2^2 + \lambda \| W \|^2 \quad (10)$$

where $W \in \mathbb{R}^{F \times 3N}$ is the matrix of all the linear regressors, and $\| \cdot \|^2$ is a regularizer that favors smaller norms (i.e. lower complexity). The above optimization problem is known as Tikhonov regularization or Regularized Least Squares (RLS) [35, 4], and it has the closed-form solution

$$W^\star = (X^\top X + \lambda I)^{-1} X^\top Y \quad (11)$$

with $I \in \mathbb{R}^{F \times F}$ being the identity matrix.
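The closed form of Eq. (11) is a one-liner in NumPy; the sketch below (with the hypothetical name `fit_rls`) solves the regularized normal equations rather than forming an explicit inverse, which is numerically preferable:

```python
import numpy as np

def fit_rls(X, Y, lam=1e-3):
    """Closed-form RLS / ridge solution of Eq. (11):
    W* = (X^T X + lam I)^-1 X^T Y, mapping raw signals X (J x F)
    to stacked vertex displacements Y (J x 3N)."""
    F = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(F), X.T @ Y)
```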
We also extended this approach to handle non-linear functions by using the Representer Theorem [27]. Both the linear and non-linear models are able to describe the relation between the sensor signal and the vertices. However, due to the higher complexity of the non-linear model (which grows linearly with the number of training examples), our machine learning based approach relies on the linear model.
Run-Time. At run time, given the current signal $x^t$, we compute the $N$ vertex displacements as $D^t = W^\top x^t$. The current position of the sheet is then computed as $V^t = V^{t-1} + D^t$. The initial position $V^0$ is assumed to be known (i.e. the resting pose). The learning method we propose is robust against drift effects, which have little impact on the final reconstruction. Moreover, it is always possible to accurately detect the resting pose when there is no activity on the sensor: we simply compute the standard deviation $\sigma$ of the signal $x^t$ and classify the current pose as the initial position if $\sigma < \theta$.
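One run-time update then amounts to a matrix-vector product plus the rest-pose check; a minimal sketch (hypothetical `step` helper, with the displacement written in row form `x_t @ W`):

```python
import numpy as np

def step(V_prev, x_t, W, V_rest, sigma_thresh):
    """One run-time update: D_t = x_t @ W (Eq. D_t = W^T x_t in row form),
    then V_t = V_{t-1} + D_t. When the signal is quiet (std below the
    threshold) we snap back to the known rest pose, canceling any drift."""
    if np.std(x_t) < sigma_thresh:            # no activity on the sensor
        return V_rest.copy()
    return V_prev + (x_t @ W).reshape(V_prev.shape)
```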
RESULTS AND EXPERIMENTS
We now evaluate the proposed surface reconstruction methods; the core question of the experiment is to measure the accuracy of our system with regard to the two proposed algorithms. For this we again use our ground truth rig. We acquire 10 sequences of bending gestures covering the most common deformations for interaction scenarios. Each sequence contains approximately 4,000 frames, so in total our dataset is composed of 40,000 frames. We randomly split the dataset into a training and a validation set, where 30,000 examples are used for training and the remaining 10,000 for testing. The error (in meters) is computed as the average Euclidean distance between the ground truth vertices and the predicted configuration.
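The error measure described above is simply the mean per-vertex Euclidean distance; for concreteness (hypothetical helper name):

```python
import numpy as np

def mean_euclidean_error(V_pred, V_gt):
    """Average Euclidean distance (in meters) between predicted and
    ground-truth vertex positions, the error used in the evaluation."""
    return np.linalg.norm(V_pred - V_gt, axis=-1).mean()
```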
Reconstruction Error
We first compare the reconstruction performance of the linear interpolation (blendshape model) and regularized least squares (RLS). In Figure 9 (left) the average error over 10,000 bending poses is reported as a function of the number of training examples. The learning methods achieve the best results, with an average error of 0.015 ± 0.007 m. Thanks to the strong correlation between the signal and the actual vertex displacement, the linear regressions are able to describe the relation between the sensor signal and the vertices. The blendshape approach, despite its simplicity, also performs very well, with an average error of 0.018 ± 0.009 m. Notice that around 10,000 training frames are enough to achieve the best results: this corresponds roughly to a video sequence of 5 minutes. A comparison of the running times shows that both linear interpolation and RLS have a complexity of $O(MN)$, where $MN$ is the grid size. Qualitative examples of the shape reconstructions are shown in Figure 8 (left).

As a second experiment, we evaluated which parts of the flexible sheet obtain the highest error. We show the qualitative results in Figure 9 (right). As expected, most of the errors are around the corners, where the interaction is mostly occurring. RLS has a maximum and minimum error of 0.021 ± 0.013 m and 0.006 ± 0.001 m respectively. Linear interpolation instead has a maximum error of 0.032 ± 0.024 m and a minimum of 0.007 ± 0.015 m.
Comparisons with Marker-Based Vision Systems
Although camera-based systems have certain different constraints compared to our self-contained setup (i.e. spatial resolution of the camera, depth dependency, occlusions), we feel that existing camera-based systems (e.g. FlexPad [31]) currently offer the greatest degree of reconstruction quality. Therefore, being able to reconstruct our deformations without this
Figure 8: Left: Qualitative results showing the reconstruction performance of the linear interpolation (blendshapes) model versus the regularized linear regression of the vertex positions. Right: Reconstruction comparisons between RLS and a single stereo camera using markers. Most of the errors in the vision-based system are due to occlusions.
Figure 9: Left: Error [m] for the three reconstruction algorithms (non-linear RLS, linear RLS, and linear interpolation) as a function of the number of training samples. Right: Error maps of the linear interpolation and the regularized linear regressions with respect to the vertex positions. As expected, most of the error is located where the interactions often occur.
type of sensing infrastructure felt like an important question to answer. For that reason, we compared our reconstructions with a single-view marker-based system. In Figure 8 (right) we show some examples where a single stereo camera is not able to reconstruct the surface of the sheet. This mainly occurs when the hands occlude the markers and the surface. On average, the single-stereo marker-based system has an error of 0.023 ± 0.014 m, whereas the machine learning approach achieves 0.011 ± 0.06 m. This not only motivates our need for two stereo cameras for the ground truth capture, but also highlights that occlusions are a big challenge for systems that rely on a single depth or regular camera for surface tracking.
APPLICATIONS AND INTERACTION TECHNIQUES
In this section we cover some of the interactive affordances
of FlexSense in two different application scenarios. These
strongly highlight the benefit of the continuous, smooth and
detailed level of reconstruction capabilities of our system, its
transparency, and its compact and self-contained form-factor.
Transparent smart cover for tablets
In the first scenario, we used FlexSense as a replacement for existing tablet covers. By just adding a thin, transparent cover, our sensor layout allows for paper-like interaction with rigid displays. Figure 10 depicts novel and natural usage examples, such as switching between different layers in Photoshop or in online maps, performing rapid application switching to copy & paste content, comparing rendering enhancements for image or video filters (similar to a tangible magic lens), or revealing on-demand solutions for games or education. Initial user feedback suggests that the paper-like feeling provides a completely novel, highly intuitive and natural interaction experience. The one-to-one mapping and directness of the input, as the user peels the foil back and forth, greatly utilizes the accuracy of the detailed reconstruction and lets users accurately choose the area to reveal, providing direct immediate feedback.
One exciting use case for the transparent smart cover is in expanding traditional animation, where each frame is painstakingly drawn by hand. This work is mainly done on so-called 'animation desks', where animators sketch sequences on sheets of semi-transparent drafting film. These sheets are usually put on top of a light table, and the animator switches between the different frames by flipping the paper. Although these light tables are being increasingly replaced by graphics tablets, many designers still use this "old style" of animation³. Our transparent sensor is thin enough to sense the stylus input on the tablet. This allows the user to sketch directly on top of the transparent FlexSense and flip between different frames by bending the sensor, ushering this manual approach to animation back into the digital domain.
Using the sensor as a cover of course raises certain practical implications, in particular general "wear and tear" issues of the foil, which are a general challenge for flexible devices. For later prototypes it will be necessary to consider other issues, such as durability, which we have not addressed in this paper. However, we feel that this configuration enables exciting new possibilities and will inspire HCI researchers (including us) to work further in this space.
External high-DoF input device
As highlighted in related work, flexible sensors have also been used as external input devices [1]. What makes these types of sensors appealing is that they afford both 2.5D interactions, resting on a surface, as well as true in-air 3D interactions, while maintaining a level of tangible feedback (a problem for touchless interfaces in general). In Figure 10 (right) we demonstrate how the accurate reconstructions of the flexible sheet can be used in a 3D game. Continuous and discrete gestures are easily mapped to actions such as flying, steering, and firing a gun at varying rates. These gestures are simple to implement as we provide a fully tracked, temporally consistent mesh. A further example, shown in Figure 10 (far right), is a physically realistic game where the reconstructions of shape and deformation enable interactions with virtual objects (e.g. to catapult an object). The diversity of control in these examples would be difficult to achieve without the accuracy of our system and shows that continuous and precise reconstructions are crucial for these scenarios (as shown by prior work [1]). Whilst these are fun "toy" examples that showcase our system's capabilities, there are other scenarios which could greatly benefit from the accurate reconstruction, such as 3D modeling and manipulation.
³http://www.youtube.com/watch?v=fR3IDisAQwE (from 4:50)
Figure 10: Example applications created using our sensor. A transparent tablet cover example, which acts as a magic lens revealing hidden metadata, applying rendering effects in Photoshop, supporting window management, and allowing paper-like digital animation. Far right: using the foil as a high-DoF 3D controller.
DISCUSSION & LIMITATIONS
We have discussed our sensor layout, reconstruction algorithms, and potential application scenarios. Our methods for reconstructing real-world shape and deformation from a sparse set of piezoelectric measurements are novel, and afford new interactive capabilities in a self-contained, lightweight form factor. Combined with the transparent nature of the sensors, this creates many new application possibilities.

In describing our reconstruction algorithms, we have purposefully presented two methods, both of which have worked very well in practice. Our linear interpolation model is relatively simple to implement, and works well even for complex models; indeed, some of the clear-cover application scenarios were implemented with this model. The model can be trained and extended with new examples in a relatively straightforward manner, and with limited samples can generate compelling results. The main need for an extended algorithm, however, comes in dealing with larger variations of shapes and deformations, and in generalizing beyond the training data, for example if we wanted to develop a 3D modeling tool using the precise surface deformation as input. Given that the latter requires machine learning knowledge, we feel that both methods will have great value for practitioners. It is also worth noting that our method should in theory generalize to other self-sensing or external sensing setups, including other bend sensors; however, this remains future work. In terms of our algorithm, we have experimented with using ARAP during the prediction phase, at runtime. This type of regularization leads to over-smoothing, but such a run-time regularizer remains an area of future investigation, particularly if more complex materials or geometries are to be modeled.
In terms of hardware, developing a new sensor is challenging and the FlexSense sensor does have limitations. The sensors deteriorate over prolonged periods when performing extreme gestures, and gestures such as folding are also problematic. The sensors are also not fully transparent, although the indium tin oxide (ITO) used in traditional touch screens has similar transmissive capabilities. Another limitation is that the integrated sensor signal used in the linear interpolation approach can suffer from drift issues when performing rapid interactions. Generally, however, the sensor is highly promising and has a lot of potential for further projects. It will be interesting to conduct more profound user studies, particularly as new interaction techniques are developed further. Moreover, detailed comparisons of different existing sensor configurations with the FlexSense setup would be highly interesting. From a more general point of view, we would like to combine our input sensor with a flexible display (e.g. eInk / OLED). Smart watches are becoming more and more popular; especially if they are combined with deformable shapes, we can imagine devices which have not been possible before. Another interesting area is that of touch/pressure sensing in combination with bend sensors, in particular the signal processing challenges of differentiating one from the other.
CONCLUSIONS
In this paper, we presented FlexSense, a new thin-film, transparent self-sensing surface, which can reconstruct complex deformations without the need for any external sensing, such as cameras. We have built on prior work to demonstrate a new piezoelectric bendable input device, with sensors printed on the periphery of the surface substrate. Our main contribution has been to devise a novel set of algorithms to fully reconstruct the complex deformations of the sheet, using only these sparse sensor measurements. We have demonstrated a number of new types of applications for such a device that exploit the accurate shape and deformations afforded.
ACKNOWLEDGEMENTS
We acknowledge Florian Perteneder, Cem Keskin and Barbara
Stadlober for their invaluable input. The research leading to
these results has received funding from the European Union, Seventh Framework Programme FP7/2007-2013 under grant agreement No. 611104.
REFERENCES
1. Balakrishnan, R., Fitzmaurice, G., Kurtenbach, G., and
Singh, K. Exploring Interactive Curve and Surface
Manipulation Using a Bend and Twist Sensitive Input
Strip. In I3D’99, ACM, 1999, 111–118.
2. Caglioti, V., Giusti, A., Mureddu, L., and Taddei, P. A
Manipulable Vision-Based 3D Input Device for Space
Curves. In Articulated Motion and Deformable Objects.
Springer, 2008, 309–318.
3. Danisch, L. A., Englehart, K., and Trivett, A. Spatially
continuous six-degrees-of-freedom position and
orientation sensor. In Photonics East, International
Society for Optics and Photonics, 1999, 48–56.
4. Evgeniou, T., Pontil, M., and Poggio, T. Regularization
Networks and Support Vector Machines. In Advances in
Computational Mathematics, 2000.
5. Follmer, S., Leithinger, D., Olwal, A., Cheng, N., and
Ishii, H. Jamming User Interfaces: Programmable
Particle Stiffness and Sensing for Malleable and
Shape-changing Devices. In UIST’12, ACM, 2012,
519–528.
6. Gallant, D. T., Seniuk, A. G., and Vertegaal, R. Towards
More Paper-like Input: Flexible Input Devices for
Foldable Interaction Styles. In UIST’08, ACM, Oct. 2008,
283.
7. Gomes, A., Nesbitt, A., and Vertegaal, R. MorePhone: A
Study of Actuated Shape Deformations for Flexible
Thin-Film Smartphone Notifications. In CHI’13, ACM,
Apr. 2013, 583.
8. Herkenrath, G., Karrer, T., and Borchers, J. TWEND:
Twisting and Bending as new Interaction Gesture in
Mobile Devices. In CHI’08 EA, ACM, Apr. 2008, 3819.
9. Holman, D., Vertegaal, R., Altosaar, M., Troje, N., and
Johns, D. PaperWindows: Interaction Techniques for
Digital Paper. In CHI’05, ACM, 2005, 591–599.
10. Kabsch, W. A solution for the best rotation to relate two
sets of vectors. Acta Crystallographica (1976).
11. Kato, H., and Billinghurst, M. Marker Tracking and
HMD Calibration for a Video-Based Augmented Reality
Conferencing System. In IWAR’99, IEEE Computer
Society, 1999.
12. Kato, T., Yamamoto, A., and Higuchi, T. Shape
recognition using piezoelectric thin films. In IEEE
Industrial Technology, vol. 1, IEEE, 2003, 112–116.
13. Khalilbeigi, M., Lissermann, R., Kleine, W., and Steimle,
J. Foldme: Interacting with double-sided foldable
displays. In TEI’12, ACM, 2012, 33–40.
14. Khalilbeigi, M., Lissermann, R., Mühlhäuser, M., and
Steimle, J. Xpaaand: Interaction Techniques for Rollable
Displays. In CHI’11, ACM, 2011, 2729–2732.
15. Kildal, J., Paasovaara, S., and Aaltonen, V. Kinetic
Device: Designing Interactions with a Deformable
Mobile Interface. In CHI EA’12, May 2012.
16.
Konieczny, J., Shimizu, C., Meyer, G., and Colucci, D. A
Handheld Flexible Display System, 2005.
17. Lahey, B., Girouard, A., Burleson, W., and Vertegaal, R.
PaperPhone: Understanding the Use of Bend Gestures in
Mobile Devices with Flexible Electronic Paper Displays.
In CHI’11, ACM, May 2011, 1303.
18. Leal, A., Bowman, D., Schaefer, L., Quek, F., and Stiles,
C. K. 3D Sketching Using Interactive Fabric for Tangible
and Bimanual Input. In GI’11, Canadian
Human-Computer Communications Society, 2011,
49–56.
19.
Lee, J. C., Hudson, S. E., and Tse, E. Foldable interactive
displays. In UIST’08, ACM, 2008, 287–290.
20. Lee, S.-S. et al. How Users Manipulate Deformable
Displays as Input Devices. In CHI’10, ACM, Apr. 2010,
1647.
21.
Lee, S.-S. et al. FlexRemote: Exploring the effectiveness
of deformable user interface as an input device for TV. In
HCI International 2011–Posters’ Extended Abstracts.
Springer, 2011, 62–65.
22. Punpongsanon, P., Iwai, D., and Sato, K. DeforMe:
Projection-based Visualization of Deformable Surfaces
Using Invisible Textures. In ETech SA’13, ACM, 2013.
23. Rendl, C. et al. PyzoFlex: Printed Piezoelectric Pressure
Sensing Foil. In UIST’12, ACM, 2012.
24. Rosenberg, I., and Perlin, K. The UnMousePad: An
Interpolating Multi-touch Force-sensing Input Pad. In
ACM Transactions on Graphics (TOG), vol. 28, ACM,
2009, 65.
25. Roudaut, A., Karnik, A., Löchtefeld, M., and
Subramanian, S. Morphees: Toward High ”Shape
Resolution” in Self-Actuated Flexible Mobile Devices. In
CHI’13, ACM, Apr. 2013, 593.
26. Sato, T., Mamiya, H., Koike, H., and Fukuchi, K.
PhotoelasticTouch: Transparent Rubbery Tangible
Interface Using an LCD and Photoelasticity. In UIST’09,
ACM, 2009, 43–50.
27. Schölkopf, B., Herbrich, R., and Smola, A. A
Generalized Representer Theorem. In Conference on
Computational Learning Theory, 2001.
28. Schwesig, C., Poupyrev, I., and Mori, E. Gummi: A
Bendable Computer. In CHI’04, ACM, Apr. 2004,
263–270.
29. Smith, R. T., Thomas, B. H., and Piekarski, W. Digital
Foam Interaction Techniques for 3D Modeling. In
VRST’08, ACM, 2008, 61–68.
30. Sorkine, O., and Alexa, M. As-rigid-as-possible surface
modeling. In SGP’07, 2007.
31. Steimle, J., Jordt, A., and Maes, P. Flexpad: Highly
Flexible Bending Interactions for Projected Handheld
Displays. In CHI’13, ACM, Apr. 2013, 237.
32. Tajika, T., Yonezawa, T., and Mitsunaga, N. Intuitive
Page-turning Interface of E-books on Flexible E-paper
based on User Studies. In MM’08, ACM, Oct. 2008, 793.
33. Tarun, A. P. et al. PaperTab: An Electronic Paper
Computer with Multiple Large Flexible Electrophoretic
Displays. In CHI EA’13, ACM, 2013, 3131–3134.
34. Taylor, J. et al. User-specific hand modeling from
monocular depth sequences. In CVPR, 2014.
35. Tikhonov, A., Leonov, A., and Yagola, A. Nonlinear Ill-Posed Problems. Kluwer Academic Publishers, 1998.
36. Warren, K., Lo, J., Vadgama, V., and Girouard, A.
Bending the Rules: Bend Gesture Classification for
Flexible Displays. In CHI’13, ACM, Apr. 2013, 607.
37. Watanabe, J., Mochizuki, A., and Horry, Y. Bookisheet:
Bendable Device for Browsing Content Using the
Metaphor of Leafing Through the Pages. In UbiComp’08,
ACM, Sept. 2008, 360.
38. Wellner, P. Interacting with paper on the DigitalDesk.
Communications of the ACM 36, 7 (1993), 87–96.
39. Ye, Z., and Khalid, H. Cobra: Flexible Displays for
Mobile Gaming Scenarios. In CHI EA’10, ACM, 2010,
4363–4368.
40. Zimmerman, T. G., Lanier, J., Blanchard, C., Bryson, S.,
and Harvill, Y. A Hand Gesture Interface Device. In
CHI’87, ACM, 1987, 189–192.
41. Zirkl, M. et al. An All-Printed Ferroelectric Active
Matrix Sensor Network Based on Only Five Functional
Materials Forming a Touchless Control Interface.
Advanced Materials, Volume 23, Issue 18 (2011),
2069–2074.
... F. Castro et al., 2014), sensors with integrated signal conditioning filters (H. F. Castro et al., 2016), audio circuits (Street et al., 2020), screen-printed piezoelectric matrices (Rendl et al., 2012(Rendl et al., , 2014, electrochromic displays (Xuan Cao et al., 2016), etc. ...
... November 2021 Since then, piezoelectric polymers were a target of large research to enhance their properties and piezoelectric response, from which copolymers and terpolymers of PVDF were developed, with particular emphasis to poly(vinylidene fluoride trifluoroethylene) (P(VDF-TrFE)) that appeared in the late 1970's, among others (Higashihata et al., 1981;Yagi et al., 1984;Yagi & Tatemoto, 1979). Nowadays, both the homopolymer PVDF and the copolymer P(VDF-TrFE) have been broadly used for applications, ranging from sensors (Katsuura et al., 2017;Rendl et al., 2016Rendl et al., , 2014, to actuators ( All of these directly influence the piezoelectricity of materials (Pedro Costa, Jiangyu Li et al., 2013). Therefore, they are going to be explained in the following. ...
... To elaborate a case study prototype, several hypothesis were considered, from flexible game controllers (Rendl et al., 2014), to flexible tape controllers (Balakrishnan et al., 1999;Dementyev et al., 2015), wearables (Jones, 2019), hybrid books (Q. Li & Wang, 2016), roll up devices (Gomes et al., 2018), or wrapping devices (Drogemuller et al., 2021), among others. ...
Thesis
Full-text available
The last decade was marked by the computer-paradigm changing with other digital devices suddenly becoming available to the general public, such as tablets and smartphones. A shift in perspective from computer to materials as the centerpiece of digital interaction is leading to a diversification of interaction contexts, objects and applications, recurring to intuitive commands and dynamic content that can proportionate more interesting and satisfying experiences. In parallel, polymer-based sensors and actuators, and their integration in different substrates or devices is an area of increasing scientific and technological interest, which current state of the art starts to permit the use of smart sensors and actuators embodied within the objects seamlessly. Electronics is no longer a rigid board with plenty of chips. New technological advances and perspectives now turned into printed electronics in polymers, textiles or paper. We are assisting to the actual scaling down of computational power into everyday use objects, a fusion of the computer with the material. Interactivity is being transposed to objects erstwhile inanimate. In this work, strain and deformation sensors and actuators were developed recurring to functional polymer composites with metallic and carbonaceous nanoparticles (NPs) inks, leading to capacitive, piezoresistive and piezoelectric effects, envisioning the creation of tangible user interfaces (TUIs). Based on smart polymer substrates such as polyvinylidene fluoride (PVDF) or polyethylene terephthalate (PET), among others, prototypes were prepared using piezoelectric and dielectric technologies. Piezoresistive prototypes were prepared with resistive inks and restive functional polymers. Materials were printed by screen printing, inkjet printing and doctor blade coating. Finally, a case study of the integration of the different materials and technologies developed is presented in a book-form factor.
... The researchers have used different deformable materials, including flexible plexiglass [3,4], paper [5,6], flexible plastic [7,8], ethylene-vinyl acetate (EVA) foam [8][9][10][11], polyvinyl chloride (PVC) [12,13], polycarbonate [14], and silicone [11,[14][15][16] to develop the prototype's body. The resistive bend (or flex) sensors with different lengths and directional capabilities [2-4, 11, 14, 16-18], optical bend sensors [8,19], piezoelectric sensors [20,21], and conductive foam-based sensors [22] are used to detect initiation, extension, and direction of deformation. Literature also offers different feedback techniques such as visual feedback with flexible display [6,17], rigid display [4,[23][24][25], and projected display [7,26,27] and audio [13,16] and vibrotactile [13,28] feedback. ...
... Optical sensors are also explored in the literature to detect device deformation [8,19]. Rendl et al. [20,21] proposed printed piezoelectric sensors that can detect complex deformations. Chien et al. [44] proposed a shape-sensing flexible sensor strip composed of an array of strain gauges. ...
Article
Full-text available
In human-computer interaction research, prototypes allow for communicating design ideas and conducting early user studies to understand user experience without developing the actual product. For investigating deformation-based interaction, functional prototyping becomes challenging due to the unavailability of commercial platforms and the marginal availability of flexible electronic components. During functional prototyping, incurred time and cost are essential factors that further depend on the ease of stiffness customization, reproduction, and upgrade. To offer these advantages, this work presents the fabrication workflow of Nāmya, a smartphone-sized flexible prototype that can detect bend gestures and touch-based inputs using off-the-shelf sensors and flexible materials. This do-it-yourself (DIY) approach to fabricating deformable prototypes focuses on addressing the challenges of selecting flexible material, type of sensor, and sensor positions. We also demonstrate that the proposed use of a flexible three-dimensional- (3D-) printed internal structure with sensor pockets and the one-part silicone cast allows the development of robust deformable prototypes. This fabrication process offers the opportunity to easily customize device stiffness, reproduce prototypes with similar physical properties, and upgrade existing prototypes.
... Shape sensing using optical fibres has been widely used as a health monitoring strategy in various fields, including aerospace [2,3], civil [10] and medical [11] applications. Although different sensing methods such as piezoelectric sensing [12] and fringe projection [13] can be adopted for the same purpose, they do not fare well compared to optical fibres in terms of EMI immunity, resistance to corrosion, high sensitivity, and unobtrusiveness when bonded on or embedded in a structure [14,15]. Fibre Bragg Grating (FBG) based optical fibre sensing is the most favoured technology for shape sensing [16] and for the monitoring of aircraft wing deformations [17,18]. ...
Article
Full-text available
In this paper, with the final aim of shape sensing for a morphing aircraft wing section, we analyse a multimodal shape-sensing system. We utilise the method of interrogating a morphing wing section based on the principles of both hybrid interferometry and Fibre Bragg Grating (FBG) spectral sensing described in our previous work. The focus of this work is to assess the measurement performance and analyse the errors in the shape-sensing system. This includes an estimation of the bending and torsional deformations of an aluminium mock-up section under static loading that imitates the behaviour of a morphing wing trailing edge. The analysis involves a detailed calibration procedure and a multimodal sensing algorithm to measure the deflection and shape. The method described in this paper uses a standard single-core optical fibre and two grating pairs on both the top and bottom surfaces of the morphing section. A study on fibre placement and recommendations for efficient monitoring are also included. The analysis yielded a maximum deflection-sensing error of 0.7 mm for a 347 × 350 mm wing section.
Article
Background: The quality of powder-blown Laser-Directed Energy Deposition (L-DED) is mainly governed by the energy density per unit of mass employed to melt the material. In most previous works, process monitoring and control focused on the energy input, by controlling the properties of the melt pool. However, the powder mass input is as important to monitor as the energy input in order to preserve the equilibrium of the process. Methods: In this paper, the authors present the first test results of the Pyzoflex® sensor for powder flow monitoring in L-DED using a real powder feeding system in a robot-based laser-processing cell. The sensor was tested against the powder projected from the powder feeder under typical flow regimes, and real-time measurements were taken using a specifically designed software tool. Results: The graphical representations of the registered sensor signals are clearly correlated with the powder flow values set at the powder feeder, which demonstrates that the piezoelectric sensors can detect the powder flow with high precision in real time. Conclusions: The first laboratory tests of flexible printed piezoelectric sensors demonstrate that they are fast and precise in powder flow measurement, but that more effort must be invested in the robustness of the measurement setup as well as in clearing and stabilization of the registered signal.
Article
Motion capture technologies reconstruct human movements and have wide-ranging applications. Mainstream research on motion capture can be divided into vision-based methods and inertial measurement unit (IMU)-based methods. The vision-based methods capture complex 3D geometrical deformations with high accuracy, but they rely on expensive optical equipment and suffer from the line-of-sight occlusion problem. IMU-based methods are lightweight but hard to capture fine-grained 3D deformations. In this work, we present a configurable self-sensing IMU sensor network to bridge the gap between the vision-based and IMU-based methods. To achieve this, we propose a novel kinematic chain model based on the four-bar linkage to describe the minimum deformation process of 3D deformations. We also introduce three geometric priors, obtained from the initial shape, material properties and motion features, to assist the kinematic chain model in reconstructing deformations and overcome the data sparsity problem. Additionally, to further enhance the accuracy of deformation capture, we propose a fabrication method to customize 3D sensor networks for different objects. We introduce origami-inspired thinking to achieve the customization process, which constructs 3D sensor networks through a 3D-2D-3D digital-physical transition. The experimental results demonstrate that our method achieves comparable performance with state-of-the-art methods.
Article
With infinite degrees of freedom, soft robots are expected to achieve dexterous and complex tasks, but this also places higher demands on their sensing capabilities. An important sensing task in soft robots is sensing their own deformation and current shape. Currently, most existing soft shape sensors are limited by local perception, limited stretchability, and fabrication difficulties. We propose a sensing method based on Electrical Impedance Tomography (EIT), which reconstructs conductivity patterns distributed on a surface by considering the deformation-caused resistance changes. Comparison between the theoretical and experimental patterns reveals that, even though the quality of the pattern is affected by a large amount of noise, the considered features are still able to reflect the change of shape. With the help of neural networks, the pattern is decoded into physical data related to the deformation. Detection of planar shape changes and proprioception of a sensor-integrated soft robot are presented to demonstrate the capability of our method. Results show that the detected error ratios are mostly under 5% and 3% for 2D and 3D conditions, respectively.
Article
Full-text available
Bend gestures have a large number of degrees of freedom and therefore offer a rich interaction language. We propose a classification scheme for bend gestures and explore how users perform these bend gestures along four classification criteria: location, direction, size, and angle. We collected 36 unique bend gestures performed three times by each participant. The results suggest a strong agreement among participants for preferences of location and direction. Size and angle were difficult for users to differentiate. Finally, users performed and perceived two distinct levels of magnitude. We propose recommendations for designing bend gestures with flexible displays.
Conference Paper
Full-text available
Malleable and organic user interfaces have the potential to enable radically new forms of interactions and expressiveness through flexible, free-form and computationally controlled shapes and displays. This work, specifically focuses on particle jamming as a simple, effective method for flexible, shape-changing user interfaces where programmatic control of material stiffness enables haptic feedback, deformation, tunable affordances and control gain. We introduce a compact, low-power pneumatic jamming system suitable for mobile devices, and a new hydraulic-based technique with fast, silent actuation and optical shape sensing. We enable jamming structures to sense input and function as interaction devices through two contributed methods for high-resolution shape sensing using: 1) index-matched particles and fluids, and 2) capacitive and electric field sensing. We explore the design space of malleable and organic user interfaces enabled by jamming through four motivational prototypes that highlight jamming's potential in HCI, including applications for tabletops, tablets and for portable shape-changing mobile devices.
Conference Paper
Full-text available
We present PaperTab, a paper computer with multiple 10.7" functional touch-sensitive flexible electrophoretic displays. PaperTab merges the benefits of working with electronic documents with the tangibility of paper documents. In PaperTab, each document window is represented as a physical, functional, flexible e-paper screen called a displaywindow. Each displaywindow is an Android computer that can show documents at varying resolutions. The location of displaywindows is tracked on the desk using an electromagnetic tracker. This allows for context-aware operations between displaywindows. Touch and bend sensors in each displaywindow allow users to navigate content.
Conference Paper
Full-text available
Ferroelectric materials support both pyro- and piezoelectric effects that can be used for sensing pressure on large, bent surfaces. We present PyzoFlex, a pressure-sensing input device that is based on a ferroelectric material. It is constructed as a sandwich structure of four layers that can easily be printed on any material. We use this material in combination with a high-resolution Anoto sensing foil to support both hand and pen input tracking. The foil is bendable, energy-efficient, and can be produced in a printing process. Even a hovering mode is feasible due to the pyroelectric effect. In this paper, we introduce this novel input technology and discuss its benefits and limitations.
Conference Paper
This paper presents a method for acquiring dense nonrigid shape and deformation from a single monocular depth sensor. We focus on modeling the human hand, and assume that a single rough template model is available. We combine and extend existing work on model-based tracking, subdivision surface fitting, and mesh deformation to acquire detailed hand models from as few as 15 frames of depth data. We propose an objective that measures the error of fit between each sampled data point and a continuous model surface defined by a rigged control mesh, and uses as-rigid-as-possible (ARAP) regularizers to cleanly separate the model and template geometries. A key contribution is our use of a smooth model based on subdivision surfaces that allows simultaneous optimization over both correspondences and model parameters. This avoids the use of iterated closest point (ICP) algorithms which often lead to slow convergence. Automatic initialization is obtained using a regression forest trained to infer approximate correspondences. Experiments show that the resulting meshes model the user's hand shape more accurately than just adapting the shape parameters of the skeleton, and that the retargeted skeleton accurately models the user's articulations. We investigate the effect of various modeling choices, and show the benefits of using subdivision surfaces and ARAP regularization.
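The ARAP regularizer mentioned in this abstract penalizes deviation of each local neighbourhood from a best-fit rigid rotation. A minimal 2D sketch of that core step (our own illustration, not the paper's hand-fitting code): find the closed-form optimal rotation between template and deformed edge vectors, then measure the as-rigid-as-possible residual.

```python
import math

# Illustrative 2D sketch of the core ARAP step (not the paper's code):
# for one vertex neighbourhood, find the rotation R that best maps the
# template edge vectors onto the deformed edge vectors (2D Kabsch),
# then measure the ARAP residual sum ||R e_template - e_deformed||^2.

def best_rotation_2d(src_edges, dst_edges):
    """Closed-form optimal 2D rotation angle (Kabsch in 2D)."""
    dot = sum(sx * dx + sy * dy
              for (sx, sy), (dx, dy) in zip(src_edges, dst_edges))
    cross = sum(sx * dy - sy * dx
                for (sx, sy), (dx, dy) in zip(src_edges, dst_edges))
    return math.atan2(cross, dot)

def arap_residual(src_edges, dst_edges):
    """Sum of squared deviations from the best rigid rotation."""
    theta = best_rotation_2d(src_edges, dst_edges)
    c, s = math.cos(theta), math.sin(theta)
    return sum((c * sx - s * sy - dx) ** 2 + (s * sx + c * sy - dy) ** 2
               for (sx, sy), (dx, dy) in zip(src_edges, dst_edges))

# A pure 90-degree rotation of the neighbourhood is perfectly rigid,
# so the ARAP residual is (numerically) zero.
src = [(1.0, 0.0), (0.0, 1.0)]
dst = [(0.0, 1.0), (-1.0, 0.0)]
residual = arap_residual(src, dst)
```

In the full 3D setting the per-cell rotation is found via an SVD rather than a single angle, but the energy being minimized has the same shape.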
Article
Recently, there has been great interest in multi-touch interfaces. Such devices have taken the form of camera-based systems such as Microsoft Surface [de los Reyes et al. 2007] and Perceptive Pixel's FTIR Display [Han 2005] as well as hand-held devices using capacitive sensors such as the Apple iPhone [Jobs et al. 2008]. However, optical systems are inherently bulky while most capacitive systems are only practical in small form factors and are limited in their application since they respond only to human touch and are insensitive to variations in pressure [Westerman 1999]. We have created the UnMousePad, a flexible and inexpensive multitouch input device based on a newly developed pressure-sensing principle called Interpolating Force Sensitive Resistance. IFSR sensors can acquire high-quality anti-aliased pressure images at high frame rates. They can be paper-thin, flexible, and transparent and can easily be scaled to fit on a portable device or to cover an entire table, floor or wall. The UnMousePad can sense three orders of magnitude of pressure variation, and can be used to distinguish multiple fingertip touches while simultaneously tracking pens and styli with a positional accuracy of 87 dpi, and can sense the pressure distributions of objects placed on its surface. In addition to supporting multi-touch interaction, IFSR is a general pressure imaging technology that can be incorporated into shoes, tennis racquets, hospital beds, factory assembly lines and many other applications. The ability to measure high-quality pressure images at low cost has the potential to dramatically improve the way that people interact with machines and the way that machines interact with the world.
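The interpolation idea behind IFSR can be illustrated, in a deliberately simplified form (the actual sensor interpolates electrically within the resistive layer itself), as upsampling a coarse grid of force-sensitive-resistor readings into a smoother pressure image:

```python
# Hypothetical sketch, not Rosenberg & Perlin's implementation:
# bilinearly upsample a coarse grid of FSR readings to approximate
# a smooth, anti-aliased pressure image.

def bilinear_upsample(grid, factor):
    """Upsample a 2D list `grid` by an integer `factor` per axis."""
    rows, cols = len(grid), len(grid[0])
    out_rows = (rows - 1) * factor + 1
    out_cols = (cols - 1) * factor + 1
    out = []
    for i in range(out_rows):
        y = i / factor
        y0 = min(int(y), rows - 2)   # clamp so y0+1 stays in range
        fy = y - y0
        row = []
        for j in range(out_cols):
            x = j / factor
            x0 = min(int(x), cols - 2)
            fx = x - x0
            # Blend the four surrounding sensor readings.
            v = (grid[y0][x0] * (1 - fy) * (1 - fx)
                 + grid[y0][x0 + 1] * (1 - fy) * fx
                 + grid[y0 + 1][x0] * fy * (1 - fx)
                 + grid[y0 + 1][x0 + 1] * fy * fx)
            row.append(v)
        out.append(row)
    return out

# A single touch registered on a 3x3 sensor patch.
coarse = [[0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 0.0]]
fine = bilinear_upsample(coarse, factor=2)  # 5x5 pressure image
```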
Conference Paper
In this paper, we present DeforMe, a new projection-based mixed reality technique for augmenting deformable surfaces with deformation rendering graphics. DeforMe combines a geometry tracking method with a deformation reconstruction model to estimate the tangential deformation of a surface. The motions of the feature points are measured between two successive frames. Moreover, the system estimates the surface deformation on the basis of the moving least squares algorithm, and interpolates the deformation estimation result to the projected graphics. Users can interact in real time with the deformable object, while the realistic projected graphics deform according to the deformation of the surface. We aim to integrate our technique with various types of design-support and interactive applications in spatial augmented reality.
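The moving least squares idea used by DeforMe can be sketched in one dimension (the paper applies it to 2D surface deformation): at each query point, fit a low-order model to the samples with weights that fall off with distance, then evaluate that local fit at the query point. The sample values below are illustrative.

```python
# Minimal 1D moving-least-squares sketch (illustrative, not DeforMe's
# 2D implementation): fit a weighted line y ~ a + b*x around each
# query point, with inverse-square-distance weights, and evaluate it.

def mls_fit_1d(xs, ys, xq, eps=1e-6):
    # Weights concentrate the fit near the query point xq; eps avoids
    # division by zero when xq coincides with a sample.
    w = [1.0 / ((x - xq) ** 2 + eps) for x in xs]
    # Weighted normal equations for the line y = a + b*x.
    s0 = sum(w)
    s1 = sum(wi * x for wi, x in zip(w, xs))
    s2 = sum(wi * x * x for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1
    a = (t0 * s2 - s1 * t1) / det
    b = (s0 * t1 - t0 * s1) / det
    return a + b * xq

# Sparse displacement samples (e.g. tracked feature-point motions);
# MLS interpolates smoothly between them and reproduces the samples.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.5, 0.4, 0.0]
d = mls_fit_1d(xs, ys, 1.0)
```

Because the weights diverge at the samples, the fit interpolates them, which is the property DeforMe relies on when propagating measured feature-point motion to the projected graphics.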
Conference Paper
Flexpad is an interactive system that combines a depth camera and a projector to transform sheets of plain paper or foam into flexible, highly deformable, and spatially aware handheld displays. We present a novel approach for tracking deformed surfaces from depth images in real time. It captures deformations in high detail, is very robust to occlusions created by the user's hands and fingers, and does not require any kind of markers or visible texture. As a result, the display is considerably more deformable than in previous work on flexible handheld displays, enabling novel applications that leverage the high expressiveness of detailed deformation. We illustrate these unique capabilities through three application examples: curved cross-cuts in volumetric images, deforming virtual paper characters, and slicing through time in videos. Results from two user studies show that our system is capable of detecting complex deformations and that users are able to perform them quickly and precisely.
Book
This book considers variational regularizing algorithms for solving nonlinear ill-posed problems, as well as their applications.