Texture Rendering on a Tactile Surface using
Extended Elastic Images and Example-Based
Audio Cues
Julien Fleureau, Yoan Lefevre, Fabien Danieau,
Philippe Guillotel and Antoine Costes
Technicolor R&I
Abstract. A texture rendering system relying on pseudo-haptic and au-
dio feedback is presented in this paper. While the user touches the texture
displayed on a tactile screen, the associated image is deformed according
to the contact area and the rubbing motion to simulate pressure. Addi-
tionally, audio feedback is synthesized in real time to simulate friction. A
novel example-based scheme takes advantage of recorded audio samples
of friction between actual textures and a finger at several speeds to syn-
thesize the final output sound. This system can be implemented on any
existing tactile screen without any extra mechanical device.
1 Introduction
Texture rendering is an active and challenging field of study where many in-
put and output devices have been proposed. In their survey, Chouvardas et al.
classified these texture rendering devices into three categories [3]: mechanical, electrotactile and thermal devices. Mechanical devices stimulate the mechanore-
ceptors within the skin using mechanical actuators. They include pin-based de-
vices applying pressure, vibrating, ultrasonic and acoustic actuators, and devices
based on electrorheological fluids. Electrotactile devices use electric stimulation
to activate the mechanoreceptors. A matrix of electrodes is a typical example of
such devices. Finally, thermal devices provide heating or cooling stimuli to the skin.
Another way to simulate texture properties without a specific device is to rely
on pseudo-haptic feedback. Lécuyer has shown that various haptic sensations can be induced with visual stimuli [6]. This technique may provide sensations of stiffness, friction, mass of objects or haptic textures. Bumps and holes have been simulated by varying the speed of the cursor exploring the texture [7]. The elasticity of a texture was also simulated by deforming the image and the cursor [1]. These two approaches require the texture to be explored with a
mouse. To make the interaction more natural, Li et al. proposed a similar system
embedded on a tablet [8]. The user can feel softness of a surface using a pen or
a finger. Punpongsanon et al. developed an augmented reality system where the
user touches an actual object while a projector changes the visual appearance
of this object [9]. This visual feedback changes the perception of the softness of
the object.
Audio stimuli may also modify the perception of texture. Kim et al. showed
that the intensity of the sound changes the perception of roughness with or
without haptic feedback [5]. The denseness and ruggedness are also affected by
this intensity. Even with actual materials such as abrasive papers, sound modifies
the perceived roughness [10].
In this work, we present a texture haptic rendering system based on visual
and audio pseudo-haptic feedback. It may have applications in the context of
e-shopping, allowing users to virtually touch different materials of interest associated with clothes or furniture, for instance. By interacting through a standard tactile screen, the user is able to explore the physical properties of the texture, namely stiffness and friction. In line with the approaches mentioned hereinbefore, we rely on visual and audio illusions to generate haptic sensations. The contributions of this system are twofold:
– First, we propose to rely on the paradigm of elastic images introduced in
[1], currently limited to punctual pressure contact with a mouse device. We
introduce the features of continuous rubbing interaction and non-punctual
contact with a finger on a tablet device. To that end, an underlying vis-
coelastic law and a new contact model are proposed. During the interaction,
the texture is visually and dynamically deformed according to the contact
area with the finger and rubbing motions.
– Second, a novel example-based audio synthesis process is proposed to render
friction properties. It makes use of real audio samples to create a friction
sound synchronized to the user’s exploratory movement and consistent with
the actual texture and rubbing speed.
Both visual deformation and sound synthesis are on-line processes with low computational complexity. In the remainder of this paper, these two key components of the overall system are further detailed.
2 Pseudo-haptic Rendering
The first aspect of our work deals with a visual mechanism to render pressure
interaction when the user is rubbing the texture displayed on the tablet. In the
original elastic paradigm [1], the contact duration with the displayed image is interpreted as the amount of “pressure” applied to the associated texture by
the end-user. The image is then radially deformed around the contact point
with a dedicated heuristic function (see Figure 1) to give the illusion of a true
deformation.
However, as it stands, this paradigm only addresses static and punctual clicks; the case of a user sliding or rubbing the surface continuously with a finger
is not handled. To cope with these limitations, two different enhancements are
now introduced. First, a new contact model addresses the problem of natural
interaction on a tablet with a finger, and second, a viscoelastic mechanical model
is proposed to enable dynamic rubbing motions.
Fig. 1. Left: Deformation obtained from [1]. Right: Deformation obtained with the
proposed contact model.
2.1 Contact Model
In [1], the interaction between the user and the image is made by means of a
mouse device and the maximum “pressure” is applied at the cursor position.
In the context we address here, the end-user is no longer interacting with a mouse pointer but rather with a finger. The contact area is thus no longer punctual, and the whole interaction surface has to be taken into account.
In the new contact model, we therefore propose to divide the touched area into
two main components (see Figure 2): i) a first component related to the surface
right under the contact location, and ii) a second component involving the region
right around this contact area.
Fig. 2. Left: Schematic representation of a finger touching the surface. The contact
surface is divided into two areas under and around the actual contact zone (respectively
green and yellow). Right: The two contact areas represented on an image of a sponge
texture.
The maximum amount of “pressure” is now applied on the whole contact
surface (and not only a single point) whereas an exponential decrease occurs in
the boundary area.
Dedicated heuristic radial functions are also proposed here to quantify the
amount of “pressure” applied at a distance d from the contact point after a contact duration t in the two regions defined previously (i.e. d ∈ [0, R_th] and d ∈ [R_th, R_max]):
p^{1}_{deform}(d, t) = \frac{e^{5(d - R_{th})/R_{th}} - 1}{10} - t \quad \text{for } d \in [0, R_{th}]   (1)

p^{2}_{deform}(d, t) = -e^{-5(d - R_{th})/R_{max}} \, t \left(t - \frac{1}{50}\right) \quad \text{for } d \in [R_{th}, R_{max}]   (2)
These analytical expressions are plotted for various distances and contact durations in Figure 3, and a visual comparison between the contact model from [1] and ours is given in Figure 1.
Fig. 3. Plot of the dedicated heuristic radial functions used in the contact model for
different contact durations with Rmax = 10 and Rth = 0.25
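For concreteness, the following minimal Python sketch evaluates this contact model. It assumes the reconstruction of equations (1) and (2) given above (whose exact grouping may differ from the authors' implementation) and the R_th and R_max values of Figure 3; function and variable names are illustrative.

```python
# Sketch of the heuristic contact "pressure" field of Eqs. (1)-(2).
# The constants and the exact grouping of the terms follow the
# reconstruction given in the text and are assumptions, not the
# authors' reference implementation.
import numpy as np

R_TH = 0.25    # radius of the area directly under the finger (Fig. 3)
R_MAX = 10.0   # outer radius of the surrounding boundary area (Fig. 3)

def pressure(d, t):
    """Heuristic 'pressure' at distance d from the contact point
    after a contact duration t (arbitrary units)."""
    d = np.asarray(d, dtype=float)
    p = np.zeros_like(d)
    under = d <= R_TH                    # component (i): under the contact
    around = (d > R_TH) & (d <= R_MAX)   # component (ii): boundary region
    p[under] = (np.exp(5.0 * (d[under] - R_TH) / R_TH) - 1.0) / 10.0 - t
    p[around] = -np.exp(-5.0 * (d[around] - R_TH) / R_MAX) * t * (t - 1.0 / 50.0)
    return p

if __name__ == "__main__":
    d = np.linspace(0.0, R_MAX, 200)
    for t in (0.2, 0.5, 1.0):            # a few contact durations, as in Fig. 3
        print(t, float(pressure(d, t).min()))
```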
2.2 Viscoelastic Model
The original elastic image paradigm [1] is limited to “static” and punctual pres-
sures and is not designed for sliding or rubbing interactions. The deformation induced by the finger contact is therefore not handled when the end-user moves the contact point along the texture surface. To cope with this limitation, the textured image is no longer considered as a 2D deformable object but rather as a 3D grid where each node is associated with one pixel of the texture image.
Each vertex is additionally connected to its 4-connected neighborhood.
A purely vertical mechanical viscoelastic model is then associated with this 3D grid. It makes the successive contacts of the user spatially and temporally consistent over the whole textured image. In this new context, the contact model described in the previous section is no longer considered as a deformation field
but rather as a vertical force field applied to the 3D grid along the z-axis and
given as an external input force to the viscoelastic model. The deformation along
the z-axis for each node of the grid is defined by a first-order linear differential equation whose discrete scheme is given by:

P_{z}[i, j, n] = K \, E_{z}[i, j, n] + \nu \, \frac{E_{z}[i, j, n] - E_{z}[i, j, n-1]}{T_{s}}   (3)
The stiffness K and the viscosity ν are the two parameters involved in the viscoelastic model, and:
– P_z[i, j, n] is the “force” at sample time n, computed with the contact model described hereinbefore (see equations 1 and 2) and applied to the node located at (i, j) in the image plane. At a sample time n, P_z[i, j, n] is maximal under the contact surface.
– E_z[i, j, n] is the deformation / translation along the z-axis. It is noteworthy that E_z[i, j, n] not only depends on the current force P_z[i, j, n] but also on the previous displacement E_z[i, j, n − 1].
– T_s is the sampling period.
In steady state (and for a constant input pressure P_z[i, j, n] = p_z), the displacement E_z[i, j, n] converges to the value p_z / K, which is proportional to the force p_z. In transient state (i.e., an increase or decrease of the input force), the displacement reaches its steady-state value within a time defined by the viscosity parameter ν. In other words, the higher K, the lower the final deformation (the stiffer the texture), and the lower ν, the faster the texture reaches its steady state. The final deformation thus ensures time consistency as well as the integration of the user's external interaction while preserving a low computational load. Finally, tuning K and ν makes it possible to fit different kinds of texture in order to adapt the visual rendering and simulate different “mechanical properties”.
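For implementation purposes, it is worth noting that equation (3) can be rearranged to give an explicit per-sample update of the displacement (a direct consequence of the discrete scheme above):

E_{z}[i, j, n] = \frac{P_{z}[i, j, n] + (\nu / T_{s}) \, E_{z}[i, j, n-1]}{K + \nu / T_{s}}

For a constant input P_z[i, j, n] = p_z, iterating this update converges to p_z / K, in line with the steady-state behavior described above.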
2.3 Details of Implementation
The viscoelastic model presented previously can easily be implemented by means of a regular 3D graphics engine. The textured image is used as a regular 2D texture mapped on a regular square 3D grid. Each node of the grid is continuously updated according to equations 1, 2 and 3. A normal is estimated for each node (on the basis of its local neighborhood) at each sample time, which enables the use of a light source for shading and thus increases the realism of the simulation.
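As an illustration, the following minimal sketch performs such a per-frame update in Python, relying on the explicit form of equation (3) given in Section 2.2. The grid size, parameter values and function names are illustrative assumptions and not the authors' implementation.

```python
# Minimal sketch of the per-frame grid update described in Sect. 2.3.
# Parameter values and grid size are illustrative assumptions.
import numpy as np

K, NU, TS = 4.0, 0.3, 1.0 / 60.0        # stiffness, viscosity, frame period (s)

def update_grid(E_prev, P):
    """One time step of the purely vertical viscoelastic model (Eq. 3).
    E_prev : (H, W) previous z-displacement of each grid node.
    P      : (H, W) external 'pressure' field from the contact model."""
    return (P + (NU / TS) * E_prev) / (K + NU / TS)

def estimate_normals(E, spacing=1.0):
    """Per-node normals from the local neighborhood, used for shading."""
    dzdx = np.gradient(E, spacing, axis=1)
    dzdy = np.gradient(E, spacing, axis=0)
    n = np.dstack((-dzdx, -dzdy, np.ones_like(E)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)

if __name__ == "__main__":
    E = np.zeros((64, 64))
    P = np.zeros((64, 64))
    P[28:36, 28:36] = -1.0                # toy contact footprint
    for _ in range(120):                  # about 2 s of simulated contact
        E = update_grid(E, P)
    print(E[32, 32], estimate_normals(E).shape)   # E converges towards P / K
```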
3 Example-Based Audio Synthesis
As mentioned hereinbefore, audio feedback synchronized with exploratory movements improves the realism of the texture rendering and may change the perception of roughness. In this context, we now propose a method, complementary
to the visual feedback, to synthesize a friction sound when the targeted texture
is rubbed by the end-user. The proposed approach is suited for any duration and presents properties which self-adapt to the rubbing speed. Contrary to [4] and [2], where self-adaptation to speed is proposed for synthesized textures, the proposed method makes use of several real audio recordings, captured off-line, of the sound generated when touching real samples of the texture of interest at different speeds (typically low, medium and high). New sound samples are then synthesized for
a given rubbing speed by a combination of the spectral and intensity properties
of the initial examples.
The proposed synthesis approach naturally goes through two different steps,
namely a learning step and a generation step, which we detail hereafter.
3.1 Learning Step
The initial off-line learning step aims at capturing the spectral and intensity properties of the friction sound produced when a texture is rubbed at different speeds. These properties are then re-used in the generation step. To that end, N audio samples s_i are recorded by means of a dedicated setup while a user rubs the texture of interest at varying speeds v_i. Each signal s_i is first high-pass filtered to remove the baseline, which does not embed the high-frequency spectral properties of the texture we are interested in. The remaining part is therefore a centered audio signal, whose spectrum and energy can be computed and depend on the rubbing speed (see Figure 4).
The spectral properties are captured using a regular auto-regressive (AR) model fitted on the recorded signals. Such a model is represented by an all-pole infinite impulse response (IIR) filter whose coefficients F_i are optimized (by solving the Yule-Walker equations) so that filtering a white noise with this IIR filter results in a new signal with spectral properties similar to those of the example used for the AR fitting (see Figure 4). The mean power A_i of each recorded sample is also computed to capture the energy properties of the friction sound at each speed.
Eventually, for a given texture, we have N triplets (v_i, F_i, A_i) which char-
acterize its spectral and energy properties at different rubbing speeds. These
descriptors are then re-used in the generation step to synthesize the final speed-
varying friction sound.
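A possible sketch of this learning step is given below. It assumes a standard Yule-Walker fit of the AR coefficients; the AR order, the high-pass cut-off frequency and the function names are illustrative assumptions, not values reported in the paper.

```python
# Sketch of the learning step: fit an all-pole (AR) model to one recorded
# friction sample and compute its mean power. Order and cut-off frequency
# are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import solve_toeplitz

FS = 44100            # sampling rate of the recordings (44.1 kHz)

def learn_texture_sample(s, order=20, cutoff_hz=100.0):
    """Return (ar_coeffs, mean_power) for one recording s_i rubbed at speed v_i."""
    # 1) High-pass filtering to remove the low-frequency baseline.
    b, a = butter(2, cutoff_hz / (FS / 2), btype="highpass")
    x = filtfilt(b, a, s)
    # 2) Yule-Walker: solve the Toeplitz system built from the autocorrelation.
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    ar = solve_toeplitz(r[:order], r[1:order + 1])   # coefficients a_1 ... a_p
    # 3) Mean power of the centered signal, stored as the scaling factor A_i.
    return ar, float(np.mean(x ** 2))

if __name__ == "__main__":
    s = np.random.randn(FS)         # stand-in for a 1 s recorded friction sound
    F_i, A_i = learn_texture_sample(s)
    print(len(F_i), A_i)
```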
3.2 Generation Step
The synthesis process consists in creating, at each step n, a new output audio sample consistent with the current rubbing speed v[n] of the end-user as well as with the intrinsic audio properties of the texture. To that end, for each new audio sample to generate at step n:
– N white noises w_i are updated by sampling a new i.i.d. (independent and identically distributed) value w_i[n].
Fig. 4. Original (red) and AR-based estimated (green) spectra of audio samples ob-
tained when recording a user rubbing a sheet of paper at low (left), medium (middle)
and high speed (right).
– Each of these N white noises w_i is then filtered through the IIR filter whose coefficients are given by F_i, producing a new associated output y_i[n].
– The two consecutive indices a and b such that v_a ≤ v[n] ≤ v_b are then computed.
– Under a linear assumption, a first value u_0[n] is computed as u_0[n] = \frac{(v_b - v[n]) y_a[n] + (v[n] - v_a) y_b[n]}{v_b - v_a}, which is a weighted combination of the signal samples whose associated spectra are the closest to the one expected at the given speed.
– Still assuming a linear behavior, u_0[n] is finally scaled by a factor β[n] = \frac{(v_b - v[n]) A_a + (v[n] - v_a) A_b}{v_b - v_a}, leading to the final new sample value u[n] = β[n] u_0[n].
In the end, the new sample is simply a linear speed-based intensity modu-
lation of a linear speed-based combination of the different spectrally-consistent
outputs of the auto-regressive models. Figure 5 sums up the different steps of
the generation process.
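The following sketch illustrates this per-sample generation loop, reusing triplets (v_i, F_i, A_i) such as those produced by the learning step above. The state handling and the stand-in values in the usage example are assumptions made for illustration.

```python
# Sketch of the per-sample generation step. Variable names mirror the text;
# the filter-state handling and the stand-in example data are assumptions.
import numpy as np

def generate_sample(v_n, speeds, ar_models, powers, states, rng):
    """Produce one output sample u[n] for the current rubbing speed v[n].
    speeds    : sorted array of the N example speeds v_i
    ar_models : list of AR coefficient arrays F_i (a_1 ... a_p)
    powers    : list of mean powers A_i
    states    : list of arrays holding the last p outputs y_i[n-1] ... y_i[n-p]"""
    y = np.empty(len(speeds))
    for i, (a, st) in enumerate(zip(ar_models, states)):
        w = rng.standard_normal()            # new i.i.d. white-noise value w_i[n]
        y[i] = w + np.dot(a, st)             # all-pole IIR: y[n] = w[n] + sum a_k y[n-k]
        st[:] = np.roll(st, 1)               # shift the filter state ...
        st[0] = y[i]                         # ... and store the newest output
    # Bracketing example speeds v_a <= v[n] <= v_b (clamped at the extremes).
    b = int(np.clip(np.searchsorted(speeds, v_n), 1, len(speeds) - 1))
    a_idx = b - 1
    va, vb = speeds[a_idx], speeds[b]
    wa, wb = (vb - v_n) / (vb - va), (v_n - va) / (vb - va)
    u0 = wa * y[a_idx] + wb * y[b]               # spectral interpolation
    beta = wa * powers[a_idx] + wb * powers[b]   # intensity interpolation
    return beta * u0                             # u[n] = beta[n] * u0[n]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speeds = np.array([0.1, 0.5, 1.0])                          # slow / medium / fast
    ar_models = [rng.uniform(-0.04, 0.04, 20) for _ in speeds]  # stand-ins for F_i
    powers = [1e-3, 2e-3, 4e-3]                                 # stand-ins for A_i
    states = [np.zeros(20) for _ in speeds]
    out = [generate_sample(0.3, speeds, ar_models, powers, states, rng)
           for _ in range(1000)]
    print(np.std(out))
```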
4 Results & Discussion
We conducted preliminary tests to highlight the advantages as well as the draw-
backs of the proposed approach. More precisely, our system has been tested with
four different texture samples, namely a sponge (K = 1.4 and ν = 0.1), a piece of paper (K = 7.0 and ν = 0.3), a paper towel (K = 7.0 and ν = 0.3) and a carpet (K = 4.0 and ν = 0.3). For each of these materials, the audio samples required for the audio feedback synthesis process have been captured using a Sennheiser ME66/K6 microphone on a Zoom R16 recorder. The resulting files are WAV files sampled at 44.1 kHz with 24-bit depth. The textures were rubbed with the fingertip to produce the friction sound. Three recordings per material have been gathered at three different speeds corresponding to slow, medium and fast rubbing. The whole system has been implemented on a Samsung Galaxy S4 tablet with appropriate images for each visual feedback.
Fig. 5. Block-diagram summing up the different steps involved in the example-based
audio synthesis process.
Figure 6 depicts screen captures of the four examples introduced hereinbefore.
Note that, for illustration purposes, the finger is not represented in the images; one should keep in mind that the contact areas would be covered by the end-user's finger. Depending on the consistency of the considered material with the assumptions made in the visual and audio feedback processes, the realism and quality of the experience may vary.
First, regarding the sponge example, the deformation appears quite realistic because this material has elastic properties which nicely match the viscoelastic model. However, the audio feedback is less convincing: the high-frequency content captured by the AR model is not sufficient to render the complexity of the friction sound caused by the holes covering the material.
On the contrary, the audio feedback is more realistic for the paper, whose audio spectrum is more compatible with the assumptions of our audio modeling. The texture deformation however looks more artificial, as the intrinsic mechanical behavior of such a material is poorly represented by an elastic model.
The last two examples fit the underlying modeling assumptions quite well. They both present quite regular surface structures which produce high-frequency friction sounds compatible with the audio model. Besides, the visual deformations, light for the paper towel and stronger for the carpet, are also realistic because both materials are quite well modeled by a viscoelastic law.
The video¹ provided with this paper gives the reader a more representative idea of the visual and audio behavior of the whole framework in real conditions. As suggested before, materials whose mechanical properties are close to viscoelastic are obviously better rendered. The sheet of paper, for which elasticity is questionable, is therefore poorly deformed, whereas the sponge or the carpet provide interesting feedback.
¹ http://dai.ly/x3pqkwx
Fig. 6. Examples of textures simulated with the proposed system: sponge, sheet of
paper, paper towel and carpet (from top to bottom). The left image shows the
texture without interaction. The middle image corresponds to a pressure on the left of
the screen. The right image is the result of a sliding gesture toward the right.
Similarly, as soon as the friction sounds embed complex patterns induced by mesoscopic or macroscopic reliefs, the auto-regressive approach no longer provides sufficient degrees of freedom to model the friction sounds. For microscopic reliefs, the speed-varying AR approach is quite relevant, and one can especially observe the consistent speed-dependent friction sound variations (in terms of energy and spectrum) obtained on the sheet of paper when changing the rubbing speed. These preliminary tests were necessary to roughly understand the limitations of our system. More rigorous studies will be conducted to finely characterize the perception of those textures.
5 Conclusion
We have proposed a new framework to render texture properties on a tactile
screen without using any extra mechanical device. We relied on the elastic image paradigm and proposed a new contact model together with a viscoelastic law to offer the end-user pseudo-haptic visual feedback when rubbing or pressing the texture with a finger. Additionally, an example-based audio synthesis methodology has been introduced to render texture-specific friction sounds at different rubbing speeds. Finally, first qualitative results have been presented to highlight the advantages as well as the limitations of our approach. Indeed, it seems that elastic materials, as well as materials with a high-frequency audio signature, are better suited to the proposed solution. Future work should now focus
on the generalization of this framework to more complex textures as well as on a more quantitative evaluation of the system's performance.
References
1. Argelaguet, F., Jáuregui, D.A.G., Marchal, M., Lécuyer, A.: A novel approach for
pseudo-haptic textures based on curvature information. In: Haptics: Perception,
Devices, Mobility, and Communication, pp. 1–12. Springer (2012)
2. Bianchi, M., Poggiani, M., Serio, A., Bicchi, A.: A novel tactile display for softness
and texture rendering in tele-operation tasks. IEEE World Haptics Conference pp.
49–56 (2015)
3. Chouvardas, V., Miliou, A., Hatalis, M.: Tactile displays: Overview and recent
advances. Displays 29(3), 185–194 (2008)
4. Culbertson, H., Unwin, J., Kuchenbecker, K.J.: Modeling and rendering realistic textures from unconstrained tool-surface interactions. IEEE Transactions on Haptics 7(3), 381–393 (2014)
5. Kim, S.C., Kyung, K.U., Kwon, D.S.: The effect of sound on haptic perception.
In: EuroHaptics Conference, 2007 and Symposium on Haptic Interfaces for Virtual
Environment and Teleoperator Systems. World Haptics 2007. Second Joint. pp.
354–360. IEEE (2007)
6. Lécuyer, A.: Simulating haptic feedback using vision: A survey of research and
applications of pseudo-haptic feedback. Presence: Teleoperators and Virtual Envi-
ronments 18(1), 39–53 (2009)
7. Lécuyer, A., Burkhardt, J., Etienne, L.: Feeling bumps and holes without a haptic
interface: the perception of pseudo-haptic textures. In: Proceedings of the SIGCHI
conference on Human factors in computing systems. pp. 239–246. ACM (2004)
8. Li, M., Ridzuan, M.B., Sareh, S., Seneviratne, L.D., Dasgupta, P., Althoefer, K.:
Pseudo-haptics for rigid tool/soft surface interaction feedback in virtual environ-
ments. Mechatronics 24(8), 1092–1100 (2014)
9. Punpongsanon, P., Iwai, D., Sato, K.: SoftAR: Visually manipulating haptic softness perception in spatial augmented reality. IEEE Transactions on Visualization and Computer Graphics 21(11), 1279–1288 (2015)
10. Suzuki, Y., Gyoba, J.: Effects of sounds on tactile roughness depend on the con-
gruency between modalities. In: EuroHaptics conference, 2009 and Symposium on
Haptic Interfaces for Virtual Environment and Teleoperator Systems. World Hap-
tics 2009. Third Joint. pp. 150–153. IEEE (2009)