Texture Rendering on a Tactile Surface using Extended Elastic Images and Example-Based Audio Cues

Julien Fleureau, Yoan Lefevre, Fabien Danieau, Philippe Guillotel and Antoine Costes

Technicolor R&I
Abstract. A texture rendering system relying on pseudo-haptic and audio feedback is presented in this paper. While the user touches the texture displayed on a tactile screen, the associated image is deformed according to the contact area and the rubbing motion to simulate pressure. Additionally, audio feedback is synthesized in real-time to simulate friction. A novel example-based scheme takes advantage of recorded audio samples of friction between actual textures and a finger at several speeds to synthesize the final output sound. This system can be implemented on any existing tactile screen without any extra mechanical device.
1 Introduction
Texture rendering is an active and challenging field of study where many input and output devices have been proposed. In their survey, Chouvardas et al. classified texture rendering devices into three categories [3]: mechanical, electrotactile and thermal devices. Mechanical devices stimulate the mechanoreceptors within the skin using mechanical actuators. They include pin-based devices applying pressure, vibrating, ultrasonic and acoustic actuators, and devices based on electrorheological fluids. Electrotactile devices use electric stimulation to activate the mechanoreceptors. A matrix of electrodes is a typical example of such devices. Finally, thermal devices provide warm or cool stimuli to the skin.
Another way to simulate texture properties without a specific device is to rely on pseudo-haptic feedback. Lécuyer has shown that various haptic sensations can be induced with visual stimuli [6]. This technique may provide sensations of stiffness, friction, mass of objects or haptic textures. Bumps and holes have been simulated by varying the speed of the cursor exploring the texture [7]. The elasticity of a texture was also simulated by a deformation of the image and of the cursor [1]. These two approaches require the texture to be explored with a mouse. To make the interaction more natural, Li et al. proposed a similar system embedded on a tablet [8]. The user can feel the softness of a surface using a pen or a finger. Punpongsanon et al. developed an augmented reality system where the user touches an actual object while a projector changes the visual appearance of this object [9]. This visual feedback changes the perception of the softness of the object.
Audio stimuli may also modify the perception of texture. Kim et al. showed that the intensity of the sound changes the perception of roughness, with or without haptic feedback [5]. The perceived denseness and ruggedness are also affected by this intensity. Even with actual materials such as abrasive papers, sound modifies the perceived roughness [10].
In this work, we present a texture haptic rendering system based on visual and audio pseudo-haptic feedback. It may have applications in the context of e-shopping, for instance to virtually touch different materials of interest associated with clothes or furniture. By interacting through a standard tactile screen, the user is able to explore the physical properties of the texture, namely stiffness and friction. In line with the approaches mentioned hereinbefore, we rely on visual and audio illusions to generate haptic sensations. The new contributions of this system are twofold:
– First, we propose to rely on the paradigm of elastic images introduced in [1], currently limited to punctual pressure contact with a mouse device. We introduce the features of continuous rubbing interaction and non-punctual contact with a finger on a tablet device. To that end, an underlying viscoelastic law and a new contact model are proposed. During the interaction, the texture is visually and dynamically deformed according to the contact area with the finger and rubbing motions.
– Second, a novel example-based audio synthesis process is proposed to render friction properties. It makes use of real audio samples to create a friction sound synchronized to the user’s exploratory movement and consistent with the actual texture and rubbing speed.
Both visual deformation and sound synthesis are on-line processes inducing low computational complexity. In the remainder of this paper, these two key components of the global system are further detailed.
2 Pseudo-haptic Rendering
The first aspect of our work deals with a visual mechanism to render pressure interaction when the user is rubbing the texture displayed on the tablet. In the original elastic image paradigm [1], the contact duration with the displayed image is assimilated to the amount of “pressure” applied to the associated texture by the end-user. The image is then radially deformed around the contact point with a dedicated heuristic function (see Figure 1) to give the illusion of a true deformation.
However, as it is, this paradigm only addresses static and punctual clicks; the case of a user sliding or rubbing the surface continuously with his finger is not handled. To cope with these limitations, two different enhancements are now introduced. First, a new contact model addresses the problem of natural interaction on a tablet with a finger, and second, a viscoelastic mechanical model is proposed to enable dynamic rubbing motions.
Fig. 1. Left: Deformation obtained from [1]. Right: Deformation obtained with the
proposed contact model.
2.1 Contact Model
In [1], the interaction between the user and the image is made by means of a mouse device, and the maximum “pressure” is applied at the cursor position. In the context we address here, the end-user is not interacting with a mouse pointer anymore but rather with his finger. The contact area is not punctual anymore, and the whole surface of interaction thus has to be taken into account. In the new contact model, we therefore propose to divide the touched area into two main components (see Figure 2): i) a first component related to the surface right under the contact location, and ii) a second component involving the region right around this contact area.
Fig. 2. Left: Schematic representation of a finger touching the surface. The contact
surface is divided into two areas under and around the actual contact zone (respectively
green and yellow). Right: The two contact areas represented on an image of a sponge
texture.
The maximum amount of “pressure” is now applied on the whole contact surface (and not only at a single point), whereas an exponential decrease occurs in the boundary area.

Dedicated heuristic radial functions are also proposed here to quantify the amount of “pressure” applied at a distance d from the contact point after a contact duration t in the two regions defined previously (i.e. d ∈ [0, Rth] and d ∈ [Rth, Rmax]):
$$ p^1_{\mathrm{deform}}(d, t) = e^{\frac{5(d - R_{th})}{R_{th}}} \cdot \frac{t}{10} \quad \text{for } d \in [0, R_{th}] \qquad (1) $$

$$ p^2_{\mathrm{deform}}(d, t) = e^{-\frac{5(d - R_{th})}{R_{max}}} \cdot t \left( t - \frac{1}{50} \right) \quad \text{for } d \in [R_{th}, R_{max}] \qquad (2) $$
These analytical expressions are plotted for various distances and contact durations in Figure 3, and a visual comparison between the contact model from [1] and ours is given in Figure 1.
Fig. 3. Plot of the dedicated heuristic radial functions used in the contact model for
different contact durations with Rmax = 10 and Rth = 0.25
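For concreteness, a minimal sketch of this contact model is given below, following the reconstruction of equations (1) and (2) above (whose exact signs and constants are uncertain in this copy). The function and parameter names are ours; Rth and Rmax default to the values used in Figure 3.

```python
import numpy as np

def contact_pressure(d, t, r_th=0.25, r_max=10.0):
    """Heuristic radial "pressure" at distance d from the contact point
    after a contact duration t, following equations (1)-(2) as
    reconstructed above. Defaults for r_th and r_max are the Figure 3
    values; zero pressure beyond r_max is our assumption."""
    d = np.asarray(d, dtype=float)
    inner = np.exp(5.0 * (d - r_th) / r_th) * t / 10.0              # eq. (1)
    outer = np.exp(-5.0 * (d - r_th) / r_max) * t * (t - 1.0 / 50)  # eq. (2)
    return np.where(d <= r_th, inner, np.where(d <= r_max, outer, 0.0))

# Radial pressure profiles for a few contact durations, as in Figure 3.
distances = np.linspace(0.0, 10.0, 200)
profiles = {t: contact_pressure(distances, t) for t in (0.2, 0.5, 1.0)}
```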
2.2 Viscoelastic Model
The original elastic image paradigm [1] is limited to “static” and punctual pressures, and it is not designed for sliding or rubbing interactions. The deformation induced by the finger contact is therefore not managed when the end-user moves the contact point along the texture surface. To cope with this limitation, the textured image is not considered anymore as a 2D deformable object but rather as a 3D grid where each node is associated with one pixel of the texture image. Each vertex is additionally connected to its 4-connected neighborhood.
A purely vertical mechanical viscoelastic model is then associated with this 3D grid. It makes the successive contacts of the user spatially and temporally consistent over the whole textured image. In this new context, the contact model described in the previous section is not considered anymore as a deformation field, but rather as a vertical force field applied to the 3D grid along the z-axis and given as an external input force to the viscoelastic model. The deformation along the z-axis for each node of the grid is defined by a first-order linear differential equation whose discrete scheme is given by:
$$ P_z[i, j, n] = K\, E_z[i, j, n] + \nu\, \frac{E_z[i, j, n] - E_z[i, j, n-1]}{T_s} \qquad (3) $$
The stiffness K and the viscosity ν are the two parameters involved in the viscoelastic model, and:
– Pz[i, j, n] is the “force” at the sample time n, computed with the contact model described hereinbefore (see equations 1 and 2) and applied to the node located at (i, j) in the image plane. At a sample time n, Pz[i, j, n] is maximal under the contact surface.
– Ez[i, j, n] is the deformation / translation along the z-axis. It is noteworthy that Ez[i, j, n] not only depends on the current force Pz[i, j, n] but also on the previous displacement Ez[i, j, n−1].
– Ts is the sampling period.
In steady state (and for a constant input pressure Pz[i, j, n] = pz), the displacement Ez[i, j, n] converges to the value pz / K, which is proportional to the force pz. In transient state (i.e. increase or decrease of the input force), the displacement reaches its steady-state value within a time defined by the viscosity parameter ν. In other words, the higher K, the lower the final deformation (the stiffer the texture), and the lower ν, the faster the texture reaches its steady state. The final deformation thus ensures time consistency as well as the integration of the user’s external interaction while preserving low computational loads. Finally, tuning K and ν makes it possible to fit different kinds of texture in order to adapt the visual rendering and simulate different “mechanical properties”.
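Since Ez[i, j, n] appears linearly in equation (3), the scheme can be rearranged into an explicit per-node update, Ez[n] = (Pz[n] + (ν/Ts) Ez[n−1]) / (K + ν/Ts). A minimal sketch of this update over the whole grid follows; the class name and the 60 Hz sampling period are our assumptions, and the default K and ν are the sponge values reported in Section 4.

```python
import numpy as np

class ViscoelasticGrid:
    """Vertical viscoelastic response of the textured grid (equation (3)),
    solved explicitly for the new displacement at each sample time:
        Ez[n] = (Pz[n] + (nu / Ts) * Ez[n-1]) / (K + nu / Ts)
    In steady state this converges to Pz / K; nu sets how fast it gets there."""

    def __init__(self, height, width, K=1.4, nu=0.1, Ts=1.0 / 60.0):
        self.K, self.nu, self.Ts = K, nu, Ts
        self.Ez = np.zeros((height, width))  # z-displacement of each node

    def step(self, Pz):
        """Advance one sampling period given the external force field Pz
        (computed from the contact model, equations (1) and (2))."""
        a = self.nu / self.Ts
        self.Ez = (Pz + a * self.Ez) / (self.K + a)
        return self.Ez
```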
2.3 Details of Implementation
The viscoelastic model presented previously can easily be implemented by means of a regular 3D graphics engine. The textured image is used as a regular 2D texture mapped on a 3D regular square grid. Each node of the grid is continuously updated according to equations 1, 2 and 3. A normal is estimated for each node (on the basis of its local neighborhood) at each sample time, which enables the use of a light source for shadow rendering, thus increasing the realism of the simulation.
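As an illustration of this normal estimation, the sketch below derives a per-node normal from central differences of the z-displacement field over the 4-connected neighborhood. The function name and the uniform grid spacing are our assumptions, and edge nodes wrap around for brevity.

```python
import numpy as np

def estimate_normals(Ez, spacing=1.0):
    """Per-node unit normals from central differences of the z-displacement
    field over the 4-connected neighborhood (edges wrap around for brevity)."""
    dz_dx = (np.roll(Ez, -1, axis=1) - np.roll(Ez, 1, axis=1)) / (2.0 * spacing)
    dz_dy = (np.roll(Ez, -1, axis=0) - np.roll(Ez, 1, axis=0)) / (2.0 * spacing)
    normals = np.stack([-dz_dx, -dz_dy, np.ones_like(Ez)], axis=-1)
    return normals / np.linalg.norm(normals, axis=-1, keepdims=True)
```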
3 Example-Based Audio Synthesis
As mentioned hereinbefore, an audio feedback synchronized to exploratory movements improves the realism of the texture rendering and may change the perception of the roughness. In this context, we now propose a method, complementary to the visual feedback, to synthesize a friction sound when the targeted texture is rubbed by the end-user. The proposed approach is suited for any duration and presents properties which self-adapt to the speed of the rub. Contrary to [4] and [2], where self-adaptation to speed is proposed on synthesized textures, the proposed method makes use of several real audio recordings of the sound generated off-line when touching real samples of the texture of interest at different speeds (typically low, medium and high). New sound samples are then synthesized for a given rubbing speed by a combination of the spectral and intensity properties of the initial examples.

The proposed synthesis approach naturally goes through two different steps, namely a learning step and a generation step, which we detail hereafter.
3.1 Learning Step
The initial off-line learning step aims at capturing the spectral properties, as well as the intensity properties, of the friction sound made when a texture is rubbed at different speeds. These properties are then re-used in the generation step. To that end, N audio samples si are recorded by means of a dedicated setup while a user is rubbing the texture of interest at varying speeds vi. Each signal si is first high-pass filtered to remove the baseline, which does not embed the high-frequency spectral properties of the texture we are interested in. The remaining part is therefore a centered audio signal, for which spectrum and energy can be computed, and which depends on the rubbing speed (see Figure 4).

The spectral properties are captured making use of a regular auto-regressive (AR) model, here fitted on the real recorded signals. Such a model is represented by an all-pole infinite impulse response (IIR) filter whose coefficients Fi are optimized (Yule-Walker equations resolution) so that filtering a white noise with this IIR would result in a new signal with spectral properties similar to the example used for the AR fitting (see Figure 4). The mean power Ai of each temporal sample is also computed to capture the energy properties of the friction sound at each speed.

Eventually, for a given texture, we have N triplets (vi, Fi, Ai) which characterize its spectral and energy properties at different rubbing speeds. These descriptors are then re-used in the generation step to synthesize the final speed-varying friction sound.
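For illustration, a minimal sketch of this learning step for one recording si is given below. The filter order and the high-pass cutoff are our assumptions, as the paper does not specify them; the Yule-Walker system is solved directly on the empirical autocorrelation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def learn_example(s, fs=44100, order=16, cutoff_hz=100.0):
    """Learning step for one audio example s_i recorded at speed v_i.
    Returns the AR coefficients F_i and the mean power A_i."""
    # 1) High-pass filtering to remove the baseline (cutoff is an assumption).
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="highpass")
    s = filtfilt(b, a, s)
    # 2) Yule-Walker: solve the Toeplitz autocorrelation system for F_i.
    r = np.correlate(s, s, mode="full")[len(s) - 1:] / len(s)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    F = np.linalg.solve(R, r[1:order + 1])
    # 3) Mean power of the (centered) example.
    A = float(np.mean(s ** 2))
    return F, A
```

Running this on the three recordings of a material yields the triplets (vi, Fi, Ai) used at generation time.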
3.2 Generation Step
The synthesis process consists in creating the nth audio sample u[n], consistent with the current rubbing speed v[n] of the end-user as well as with the intrinsic audio properties of the texture. To that end, for each new audio sample to generate at step n:
– N white noises wi are updated by sampling a new i.i.d. (independent and identically distributed) value wi[n].
Fig. 4. Original (red) and AR-based estimated (green) spectra of audio samples obtained when recording a user rubbing a sheet of paper at low (left), medium (middle) and high speed (right).
– Each of these N white noises wi is then filtered through the IIR filter whose coefficients are given by Fi, producing a new associated output yi[n].
– The two consecutive indices a and b such that va ≤ v[n] ≤ vb are then computed.
– Under a linear assumption, a first value u0[n] is computed by

$$ u_0[n] = \frac{(v_b - v[n])\, y_a[n] + (v[n] - v_a)\, y_b[n]}{v_b - v_a}, $$

which is a weighted combination of the signal samples whose associated spectra are the closest to the one which should occur at the given speed.
– Still assuming a linear behavior, u0[n] is finally scaled by a scaling factor

$$ \beta[n] = \frac{(v_b - v[n])\, A_a + (v[n] - v_a)\, A_b}{v_b - v_a}, $$

leading to the final new sample value u[n] = β[n] u0[n].
In the end, the new sample is simply a linear speed-based intensity modulation of a linear speed-based combination of the different spectrally-consistent outputs of the auto-regressive models. Figure 5 sums up the different steps of the generation process.
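A per-sample sketch of this generation step is given below, under the assumption that the N learned triplets (vi, Fi, Ai) are sorted by increasing speed; names and buffering are ours, and a real-time implementation would typically synthesize audio in blocks rather than sample by sample.

```python
import numpy as np

def generate_sample(v, triplets, states):
    """One output sample u[n] for the current rubbing speed v[n].
    `triplets` is the speed-sorted list of (v_i, F_i, A_i); `states[i]`
    holds the last `order` outputs of the i-th AR filter, most recent first.
    Initialize with: states = [np.zeros(len(F)) for (_, F, _) in triplets]."""
    # Update every AR filter with a fresh white-noise value w_i[n]:
    # y_i[n] = w_i[n] + sum_k F_i[k] * y_i[n-1-k]   (all-pole IIR).
    y = np.empty(len(triplets))
    for i, (_, F, _) in enumerate(triplets):
        y[i] = np.random.randn() + F @ states[i]
        states[i] = np.roll(states[i], 1)
        states[i][0] = y[i]
    # Bracketing speeds v_a <= v[n] <= v_b and linear interpolation weights.
    speeds = np.array([tr[0] for tr in triplets])
    b = int(np.clip(np.searchsorted(speeds, v), 1, len(speeds) - 1))
    a = b - 1
    w = (speeds[b] - v) / (speeds[b] - speeds[a])
    u0 = w * y[a] + (1.0 - w) * y[b]                        # spectral mix
    beta = w * triplets[a][2] + (1.0 - w) * triplets[b][2]  # intensity scaling
    return beta * u0
```

Following the description above, all N filters keep running at every step, so that their internal states remain consistent when the bracketing pair (a, b) changes with the rubbing speed.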
4 Results & Discussion
We conducted preliminary tests to highlight the advantages as well as the drawbacks of the proposed approach. More precisely, our system has been tested with four different texture samples, namely a sponge (K = 1.4 and ν = 0.1), a piece of paper (K = 7.0 and ν = 0.3), a paper towel (K = 7.0 and ν = 0.3) and a carpet (K = 4.0 and ν = 0.3). For each of these materials, the examples of audio samples (required for the audio feedback synthesis process) have been captured making use of a Sennheiser ME66/K6 microphone on a Zoom R16 recorder. The resulting files are WAV files sampled at 44.1 kHz with a 24-bit depth. The textures were rubbed with the fingertip to produce the sound of friction. Three recordings per material have been gathered at three different speeds, corresponding to slow, medium and fast rubbing. The whole system has been implemented on a Samsung Galaxy S4 tablet with appropriate images for each visual feedback.
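For reference, the reported (K, ν) pairs can be collected in a small lookup table and fed to the viscoelastic-grid sketch of Section 2.2; the structure below and the ViscoelasticGrid usage are our illustration.

```python
# Viscoelastic parameters reported for the four test materials (Section 4).
MATERIALS = {
    "sponge":      {"K": 1.4, "nu": 0.1},
    "paper":       {"K": 7.0, "nu": 0.3},
    "paper_towel": {"K": 7.0, "nu": 0.3},
    "carpet":      {"K": 4.0, "nu": 0.3},
}

# e.g. grid = ViscoelasticGrid(256, 256, **MATERIALS["carpet"])
```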
Fig. 5. Block-diagram summing up the different steps involved in the example-based audio synthesis process.
Figure 6 depicts screen captures of the four examples introduced hereinbefore. It is noteworthy that the finger is not represented on the images, for illustration purposes, but one should keep in mind that the contact areas would be covered by the end-user’s finger. Depending on the consistency of the proposed material with the assumptions made in the visual and audio feedback processes, the realism and quality of the experience may vary.
First, regarding the sponge example, the deformation appears quite realistic because this material has elastic properties which nicely match the viscoelastic model. However, the audio feedback is less convincing, because the quite high-frequency content captured by the AR model is not sufficient to render the complexity of the friction sound due to the holes covering the material.
On the contrary, the audio feedback is more realistic for the paper, whose audio spectrum is more compatible with the assumptions of our audio modeling. The texture deformation looks, however, more artificial, as the intrinsic mechanical behavior of such a material is poorly represented by an elastic model.
The two last examples fit the underlying modeling assumptions quite well. They both present quite regular surface structures, which produce high-frequency friction sounds compatible with the audio model. Besides, the visual deformations, light for the paper towel and stronger for the carpet, are also realistic, because each material is quite well modeled by a viscoelastic law.
The video¹ provided with this paper gives the reader a more representative idea of the visual and audio behaviors of the whole framework in real conditions. As suggested before, materials presenting mechanical properties close to viscoelastic are obviously better rendered. The sheet of paper, for which the elasticity is questionable, is therefore poorly deformed, whereas the sponge or the carpet provide interesting feedback.

¹ http://dai.ly/x3pqkwx
Fig. 6. Examples of textures simulated with the proposed system: sponge, sheet of
paper, paper towel and carpet (from top to bottom). The left image is the image of the
texture without interaction. The middle image corresponds to a pressure on the left of
the screen. The right image is the result of a sliding gesture toward the right.
Similarly, as soon as the friction sounds embed complex patterns induced by meso- or macroscopic reliefs, the auto-regressive approach does not provide sufficient degrees of freedom anymore to model the friction sounds. For microscopic reliefs, the speed-varying AR approach is quite relevant, and one can especially observe the consistent speed-dependent friction sound variations (in terms of energy and spectrum) obtained on the sheet of paper when changing the rubbing speed. These preliminary tests were necessary to roughly understand the limitations of our system. More rigorous studies will be conducted to finely characterize the perception of those textures.
5 Conclusion
We have proposed a new framework to render texture properties on a tactile screen without using any extra mechanical device. We relied on the elastic image paradigm and proposed a new contact model, based on a viscoelastic law, to offer the end-user pseudo-haptic visual feedback when rubbing or pressing the texture with a finger. Additionally, an example-based audio synthesis methodology has been introduced to render texture-specific friction sounds at different rubbing speeds. Finally, first qualitative results have been proposed to highlight the advantages as well as the limitations of our approach. Indeed, it seems that elastic materials, as well as materials with a high-frequency audio signature, are better suited to the proposed solution. Future work should now focus on the generalization of this framework to more complex textures, as well as on the setting up of a more quantitative evaluation of the system’s performance.
References
1. Argelaguet, F., Jáuregui, D.A.G., Marchal, M., Lécuyer, A.: A novel approach for pseudo-haptic textures based on curvature information. In: Haptics: Perception, Devices, Mobility, and Communication, pp. 1–12. Springer (2012)
2. Bianchi, M., Poggiani, M., Serio, A., Bicchi, A.: A novel tactile display for softness
and texture rendering in tele-operation tasks. IEEE World Haptics Conference pp.
49–56 (2015)
3. Chouvardas, V., Miliou, A., Hatalis, M.: Tactile displays: Overview and recent
advances. Displays 29(3), 185–194 (2008)
4. Culbertson, H., Unwin, J., Kuchenbecker, K.J.: Modeling and rendering realistic textures from unconstrained tool-surface interactions. IEEE Transactions on Haptics 7(3), 381–393 (2014)
5. Kim, S.C., Kyung, K.U., Kwon, D.S.: The effect of sound on haptic perception.
In: EuroHaptics Conference, 2007 and Symposium on Haptic Interfaces for Virtual
Environment and Teleoperator Systems. World Haptics 2007. Second Joint. pp.
354–360. IEEE (2007)
6. Lécuyer, A.: Simulating haptic feedback using vision: A survey of research and applications of pseudo-haptic feedback. Presence: Teleoperators and Virtual Environments 18(1), 39–53 (2009)
7. Lécuyer, A., Burkhardt, J., Etienne, L.: Feeling bumps and holes without a haptic interface: the perception of pseudo-haptic textures. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 239–246. ACM (2004)
8. Li, M., Ridzuan, M.B., Sareh, S., Seneviratne, L.D., Dasgupta, P., Althoefer, K.:
Pseudo-haptics for rigid tool/soft surface interaction feedback in virtual environ-
ments. Mechatronics 24(8), 1092–1100 (2014)
9. Punpongsanon, P., Iwai, D., Sato, K.: SoftAR: Visually manipulating haptic softness perception in spatial augmented reality. IEEE Transactions on Visualization and Computer Graphics 21(11), 1279–1288 (2015)
10. Suzuki, Y., Gyoba, J.: Effects of sounds on tactile roughness depend on the con-
gruency between modalities. In: EuroHaptics conference, 2009 and Symposium on
Haptic Interfaces for Virtual Environment and Teleoperator Systems. World Hap-
tics 2009. Third Joint. pp. 150–153. IEEE (2009)