Animating static objects by illusion-based projection mapping
Taiki Fukiage
Takahiro Kawabe
Masataka Sawayama
Shinya Nishida
Abstract: By applying scientific knowledge about human visual processing, we have recently developed a light projection technique, called Deformation Lamps, which can add a variety of illusory, yet realistic distortions to a wide range of static projection targets. In this paper, to explain how Deformation Lamps works, we first describe its basic algorithm and related human visual processing. We then describe the latest version of the Deformation Lamps system. It is a ready-to-use application that automatically finds projection targets and projects motion-inducer patterns in alignment with images or textures on the targets' surfaces. The application also supports interactive editing of animation contents.
Keywords: projection mapping, human vision, motion, augmented reality.
DOI # 10.1002/jsid.572
1 Introduction
Light projection is a powerful spatial-augmented-reality tool that allows us to edit the visual appearance of objects in the real world.1,2 Previous studies have shown that light projection can effectively manipulate the surface properties of objects such as color, luminance dynamic range, gloss, and shading.3–5 The basic strategy has been to reproduce physical light transport by pixel-wise intensity/color modification. For this physics-based strategy, however, moving surface textures present a difficult problem because projection cannot completely erase the original texture and correctly paint it in the new, shifted location.

Deformation Lamps,* a novel light projection technique we recently developed, solves this problem by applying scientific knowledge about human visual processing.6 By deceiving the observers' brains, Deformation Lamps can add a variety of realistic distortions to a wide range of static 2D and 3D projection targets, without accurately reproducing the physical light transport of the moving targets.
In the first half of this paper, we provide a brief overview of this technique and explain how it deceives the observers' brains. In the latter half, we propose a ready-to-use application for Deformation Lamps.
2 What is Deformation Lamps?
A typical Deformation Lamps system consists of a camera, a projector, and a computer (Fig. 1). First, the camera takes a grayscale image of a target object. Then, the computer creates a movie sequence by dynamically deforming the grayscale image in accordance with a sequence of pre-defined deformation maps. Next, it subtracts, in the pixel intensity domain, the original static image from the movie sequence. After the addition of the mean luminance and adjustment of the signal luminance contrast, the resulting grayscale movie sequence is projected onto the target object. Note that precise alignment of the coordinates between the camera and projector is desirable. See https://youtu.be/wihzwjm5398 for the results.
The basic technique in conventional projection mapping is physical appearance control. An example is Shader Lamps,2 which projects a new pattern on the target object's surface, erasing the original surface colors/textures. On the other hand, Deformation Lamps is based on perceptual appearance control. It projects a luminance motion pattern only, leaving the original colors/textures as they are.
Deformation Lamps does not reproduce the deforming object pattern on the object's surface in a physically correct way because it only projects a grayscale (luminance) image sequence and because the luminance contrast of the projected image does not have to be perfect (i.e., it can be significantly weaker than that necessary for perfect compensation for reproduction of the target movement). Thanks to these features, unlike traditional projection mapping techniques, Deformation Lamps works well even in bright environments, where ambient illumination reduces the luminance contrast of the projected pattern.

Considering its simple projection principle, one might regard Deformation Lamps as a simplified or degraded version of Shader Lamps. However, that would be a misconception. To understand why Deformation Lamps can produce a realistic motion impression without accurately reproducing light transport, one has to understand how it "deceives" human visual processing.
Received 04/08/17; accepted 08/08/17.
The authors are with NTT Communication Science Labs, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa 243-0198, Japan; e-mail: t.fukiage@gmail.com.
© Copyright 2017 Society for Information Display 1071-0922/17/2507-0572$1.00.
*Also called HenGenTou in Japanese.
3 Human visual processing
3.1 Parallel processing
In early visual processing, the human brain analyzes different visual attributes, such as color, form, and motion, in separate processing pathways.7 Subsequently, the results of each analysis are integrated into a coherent visual representation.
The basic design concept of Deformation Lamps is to selectively modulate the visual motion signal. The brain then integrates the modified motion signal with the color and form signals coming from the target object, producing a visual representation similar to one produced by a real moving object (Fig. 2).
3.2 Motion processing
Visual motion processing starts with motion sensors. The most effective motion sensor (the first-order motion sensor) detects local spatiotemporal orientation, or motion energy, falling in its receptive field.8,9 It responds well to a flow of coarse luminance patterns even when there is a large change in pattern details during the movement. The motion sensor is highly sensitive to luminance patterns but not to chromatic ones, and it is more sensitive to coarse (low-spatial-frequency) patterns than to fine (high-spatial-frequency) ones.
Given these characteristics of the visual motion sensors, luminance (grayscale) pattern projection is sufficient to drive them and induce motion perception. In addition, to produce a motion impression, the spatial pattern and contrast of the projected image do not have to be close to those necessary for perfect reproduction of the target movement, as long as the motion sensors are activated in a reasonable way.

FIGURE 1 Process overview of a representative Deformation Lamps system consisting of a video projector and a camera. First, the camera takes a grayscale image of a projection target. Second, a deformation image sequence is generated by pixel-warping the camera image with an arbitrarily defined deformation map. Third, a difference image sequence is generated by subtracting the original camera image from the movie sequence. Finally, the difference image sequence is optically projected onto the projection target object.

FIGURE 2 Visual processing for a normal color movie and that for Deformation Lamps.
3.3 Projection contrast
Let us consider the effects of projection luminance contrast on motion signals. Even when the contrast of the projected pattern is lower than that necessary for perfect compensation, the spatial shift of the target pattern produced by adding the projected pattern is correct with regard to direction, but it is smaller and less spatially coherent (Fig. 3). This is not the case for the high-spatial-frequency components, whose phase shifts become larger than a half cycle of the frequency; however, these components contribute little to motion processing. Therefore, reducing the projection contrast does not qualitatively degrade the spatial pattern of motion; it only quantitatively weakens the motion magnitude.
3.4 Color and form
Adding luminance modulations does not strongly affect perceived color (hue/saturation). While form processing is sensitive to luminance signals, contrast sensitivity to dynamic luminance patterns is higher for motion processing than for form processing. Therefore, by projecting a dynamic luminance pattern of relatively low contrast, Deformation Lamps can effectively drive visual motion processing with little effect on the processing of color and form. As a result of cross-attribute integration in the observer's brain, it gives a motion impression to the target object while keeping the appearances of color and texture close to those seen under normal illumination.
3.5 Cross-attribute integration
In human visual processing, after different attributes are separately analyzed, they are integrated into a multi-attribute visual representation. When there are inconsistencies among the attributes, the brain attempts to reconcile them to make a coherent representation.

For instance, when a stationary color pattern is presented with a moving luminance pattern, the color position is perceptually captured by the luminance motion (motion capture).10,11 This is presumably because luminance motion signals are more reliable than color position signals for human vision. Similarly, the position of a fine luminance texture is captured by the movement of a coarse luminance pattern.12,13 In addition, it is known that pattern and color signals are spatiotemporally integrated along the trajectory of motion.14,15

In Deformation Lamps, these cross-attribute interactions play critical roles in producing illusory motion of the color and texture of the target object. That is, the color and texture of the target object are motion-captured by the projected motion signal.
3.6 Material from motion flow

Another line of vision science that influenced the development of Deformation Lamps is the research on material perception. Our recent study showed that dynamic deformation is a useful visual cue for human observers to see liquid-like materials.16,17 As it had been shown that projection mapping was able to change the optical material properties of real objects, we attempted to change the mechanical material properties. This was our initial motivation for developing Deformation Lamps. By producing illusory deformation in stationary pictures, Deformation Lamps can make hard objects visually deform as if they were under water, in the wind, or behind hot air.

FIGURE 3 Perfect radiometric compensation is not necessary for Deformation Lamps. Even when the contrast of the projection pattern is not sufficient for perfect compensation, as shown in (b), the spatial shift of the target pattern produced by addition of the projected pattern is correct with regard to the direction.
4 A ready-to-use application for Deformation Lamps
The rest of this paper covers a ready-to-use tool that supports
automatic alignment and interactive editing of projection ef-
fects. This tool was developed to extend the range of uses of
Deformation Lamps.
The basic algorithm of Deformation Lamps is simple and easy to implement. However, to obtain good animation effects, one has to carefully solve a couple of problems.

One is alignment of the target object and projected image. With imprecise alignment, the quality of the illusory motion is severely degraded. Aligning the projected pattern by eye, which we did in an earlier version, was time-consuming and inaccurate. Automatic alignment based on accurate estimation of the position and posture of the projection target is desirable. A standard solution is to use markers attached to the target, but we did not want to worry about how to hide the markers from the observers' view. We instead use the image of the target itself to estimate the position and posture, as well as the identity, of the target.
The other problem is how to decide the pattern of deformation. Deformation Lamps cannot produce large movements. When the deformation magnitude is too large, the projected pattern does not perceptually merge with the surface pattern of the projection target. On the other hand, when the deformation magnitude is too small, the animation effect is not impressive. The proper magnitude of deformation depends on many factors, including the lighting condition, projector parameters, target reflectance, and the characteristics of human visual processing. A tool for real-time interactive editing of animation contents should allow users to select the proper magnitude of deformation by checking the projected image directly with their own eyes.
The key features of the proposed tool are as follows: (1) It automatically finds projection targets by matching features between a captured image and reference images in a database and projects motion-inducer patterns in alignment with the targets. (2) The user can interactively edit the motion sequence just by combining several base deformation patterns while observing the projection result on the real target object.

The tool is based on the simplest variants of projector-camera systems.18 Figure 4 shows how the tool works in a typical setting. The workflow of the process involves an offline part and an online part. The following sections describe the details of each process.
4.1 Offline process

4.1.1 Calibration

First, the user sets a projector and a camera in front of a projection target and starts an automatic calibration process, where the pixel-wise correspondence between camera coordinates and projector coordinates (P2C map) is acquired by means of structured light projection19 (Fig. 4a). To obtain an ideal result, a gamma calibration of the projector should also be performed. In most cases, however, just using a preset value (i.e., gamma = 2.2) is enough because radiometric compensation is not vital for producing the effect in Deformation Lamps.
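For reference, linearizing the projector with the preset gamma amounts to a single power function. A minimal sketch under that assumption (2.2 is the preset mentioned above; a gamma calibration would supply a measured value instead):

```python
import numpy as np

GAMMA = 2.2  # preset from the text; replace with a measured value if calibrated

def to_drive_value(linear_intensity):
    # Convert a desired linear light output in [0, 1] to a projector input value.
    return np.clip(linear_intensity, 0.0, 1.0) ** (1.0 / GAMMA)
```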
FIGURE 4 Process overview of the proposed system.
4.1.2 Acquisition of static pictures of the target object

After the calibration, the user starts the projection of the effect by pressing a button. Immediately after the button press, the projector sequentially outputs uniform gray light with different intensities, G1 and G2 (G1 < G2), while the camera acquires two grayscale images (I_G1 and I_G2) of the target object (Fig. 4b). The images are used in the subsequent image processing step to generate an output image sequence. Note that as chromatic information is irrelevant for Deformation Lamps, all the image processing in our system is conducted in the grayscale intensity domain.
4.1.3 Feature matching and perspective transform

The projection target is automatically detected on the basis of the results of scale-invariant feature transform (SIFT) feature matching20 between reference images in the database and camera image I_G2 (Fig. 4c). (To create an animation effect for a new target object, the user has to place a picture of the target in the database in advance.) As we assume that the projection target does not move during the projection, this process is run only once before the projection of the deformation effect starts. After matched feature points are obtained, we conduct the ratio test proposed by Lowe20 and list those matches that pass the ratio test (ratio = 0.7) as good matches. If the number of good matches exceeds a threshold τ (τ = 10), the perspective transformation matrix T (from the reference image to the captured image) is estimated by using random sample consensus21 on the good match pairs (Fig. 4d). Then, matrix T is checked to see whether it is a reasonable transformation. If the reference is flipped or stretched too much by T, the system rejects the matching result. Finally, the image areas covering the feature points that were used to estimate T are extracted from both the captured image (transformed by T^-1) and the reference image. If the correlation coefficient between those image areas exceeds a threshold ρ (ρ = 0.4), the system decides that the images are matched.
In the matching process described earlier, a perspective transformation matrix T_i is obtained for each detected target i. I_G1 and I_G2 are then transformed by T_i, and images I'_G1(i) and I'_G2(i), in which the captured target i is rendered in the same perspective as the reference image, are obtained. The image processing described later is applied to those images for every detected target.
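The detection step can be sketched with OpenCV's SIFT and RANSAC homography routines. This is an illustrative reading of the procedure above, not the authors' code: for brevity, the correlation test here uses the whole rectified image, whereas the text restricts it to the areas covering the inlier feature points, and `is_reasonable` is a simplified stand-in for the flip/stretch check:

```python
import cv2
import numpy as np

def is_reasonable(T, max_stretch=4.0):
    # Reject flipped or extremely stretched homographies (illustrative thresholds).
    A = T[:2, :2]
    if np.linalg.det(A) <= 0:          # a flip makes the determinant negative
        return False
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[1] < max_stretch   # crude anisotropic-stretch check

def detect_target(reference, captured, tau=10, rho=0.4):
    # Match SIFT features from the reference image to the captured image.
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(reference, None)
    kp_c, des_c = sift.detectAndCompute(captured, None)

    # Lowe's ratio test (ratio = 0.7) keeps only the good matches.
    good = [m for m, n in cv2.BFMatcher().knnMatch(des_r, des_c, k=2)
            if m.distance < 0.7 * n.distance]
    if len(good) <= tau:
        return None

    # Estimate the reference -> captured homography T with RANSAC.
    src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_c[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    T, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if T is None or not is_reasonable(T):
        return None

    # Render the captured target in the reference perspective (warp by T^-1).
    h, w = reference.shape[:2]
    rectified = cv2.warpPerspective(captured, T, (w, h),
                                    flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    # Final decision by image correlation (whole image here for brevity).
    corr = np.corrcoef(reference.ravel(), rectified.ravel())[0, 1]
    return (T, rectified) if corr > rho else None
```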
4.1.4 Estimation of surface albedo

To boost the projection effect, the surface albedo of the target object is estimated and used for radiometric compensation, as in many other recent works.5,18,22 Although 100% compensation need not be achieved to produce a satisfactory effect, taking the radiometric factor into account is still important for producing as large an effect as possible in a given environment.
Given that the projector is linearized, camera image C capturing a target object with surface albedo K can be described as follows:

C = K(FP + E),    (1)

where E denotes environmental light (including the black level of the projector) and F denotes the fraction of the camera intensity increment on the white (K = 1) surface to the projection light increment. Instead of estimating K in isolation, we estimate the product of K and F (Fig. 4f). To obtain KF, we first substitute (G1, I'_G1) and (G2, I'_G2) for (P, C) into Eq. (1), respectively, which gives

I'_G1 = K(F G1 + E),    (2)
I'_G2 = K(F G2 + E).    (3)

By subtracting Eq. (2) from Eq. (3), we obtain

KF = (I'_G2 - I'_G1) / (G2 - G1).    (4)
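In code, Eq. (4) is a per-pixel operation on the two rectified captures. A minimal sketch, where `I1` and `I2` stand for I'_G1 and I'_G2 as float arrays and `eps` is an assumed guard so that the later division by KF stays safe:

```python
import numpy as np

def estimate_kf(I1, I2, G1, G2, eps=1e-6):
    # Eq. (4): KF = (I'_G2 - I'_G1) / (G2 - G1), computed per pixel.
    return np.maximum((I2 - I1) / (G2 - G1), eps)
```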
4.2 Online process
After the offline process finishes, the online process starts. The online process described in this section is conducted in every frame. To maintain an interactive frame rate, all calculations are executed in the graphics pipeline (OpenGL and GLSL).
4.2.1 Image deformation

First, the static picture of the target object, I'_G1, is deformed according to the animation scenario in the database, and the deformed image I'_D is generated (Fig. 4g). The details of the image deformation are explained in Section 4.3.
4.2.2 Differential image generation

To make the projection result appear as I'_D, the projected light should be

P = (I'_D - KE) / KF.    (5)

We obtain the preceding equation by substituting I'_D for C into Eq. (1). Here, from Eq. (2), KE can be written as

KE = I'_G1 - KF G1.    (6)

By substituting Eq. (6) into Eq. (5), we obtain

P = (I'_D - I'_G1) / KF + G1,    (7)

where KF is given in Eq. (4). In most cases (i.e., unless the environmental light contribution is very small in the dark or the contrast of the target image is very low), P in Eq. (7) contains values outside of the dynamic range of the projector. The projected intensity is clipped at the lower or upper limit, which makes the luminance contrast of the projected pattern weaker than is required. Nevertheless, even when the projection contrast is weak, Deformation Lamps can produce satisfactory motion impressions in human observers as long as it produces on the object surface a proper pattern of motion energy, to which the human motion system is sensitive (Kawabe et al.6).

The differential image obtained by Eq. (7) is then transformed by T and rendered in the original camera perspective as image P' (Fig. 4i). Finally, the output image is obtained by displacing the pixels in P' according to the correspondence map between the camera coordinates and projector coordinates (P2C map), which was obtained in the calibration step in Section 4.1.1.
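Putting Eq. (7) and the two warps together, one frame of the online process might look as follows. This is an assumption-laden sketch in NumPy/OpenCV terms (the authors run this step on the GPU in GLSL); `p2c_x` and `p2c_y` are taken to give, for each projector pixel, the corresponding camera coordinate from the calibrated P2C map:

```python
import cv2
import numpy as np

def projection_frame(I_D, I_G1, KF, G1, T, p2c_x, p2c_y, cam_size):
    # Eq. (7): differential image in the reference perspective.
    P = (I_D - I_G1) / KF + G1
    # Out-of-range values are clipped at the projector's dynamic range limits.
    P = np.clip(P, 0.0, 1.0)
    # Transform by T back into the original camera perspective (image P').
    P_cam = cv2.warpPerspective(P, T, cam_size)  # cam_size = (width, height)
    # Displace pixels from camera coordinates into projector coordinates (P2C map).
    return cv2.remap(P_cam, p2c_x, p2c_y, cv2.INTER_LINEAR)
```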
4.3 Motion editing

With our system, the user can interactively edit the motion sequence of the projection target by defining an area to animate, choosing a base deformation pattern, setting motion parameters to fine-tune the animation pattern, and selecting a sound effect to play with the animation (optional). The user's edit is immediately reflected in the image deformation in the online process (Section 4.2.1), and the user can check how the projection result looks.
4.3.1 Base deformation patterns

To reduce the load for creating complicated movements, we prepared several base deformation patterns composed of different kinds of vibratory movements (a transverse wave, longitudinal wave, radial transverse wave, and radial longitudinal wave). The actual functions to produce each of those movements are

dx = -sin θ · A cos(f(x cos θ + y sin θ) + st + φ)
dy =  cos θ · A cos(f(x cos θ + y sin θ) + st + φ)    (8)

for the transverse wave;

dx = cos θ · A cos(f(x cos θ + y sin θ) + st + φ)
dy = sin θ · A cos(f(x cos θ + y sin θ) + st + φ)    (9)

for the longitudinal wave;

dx = (x - x_c) cos α - (y - y_c) sin α - (x - x_c)
dy = (x - x_c) sin α + (y - y_c) cos α - (y - y_c)
α = A cos(f √((x - x_c)^2 + (y - y_c)^2) + st + φ)    (10)

for the radial transverse wave; and

dx = cos(arctan(y/x)) · A cos(f √((x - x_c)^2 + (y - y_c)^2) + st + φ)
dy = sin(arctan(y/x)) · A cos(f √((x - x_c)^2 + (y - y_c)^2) + st + φ)    (11)

for the radial longitudinal wave. Deformed image I'_D is generated by warping pixels in I'_G1 by dx and dy. In these equations, (x, y) denotes the spatial coordinate in the deformation area, (x_c, y_c) denotes the center coordinate of the deformation area, and t denotes the current time. A, s, θ, f, and φ are parameters corresponding to amplitude, speed, angle, frequency, and phase, respectively. By editing them as well as by combining multiple base deformations, users can produce their own variations of motion effects. Using the base deformation patterns, we can also significantly reduce the size of data stored in the database. The range of the motion expression can be easily expanded by adding new base deformation patterns.
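As an illustration, the transverse wave of Eq. (8) and the warp that produces I'_D could be written as follows. A sketch only: the parameter defaults are arbitrary, and the sign convention follows the perpendicular direction (-sin θ, cos θ) reconstructed above:

```python
import cv2
import numpy as np

def transverse_wave(h, w, t, A=4.0, s=6.0, theta=0.0, f=0.05, phi=0.0):
    # Eq. (8): displacement perpendicular to the propagation direction theta.
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    wave = A * np.cos(f * (x * np.cos(theta) + y * np.sin(theta)) + s * t + phi)
    return -np.sin(theta) * wave, np.cos(theta) * wave  # (dx, dy)

def deform(I_G1, dx, dy):
    # Warp the static picture by (dx, dy) to obtain the deformed image I'_D.
    h, w = I_G1.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    return cv2.remap(I_G1, x + dx, y + dy, cv2.INTER_LINEAR)
```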
4.3.2 User interface

Figure 5 shows the user interface of our tool. The image on the left is a picture captured by the camera. The area surrounded by the blue line indicates an area recognized as a projection target (one of the reference images in the database).

The green and yellow closed lines designate deformation windows (DWs) to which deformation effects are applied (the yellow one is the currently selected DW). The user can define the shape and the location of each DW and select the base deformation pattern and its parameters through the user interface. Although the deformation is applied to the entire region inside the DW, it is actually perceived only around the regions where edges or textures exist. Therefore, users may define the contour of the DW so that it roughly encloses the parts or segments they want to animate. By doing so, the animation effects become very tolerant of small spatial registration errors produced by the perspective transformation because small misalignments are not perceived as long as the image segment to deform remains inside the DW. To reduce the impression of discontinuity when the contour of the DW crosses edges or textures, the pixel-warp size is gradually decreased around the DW contour.

The region at the top of the screen shows the timeline of the whole animation sequence. The white vertical bar indicates the location of the current frame. The green horizontal bars in the timeline indicate the locations of the DWs on the time axis. The user can determine the length of the animation sequence as well as the temporal range in which each DW is activated. As long as enough memory is available, the user can combine an arbitrary number of DWs to create an animation sequence of arbitrary length.
4.4 Evaluation
4.4.1 Accuracy of the image feature matching in the offline process

We evaluated the accuracy of the image matching in terms of the accuracy of the target object detection and the accuracy of the posture estimation of a detected target object. In the experiment, we prepared ten different target objects and registered a reference image for each one in the database (Fig. 6). To test the robustness of the matching process, we conducted the experiment in a messy room environment, as shown in Fig. 7a. We located a target object in front of a projector (EPSON EB-1965, 1920 × 1080 pixels; Suwa-shi, Nagano, Japan) and evaluated the matching process for six different camera positions (Fig. 7b) and under two environmental light conditions (bright, 507.9 lx; dim, 75.2 lx) for each object. The camera was a Flea3 (Point Grey Research, 1600 × 1200 pixels; Richmond, BC, Canada). We fixed the camera parameters, such as the exposure and the angle of view (except for the focus), throughout the experiment. We evaluated the accuracy of the target detection as the average hit rate (the proportion of trials in which a target was correctly detected when the target object existed) and the average false alarm rate (the proportion of trials in which a target was incorrectly detected when the target did not exist). As a measure of the accuracy of the posture estimation, we calculated the reprojection error of the feature points used for estimating the perspective transformation matrix. Given the feature points x in a reference image, their corresponding feature points y in a camera image, and an estimated perspective transformation matrix T, we first reproject x into the camera image using T and obtain x'. Then, we compute reprojection error e as the averaged distance between x' and y by the following equation:

e = (1/N) Σ_{i=1}^{N} d(x'_i, y_i),

where d(x, y) denotes the Euclidean distance between points x and y.
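In code, this measure is a one-liner once T is known. A small sketch using OpenCV's point transform, under the assumption that the matched points are given as N × 2 float arrays:

```python
import cv2
import numpy as np

def reprojection_error(x_ref, y_cam, T):
    # Reproject the reference feature points into the camera image with T,
    # then average the Euclidean distances to their matched camera points.
    pts = x_ref.reshape(-1, 1, 2).astype(np.float32)
    x_proj = cv2.perspectiveTransform(pts, T).reshape(-1, 2)
    return float(np.mean(np.linalg.norm(x_proj - y_cam, axis=1)))
```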
Table 1 shows the result of the target detection evaluation. The second and third rows show the average hit rate and the average false alarm rate, respectively. The second column shows the result for our implemented detection algorithm, and the third column shows the result when we removed the decision process based on the image correlation between a reference image and a camera image (Section 4.1.3). The system accurately detected target objects regardless of whether the image correlation was taken into account. However, without considering the image correlation, non-target objects were sometimes falsely detected. We successfully prevented these false detections by implementing the decision based on the image correlation. We empirically determined the threshold correlation value as ρ = 0.4 such that all the false alarms were correctly rejected and the real targets were not falsely rejected.

TABLE 1 Evaluation of the target object detection.

                     Implemented version (%)   Without image correlation (%)
Hit rate                     98.3                          98.3
False alarm rate              0.0                           3.9
Table 2 shows the reprojection errors of the perspective transform for the six camera positions and the two lighting conditions. Each cell shows the reprojection error averaged across all ten target objects. The reprojection errors in all the conditions were reasonably small and did not exceed 2.0 pixels even in the worst case. Moreover, the reprojection error does not directly cause visible artifacts in the projection results because the projection patterns are generated based on the target image captured by the camera, not on a reference image reprojected onto the camera image. The reprojection error only affects the mapping of DWs onto the camera image. Therefore, as long as the image segment to deform remains inside the DW, the reprojection error does not cause any perceptible artifact.

TABLE 2 Reprojection errors (in pixels) of the perspective transform as a result of the image feature matching.

                         Near (1 m)               Far (2 m)
Environmental light   Left  Center  Right     Left  Center  Right
Bright                1.21   0.96   1.32      0.94   0.58   1.08
Dim                   1.24   1.00   1.29      0.91   0.63   0.93

Taken together, the results indicate that the image feature matching process in our tool is accurate enough for practical use.

FIGURE 5 User interface of the proposed system. The user can define the spatiotemporal locations of the deformation windows (DWs, deformation areas), indicated by green and yellow lines. By editing motion parameters through the interface on the right side, users can produce their own variations of animation effects.

FIGURE 6 Images of the target objects used in the evaluation.

FIGURE 7 Configuration of the projection target object and the camera. (a) The environment where the target objects were placed. (b) The six camera positions tested in the evaluation.
4.4.2 Computational efficiency in the online process

To demonstrate that our system works at an interactive frame rate during motion editing, we measured the frame rate on a MacBook Pro (13-in. display, Intel Core i7, 3.1 GHz, 16 GB memory, Intel Iris Graphics 6100; Santa Clara, CA, USA). Figure 8 shows the frame rate of the system as a function of the number of DWs. In the experiment, each DW covered the entire spatial area, and all the DWs were always activated. The resolutions of the camera and the projector were 1900 × 1200 and 1600 × 900 pixels, respectively. The resolution of the reference image in the database was 600 × 400 pixels. Given that the number of DWs activated at the same time is usually less than ten, the performance of the system is good enough (more than 50 Hz) for interactive editing.

FIGURE 8 Frame rate of the system as a function of the number of deformation windows (DWs). The performance of the system is good enough for interactive editing because the number of DWs activated at the same time is usually less than ten.
5 Conclusion

According to the standard definition of image movement (i.e., spatial shifts of an intensity/color pattern over time), Deformation Lamps does not always produce image movements of the intensity component in a physically correct way, nor does it produce any image movements of the color component. Nevertheless, it can generate a vivid and natural appearance of colorful motion thanks to the processing characteristics of the human visual system.

While researchers have proposed various projection techniques that can change the appearances of real-world objects,5,18,22 it is still difficult for non-expert users to make use of them. The system described in Section 4 can reduce the burden on end users (i.e., alignment issues and content creation) so that they can immediately enjoy the projection effect as well as interactively edit animation of real-world objects as if they were using Photoshop. Although the current implementation includes the Deformation Lamps technique only, our system has the potential to increase opportunities for the general public to experience projection mapping in more familiar situations. For example, animation effects may be added to physical products in a shop to advertise them. In addition, one may share one's own projection effects via the Internet, and others can try those effects by downloading the animation data and printing out the target image.
Acknowledgment
This work was supported by JSPS KAKENHI Grant Number
JP15H05915.
References

1 R. Raskar et al., "Spatial augmented reality," in Proceedings of the First IEEE Workshop on Augmented Reality (IWAR '98), 1998, pp. 1–7.
2 R. Raskar et al., "Shader lamps: animating real objects with image-based illumination," in Proceedings of the 12th Eurographics Workshop on Rendering Techniques, 2001, pp. 89–102.
3 O. Bimber and D. Iwai, "Superimposing dynamic range," ACM Trans. Graph., 27, No. 5, 150:1–150:8 (2008).
4 T. Amano et al., "Successive wide viewing angle appearance manipulation with dual projector camera systems," in Proceedings of ICAT-EGVE: International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments, 2014, pp. 49–54.
5 T. Amano, "Projection based real-time material appearance manipulation," in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2013, pp. 918–923.
6 T. Kawabe et al., "Deformation Lamps: a projection technique to make static objects perceptually dynamic," ACM Trans. Appl. Percept., 13, No. 2, 10:1–10:17 (Mar. 2016).
7 M. S. Livingstone and D. H. Hubel, "Psychophysical evidence for separate channels for the perception of form, color, movement, and depth," J. Neurosci., 7, 3416–3468 (1987).
8 E. H. Adelson and J. R. Bergen, "Spatiotemporal energy models for the perception of motion," J. Opt. Soc. Am. A, 2, 284–299 (1985).
9 S. Nishida, "Advancement of motion psychophysics: review 2001–2010," J. Vis., 11, No. 5, 1–53 (2011).
10 V. S. Ramachandran, "Interaction between colour and motion in human vision," Nature, 328, 645–647 (1987).
11 N. Goda and Y. Ejima, "Moving stimuli define the shape of stationary chromatic patterns," Perception, 26, No. 11, 1413–1422 (1997).
12 V. S. Ramachandran and P. Cavanagh, "Motion capture anisotropy," Vision Res., 27, No. 1, 97–106 (1987).
13 S. Nishida and A. Johnston, "Influence of motion signals on the perceived position of spatial pattern," Nature, 397, 610–612 (1999).
14 S. Nishida, "Motion-based analysis of spatial patterns by the human visual system," Curr. Biol., 14, 830–839 (2004).
15 S. Nishida et al., "Human visual system integrates color signals along a motion trajectory," Curr. Biol., 17, 366–372 (2007).
16 T. Kawabe et al., "Perceptual transparency from image deformation," Proc. Natl. Acad. Sci. U.S.A., early edition (2015).
17 T. Kawabe and R. Kogovšek, "Image deformation as a cue to material category judgment," Sci. Rep., 7, 44274 (2017).
18 O. Bimber et al., "The visual computing of projector-camera systems," in ACM SIGGRAPH 2008 Classes, ser. SIGGRAPH '08, 2008, pp. 84:1–84:25.
19 S. Inokuchi et al., "Range-imaging for 3-D object recognition," in Proceedings of the International Conference on Pattern Recognition, 1984, pp. 806–808.
20 D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., 60, No. 2, 91–110 (2004).
21 M. Fischler and R. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, 24, 381–385 (1981).
22 M. Grossberg et al., "Making one object look like another: controlling appearance using a projector-camera system," in Proceedings of the 2004 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2004, pp. 452–459.
Taiki Fukiage is a research associate at the Sensory Representation Group of the Human Information Science Laboratory in NTT Communication Science Laboratories. He received his Ph.D. in Interdisciplinary Information Studies from the University of Tokyo in 2015. He joined NTT Communication Science Laboratories in 2015, where he studies media technologies based on scientific knowledge about visual perception. He is a member of the Vision Sciences Society and the Vision Society of Japan.

Takahiro Kawabe is a senior research scientist at the Sensory Representation Group of the Human Information Science Laboratory in NTT Communication Science Laboratories. He received his Ph.D. in Psychology from Kyushu University, Fukuoka, in 2005. In 2011, he joined NTT Communication Science Laboratories, where he studies human material recognition and cross-modal perception. He is a review editor of Frontiers in Psychology (Perception Science) and is a member of the Vision Sciences Society and the Vision Society of Japan.

Masataka Sawayama is a research scientist at the Sensory Representation Group of the Human Information Science Laboratory in NTT Communication Science Laboratories. He received his Ph.D. in Psychology from Chiba University in 2013. He joined NTT Communication Science Laboratories in 2013, where he studies human material processing. He is a member of the Vision Sciences Society and the Vision Society of Japan.

Shinya Nishida finished the doctoral course in Psychology at Kyoto University in 1990 and received his PhD degree in 1996. He is a Senior Distinguished Scientist and Group Leader of the Sensory Representation Research Group, NTT Communication Science Labs, Japan. He is an expert in psychophysical research on human visual processing, in particular motion perception, cross-attribute/modality integration, time perception, and material perception. He is the president of the Vision Society of Japan and an editorial board member of Journal of Vision and Vision Research.