Texture synthesis is widely used in virtual reality and computer games and has become one of the most active research areas in computer graphics. Research into texture synthesis is normally concerned with the generation of 2D texture images. However, real-world surface
textures comprise rough surface geometry and various reflectance properties; unlike 2D still textures, their images can vary dramatically with illumination direction. This paper presents a simple framework
for 3D surface texture synthesis. First, we propose a novel 2D texture synthesis algorithm based on the wavelet transform that
can be efficiently extended to synthesize surface representations in multi-dimensional space. The proposed texture synthesis
method avoids visible seams by first fitting wavelet coefficients in the overlapping texture regions and then
performing an inverse wavelet transform to generate new textures. Photometric stereo (PS) is then used to recover surface
gradient and albedo maps from three synthesized surface texture images. The surface gradient maps can be further integrated
to produce a surface height map (surface profile). Given the albedo map and the height or gradient maps, new images of a Lambertian
surface under arbitrary illumination directions can be rendered. Experiments show that the proposed approach not only produces
3D surface textures under arbitrary illumination directions, but also retains the surface geometry structure.
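The final relighting step follows Lambert's law: image intensity is the per-pixel albedo times the clamped dot product of the surface normal and the light direction. A minimal NumPy sketch of that rendering step, assuming albedo and normal maps are already recovered (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def relight_lambertian(albedo, normals, light_dir):
    """Render a Lambertian surface under a new illumination direction.

    albedo    : (H, W) per-pixel albedo map
    normals   : (H, W, 3) unit surface normals
    light_dir : (3,) vector pointing toward the light source
    """
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    # Lambert's law: intensity = albedo * max(0, n . l)
    shading = np.clip(normals @ light_dir, 0.0, None)
    return albedo * shading

# Toy example: a flat patch with all normals pointing up, lit from overhead
albedo = np.full((4, 4), 0.8)
normals = np.zeros((4, 4, 3))
normals[..., 2] = 1.0
img = relight_lambertian(albedo, normals, [0.0, 0.0, 1.0])
```

Varying `light_dir` reproduces the changing appearance of the surface texture under different illuminant directions.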
ABSTRACT: We present an algorithm based on statistical learning for
synthesizing static and time-varying textures matching the appearance of
an input texture. Our algorithm is general and automatic and it works
well on various types of textures, including 1D sound textures, 2D
texture images, and 3D texture movies. The same method is also used to
generate 2D texture mixtures that simultaneously capture the appearance
of a number of different input textures. In our approach, input textures
are treated as sample signals generated by a stochastic process. We
first construct a tree representing a hierarchical multiscale transform
of the signal using wavelets. From this tree, new random trees are
generated by learning and sampling the conditional probabilities of the
paths in the original tree. Transformation of these random trees back
into signals results in new random textures. In the case of 2D texture
synthesis, our algorithm produces results that are generally as good as
or better than those produced by previously described methods in this
field. For texture mixtures, our results are better and more general
than those produced by earlier methods. For texture movies, we present
the first algorithm that is able to automatically generate movie clips
of dynamic phenomena such as waterfalls, fire flames, a school of
jellyfish, a crowd of people, etc. Our results indicate that the
proposed technique is effective and robust.
IEEE Transactions on Visualization and Computer Graphics 05/2001; 7(2):120-135. DOI:10.1109/2945.928165
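The tree-based resampling idea above — generate new coefficients by sampling example coefficients conditioned on their coarser-scale parents — can be illustrated in 1D with a single-level Haar transform. This is a toy sketch of the general principle, not the authors' multiscale tree algorithm; the parent-matching heuristic and all names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_analysis(signal):
    """One level of a 1D Haar transform: (approximation, detail)."""
    pairs = signal.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

def haar_synthesis(approx, detail):
    """Invert one Haar level back into a signal."""
    out = np.empty(approx.size * 2)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def synthesize_1d(example, k=3):
    """Resample detail coefficients conditioned on their parent
    approximation values, in the spirit of tree-structured sampling."""
    approx, detail = haar_analysis(example)
    new_approx = rng.permutation(approx)          # randomize coarse structure
    new_detail = np.empty_like(detail)
    for i, a in enumerate(new_approx):
        # candidates: the k example details whose parents are closest to a
        idx = np.argsort(np.abs(approx - a))[:k]
        new_detail[i] = detail[rng.choice(idx)]
    return haar_synthesis(new_approx, new_detail)

new_tex = synthesize_1d(np.arange(8.0))
```

The full method builds a deeper tree over many scales (and 2D/3D signals), but the conditional-sampling step has the same shape.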
ABSTRACT: We present a universal statistical model for texture images in the context of an overcomplete complex wavelet transform. The model is parameterized by a set of statistics computed on pairs of coefficients corresponding to basis functions at adjacent spatial locations, orientations, and scales. We develop an efficient algorithm for synthesizing random images subject to these constraints, by iteratively projecting onto the set of images satisfying each constraint, and we use this to test the perceptual validity of the model. In particular, we demonstrate the necessity of subgroups of the parameter set by showing examples of texture synthesis that fail when those parameters are removed from the set. We also demonstrate the power of our model by successfully synthesizing examples drawn from a diverse collection of artificial and natural textures.
International Journal of Computer Vision 02/2001; 40(1). DOI:10.1023/A:1026553619983
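The synthesis-by-projection idea — start from noise and iteratively project onto the set of images satisfying each statistical constraint — can be illustrated with just two constraints, mean and variance, instead of the paper's full set of joint wavelet-coefficient statistics. A hedged sketch; function names are illustrative:

```python
import numpy as np

def project_mean_var(img, target_mean, target_var):
    """Project an image onto the set of images with the given mean and variance."""
    centered = img - img.mean()
    std = centered.std()
    if std > 0:
        centered *= np.sqrt(target_var) / std
    return centered + target_mean

def synthesize_matched(target_mean, target_var, shape=(32, 32), iters=10, seed=0):
    rng = np.random.default_rng(seed)
    img = rng.normal(size=shape)
    for _ in range(iters):
        # With many constraints (as in the paper), each projection is
        # applied in turn; enforcing one can perturb another, hence iteration.
        img = project_mean_var(img, target_mean, target_var)
    return img

out = synthesize_matched(0.5, 0.04)
```

With two compatible constraints this converges immediately; the interest of the alternating scheme appears when the constraints interact, as the paper's coefficient statistics do.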
ABSTRACT: A texture synthesis method is presented that generates similar texture from an example image. It is based on the emulation of simple but carefully chosen image intensity statistics. The resulting texture models are compact and no longer require the example image from which they were derived. They make explicit some structural aspects of the textures, and the modeling allows knitting together different textures with convincing-looking transition zones. As textures are seldom flat, it is also important to model the 3D effects that appear as textures change under a changing viewpoint. The simulation of such changes is supported by the model, assuming examples for the different viewpoints are given.