Interactive Fur Shaping and Rendering Using Nonuniform-Layered Textures
This system represents furry surfaces as nonuniform layers of texture slices, automatically adjusting the layers to achieve efficient, high-quality rendering. It employs layered shadow maps to simulate self-shadowing. Interactive tools let users intuitively create and edit furry objects and instantly view the rendered objects.
Available from: Tomoyuki Nishita
- "We add a simple shadow effect by introducing a factor γ = (K_shadow − 1 + h)/K_shadow, for K_shadow ≥ 1 chosen by the user, which, multiplied by the fur color, darkens the lower part of the fur. Although methods such as [Lokovic and Veach, 2000; Yang et al., 2008] can possibly provide better results, for short fur this simple method is effective and inexpensive. Fig. 8 (a) shows a rendering of normal (or dry) fur, which is similar to previous methods."
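The shadow factor quoted above is simple enough to sketch directly. In this reading, h is the normalized height of the current shell layer (0 at the skin, 1 at the fur tips) and K_shadow is the user-chosen parameter from the excerpt:

```python
def shadow_factor(h: float, k_shadow: float) -> float:
    """Darkening factor gamma = (K_shadow - 1 + h) / K_shadow.

    h is the normalized layer height in [0, 1] (0 = skin, 1 = tip);
    k_shadow >= 1 controls how strongly the fur base is darkened.
    """
    assert k_shadow >= 1.0
    return (k_shadow - 1.0 + h) / k_shadow

# At the tips (h = 1) the factor is exactly 1 (no darkening); at the base
# (h = 0) it is (K_shadow - 1) / K_shadow, so K_shadow = 1 gives the
# strongest darkening and larger values give progressively less.
```

Multiplying each layer's fur color by this factor darkens the lower layers, which is the cheap self-shadowing approximation the excerpt describes.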
ABSTRACT: Fur is present in most mammals, which are common characters in both movies and video games, so it is important to model and render fur both realistically and quickly. When the objective is real-time performance, fur is usually represented by texture layers (or 3D textures), which limits the dynamic characteristics of fur compared with methods that use an explicit representation of each fur strand.
This paper proposes a method for animating and shaping fur in real time, adding curling and clumping effects to existing real-time fur rendering methods on the GPU. Besides fur bending using a mass-spring strand model embedded in the fur texture, we add small-scale displacements to the layers to represent curls, which are suitable for a vertex-shader implementation, and we use a fragment shader to compute intra-layer offsets that create fur clumps. With our method, it becomes easy to dynamically add and remove fur curls and clumps, as seen in real fur when it gets wet and dries up.
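The abstract does not reproduce the paper's displacement formulas, but the idea of per-layer lateral offsets that suggest a curl can be sketched as follows: a helical offset whose radius grows with layer height, so the fur root stays anchored while the tip curls. The function name and parameterization here are illustrative, not the paper's:

```python
import math

def curl_offset(h: float, amplitude: float, turns: float, phase: float = 0.0):
    """Lateral (x, y) offset for a shell layer at normalized height h in [0, 1].

    The offset traces a helix whose radius grows linearly with h, so the
    root (h = 0) is unmoved while the tip curls outward. 'turns' is the
    number of full revolutions from root to tip.
    """
    angle = 2.0 * math.pi * turns * h + phase
    radius = amplitude * h
    return (radius * math.cos(angle), radius * math.sin(angle))
```

In a shell-texture renderer, each slice's vertices would be displaced by such an offset in the vertex shader, which is why small-scale per-layer displacements fit that stage well.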
The Visual Computer 06/2010; 26(6-8):659-667. DOI:10.1007/s00371-010-0484-4 · 0.96 Impact Factor
Available from: Bing-Yu Chen
- "The values σ_occ ∈ [0, 1] and σ_wet ∈ [0, 1] are constants given by the user, which control the percentage of hair darkening due to occlusion and wetness, respectively. Although methods such as […] can possibly provide better results, for short fur this simple method is effective and inexpensive."
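The excerpt does not give the full darkening formula, so the following is an illustrative guess at how two user constants in [0, 1] could scale independent occlusion and wetness darkening terms, not the cited paper's exact equation:

```python
def darken(color, h, wetness, sigma_occ, sigma_wet):
    """Scale a fur color by occlusion and wetness darkening.

    h is the normalized height along the strand (0 = root, 1 = tip), so
    occlusion darkening is strongest at the root; wetness in [0, 1]
    darkens uniformly. This composition is an assumption made for
    illustration only.
    """
    occ = 1.0 - sigma_occ * (1.0 - h)   # sigma_occ = fraction darkened at the root
    wet = 1.0 - sigma_wet * wetness     # sigma_wet = fraction darkened when fully wet
    return tuple(c * occ * wet for c in color)
```

The appeal of such a formulation is that each σ directly reads as "the maximum fraction of darkening" its effect can contribute, matching the excerpt's description of the constants as percentages.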
ABSTRACT: In methods that represent fur with multi-layer textured slices, rendering silhouette fur is time-consuming, as it requires silhouette-edge detection and fin-slice generation. In this paper, we present an accelerated method for rendering silhouette fur that takes advantage of the programmability of Graphics Processing Units (GPUs). By appending edge information to each vertex, silhouette-edge detection can be performed on the GPU, and by storing fin-slice data in video memory during preprocessing, the time spent on fin-slice generation and on CPU-to-GPU data transmission is saved. Experimental results show that our method greatly accelerates silhouette fur rendering and thereby improves overall rendering performance.
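The silhouette-edge test the abstract refers to is the standard one: an edge is a silhouette edge when exactly one of its two adjacent faces is front-facing. A pure-Python reference for that test (which the paper moves onto the GPU) might look like this; the function name and mesh representation are illustrative:

```python
def silhouette_edges(vertices, faces, view_dir):
    """Find silhouette edges of a triangle mesh for a given view direction.

    An edge is a silhouette edge when exactly one of its two adjacent
    faces is front-facing; edges with only one adjacent face (mesh
    boundary) are also included.
    """
    def normal(tri):
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (vertices[i] for i in tri)
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

    def front_facing(tri):
        nx, ny, nz = normal(tri)
        return nx * view_dir[0] + ny * view_dir[1] + nz * view_dir[2] < 0.0

    # Map each undirected edge to the faces that share it.
    edge_faces = {}
    for f, tri in enumerate(faces):
        for i in range(3):
            e = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edge_faces.setdefault(e, []).append(f)

    sil = []
    for e, fs in edge_faces.items():
        facing = [front_facing(faces[f]) for f in fs]
        if len(fs) == 1 or facing[0] != facing[1]:
            sil.append(e)
    return sil
```

Running this test per edge on the CPU every frame is exactly the cost the abstract's GPU formulation (edge info stored per vertex, evaluated in the vertex stage) avoids.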
Universal Access in Human-Computer Interaction. Intelligent and Ubiquitous Interaction Environments, 5th International Conference, UAHCI 2009, Held as Part of HCI International 2009, San Diego, CA, USA, July 19-24, 2009. Proceedings, Part II; 01/2009