Simulating and Rendering Wet Hair
Kelly Ward Nico Galoppo Ming C. Lin
University of North Carolina at Chapel Hill
e-mail: {wardk,nico,lin}@cs.unc.edu
1 Motivation
Simulating the motion and appearance of hair has been an active
area of research in computer graphics due to its importance for
modeling virtual humans in various applications. Existing hair
modeling methods have focused primarily on capturing the basic
characteristics of dry hair. In the natural world, humans interact
with water every day, and the physical behavior and appearance of
hair change drastically when it becomes wet.
Because the physical differences between wet and dry hair are easy
to observe on a real person, it is crucial to model these
characteristics accurately in a simulation. As hair strands absorb
water, they become heavier, they adhere more readily to nearby wet
strands, and they tend to look darker and shinier due to the presence
of water. Our hair modeling system captures these influences and is
able to adjust these properties dynamically as hair becomes wet.
2 Overview of Approach
Our hair modeling system relies on a dual-skeleton setup to capture
the various dynamic properties of hair. This dual-skeleton system
consists of a global-skeleton and a local-skeleton, which provide
the ability to decouple global and local motions of hair, allowing
us to capture additional hair motions and various hairstyles.
The global-skeleton accounts for the overall motion of the hair,
while the local-skeleton is positioned around the global-skeleton to
model a desired hairstyle. The rendered hair geometry is positioned
around the local-skeleton. Strands in close proximity with each
other are grouped together to follow the same dual-skeleton system,
capturing the natural clumping of strands found in nature. Circular
cross-sections are defined at each node of the local-skeleton,
determining the initial thickness of that section of hair. The
individual strands are then placed randomly within the confines of
those cross-sections.
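The random placement of strands within a node's circular cross-section can be sketched as follows; `place_strands`, its 2D cross-section plane, and the `seed` parameter are illustrative assumptions rather than the paper's implementation:

```python
import math
import random

def place_strands(center, radius, num_strands, seed=0):
    """Scatter strand positions uniformly inside a circular cross-section.

    `center` is the (x, y) position of a local-skeleton node in the plane of
    its cross-section; `radius` sets the initial thickness of that section.
    """
    rng = random.Random(seed)
    strands = []
    for _ in range(num_strands):
        # Taking sqrt of the radial sample gives an area-uniform distribution
        # over the disc (a plain uniform radius would cluster near the center).
        r = radius * math.sqrt(rng.random())
        theta = 2.0 * math.pi * rng.random()
        strands.append((center[0] + r * math.cos(theta),
                        center[1] + r * math.sin(theta)))
    return strands
```

The area-uniform sampling avoids an artificial density peak at the section center, so clumps of strands fill their section evenly.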
We create a localized collision detection method that accurately
identifies interactions between the hair and the body as well as
among the hairs by placing swept sphere bounding volumes (SSVs)
around the local-skeleton and rendered hair geometry. Our overall
dynamics model is able to capture the intrinsic properties of dry
hair and can dynamically adjust to changing physical properties as
the hair interacts with water.
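The core query behind an SSV is a distance test against the swept primitive. A minimal sketch for a line-swept sphere (capsule) against a bounding sphere follows; the function names and the specific sphere-vs-capsule pairing are assumptions for illustration, not the paper's full SSV hierarchy:

```python
import math

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def point_segment_dist(p, a, b):
    """Distance from point p to the segment ab (3D tuples)."""
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    ap = tuple(pi - ai for ai, pi in zip(a, p))
    denom = _dot(ab, ab)
    # Clamp the projection parameter to [0, 1] so we stay on the segment.
    t = 0.0 if denom == 0.0 else max(0.0, min(1.0, _dot(ap, ab) / denom))
    closest = tuple(ai + t * di for ai, di in zip(a, ab))
    return math.sqrt(sum((pi - ci) ** 2 for pi, ci in zip(p, closest)))

def sphere_hits_ssv(center, r_sphere, seg_a, seg_b, r_ssv):
    """A line-swept sphere overlaps a sphere iff the distance from the
    sphere center to the swept segment is within the summed radii."""
    return point_segment_dist(center, seg_a, seg_b) <= r_sphere + r_ssv
```

Because the test reduces to a point-segment distance plus the two radii, shrinking an SSV as the hair absorbs water only changes `r_ssv`, leaving the query itself unchanged.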
2.1 Adjustment of Dynamic Properties
Hair strands can absorb up to 45% of their natural, dry weight in
water [L'O04]. This increased mass significantly alters the physical
motion of wet hair strands. The global-skeleton controls the overall
motion of the hair and consists of node points connected by soft
springs. Each node point has a mass associated with it, representing
the mass of the hair at that point. The mass of each node is then a
function of wetness, increasing until the wetness fraction reaches 100%.
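One simple reading of this mass update is a linear blend up to the 45% absorption bound cited above; the function below is a sketch under that assumption, not the paper's exact formula:

```python
MAX_WATER_FRACTION = 0.45  # hair absorbs up to 45% of its dry weight in water

def node_mass(dry_mass, wetness):
    """Mass of a global-skeleton node as a function of wetness in [0, 1].

    At wetness = 1.0 the node carries its dry mass plus the maximum
    absorbable water weight; at 0.0 it is simply the dry mass.
    """
    wetness = max(0.0, min(1.0, wetness))
    return dry_mass * (1.0 + MAX_WATER_FRACTION * wetness)
```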
2.2 Flexible Geometric Structure
As hair becomes wet, it becomes less voluminous. Wet strands of
hair in close proximity adhere with each other due to the presence
of water, causing the overall volume of the hair to decrease. To
account for this behavior, when water is applied to the hair, the radii
of the hair sections decrease accordingly. The radius contraction is
directly related to the number of hair strands in that section of hair
and the percentage of water absorbed into the hair. The SSVs used
for collision detection also automatically adjust their form as water
is absorbed. Collision detection remains accurate and efficient in
light of the changing geometric structure.
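The radius contraction depends on the strand count and the absorbed-water fraction. One way to sketch this is to interpolate from the dry radius toward a tightly packed lower bound whose area is proportional to the number of strands; this packed-radius model and the function name are assumptions for illustration:

```python
import math

def section_radius(dry_radius, num_strands, strand_radius, wetness):
    """Cross-section radius of a hair section as a function of wetness in [0, 1].

    The fully wet radius is modeled as a packed lower bound: the section area
    roughly matches the summed strand areas, so r_packed ~ sqrt(n) * r_strand.
    The radius shrinks linearly from the dry radius toward that bound as
    water is absorbed.
    """
    wetness = max(0.0, min(1.0, wetness))
    r_packed = math.sqrt(num_strands) * strand_radius
    r_packed = min(r_packed, dry_radius)  # never expand beyond the dry radius
    return dry_radius + wetness * (r_packed - dry_radius)
```

Because the SSV radii track the section radii, the same contraction can drive the collision geometry directly.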
2.3 Rendering
As noted by [JLD99], many materials become darker and shinier
due to the absorption of water. Hair acts in the same manner. When
hair becomes wet, a thin film of water is formed around the fibers,
forming a smooth, mirror-like surface on the hair. In contrast to
the naturally rough, tiled surface of dry hair, this smoother surface
creates a shinier appearance of the hair due to increased specular
reflections. Furthermore, light rays are subject to total internal
reflection inside the film of water around the hair strands. This
phenomenon contributes to the darker appearance wet hair has over
dry hair [JLD99]. Moreover, water is absorbed into the hair fiber,
increasing the opacity value of each strand, leading to more
aggressive self-shadowing.
We have captured the interactions of light with the wet strands
by varying the rendering parameters based on the amount of water
present on the hair. Specifically, using standard anisotropic lighting
[HS98] and hair shadowing [KN01] techniques, we make the opacity,
shininess value, and anisotropic lighting contribution functions of
the wetness percentage. As the wetness factor varies between 0% and
100%, the parameters vary directly, creating a damp or wet look for
the hair strands.
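The parameter adjustment amounts to interpolating each shading quantity with the wetness fraction. In the sketch below, the dry and wet endpoint values are illustrative placeholders, not the paper's calibrated constants:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def wet_shading_params(wetness):
    """Shading parameters as functions of the wetness fraction in [0, 1].

    Wet hair is rendered more opaque (stronger self-shadowing), shinier
    (smooth water film), and with a larger anisotropic lighting contribution.
    """
    t = max(0.0, min(1.0, wetness))
    return {
        "opacity":      lerp(0.4, 0.8, t),    # denser self-shadowing when wet
        "shininess":    lerp(20.0, 80.0, t),  # tighter specular highlight
        "aniso_weight": lerp(0.5, 1.0, t),    # anisotropic term contribution
    }
```

Since every parameter is driven by the same wetness fraction, the look transitions smoothly as the hair dries or gets wetter over time.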
3 Results
We presented several simple yet efficient techniques for simulating
and rendering wet hair. Our system is able to capture the altered
motion, physical structure, and rendered appearance of hair when it
gets wet. These methods can be applied dynamically to model the
changing wetness of hair over time. Our results are consistent with
the influences of water demonstrated on real hair. Figure 1 shows a
visual comparison between simulated wet vs. dry hair. Please refer
to the supplementary document and videos for additional images and
demonstrations.
Figure 1: Long, curly, red hair blowing in the wind: (a) dry and (b) wet.
References
HEIDRICH W., SEIDEL H.-P.: Efficient rendering of anisotropic surfaces using computer graphics hardware. Proc. of Image and Multi-dimensional Digital Signal Processing Workshop (IMDSP) (1998).
JENSEN H. W., LEGAKIS J., DORSEY J.: Rendering of wet materials. Rendering Techniques (1999), 273–282.
KIM T.-Y., NEUMANN U.: Opacity shadow maps. Proc. of Eurographics Rendering Workshop (2001).
L'ORÉAL: http://www.loreal.com, 2004.