RAY TRACING VOLUME DENSITIES
James T. Kajiya
Brian P. Von Herzen
California Institute of Technology
Pasadena, California
ABSTRACT
This paper presents new algorithms to trace objects represented by densities within a volume grid, e.g. clouds, fog, flames, dust, particle systems. We develop the light scattering equations, discuss previous methods of solution, and present a new approximate solution to the full three-dimensional radiative scattering problem suitable for use in computer graphics. Additionally we review dynamical models for clouds used to make an animated movie.

KEYWORDS: computer graphics, raster graphics, ray tracing, stochastic modelling, simulation of natural phenomena, radiative transport, light scattering, clouds, particle systems.

CR CATEGORIES: I.3.3, I.3.5, I.3.7
§1 Introduction
A large class of natural phenomena is described by partial differential equations. In almost all cases, the description of these phenomena is given by a set of vector or scalar fields defined on a uniform mesh in 3-space. This paper will render objects defined in this way via the ray tracing method (Whitted 1980, Appel 1968, Goldstein 1971, Kajiya 1982, 1983).

Recently, the synthesis of images with clouds and, more generally, of objects defined as volume densities has been pursued by a number of investigators (Blinn 1982, Max 1983, Voss 1983). This paper is a continuation of that work in the context of ray tracing.

Blinn introduced the use of density models in computer graphics in Blinn (1982), where he considers plane parallel atmospheres. Other researchers have adapted
his models to more general shapes. Max defines clouds as densities with boundaries defined by analytic functions. Voss has fractally generated densities with a series of plane parallel models, yielding images of striking realism.
The work presented here extends previous efforts in two ways: first, we present an alternative to the Blinn scattering model which models multiple radiative scattering against particles with high albedo. Second, we show how to ray trace these models.

We emphasize that the rendering techniques presented here are general. We are able to view the models from any angle, with multiple arbitrarily placed light sources (even within the densities). The density model may intersect other procedural models. The viewing point may lie inside the density function. With these techniques we are able to render clouds that cast shadows on their environment as well as on themselves. We may have scenes in which mountain peaks disappear into a cloud interior. We may fly through the clouds. And, of course, the clouds appear reflected and refracted in other objects in the scene. There is one situation, however, which we do not handle correctly: other procedural objects, while they may be shadowed accurately by clouds, do not themselves cast shadows upon the clouds.

While clouds are the most obvious application of this representation, other phenomena also lend themselves well to this representation. For example, it is possible to model media which do not simply scatter, but also absorb and emit light. In this way we can model flames. Additionally, it is possible to generate models of very high geometric complexity which are treated simply as volume densities. In this way these techniques allow the application of ray tracing to Reeves' particle systems (Reeves 1983).
§2 The scattering equation
In this section we discuss the relevant physical parameters and set up an equation which describes the scattering of radiation in volume densities. This section loosely follows the derivation in Chandrasekhar (1950).

The quantity to be calculated in a scattering problem is the energy per unit solid angle per unit area:

$$dE = I(x, \omega)\, \sin\theta\, d\omega\, da$$

This quantity is called the intensity of radiation at a point x in the direction of the solid angle dω.
The scattering equation can be derived by considering a differential cylindrical volume dV = da ds, where da is the cross section of the cylinder and ds is the length (figure 1). If we follow a pencil of radiation along the length of the cylinder, we find that the difference in intensity between the two ends is given by

$$dI = -\text{absorbed} + \text{emitted} = -\kappa\rho I\, ds\, da\, d\omega + j\rho\, ds\, da\, d\omega \qquad (2.1)$$

where ρ is the density of matter in the volume element; κ is the absorption coefficient, viz. optical depth per unit density; and j is the emission coefficient.
The emission coefficient can be broken into two terms

$$j = j^{(e)} + j^{(s)}$$

where $j^{(e)}$ is the emission coefficient due to pure emission of the medium, for example a black body term for flames or stellar interiors; and $j^{(s)}$ is the emission term due to pure scattering of incident radiation into the direction of interest. The form of this term is usually written as

$$j^{(s)} = \frac{\kappa}{4\pi} \int_{\|\tilde{s}\|=1} p(s, \tilde{s})\, I(x, \tilde{s})\, d\tilde{s}$$

This expression says that the light scattered in direction s is a linear operator of the light incident upon the volume element from all angles. The function $p(s, \tilde{s})$ is called the phase function and gives the amount of light scattered from direction s to direction s̃. In many situations the medium is isotropic, in which case the phase function depends only on the phase angle Θ, the angle between s and s̃. Although there are many interesting phenomena in which the emission coefficient $j^{(e)}$ is nonzero, let us for simplicity assume it is zero in the remainder of this paper.
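To make the linear-operator character of the scattering term concrete, the following short Python sketch (ours, not the paper's) evaluates $j^{(s)}$ at one point by quadrature over a latitude-longitude grid of incident directions. The particular phase function, incident radiance field, and constants are hypothetical placeholders chosen only to make the sketch runnable.

import numpy as np

def scattering_emission(p, I_inc, s, kappa=1.0, n_theta=64, n_phi=128):
    """Approximate j^(s)(s) = kappa/(4 pi) * integral of p(s . s_tilde) I(s_tilde)
    over the unit sphere, using a latitude-longitude grid of incident directions."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta        # polar angle
    phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi        # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    # unit incident directions and the solid-angle weight sin(theta) dtheta dphi
    s_t = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)
    dw = np.sin(T) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)
    mu = s_t @ np.asarray(s, dtype=float)                       # cosine of the phase angle
    return kappa / (4.0 * np.pi) * np.sum(p(mu) * I_inc(s_t) * dw)

# hypothetical inputs: a Rayleigh-like phase function and radiance peaked toward +z
p = lambda mu: 0.75 * (1.0 + mu**2)
I_inc = lambda d: np.maximum(d[..., 2], 0.0)
print(scattering_emission(p, I_inc, s=np.array([0.0, 0.0, 1.0])))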
The phase function embodies all the information about the scattering behavior of the medium. From it we may derive all the other lighting parameters popular in computer graphics. For example, Lambert and Phong surfaces are simply phase functions with particular shape parameters. In these cases anisotropy prevails: there are preferred angles, for example, the normal of the surface element. Thus the phase function varies with more than just the phase angle. When the medium is composed of a large number of particles, no preferred orientations occur and isotropy obtains. In this case the phase angle completely determines the phase function value. Blinn (1982) discusses a number of important phase functions. For the work on clouds, two will be of particular interest: 1) perfectly diffuse scattering: $p(\cos\Theta) = \omega_0$, where $\omega_0$ is an arbitrary constant, and 2) Rayleigh scattering: $p(\cos\Theta) = \omega_0 \tfrac{3}{4}(1 + \cos^2\Theta)$.
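For reference, the two phase functions translate directly into code. The sketch below is ours (the paper gives no code); w0 is the albedo constant.

import numpy as np

def phase_diffuse(cos_theta, w0=1.0):
    # perfectly diffuse scattering: constant in the phase angle
    return w0 * np.ones_like(np.asarray(cos_theta, dtype=float))

def phase_rayleigh(cos_theta, w0=1.0):
    # Rayleigh scattering: p(cos T) = w0 * (3/4) * (1 + cos^2 T)
    return w0 * 0.75 * (1.0 + np.asarray(cos_theta, dtype=float) ** 2)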
The scattering equation can be brought into general form by dividing both sides of equation (2.1) by $-\kappa\rho\, ds$. But the derivative along the cylinder is simply a directional derivative along s:

$$\frac{dI}{ds} = s \cdot \nabla_x I.$$

This gives us the scattering equation:

$$-\frac{1}{\kappa\rho}\, s \cdot \nabla_x I(x, s) - I(x, s) + \frac{1}{4\pi} \int_{\|\tilde{s}\|=1} p(s, \tilde{s})\, I(x, \tilde{s})\, d\tilde{s} = 0.$$
§3 Solving the scattering equation
The scattering equation is solvable analytically only in a few very special cases: indeed, it is very difficult to solve even numerically without assumptions which reduce the dimensionality of the intensity field I(x, s), a function of six real variables.

Various assumptions are customarily made to reduce the difficulty of the problem. Here are some common assumptions: 1) the medium is isotropic--the phase function is only dependent on the phase angle; 2) the medium is uniform--its density does not change from point to point in space; 3) the geometry is simple--the medium may vary in space but only along, say, the z-axis (this is the plane parallel or scattering in a slab problem); 4) the phase function is of a very simple type, viz. isotropic; 5) the albedo is very small or very large. Various combinations of these assumptions have been treated extensively in the literature. The slab scattering problem has been the most common assumption (Chandrasekhar 1950).
The method of Wick-Chandrasekhar, or discrete ordinates, is a numerical method for plane parallel atmospheres. It sets up a coupled array of PDEs each of which represents one scattering angle. A finite Euler approximation is made for the phase integral, which is the coupling mechanism of the individual equations. Convergence results for such approximations have appeared in Keller (1960a,b) and more recently Anselone and Gibbs (1974).

Unfortunately, the discrete ordinates method is relatively unsuitable for computer graphics: we most often need to finely sample a given portion of the solid angle sphere, rather than have a uniform sampling across the whole sphere.

It may well be that a finite element approach would be a promising alternative to Wick-Chandrasekhar, but we have found that simpler schemes are effective for image synthesis.
3.1 Blinn's Low Albedo approximation
Blinn (1982) was the first to introduce a volume density scattering model to computer graphics. In this paper he made a number of approximations well suited to the problem he was studying: the rings of Saturn. Blinn chose to model a uniform medium of relatively low albedo with a single illuminating light source. (Although his method generalizes easily to multiple light sources.) Assuming the above and, in addition, only a single scattering of the radiation from the light source to the eye, he was able to solve the problem analytically. Of course, multiple scattering is a second order effect for a medium with low albedo. The Blinn model is thus valid for a wide variety of phenomena.

Voss (1983) has adapted Blinn's procedure to more general geometries, by essentially modelling a sandwich of several Blinn models. With it, he has made some exceptional images of clouds. Both the Blinn and Voss methods place restrictions on the lighting and viewing geometry of the scene.

Unfortunately, clouds have a very high albedo--the single scattering approximation does not hold. A number of visible defects appear when rendering by the new technique. This is because the older method imposed viewing geometry restrictions which hid the defects. It is one finding of this paper that realistic rendering of clouds demands more accuracy in the scattering model.
3.2 A ray tracing algorithm for the low albedo case
In this section we will describe a new technique which allows one to ray trace volume densities without any viewing or lighting restrictions. But for one slight twist, this method is essentially a brute force development of the Blinn single scattering model for ray tracing.

The key to the new method is that it separates the rendering procedure into two steps. The first step drives the radiation from light source i through a density array ρ(x, y, z) into an array $I_i(x, y, z)$ which holds the contribution of each light source to the brightness of each point in space. This is done simply by calculating in parallel the following line integrals for each path $\Gamma_{x,y,z} = (x(t), y(t), z(t))$ from the light source through ρ(x, y, z):

$$I_i(x, y, z) = \exp\!\left(-\tau \int_{\Gamma_{x,y,z}} \rho(u)\, du\right)$$

where τ = κρ. The principal observation is that this computation need only be done from at most once per frame to at least once per scene.
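A minimal sketch of this first step, under our own simplifying assumptions (a single distant directional light, a density array sampled on the unit cube, nearest-neighbor lookups instead of bilinear ones, and fixed-step marching in place of the paper's parallel line integrals; the grid resolution and step size are arbitrary):

import numpy as np

def precompute_light(rho, light_dir, kappa=1.0, n_steps=64):
    """Fill I[i,j,k] = exp(-kappa * integral of rho along the path from voxel (i,j,k)
    toward the light).  light_dir points from the volume toward the distant light."""
    nx, ny, nz = rho.shape
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir /= np.linalg.norm(light_dir)
    step = 1.0 / n_steps
    I = np.empty_like(rho)
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                x = (np.array([i, j, k]) + 0.5) / np.array([nx, ny, nz])
                opt = 0.0
                # march toward the light until the path leaves the unit cube
                for n in range(1, 4 * n_steps):
                    p = x + n * step * light_dir
                    if np.any(p < 0.0) or np.any(p > 1.0):
                        break
                    idx = np.minimum((p * [nx, ny, nz]).astype(int), [nx - 1, ny - 1, nz - 1])
                    opt += rho[tuple(idx)] * step
                I[i, j, k] = np.exp(-kappa * opt)
    return I

The triple loop is deliberately naive; it only illustrates that the light array is a per-voxel transmittance computed once, independently of the eye rays.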
The second step occurs once per ray trace. Each ray is first culled against a bounding rectangular prism as an extent. The brightness of a ray sums the contribution of each volume element. It is given by:

$$B = \int_{\lambda_1}^{\lambda_2} \exp\!\left(-\tau \int_{\lambda_1}^{t} \rho(x(\mu), y(\mu), z(\mu))\, d\mu\right) \left[\sum_i I_i(x(t), y(t), z(t))\right] \rho(x(t), y(t), z(t))\, dt$$

In this expression, $\lambda_1, \lambda_2$ are the beginning and ending of the path between the eye and the farthest visible volume element. They are set by

$$\lambda_1 = \max(0, d_1)$$
$$\lambda_2 = \min(d_{global}, d_2)$$

where $d_1$ is the distance to the nearest intersection point with the bounding extent, $d_{global}$ is the distance to the nearest intersection point with the rest of the world database, and $d_2$ is the distance to the farthest intersection point with the bounding extent.
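The clamping of λ₁ and λ₂ follows directly from the ray and the axis-aligned extent. The slab-method helper below is our own sketch (the paper does not say how the extent intersection is computed); d_global would come from intersecting the rest of the scene database.

import numpy as np

def clamp_segment(origin, direction, box_min, box_max, d_global=np.inf):
    """Return (lambda1, lambda2) for a ray against an axis-aligned extent,
    clamped as lambda1 = max(0, d1), lambda2 = min(d_global, d2)."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    inv = 1.0 / direction                       # assumes no exactly-zero components
    t0 = (np.asarray(box_min, dtype=float) - origin) * inv
    t1 = (np.asarray(box_max, dtype=float) - origin) * inv
    d1 = np.max(np.minimum(t0, t1))             # nearest hit with the extent
    d2 = np.min(np.maximum(t0, t1))             # farthest hit with the extent
    if d2 < max(d1, 0.0):                       # ray misses the extent entirely
        return None
    return max(0.0, d1), min(d_global, d2)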
The first exponential in the brightness integral gives the amount of attenuation due to absorption and scattering of the material visible to the eye. The sum term gives the brightness contribution of each light source to the brightness of the particular point.

According to the integral there are two remaining steps which must be done. The first is to compute the integrated optical path length along a particular ray.
This is done by simply bilinearly sampling and summing the density array along the ray. The second step is to compute and sum the actual brightness integral. Note that each of the integral terms has been precomputed so that a point sampling is all that is needed to compute the brightness term. We have used the Romberg integration method to actually compute the integral (Dahlquist and Bjork 1974).
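A sketch of this second step, again under our own simplifications: fixed-step rectangle-rule sampling instead of Romberg integration, nearest-neighbor lookups instead of bilinear ones, and a single precomputed light array (the I_light argument) from the first step. The helper names and arguments are ours.

import numpy as np

def ray_brightness(rho, I_light, origin, direction, lam1, lam2,
                   kappa=1.0, n_steps=128):
    """March the eye ray from lam1 to lam2, accumulating
    exp(-kappa * optical depth so far) * I_light * rho at each sample."""
    nx, ny, nz = rho.shape
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    dt = (lam2 - lam1) / n_steps
    B, opt = 0.0, 0.0
    for n in range(n_steps):
        t = lam1 + (n + 0.5) * dt
        p = np.asarray(origin, dtype=float) + t * direction     # point in the unit cube
        idx = tuple(np.clip((p * [nx, ny, nz]).astype(int), 0, [nx - 1, ny - 1, nz - 1]))
        B += np.exp(-kappa * opt) * I_light[idx] * rho[idx] * dt
        opt += rho[idx] * dt                                    # running optical path length
    return B

Together with precompute_light and clamp_segment above, this is the whole low albedo renderer in miniature: one pass per light source, then one march per eye ray.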
§4 High Albedo approximation.
The low albedo approximation suffers from a number of defects when used to model clouds, a scattering medium of very high albedo. This can be seen in the results section. There are portions of the clouds which are abnormally dark due to shadowing of one part of a cloud upon another. In the actual physical situation these dark portions are illuminated by the second and higher order scattering centers within the cloud.

If one observes these clouds from above, the shadowing problem is not observable, since the eye and the light source are on the same side of the cloud. When looking from the underside of the cloud on the opposite side of the light source, one cannot determine what the actual thickness of the cloud is, so again the eye cannot discern an artificial darkening. However, clouds observed to shadow themselves viewed from the side show this problem quite clearly (see the figures).

Blinn (1982) has suggested treating the multiple scattering problem by a Neumann expansion involving the phase integral. This method is likely to work well only with lower albedo media, since the series is geometric in the albedo ω₀. If the albedo ω₀ is close to one, many terms will be needed to converge to a solution.
4.1 A Perturbation solution, conservative systems

In order to approximate the high albedo solution, we perform a perturbation expansion on $\beta = (1 - \omega_0)$. We normalize the phase function to

$$p(\Theta) = \omega_0\, \tilde{p}(\Theta) = (1 - \beta)\, \tilde{p}(\Theta)$$

For compactness, we write the scattering equation

$$-\frac{1}{\kappa\rho}\, s \cdot \nabla_x I(x, s) - I(x, s) + \frac{1}{4\pi}(1 - \beta) \int_{\|\tilde{s}\|=1} \tilde{p}(s, \tilde{s})\, I(x, \tilde{s})\, d\tilde{s} = 0$$

as the sum of two linear operators

$$L I + (1 - \beta) M I = 0$$

where

$$L I = -\frac{1}{\kappa\rho}\, s \cdot \nabla_x I(x, s) - I(x, s)$$

$$M I = \frac{1}{4\pi} \int_{\|\tilde{s}\|=1} \tilde{p}(s, \tilde{s})\, I(x, \tilde{s})\, d\tilde{s}.$$

Expanding I into a power series in β gives

$$I = \sum_k \beta^k I_k.$$

Substituting into the original equation for I and equating like powers of β gives a set of equations

$$L I_k + M I_k = -M I_{k-1}.$$

Thus the perturbation solution presents us with a series of forced conservative ($\omega_0 = 1$) scattering equations. We now develop techniques which allow us to approximate the solutions for the conservative case.
4.2 The Scattering equation expressed in Spherical Harmonics

We expand I(x, s) into spherical harmonics in s to obtain

$$I(x, s) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} I_l^m(x)\, Y_{lm}(s)$$

The functions $Y_{lm}$ are the customary normalized spherical harmonics of degree l and order m,

$$Y_{lm}(\theta, \phi) = P_{lm}(\cos\theta)\, e^{im\phi},$$

where the $P_{lm}$ are the associated Legendre polynomials of degree l and order m (Courant and Hilbert 1953).

Substituting the spherical harmonic expansion into the scattering equation we obtain

$$\sum_{l,m} \left\{ -\frac{1}{\kappa\rho}\, s \cdot \nabla_x\!\left[ I_l^m(x) \right] Y_{lm}(s) - I_l^m(x)\, Y_{lm}(s) + \frac{1}{4\pi}\, I_l^m(x) \int p(s, \tilde{s})\, Y_{lm}(\tilde{s})\, d\tilde{s} \right\} = 0.$$
Multiplying this equation by $Y_{l'm'}(s)$ and integrating we get

$$\sum_{l,m} \left\{ -\frac{1}{\kappa\rho}\, \nabla I_l^m(x) \cdot \int Y_{l'm'}(s)\, s\, Y_{lm}(s)\, ds - I_l^m(x)\, \delta_{ll'}\, \delta_{mm'} + \frac{1}{4\pi}\, I_l^m(x) \iint Y_{l'm'}(s)\, p(s, \tilde{s})\, Y_{lm}(\tilde{s})\, d\tilde{s}\, ds \right\} = 0$$

or, writing it in Dirac "bra-ket" notation,

$$\sum_{l,m} \left\{ -\frac{1}{\kappa\rho}\, \nabla I_l^m(x) \cdot \langle Y_{l'm'}(s) | s | Y_{lm}(s) \rangle - I_l^m(x)\, \delta_{ll'}\, \delta_{mm'} + \frac{1}{4\pi}\, I_l^m(x)\, \langle Y_{l'm'}(s) | p | Y_{lm}(s) \rangle \right\} = 0;$$

where we write $\langle X | O | Y \rangle$ for the integral

$$\int_0^{2\pi} \int_0^{\pi} X \cdot O \cdot Y\, \sin\theta\, d\theta\, d\phi.$$

This gives us a coupled set of first order PDEs for $I_l^m(x)$. If we know the coupling coefficients given by the matrix elements $\langle Y_{l'm'}(s) | s | Y_{lm}(s) \rangle$ and $\langle Y_{l'm'}(s) | p | Y_{lm}(s) \rangle$ then we can solve this system by relaxation. For graphics applications, only the first few spherical harmonics are necessary for a convincing image. We truncate after the so-called "p-wave", viz. after the l = 1 term. The next order of business is then to calculate the matrix coupling coefficients.
4.3 Matrix elements for the position

To calculate the matrix element $\langle Y_{l'm'}(s) | s | Y_{lm}(s) \rangle$ for the direction operator s, we calculate the matrix element for each component of s:

$$\langle Y_{l'm'}(s) | x | Y_{lm}(s) \rangle = \langle Y_{l'm'}(s) | \sin\theta \cos\phi | Y_{lm}(s) \rangle$$
$$\langle Y_{l'm'}(s) | y | Y_{lm}(s) \rangle = \langle Y_{l'm'}(s) | \sin\theta \sin\phi | Y_{lm}(s) \rangle$$
$$\langle Y_{l'm'}(s) | z | Y_{lm}(s) \rangle = \langle Y_{l'm'}(s) | \cos\theta | Y_{lm}(s) \rangle.$$
Now we may save a bit of work by setting

$$u = x + iy = \sin\theta\, e^{i\phi}.$$

Then

$$\langle Y_{l'm'}(s) | u | Y_{lm}(s) \rangle = \iint \left[ P_{lm}(\cos\theta)\, e^{im\phi} \right] \left[ P_{l'm'}(\cos\theta)\, e^{-im'\phi} \right] \sin\theta\, e^{i\phi}\, \sin\theta\, d\phi\, d\theta$$
$$= \left[ \int_0^{\pi} P_{lm}(\cos\theta)\, P_{l'm'}(\cos\theta)\, \sin^2\theta\, d\theta \right] \left[ \int_0^{2\pi} e^{i(m - m' + 1)\phi}\, d\phi \right].$$

Letting μ = cos θ and taking into account the Kronecker δ, the matrix element becomes:

$$\langle Y_{l'm'}(s) | u | Y_{lm}(s) \rangle = \int_{-1}^{1} P_{lm}(\mu)\, P_{l',m+1}(\mu)\, (1 - \mu^2)^{1/2}\, d\mu \qquad (4.1)$$

But from a recursion relation for the Legendre polynomials we have

$$P_{l,m+1}(\mu)\, (1 - \mu^2)^{1/2} = k_0\, P_{l+1,m} - k_1\, P_{l-1,m}$$

where

$$k_0 = \frac{(l - m + 1)(l - m + 2)}{2l + 1}, \qquad k_1 = \frac{(l + m - 1)(l + m)}{2l + 1}.$$

Using this relation in equation (4.1) we obtain

$$\langle Y_{l'm'}(s) | u | Y_{lm}(s) \rangle = k_0\, \delta_{l',l+1} - k_1\, \delta_{l',l-1}.$$

The last equality follows from the orthogonality of the Legendre polynomials. Since x = Re(u) and y = Im(u), we have

$$\langle Y_{l'm'}(s) | x | Y_{lm}(s) \rangle = \langle Y_{l'm'}(s) | u | Y_{lm}(s) \rangle$$
$$\langle Y_{l'm'}(s) | y | Y_{lm}(s) \rangle = 0.$$
Now to compute the z matrix element we get

$$\langle Y_{l'm'}(s) | z | Y_{lm}(s) \rangle = \left[ \int_0^{\pi} P_{l'm'}(\cos\theta)\, P_{lm}(\cos\theta)\, \cos\theta \sin\theta\, d\theta \right] \left[ \int_0^{2\pi} e^{i(m - m')\phi}\, d\phi \right]$$
$$= \left[ \int_0^{\pi} P_{l'm'}(\cos\theta)\, P_{lm}(\cos\theta)\, \cos\theta \sin\theta\, d\theta \right] \delta_{m'm}\, 2\pi \qquad (4.2)$$

Now from a recursion relation for the Legendre polynomial $P_{lm}$ we have

$$\mu\, P_{lm}(\mu) = k_2\, P_{l+1,m}(\mu) - k_3\, P_{l-1,m}(\mu)$$

where

$$k_2 = \frac{l + m}{2l + 1}, \qquad k_3 = \frac{l - m + 1}{2l + 1}.$$

Substituting into (4.2) we obtain

$$\langle Y_{l'm'}(s) | z | Y_{lm}(s) \rangle = \int_{-1}^{1} P_{l'm'}(\mu) \left[ k_2\, P_{l+1,m}(\mu) - k_3\, P_{l-1,m}(\mu) \right] d\mu\; \delta_{m'm}\, 2\pi = \left[ k_2\, \delta_{l',l+1} - k_3\, \delta_{l',l-1} \right] \delta_{m'm}\, 2\pi,$$

where the last equality follows from the orthogonality of the $P_{lm}$.
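The selection rules can be checked numerically. The sketch below (ours) builds the $\langle Y_{l'm'} | z | Y_{lm} \rangle$ block for the l ≤ 1 truncation by brute-force quadrature. It uses SciPy's orthonormal complex spherical harmonics, so the nonzero entries agree with the closed forms above only up to the normalization factors carried by the unnormalized $P_{lm}$ convention; the coupling pattern (l' = l ± 1, m' = m) is the same.

import numpy as np
from scipy.special import sph_harm   # renamed sph_harm_y (with swapped angles) in newer SciPy

def z_matrix(l_max=1, n_theta=200, n_phi=400):
    """Quadrature estimate of <Y_l'm' | cos(theta) | Y_lm> for l, l' <= l_max."""
    pol = (np.arange(n_theta) + 0.5) * np.pi / n_theta        # polar angle
    az = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi       # azimuth
    POL, AZ = np.meshgrid(pol, az, indexing="ij")
    dw = np.sin(POL) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)
    modes = [(l, m) for l in range(l_max + 1) for m in range(-l, l + 1)]
    M = np.zeros((len(modes), len(modes)), dtype=complex)
    for a, (lp, mp) in enumerate(modes):
        for b, (l, m) in enumerate(modes):
            # scipy convention: sph_harm(order m, degree l, azimuth, polar)
            Ylpmp = sph_harm(mp, lp, AZ, POL)
            Ylm = sph_harm(m, l, AZ, POL)
            M[a, b] = np.sum(np.conj(Ylpmp) * np.cos(POL) * Ylm * dw)
    return modes, M

modes, M = z_matrix()
print(np.round(M.real, 3))   # nonzero only where l' = l +/- 1 and m' = m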
4.4 Matrix elements for the phase integral

We assume the phase function to vary with the phase angle only. In this case we may expand the phase function into Legendre polynomials

$$p(\cos\Theta) = \sum_{k=0}^{\infty} \omega_k\, P_k(\cos\Theta)$$

and substitute into the matrix expression

$$\langle Y_{l'm'}(s) | p | Y_{lm}(s) \rangle = \iint Y_{l'm'}(s)\, p(s \cdot \tilde{s})\, Y_{lm}(\tilde{s})\, d\tilde{s}\, ds = \sum_k \omega_k \iint Y_{l'm'}(s)\, P_k(s \cdot \tilde{s})\, Y_{lm}(\tilde{s})\, d\tilde{s}\, ds. \qquad (4.3)$$

Now, $s \cdot \tilde{s}$ in polar coordinates is

$$s \cdot \tilde{s} = \cos\gamma = \cos\theta \cos\tilde{\theta} + \sin\theta \sin\tilde{\theta} \cos(\phi - \tilde{\phi}).$$

This allows us to apply Laplace's formula,

$$P_l(\cos\gamma) = \frac{4\pi}{2l + 1} \sum_{m=-l}^{l} Y_{lm}(\theta, \phi)\, Y^*_{lm}(\tilde{\theta}, \tilde{\phi}).$$

Using this identity in (4.3) we obtain

$$\langle Y_{l'm'}(s) | p | Y_{lm}(s) \rangle = \sum_k \omega_k\, \frac{4\pi}{2k + 1} \sum_{p=-k}^{k} \left[ \int Y_{l'm'}(\tilde{s})\, Y_{kp}(\tilde{s})\, d\tilde{s} \right] \left[ \int Y^*_{lm}(s)\, Y_{kp}(s)\, ds \right]$$
$$= \sum_k \omega_k\, \frac{4\pi}{2k + 1} \sum_{p=-k}^{k} \delta_{l'k}\, \delta_{m'p}\, \delta_{lk}\, \delta_{mp}$$
$$= \frac{4\pi}{2l + 1}\, \omega_l\, \delta_{ll'}\, \delta_{mm'}.$$

So the phase function matrix is diagonal with respect to spherical harmonics: no scattering occurs between different spherical harmonics. Each diagonal element is given by the Legendre expansion coefficients $\omega_k$.
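The expansion coefficients can be obtained by projecting the phase function onto the Legendre polynomials, $\omega_k = \frac{2k+1}{2}\int_{-1}^{1} p(\mu)\, P_k(\mu)\, d\mu$. A small sketch of ours for the Rayleigh phase function with ω₀ = 1; Gauss-Legendre quadrature makes the projection exact for polynomial phase functions.

import numpy as np
from numpy.polynomial import legendre

def legendre_coefficients(p, k_max, n_quad=64):
    """omega_k = (2k+1)/2 * integral_{-1}^{1} p(mu) P_k(mu) dmu."""
    mu, w = legendre.leggauss(n_quad)
    vals = p(mu)
    coeffs = []
    for k in range(k_max + 1):
        Pk = legendre.legval(mu, [0.0] * k + [1.0])   # P_k evaluated at the nodes
        coeffs.append((2 * k + 1) / 2.0 * np.sum(w * vals * Pk))
    return np.array(coeffs)

rayleigh = lambda mu: 0.75 * (1.0 + mu**2)
print(legendre_coefficients(rayleigh, 4))   # approximately [1.0, 0.0, 0.5, 0.0, 0.0]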
§5 Generating density models
There are many ways to generate volume density models for the above procedure. Voss (1983) has used fractal densities with great success. We show a number of images based on these. In our images we follow Voss in setting the densities with 1/f noise generated by a 3 dimensional FFT. They make convincing clouds. Unfortunately, it is unlikely that this method will elicit realistic dynamical behavior.
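A minimal sketch of this construction, with our own choices for the spectral exponent and normalization (the paper gives no parameter values): random phases with a 1/f amplitude spectrum, inverse-transformed to a real density and clipped to be non-negative.

import numpy as np

def fractal_density(n=64, beta=1.0, seed=0):
    """3D '1/f' noise: amplitude ~ 1/f**beta with random phases, via an inverse FFT."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=1.0 / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    f = np.sqrt(kx**2 + ky**2 + kz**2)
    f[0, 0, 0] = 1.0                                  # avoid dividing by zero at DC
    amplitude = 1.0 / f**beta
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n, n))
    rho = np.real(np.fft.ifftn(amplitude * np.exp(1j * phase)))
    rho -= rho.mean()
    rho /= rho.std()
    return np.clip(rho, 0.0, None)                    # keep only positive densities

The resulting array can be fed directly to the rendering sketches in section 3.2.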
A second set of models which appears promising is Reeves' particle systems (Reeves 1983). We can use his techniques to fill the density array by interpolation. Ray tracing can then be used to render the array. Flows of ODEs and PDEs can be used to model the action of flowing water and to model hair and fuzzy surfaces, as well as trees. These methods are obvious generalizations of Reeves' method.

Finally we mention actual physical models of the atmosphere to generate motion studies of clouds.
5.1 A Cloud Model for Generating Density Functions
A numerical model for cumulus convection is used to generate three-dimensional optical density functions. The model incorporates the equations of motion, continuity, condensation, and evaporation. It models the convective motions of the atmosphere, the latent heat of vaporization of water, and frictional effects. Coriolis effects due to the rotation of the Earth are ignored.

The cloud simulation commences in a convectively unstable atmosphere with high relative humidity. A constant heat source is applied at the base of the model, representing sunlight heating the earth. In the model, a warm layer of air forms close to the ground, and starts to rise. The cloud starts forming as soon as moist air rises enough to become supersaturated. The output of the model is the mixing ratio of liquid water in the atmosphere at each 3D grid point. The liquid water mixing ratio is directly interpreted as the optical density of the cloud. An image is then generated from these optical densities using the diffuse rendering algorithm.

The following symbols are used to represent atmospheric quantities:

    u, v, w   wind velocities in the x, y, and z directions, respectively
    V         the velocity vector consisting of the components (u, v, w)
    F         the friction vector consisting of the components (F_x, F_y, F_z)
    θ         potential temperature
    q         total water mixing ratio
    q_l       liquid water mixing ratio
Nine equations define the model. The first three equations define the acceleration of an air parcel. Acceleration is determined from the momentum of the airflow, frictional effects, and from buoyancy.

$$\frac{\partial u}{\partial t} = -\mathbf{V} \cdot \nabla u - F_x$$
$$\frac{\partial v}{\partial t} = -\mathbf{V} \cdot \nabla v - F_y$$
$$\frac{\partial w}{\partial t} = -\mathbf{V} \cdot \nabla w - F_z + \theta$$
The buoyancy term is proportional to the potential temperature of the air parcel, θ. Potential temperature is defined as the temperature an air parcel would have if it were brought down to sea level. It is more convenient to use in the model than absolute temperature for the following reason: as an air parcel ascends, its temperature will decrease due to the decreasing pressure, and must be recalculated at each altitude of the air parcel. However, the potential temperature of the air parcel will remain constant. Therefore, it is computationally more efficient to use potential temperature instead of absolute temperature in the model. Potential temperature effectively measures the amount of heat energy contained in an air parcel.

The change in potential temperature is determined by the advection of temperature into the local region and the heat released by condensing cloud vapor. The term "advection" is used to describe the change of a parameter at a fixed location due to transportation by the winds. Thus an increase in potential temperature due to transportation of warm air into the local region is called advective warming. An external heat source such as sunlight is represented by the variable Q:

$$\frac{\partial \theta}{\partial t} = -\mathbf{V} \cdot \nabla\theta + \frac{L_v}{c_p} \frac{\partial q_l}{\partial t} + Q.$$
$L_v$ is the latent heat of vaporization of water. $c_p$ is the specific heat of air at constant pressure.

Frictional effects are approximated by a simple relation yielding an exponential decay of wind velocities with time:

$$\mathbf{F} = \frac{\mathbf{V}}{t_f}$$

where $t_f$ is the friction timescale.
The equation of continuity constrains the motions of the air parcels. The requirement is that air is neither created nor destroyed at any given location, which implies that

$$\nabla \cdot \mathbf{V} = 0.$$

The density of air is assumed to be constant over the scale of the model. A corollary of this requirement is that the upward velocity over any horizontal plane in the model must average to zero.

The change in water mixing ratios is determined by the advection of water and the amount of evaporation and condensation which takes place. Evaporation is assumed to take place until the air is saturated or all the liquid water is evaporated. Condensation takes place whenever the air is supersaturated. The change in total water content is simply determined by the advection of water:

$$\frac{\partial q}{\partial t} = -\mathbf{V} \cdot \nabla q$$

The saturation mixing ratio at any given level is an exponential function of altitude:

$$q_s = A\, e^{-\alpha z}$$
where A and α are exponential scaling constants. $q_s$ is interpreted to be the mass ratio of water to air at saturation for a given volume of air. The constants are determined by the boundary conditions that $q_s = 0.02$ at the bottom level of the model and $q_s = 0.002$ at the top of the model. The liquid water mixing ratio is determined by the amount of water present in the air parcel in excess of the saturation mixing ratio:

$$q_l = \max(q - q_s,\ 0).$$
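The moisture and temperature equations above translate directly into the kind of forward-differencing update mentioned in section 6. The sketch below is our own reduced version of one time step: the velocity field is prescribed rather than advanced by the momentum and continuity equations, advection uses centered differences on a periodic grid, and every constant is a hypothetical placeholder rather than a value from the paper.

import numpy as np

def advect(field, V, dx):
    """-(V . grad) field, with centered differences on a periodic grid."""
    u, v, w = V
    gx = (np.roll(field, -1, 0) - np.roll(field, 1, 0)) / (2 * dx)
    gy = (np.roll(field, -1, 1) - np.roll(field, 1, 1)) / (2 * dx)
    gz = (np.roll(field, -1, 2) - np.roll(field, 1, 2)) / (2 * dx)
    return -(u * gx + v * gy + w * gz)

def step_moisture(theta, q, V, z, dt, dx, Q=0.0, Lv_over_cp=2.5, A=0.02, alpha=3.0):
    """One explicit Euler step of the theta and q equations, then diagnose q_l.
    z is the altitude of each grid point (broadcastable to the field shape)."""
    q_s = A * np.exp(-alpha * z)                  # saturation mixing ratio q_s(z)
    q_l_old = np.maximum(q - q_s, 0.0)
    q_new = q + dt * advect(q, V, dx)             # dq/dt = -(V . grad) q
    q_l_new = np.maximum(q_new - q_s, 0.0)        # q_l = max(q - q_s, 0)
    dql_dt = (q_l_new - q_l_old) / dt             # latent heating from condensation
    theta_new = theta + dt * (advect(theta, V, dx) + Lv_over_cp * dql_dt + Q)
    return theta_new, q_new, q_l_new

The returned q_l array is what would be handed to the renderer as the optical density.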
An important advantage of using a physical model for clouds is that the cloud evolves realistically with time. This approach lends itself to realistic cloud animation whereas other modelling approaches do not automatically produce realistic cloud behavior. An animation of an evolving cumulus cloud is discussed in the next section.
§6 Computer Results
Figures 1 through 4 show the low albedo rendering technique with fractal volume densities. Figure 1 is defined on a 16 × 16 × 16 grid, while 2 and 3 show a cloud fractally generated on a 128 × 128 × 16 grid. These frames were computed at 512 × 512 resolution on an IBM 4341 processor. CPU times ranged from 1 to 4 hours. Figure 4 shows a fractal cloud in combination with a fractally generated mountain at 256 × 250 resolution; on the same machine this frame consumed 6 hours of CPU time.

Figures 5 through 10 show a cumulus cloud at various stages of development. The optical densities were calculated using the above model on a VAX 11/780 using a three dimensional grid of 10 by 10 by 20 grid elements. A simple forward-differencing scheme was used to integrate the above differential equations in time. Each time step took around 10 CPU seconds to compute, representing roughly one second of cloud evolution. The cloud was allowed to evolve for several minutes to generate the images shown. Rendering was done on an IBM 4341 at 512 × 512 resolution, with CPU times of 2 hours each.
§7 Summary

This paper has presented new methods for the synthesis of images which contain volume densities. We have found that single scattering is a poor approximation for clouds when more general viewing geometries are used. We have offered a new method for solving the scattering equations in an approximate manner suitable for computer graphics. We have also presented equations which will model the dynamic behavior of clouds.
§8 References
Anselone, P.M., and Gibbs, A.G., 1974: Convergence of the discrete ordinates method for the transport equation, Constructive and Computational Methods for Differential and Integral Equations, Springer-Verlag Lecture Notes in Mathematics 430.

Appel, A., 1968: Some techniques for shading machine renderings of solids, 1968 SJCC, 37-45.

Blinn, J.F., 1982: Light reflection functions for simulation of clouds and dusty surfaces. Proc. SIGGRAPH 82. In Comput. Gr. 16, 3, 21-29.

Chandrasekhar, S., 1950: Radiative Transfer, Oxford University Press.

Clark, T.L., 1979: Numerical simulations with a three-dimensional cloud model: lateral boundary condition experiments and multicellular severe storm simulations. J. of the Atmospheric Sciences, 36, 2191.
Courant, R. and Hilbert, D., 1953: Methods of Mathematical Physics v.1, Interscience, New York.

Dahlquist, G., and Bjork, A., 1974: Numerical Methods, Prentice Hall, New York.

Goldstein, R.A. and Nagel, R., 1971: 3D visual simulation, Simulation 16, 25-31.

Kajiya, J.T., 1983: Ray tracing procedurally defined objects, SIGGRAPH 83, Comput. Gr. 17, 3, 91-102.

Kajiya, J.T., 1982: Ray tracing parametric patches, SIGGRAPH 82, Comput. Gr. 16, 3, 245-254.

Keller, H.B., 1960a: Approximate solutions of transport problems, SIAM J. Appl. Math. 8, 43-73.

Keller, H.B., 1960b: On the pointwise convergence of the discrete ordinates method, SIAM J. Appl. Math. 8, 560-567.
Max, N., 1983: Panel on the simulation of natural phenomena, Proc. SIGGRAPH 83, In Comput. Gr. 17, 3, 137-139.

Schlesinger, R.E., 1975: A three-dimensional numerical model of an isolated deep convective cloud: Preliminary results. J. of the Atmospheric Sciences, 32, 934-957.

Schlesinger, R.E., 1978: A three-dimensional numerical model of an isolated thunderstorm, part I: comparative experiments for variable ambient wind shear. J. of the Atmospheric Sciences, 35, 690-713.

Schlesinger, R.E., 1980: A three-dimensional numerical model of an isolated thunderstorm, part II: dynamics of updraft splitting and mesovortex couplet evolution. J. of the Atmospheric Sciences, 37, 395.
Simpson, J., Van Helvoirt, G., McCumber, M., 1982: Three-dimensional simulations of cumulus congestus clouds on GATE day 261. J. of the Atmospheric Sciences, 39, 126.

Reeves, W.T., 1983: Particle systems--a technique for modeling a class of fuzzy objects, ACM Trans. on Graphics, 2, 2.

Voss, R., 1983: Fourier synthesis of gaussian fractals: 1/f noises, landscapes, and flakes, Tutorial on State of the Art Image Synthesis v.10, SIGGRAPH 83.

Wallace, J.M., and Hobbs, P.V., 1977: Atmospheric Science, Academic Press, pp. 359-407.

Whitted, T., 1980: An improved illumination model for shaded display, Comm. ACM 23, 343-349.