I'll explain how I implemented the procedural clouds using 3D hardware, and some ways to make the technique scale across a range of processor and graphics card performance and capabilities. First, though, I'd like to point out some observations about clouds and some of their properties. (Yes, I did eventually go look at the sky before putting together the demo.) We won't be able to model all of these things, but it's worth noting them to give us a starting point. Fortunately, since I live in Oregon, there was no shortage of clouds to snap some pictures of. I've included a few below, along with some notes on each.
Figure 1A
• The sky behind the clouds has some color to it: usually blue, though it could be black at night (see Figure 1). Sunsets and sunrises have gradients that go from yellow/red to blue, as we'll see in a later photo.
• Thin clouds are white. As they get thicker, they turn gray. This isn't just srcalpha-invsrcalpha transparency; rather, it indicates that there are two things to model: the amount the background is being obscured, and the amount of light the clouds emit in the direction of the viewer (which is light reflected/refracted from all directions).
• Clouds are, for the most part, randomly turbulent. The shape of the patterns can change greatly, often with altitude. Low-lying clouds tend to be thick and billowing, while higher-level clouds tend to be thin and uniform.
• Clouds at lower altitudes tend to obscure light from above more than they reflect light from below; they are also usually thicker, and thus usually darker. Clouds at higher levels are nearly always whiter.
Figure 1B
• As you turn toward the sun, the clouds tend to be brighter.
• There are sometimes visible transitions in cloud patterns, such as along a weather front.
• Atmospheric haze makes the sky and clouds in the distance fade out to a similar color.
Figure 1C
• Clouds have thickness, and thus take on light. Those away from the sun tend to be brighter on one side and darker on the other.
• Clouds at sunrise and sunset tend to reflect light from below more than transmit light from above.
• At sunrise and sunset, more colors of the spectrum are reflected, thus the sky color tends to be a
gradient from blue to yellow or red, and the clouds tend to be lit with light of orange or red.
• The sky's cloud layer isn't a plane (or a cube, for that matter), but rather a sphere. We just look at it from close to its circumference, and so often mistake it for a plane.
We're certainly a long way from modeling all of these things. Also, this doesn't begin to list the observations we'd make if we were able to fly up, into, and through the clouds. Limiting ourselves to a view from the ground, we'll see how many we can model in a real-time application later. First, some background on some of the techniques we'll use.
Making Noise
If you've been reading this publication and others on a regular basis, you've undoubtedly heard talk of procedural textures, and in particular, procedural textures based on Perlin noise. Perlin noise refers to the technique devised by Ken Perlin of mimicking various natural phenomena by adding together noise (random numbers) of different frequencies and amplitudes. The basic idea is to generate a bunch of random numbers using a seeded random number generator (seeded so that the same seed reproduces the same results), do "some stuff" to them, and make them look like everything from smoke to marble to wood grain. It sounds like it shouldn't work, but it does.
This is best illustrated with an example. Take a rocky landscape. Looking at its altitude variation, low frequencies exist (rolling hills), as well as medium frequencies (boulders, rocks) and very high frequencies (pebbles). By creating a random pattern at each of these frequencies and specifying their amplitudes (e.g. mountains between 0 and 10,000 feet, boulders between 0 and 100 feet, pebbles under 2 inches), we can add them together to get the landscape. (See Figure 2 for a one-dimensional example of this.)
Figure 2 - Waves of different frequencies and amplitudes being summed together. In this case a regular
function because the input functions are regular.
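To make the idea concrete, here's a minimal sketch of summing four octaves of one-dimensional value noise, using the same linear interpolation discussed below. This is only an illustration; the function names and seeding scheme are my own, not taken from the demo.

```python
import math
import random

def lattice_value(i, seed):
    # Deterministic pseudo-random value in [0, 1) for lattice point i,
    # so the same seed always reproduces the same noise.
    return random.Random(seed * 1000003 + i).random()

def value_noise_1d(x, freq, seed):
    # Sample the lattice at the given frequency and linearly
    # interpolate between the two surrounding lattice points.
    xf = x * freq
    i0 = math.floor(xf)
    t = xf - i0
    r0 = lattice_value(i0, seed)
    r1 = lattice_value(i0 + 1, seed)
    return r0 * (1.0 - t) + r1 * t

def summed_octaves(x, octaves=4, base_seed=17):
    # Each octave doubles the frequency and halves the amplitude.
    total, amplitude, freq = 0.0, 0.5, 1.0
    for i in range(octaves):
        total += amplitude * value_noise_1d(x, freq, base_seed + i)
        amplitude *= 0.5
        freq *= 2.0
    return total  # stays in [0, 1) since 1/2 + 1/4 + ... < 1
```

Plotting `summed_octaves` over a range of x values produces a curve much like the summed wave in Figure 2, only irregular because the inputs are random.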
Taking the one dimensional example to two dimensions, we'd get a bitmap that could represent a
heightmap, or in our case cloud thickness. Taking it to three dimensions, we could either be representing a
volume texture, or the same two dimensional example animated over time.
Aside from the random number generator, the other thing necessary is a way to interpolate points between
sample values. Ideally, we'd want to use a cubic interpolation to get curves like those in the graph, but we
are going to use a simple linear interpolation. This won't look as good, but will let us use the hardware to do
it.
As I mentioned earlier, what intrigued me about the original software-rendered demo was that many of the steps involved (smoothing noise, interpolating between noise updates, combining octaves) seemed to carry a lot of per-pixel cost, cost that could instead be handled by the graphics card using alpha blending, bilinear filtering, and rendering to texture surfaces. A remaining question was whether the simple four-tap sampling of the bilinear filter would be adequate for the smoothing. I figured I'd attempt it, and as I think you'll see, the results are acceptable.
In order to render to a texture surface, one has to create a surface that can be both used as a render target
and as a texture (there's a DirectDraw surface caps flag for each). If the application is using a Z buffer, it
must attach one to the texture surface as well.
Then for each frame, the application does one BeginScene/EndScene pair per render-target. After rendering
to the texture and switching back to the back buffer, the application is free to use the texture in that scene.
I use this technique pretty extensively in this demo, but there's no reason it can't be done by rendering to the back buffer and then blitting to a texture surface for later use. In fact, OpenGL doesn't expose the ability to render to a texture, so you'll have to blit to the textures if this is your API of choice. This is also the workaround used on hardware that doesn't support render-to-texture.
We need to generate noise at several different frequencies. I chose to do four, though fewer might suffice on lower-end systems. By representing the four octaves as four textures of different resolutions (say 32x32 through 256x256) which eventually all get upsampled to the same size (by mapping them to textures of larger sizes), I can achieve the desired result automatically. The bilinear filtering does the interpolation for me, and a smaller texture simply ends up being a lower frequency than a larger one. A cubic filter would better approximate the curve I should get, but the results from the bilinear filter are acceptable.
The noise textures are updated at different periods, with the lowest frequency, stored in the 32x32 texture,
being updated the least frequently (I used an interval of 7 seconds). The higher the frequency, the more
frequently it is updated. This makes sense if you think about it. The lowest frequency represents large,
general cloud formations which change infrequently. The highest frequency represents small, wispy bits in
the clouds which change more rapidly. You can see this in action if you ever see time-lapse photography of
clouds.
I used frequencies that were multiples of two (for the sake of simplicity) and because the results were
pretty good. However, it's not necessary to do so. Interesting results can be achieved by using frequency
combinations other than just multiples of two times the input frequency.
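The bookkeeping for per-octave update periods can be sketched as below. The 7-second period for the lowest octave is from the text; the class structure and the halving of the period with each octave are illustrative choices of mine, not the demo's actual code.

```python
import random

class NoiseOctave:
    """One octave's noise texture, regenerated on its own period."""
    def __init__(self, size, period, seed):
        self.size, self.period = size, period
        self.rng = random.Random(seed)
        self.latest = self._generate()
        self.previous = self.latest
        self.last_update = 0.0

    def _generate(self):
        # A grayscale "texture": size*size random values in [0, 1).
        return [self.rng.random() for _ in range(self.size * self.size)]

    def tick(self, now):
        # When the period elapses, save the old noise as the previous
        # version and roll a new target to interpolate toward.
        if now - self.last_update >= self.period:
            self.previous = self.latest
            self.latest = self._generate()
            self.last_update = now

# Four octaves, 32x32 up to 256x256; higher frequencies update more often.
octaves = [NoiseOctave(32 << i, 7.0 / (1 << i), seed=i) for i in range(4)]
```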
Before generating the smoothed noise, we save off the previous version generated at the last update. We'll
use this when interpolating between updates in our next step.
Interpolant = TimeSinceLastUpdate/UpdatePeriod
CurrentOctave = PreviousOctave*(1-Interpolant) + LatestOctave*Interpolant
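Per texel, that blend is just the following (a sketch; the clamp on the interpolant is my own addition, to hold at the latest noise if an update arrives late):

```python
def blend_between_updates(previous, latest, time_since_update, update_period):
    # CurrentOctave = PreviousOctave*(1 - t) + LatestOctave*t
    t = min(time_since_update / update_period, 1.0)
    return [p * (1.0 - t) + l * t for p, l in zip(previous, latest)]
```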
As I was using a series of frequencies that are multiples of two, I chose to make their contributions
multiples of two, such that each octave’s contribution was half that of the next lowest frequency. This can
be expressed as:
Color = 1/2 Octave0 + 1/4 Octave1 + 1/8 Octave2 + 1/16 Octave3 +….
This was easy to code and produced good results. However, the weighting could certainly be changed to
change the look of the end result.
I chose to render this in multiple passes, starting with the highest frequencies and working my way down to the lowest ones, each time modulating the render target's contents with a monochrome color representing an intensity of 0.5.
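Note that modulating the accumulated buffer by 0.5 before each lower-frequency octave is added reproduces exactly the weights in the formula above; it's Horner's rule in disguise. A scalar sketch to verify the equivalence:

```python
def weighted_sum_direct(octaves):
    # Color = 1/2*Octave0 + 1/4*Octave1 + 1/8*Octave2 + 1/16*Octave3 + ...
    return sum(o / (2 ** (i + 1)) for i, o in enumerate(octaves))

def weighted_sum_multipass(octaves):
    # Start with the highest frequency; each pass modulates the buffer
    # by 0.5 and then adds the next lower-frequency octave.
    buffer = 0.0
    for o in reversed(octaves):
        buffer = buffer * 0.5 + o
    return buffer * 0.5  # final modulate by 0.5
```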
Unfortunately, we start to run into some issues with dynamic range and precision. I'll talk about these later,
but it's worth noting here that we multiply the resultant texture by itself to increase the dynamic range and
contrast.
At this point, the noise looks pretty good (Figure 5 shows the sum of the four octaves of noise.). It's very
turbulent, is animated, and the artifacts from the simple linear interpolation aren't too apparent. However,
we have some more to do before it looks like clouds.
Unfortunately, we lose some dynamic range here. We can use a modulate2X or equivalent to get it back,
but we still lose precision.
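Per texel, the contrast boost and the modulate2X recovery mentioned above come down to this (scalar values in [0, 1]):

```python
def square_for_contrast(t):
    # Modulating the texture by itself: darks fall off faster than
    # brights (0.3 -> 0.09, 0.9 -> 0.81), boosting contrast.
    return t * t

def modulate_2x(t):
    # modulate2X doubles and clamps, recovering overall brightness lost
    # to the halving passes; the precision already lost stays lost.
    return min(t * 2.0, 1.0)
```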
Mapping the cloud texture onto a sky dome is attractive, because the mapping is easy to do (a planar mapping will suffice), with none of the pinching that occurs at the poles of a tessellated sphere. Because the bilinear filtering has been wrapping when sampling texels, the cloud texture tiles nicely. We can vary the amount of tiling to change the look of the clouds.
The rest is fairly easy. Fill the background to the desired sky color (blue in this case), blend the clouds over
it, add a glow around the sun, and of course all 3D graphics demos are required to have a lens flare. As an
interesting effect, we can use the cloud texture to vary the amount of lens flare. Since we know how the
texture maps to the sky dome, and the angle at which the sun is coming, we can figure out exactly which
texel in the cloud texture maps to the sun's position on screen. We can then upsample that over a pair of
triangles and use it as a mask for the lens flare.
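A sketch of that texel lookup, assuming a simple planar mapping of the normalized sun direction onto the texture; the mapping and names here are stand-ins for whatever mapping the sky dome actually uses, not the demo's code:

```python
def sun_texel(sun_dir, tiling, tex_size):
    # Hypothetical planar mapping: project the normalized sun direction
    # onto the XZ plane, scale by the tiling factor, and wrap, then
    # convert the wrapped (u, v) into integer texel coordinates.
    u = (sun_dir[0] * tiling) % 1.0
    v = (sun_dir[2] * tiling) % 1.0
    return int(u * tex_size), int(v * tex_size)
```

Sampling the cloud texture at that texel gives the cloud density covering the sun, which can then be upsampled over a pair of triangles as the lens flare mask.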
I implemented two different methods of blending the clouds. The first is just a straight transparency, which
looks okay, but doesn't mimic the obscuring of the light that happens. The second method does this, via a
second pass of the cloud texture which has been clamped at a different level. Figure 7 shows off our final
result.
Figure 7: The cloud texture mapped onto a sky dome.
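A scalar sketch of the two blending passes per texel; the threshold and darkening factor here are illustrative guesses of mine, not the demo's actual values:

```python
def cloud_over_sky(sky, cloud, density, threshold=0.4):
    # Pass 1: straight transparency of cloud color over the sky.
    color = sky * (1.0 - density) + cloud * density
    # Pass 2: clamp the density at a different level to approximate
    # thickness, and darken where thick cloud obscures light from above.
    thickness = max(density - threshold, 0.0) / (1.0 - threshold)
    return color * (1.0 - 0.5 * thickness)
```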
There are several improvements we could make. For one, we could do an embossing effect, using an extra pass to brighten the clouds on the sides facing the sun and darken them on the sides away from it. However, this would require that we not tile the texture, which in turn would probably mean we'd need a higher-resolution texture to begin with. Alternatively, we could do an additional render-to-texture pass to tile the texture out onto one that would then be mapped once onto the whole sky.
Multiple cloud layers could be mapped onto another geometry mesh for added detail. This could then scroll
at a different velocity (as many games do with static cloud textures).
Another thing that would be interesting is to have the clouds interact with the environment, such as clouds
parting around a mountain peak, or being disturbed by a plane. This could be done by a combination of
deforming the mesh the clouds are mapped onto and modifying the texture itself. However, it might take a
lot of work to get it to look just right.
Another improvement would be to add a gradient background for sunsets and sunrises, and tint some
portion of the clouds.
Finally, something we could do to make the demo altogether more natural and random looking is to
animate any (or all) of the various parameters over time, using a smoothed noise function. This could be
used to animate the cloudiness factor to vary the amount of cloud over time, or to animate the speed of the
clouds, or the turbulence. The only limit is your imagination (or in my case, the time to implement it all)!
Multi-Texture Meltdown
As I alluded to earlier, an unfortunate problem with any effect that requires a significant amount of multi-texture and multi-pass rendering is the lack of precision, which can quickly result in visual artifacts. Where possible, 32-bit textures can be used to minimize (but not always eliminate) this problem. At the same time, it's important to do so only where necessary, to minimize texture memory consumption.
Another problem is the fact that the dynamic range of textures is limited to the 0 to 1 range. This makes it
difficult when dealing with anything that gets amplified over multiple stages. For example, the higher
frequency noise textures ended up only contributing one or two bits to the end result due to this problem.
Making it Scale
If you're writing games for the PC platform, being able to implement a snazzy effect isn't enough. You also
have to be able to make it scale down on basic machines or on video cards with less available memory.
Fortunately, this procedural cloud technique lends itself well to scaling in several respects.
On the lowest-end systems, you can either use a static texture or generate the texture only once. Other areas for scalability include updating the noise less frequently, using fewer octaves, or using lower-resolution or lower color depth textures.
Doing it Yourself
Generating procedural clouds can allow for unique skies that change over time and with other factors in the
environment. This can improve the look of skies over static cloud textures that never change with time.
Additionally, they can save on storage, or download size for Internet applications.
Hopefully, some of the techniques presented here will allow you to implement similar things in your
applications.
Additional Resources
• Texturing and Modeling: A Procedural Approach, second edition. David S. Ebert, editor. AP
Professional, 1994. ISBN 0-12-228730-4
• Hugo Elias & Matt Fairclough's procedural cloud demo -
http://freespace.virgin.net/hugo.elias/models/m_clouds.htm
• Haim Barad's Procedural Texture Using MMX article -
http://www.gamasutra.com/features/programming/19980501/mmxtexturing_01.htm
• Ken Perlin's Noise Machine website - http://www.noisemachine.com/
• Kim Pallister's article, “Rendering to Texture Surfaces Using DirectX7”
http://www.gamasutra.com/features/19991112/pallister_01.htm