17. Physically Based Reflectance for Games

Transcription

1
Physically-Based Reflectance
for Games
Naty Hoffman (Naughty Dog)
Jan Kautz (University College London)
Dan Baker (Firaxis Games)
The latest version of these course notes, together with other supplemental material,
can be found on the course web site at
http://www.cs.ucl.ac.uk/staff/J.Kautz/GameCourse/
2
Agenda
• 8:30 - 8:40: Introduction (Naty)
• 8:40 - 9:15: Reflectance (Naty)
• 9:15 - 10:15: Game Development (Dan, Naty)
• 10:15 - 10:30: Break
• 10:30 - 11:15: Rendering (Point Lights) (Naty, Dan)
• 11:15 - 12:00: Rendering (Environment Maps) (Jan)
• 12:00 - 12:15: Conclusions / Summary (All)
3
Physically-Based Reflectance
for Games
8:30 - 8:40: Introduction
Naty Hoffman
4
Motivation
These images look noticeably better, “more real” than most rendered images,
certainly more than current real-time rendered images. In games, the ideal is to
achieve the same level of visual realism. This is similar to the goal of film rendering,
but there are some important differences.
5
Motivation for Game Developers
• Understand physical principles of reflectance
• Physically principled rendering methods
– Not “physically correct” (no such thing)
• A means, not a goal
– In games, like film, the goal is to “look right”
– Under unpredictable conditions, without tweaks
– Physically principled methods can assist
6
Motivation for Academic Community
• Realistic real-time rendering techniques are
a major area of research
• Often, this research is underutilized in the
game development community
• Understanding the constraints on rendering
techniques used in games can help develop
more directly usable techniques
7
Focus of this Course
Let’s take a second look at these scenes we saw in the ‘motivation’ section.
8
Focus of this Course
• There are three main phenomena relevant to
rendering these scenes
– Light is emitted
• By sun, artificial lights, etc.
– Light interacts with the rest of the scene
• Surfaces, interior volumes, particles, etc.
– Finally, light interacts with a sensor
• Human eye or camera
Due to time constraints, we have chosen to focus in this course on a specific aspect
of the physical phenomena underlying the behavior of light in the scene.
9
Focus of this Course
• There are three main phenomena relevant to
rendering these scenes
– Light is emitted
• By sun, artificial lights, etc.
– Light interacts with the rest of the scene
• Surfaces, interior volumes, particles, etc.
– Finally, light interacts with a sensor
• Human eye or camera
We focus on the interaction of light with the scene,
10
Focus of this Course
• There are three main phenomena relevant to
rendering these scenes
– Light is emitted
• By sun, artificial lights, etc.
– Light interacts with the rest of the scene
• Surfaces, interior volumes, particles, etc.
– Finally, light interacts with a sensor
• Human eye or camera
Specifically with solid objects, not atmospheric particles or other participating media.
11
Focus of this Course
• There are three main phenomena relevant to
rendering these scenes
– Light is emitted
• By sun, artificial lights, etc.
– Light interacts with the rest of the scene
• Surfaces, interior volumes, particles, etc.
– Finally, light interacts with a sensor
• Human eye or camera
We will focus most on surface interactions.
12
Focus of this Course
• Reflectance: interaction of light with a single surface point
• Interior interactions, interreflections, etc. abstracted into a reflectance model
The reflectance model relates the light exitant from a point to the light incident at
that point. Here we see light reflected directly from the surface, as well as light which
penetrates and undergoes some scattering before being re-emitted. All the light is
coming out of the small yellow circle, which we abstract as a single surface point.
The fundamental assumption here is that light entering at other points does not
affect the light exiting this one, which lets us abstract the relationship between light
entering and exiting at that point into a reflectance model.
13
Focus of this Course
• Reflectance abstraction tied to scale
– Abstracted phenomena must occur below the
scale of observation
• For example, in satellite imagery forests are
often treated as reflectance models
– Interreflection and occlusion of light by trees,
branches and leaves are abstracted
14
Focus of this Course – Physics
• Geometric optics
– Light travels in straight lines (rays)
– No wave effects (diffraction, interference,
polarization, etc.)
• No emission effects
– No thermal emission (incandescence)
– No fluorescence, phosphorescence, etc.
15
Phenomena Not Covered
Visible-scale translucency, global illumination, thin-film interference, diffraction
(IMAGES BY H. W. JENSEN; M. KO; H. HIRAYAMA, K. KANEDA, H. YAMASHITA, Y. YAMAJI, AND Y. MONDEN; P-P. SLOAN)
These are all outside the domain of our course. There are real-time techniques for
rendering these phenomena, which can be researched by developers interested in
incorporating them into their games.
16
Physically-Based Reflectance
for Games
8:40 - 9:15: Reflectance
Naty Hoffman
17
Reflectance
• Types of Reflectance
• Reflectance Theory
• Reflection Model Foundations
In this section, we will first discuss various types of reflectance from a more intuitive,
visual or qualitative standpoint. Afterwards we will go into the quantitative aspects of
reflection modeling – first covering the theory behind reflection models and then
discussing the building blocks from which most reflection models are built.
18
Types of Reflectance
IMAGE BY M. ASHIKHMIN,
S. PREMOŽE AND P. SHIRLEY
Here we see a montage of everyday objects exhibiting various types of reflectance.
In this section we will go over the basic types of reflectance, and qualitatively
describe the physical phenomena underlying their salient visual characteristics.
19
Types of Reflectance
• Mirror Surfaces
– Substances
• Metals, Homogeneous and Inhomogeneous Dielectrics
– Surface and Body Reflectance
• Non-Mirror Surfaces
– Surface Roughness and Scale
– Microscopically Rough Surfaces
– Structured Surfaces
When discussing types of reflective surfaces such as we have just seen, we will first
discuss mirror (effectively perfectly smooth) surfaces, and then other kinds of
surfaces.
When discussing mirror surfaces we will discuss how various substances (metals,
homogeneous dielectrics, inhomogeneous dielectrics) reflect light differently, and
the difference between surface and body (or volume) reflectance.
20
Mirror Surfaces
• Perfectly smooth surface
• Incident ray of light only reflected in one direction
(Figure: incident and reflected rays at equal angles θi about the surface normal N.)
The geometrically simplest surface is a perfectly smooth, or mirror surface. In this
case a ray of incident (incoming) light is reflected in a single direction, which is the
direction of incidence mirrored about the surface normal N.
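As a concrete illustration (our addition, not part of the original notes), the mirrored direction can be computed from the incident ray direction and the unit normal; a minimal C++ sketch with an assumed Vec3 type:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator*(float s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Reflect an incident direction d (unit length, pointing toward the surface)
// about the unit surface normal n; the result makes the same angle with n as d.
Vec3 reflect(const Vec3& d, const Vec3& n) {
    return d - 2.0f * dot(d, n) * n;
}
```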
21
Perfectly Smooth?
• Of course, in nature no surface can actually be
perfectly smooth
• However, irregularities significantly smaller than
visible light wavelengths (~400-700nm) can be
ignored
Perfect smoothness is an unrealizable abstraction – if nothing else, the individual
atoms of the surface form ‘bumps’. However, because visible light has wavelengths
in a well-defined range (roughly 400 to 700 nanometers), any surface which only
has irregularities below this scale can be considered ‘perfectly smooth’ for the
purpose of reflecting visible light. Note that a surface which can be treated as
‘perfectly smooth’ for reflecting visible light may not necessarily be treated so for
reflecting shorter-wavelength electromagnetic radiation such as ultraviolet or X-rays.
22
Mirror Surfaces
• Small, bright lights reflect as tiny bright spots
• Fine environment details can be seen in reflection
What are the main visual characteristics of a mirror surface? Individual small, bright
light sources are reflected in much the same way they would appear if we were to
look at them directly, namely as small, extremely intense spots. In general, the
environment is reflected in a recognizable manner, even fine detail can be seen in
the reflection.
23
Substances
• Lacking any structural detail, smooth
surfaces are differentiated by their substance
• We will distinguish between two main types:
– Metals, or conductors
– Dielectrics, or insulators
• Semiconductors in theory form a third type,
but we will not discuss them
Besides their surface structure or geometry, surfaces are differentiated by their
substance or chemical composition. For reflectance purposes, it is useful to classify
such substances into two broad groups: metals (conductors) and dielectrics
(insulators), since their visual properties differ significantly. Semiconductors form a
third class, which we will not discuss, firstly because they do not often appear in
bulk form in game scenes, and secondly because their visual properties are often in
between those of metals and dielectrics and thus can be interpolated from an
understanding of both.
24
Reflectance
• More precisely defined later, for now “the
percentage of incident light which is
reflected”
– Rather than transmitted through the surface,
absorbed within it, etc.
• We will only consider reflectance values over
the visible range of frequencies
Reflectance outside the visible range can be significantly different, but this is not of
interest for rendering purposes.
25
Fresnel Equations
• Surface reflectance depends on
– Incidence angle (increases to 100%)
– Refractive index (in turn depends on wavelength)
– Polarization
• Spectral reflectance at θ=0° is a material characteristic
IMAGES BY R. COOK AND K. TORRANCE
For mirror surfaces, the surface reflectance (note the qualification of “reflectance”
with “surface” in this case, which shall become clearer in a few slides) obeys the
Fresnel equations.
Here as an example, we see a 3D graph of the surface reflectance of copper as a
function of incidence angle and wavelength (over the visible spectrum) – nonpolarized light is assumed. In addition, we see the spectral reflectance or
reflectance color as a function of incidence angle (increasing to the right) on the
color strip below the graph.
As the incidence angle increases (going to more glancing angles), the reflectance
increases until (at the limit) it is 100% at all wavelengths (this is true for all
substances, not just copper). Note that although the general trend is increasing
reflectance with increasing angle of incidence, the increase is not monotonic across
all wavelengths. There is a dip on the red side before it goes up (causing the color
to shift to blue just before it goes white).
Most CG ignores polarization, but it can be visually significant in some cases
(skylight is partially polarized).
All smooth surfaces have a 100% white reflectance at glancing angles, but the
reflectance for light at normal incidence is a function of the refractive index and is a
characteristic of the material. Surface reflectance numbers quoted in the following
slides are always for normal incidence.
26
Metals
• Transmitted light absorbed; converted to heat
• 50%+ reflective at visible wavelengths
• Some metals have almost constant reflectance over the visible spectrum: Steel ~55%, Silver and Aluminum >95%
• Other metals (Gold, Copper) have strong wavelength dependence at normal incidence
The first class of substances we will discuss are metals. Metals are both highly
reflective and highly absorbent of visible light; light not reflected is absorbed quickly
by the ‘sea’ of free electrons in the metal, its energy converted to heat.
Metals tend to sharply decrease in reflectance with increasing frequency after a
certain point. For most metals this point occurs above the visible range, so their
reflectance is roughly constant over this range, leading to a characteristic “colorless”
appearance. For other metals such as copper and gold this point occurs near the
high (blue) end of the visible range, leading to their characteristic “yellowish” or
“reddish” appearance. This will color the reflections in the material.
27
Dielectrics
• Low surface reflectance (water, glass,
plastic, ceramic, etc. <5%)
Again, note that like other reflectance numbers, these are at normal incidence – at
glancing angles it is significantly higher. Most dielectrics have a colorless surface
reflection, caused by the fact that their surface reflectance does not significantly
vary over the visible range. Note that here again we qualify the reflectance as
“surface reflectance”, this will become clear in a few slides.
28
Homogeneous Dielectrics
• Low, colorless reflectance
and low absorption
• Transparent
We have seen that dielectrics are typically not very reflective. In a pure
homogeneous form, they are not very absorptive either – most incident light just
gets transmitted through, so they are highly transparent. Examples are water, glass,
crystal, transparent plastic, etc.
29
Inhomogeneous Dielectrics
• Transmitted light undergoes absorption and scattering until completely absorbed or re-exiting the surface
Most dielectrics are not pure, especially opaque ones. They have impurities of other
materials included in them; these are usually responsible for most of the scattering
and absorption of visible light in dielectrics.
30
Surface vs. Volume Interactions
• Inhomogeneous
dielectrics have two
relevant modes of light
interaction
– Surface (reflection /
refraction at the surface
interface)
– Volume (scattering /
absorption by internal
inhomogeneities)
Why have we specifically called out inhomogeneous dielectrics here? Volume
interactions with light are not visually interesting for both metals and homogeneous
dielectrics, but for opposite reasons. In metals the volume interactions with visible
light are so strong that all transmitted light is quickly absorbed and none escapes. In
homogeneous dielectrics the volume interactions with visible light are typically so
weak that the transmitted light is not significantly attenuated.
31
Surface vs. Volume Interactions
• Inhomogeneous
dielectrics have two
relevant modes of light
interaction
– Surface (reflection /
refraction at the surface
interface)
– Volume (scattering /
absorption by internal
inhomogeneities)
Here to clarify, we look at each type of interaction separately. First the surface
interaction.
32
Surface vs. Volume Interactions
• Inhomogeneous
dielectrics have two
relevant modes of light
interaction
– Surface (reflection /
refraction at the surface
interface)
– Volume (scattering /
absorption by internal
inhomogeneities)
Then the volume interactions (also called interior interactions, and often subsurface
scattering although they include absorption as well as scattering phenomena). Note
that both types of light-matter interaction occur where there is some kind of
discontinuity in the optical properties of the medium. Note that these
inhomogeneities can be substantial (particles of some foreign material) or structural
(irregularities in molecular structure). If they are of a different substance, then their
different optical properties may cause the light exiting the surface due to volume
interactions to be quite different in color than that exiting due to surface interactions.
33
Volume Interactions as Reflectance
• ‘Reflectance’ is the
relationship between
incident and exitant
light at a surface point.
• To model volume
interactions as
reflectance, focus on
light incident / exitant
on a surface point
It would seem that reflectance, which is inherently surface-oriented, would preclude
modeling volume or interior interactions between light and matter. However,
focusing on what’s happening at the surface, and more specifically at a single point
(or a small area that can be approximated as a single point) enables using
reflectance to model volume interactions as well as surface ones. We look at the
scattered light only after it has re-exited the surface, at a point close to the point of
original entry (we will next try to give a feeling for what ‘close’ means in this
context). We ignore the details of what happens in the interior.
34
Volume / Body Reflectance
• If light re-exits the surface at a
large distance (compared to the
scale of interest), cannot model
as reflectance
IMAGE BY H. W. JENSEN
We will introduce a new term here, “volume” or “body” reflectance, as distinct from
surface reflectance. Although all reflectance is modeled as occurring at a surface
point, this distinguishes between reflectance originating from volume interactions
and that originating from surface interactions. Here we see a case where we cannot
model the volume interactions as reflectance at all, since they occur over a scale
which is large compared to the scale of interest. This kind of macroscopic
subsurface interaction is outside the scope of this talk.
35
Volume / Body Reflectance
• If light re-exits the surface at a
small distance (compared to the
scale of interest), can model as
reflectance
IMAGE BY H. W. JENSEN
And here we see a case where the volume interactions are well modeled as
reflectance, since they occur over a scale which is small compared to the scale of
interest. Note also that there is a clear visual cue as to the size of an object from the
scale of scattering.
36
Body Reflectance of
Inhomogeneous Dielectrics
• Although most dielectrics have very low surface reflectance, body reflectance varies widely
– Fresh snow: about 80%
– Bright white paint: about 70%
– Weathered concrete: about 35%
– Stone: average about 20%
– Earth (soil): average about 15%
– Coal: close to 0%
Body reflectance can be seen as the result of a ‘race’ between scattering and
absorption - whether the light is absorbed before it has been scattered enough to
re-exit the surface. The more frequently the light is scattered, the shorter the
average distance traveled before re-exiting the surface. Over a shorter distance the
light has less opportunity to be absorbed.
37
Body vs. Surface Reflectance
• Relative contribution of surface and body
reflectance in inhomogeneous dielectrics
varies with angle of incidence
– Only light which was not reflected at surface is
available to contribute to body reflectance
– As angle of incidence increases, contribution of
surface reflectance increases and that of body
reflectance decreases
In a sense, surface reflectance has “first dibs” on the incident light since it occurs
first.
38
Body vs. Surface Reflectance
IMAGE BY E. LAFORTUNE
39
Body vs. Surface Reflectance
IMAGE BY E. LAFORTUNE
40
Body vs. Surface Reflectance
IMAGE BY E. LAFORTUNE
41
Types of Reflectance
• Mirror Surfaces
– Substances
• Metals, Homogeneous and Inhomogeneous Dielectrics
– Surface and Body Reflectance
• Non-Mirror Surfaces
– Surface Roughness and Scale
– Microscopically Rough Surfaces
– Structured Surfaces
Now we will discuss surfaces which are not perfectly smooth – they have some kind
of microscopic roughness or structure which affects their reflectance. We will
discuss the relationship between surface roughness and scale, surfaces which have
random (rough) microgeometry, and then surfaces which have structured
microgeometry and how this affects their reflectance.
42
Non-Mirror Surfaces
• If the surface is not perfectly smooth, then
the reflectance behavior is affected by the
surface geometry as well as its substance
• Smoothness / Roughness is scale-dependent
Remember that ‘perfectly smooth’ for visual purposes means ‘smooth down to a
scale below the wavelength of visible light’.
43
Surface Roughness and Scale
• Example:
– Smooth at visible scale
Smoothness is scale-dependent in several ways. To be relevant to visual
phenomena, it has to take place on a scale at or above that of a wavelength of
visible light. To be seen as contributing to a surface’s reflectance behavior and not
its shape, it has to take place at a scale below that at which the scene is being
rendered, or in other words below visible scales. Also, a surface can have varying
roughness at different scales and this may further affect reflectance behavior.
Here we see a surface which appears to be smooth.
44
Surface Roughness and Scale
• Example:
– Smooth at visible scale
– Rough below visible
scale
IMAGE BY S. WESTIN
If we pick a small spot on the surface and magnify it, we will see that the surface
exhibits considerable roughness on the microscopic scale.
45
Surface Roughness and Scale
• Example:
– Smooth at visible scale
– Rough below visible
scale
– At smaller scale still,
smooth again
Zooming in again to an even smaller scale (but still much larger than a single
wavelength of visible light), we see that the surface appears to be locally smooth.
46
Surface Roughness and Scale
• Example:
– Smooth at visible scale
– Rough below visible
scale
– At smaller scale still,
smooth again
– Finally rough at atomic
scale (below visible light
wavelength)
Finally we see the surface at the atomic scale. Although it exhibits considerable
roughness at this scale, this has no effect on the reflection of visible light, since the
scale is much smaller than a single wavelength.
47
Microscopically Rough Surfaces
• These surfaces have microscopic facets with a continuous distribution of surface normals
– These facets ‘spread’ the light reflecting from them
• In most surfaces the distribution of micro-normals is not uniform, but peaks at the macroscopic surface normal
This category includes all surfaces which are not mirror surfaces. Although these
surfaces appear smooth at the visible scale, they contain micro-scale surface
elements with varying surface normals. For most surfaces the distribution of surface
normals is a smooth distribution over the hemisphere which peaks strongly at the
macroscopic surface normal. Microscopically rough surfaces can exhibit a
continuum of roughness, here we see a relatively smooth surface where the
reflected light is only spread out a little bit.
48
Microscopically Rough Surfaces
• These surfaces have microscopic facets with a continuous distribution of surface normals
– These facets ‘spread’ the light reflecting from them
• In most surfaces the distribution of micro-normals is not uniform, but peaks at the macroscopic surface normal
And here we see a rougher surface where the reflected light is spread out more.
Later in the course we will quantify the relationship between the surface roughness
and the distribution of the reflected light.
49
Microscopically Rough Surfaces
Another way to think about this is to look at a bumpy curved surface. In this image,
each bump has an individual highlight.
50
Microscopically Rough Surfaces
When we make the bumps smaller (or look at the surface on a larger scale), the
individual highlights become more difficult to see but still affect the overall
appearance of the material. Note that the normal distribution remains the same
throughout this image sequence, only the scale changes.
51
Microscopically Rough Surfaces
Here the bumps are smaller still.
52
Microscopically Rough Surfaces
At this scale, an overall pattern can be clearly seen where the individual bump
highlights are clustered most densely at the center and become more spread out
toward the outside.
53
Microscopically Rough Surfaces
At this scale the individual highlights are almost invisible, and the overall pattern
(which looks very much like a single large highlight) stands out.
54
Microscopically Rough Surfaces
Finally at this scale the bumps are not visible at all and the surface appears smooth.
The effect of the bumps can be seen in the falloff of the single large highlight.
55
Microscopically Rough Surfaces
• Microscopic roughness also causes:
– Shadowing
Besides causing the surface to have a continuous distribution of normals (rather
than a single surface normal), the micro-geometry affects reflectance in other ways.
‘Shadowing’ refers to some of the micro-facets blocking light from others. The
shadowing depends on the exact shape of the micro-geometry, but not directly on
the normal distribution. In this image, the area with the black dashed arrows is
shadowed.
56
Microscopically Rough Surfaces
• Microscopic roughness also causes:
– Shadowing
– Masking
‘Masking’ refers to some micro-facets obscuring others from the view position. This
also depends on properties of the microgeometry other than the normal distribution.
In this image, the area with the black dashed arrows is masked.
57
Microscopically Rough Surfaces
• Microscopic roughness also causes:
– Shadowing
– Masking
– Interreflections
Finally, some of the light that was shadowed or masked is reflected off the surface.
Light may undergo several bounces before it reaches the eye.
58
Microscopically Rough Surfaces
• The reflectance of a surface is a function both of its surface physics and its micro-scale structure
IMAGE BY S. WESTIN
Just to re-emphasize. This is one of the most important concepts from this part of
the course.
59
Microscopically Rough Surfaces
• Small, bright lights
reflect as highlights
• Blurry reflection of
environment
These surfaces commonly exhibit glossy reflections. The highlights’ size and shape
mostly depend on the distribution of microfacet normals.
60
Microscopically Rough Metals
• Reflectance is dominated by the primary surface reflection
• In rougher surfaces, secondary reflections create an additional diffuse reflection
– More strongly colored
Secondary reflections from rough colored metals are more saturated than the
primary reflection since each bounce effectively multiplies the spectral reflectance
into the result again. Burnished gold is an example of this, where the gold has a
richer yellow color due to being rougher.
61
Microscopically Rough Dielectrics
• Highlights tend to be weaker in dielectric surfaces due to lower reflectance
– For the same reason, secondary reflections are less noticeable
– Diffuse mostly due to body reflectance
As seen for smooth dielectric surfaces, rough dielectric surfaces also exhibit a
tradeoff between body and surface reflections at glancing angles (the only
difference is that here the surface reflection is blurred).
62
Extremely Rough Dielectrics
• Distribution of normals
is nearly uniform
• Diffuse reflection, with
some retroreflection
IMAGE BY T. HIMLAN
The roughest surfaces have micro-normals uniformly spread over the hemisphere
so that light reflected from the surface is spread evenly in all directions.
For dielectrics, this erases the coherence which is the important visual distinction of
surface reflections over body reflections, and the two can be lumped together into a
single diffuse reflection. In fact, for many of these surfaces, there is no longer a
useful distinction between surface and body; the ‘surface’ is a ‘sponge of atoms’
which blends seamlessly with the interior.
So although for metals, extremely rough surfaces can be treated as simply an
extreme case of microscopically rough surfaces, for dielectrics it is useful to treat
these surfaces as a distinct case.
Rough dielectrics include surfaces such as dust, rough clay, unpolished stone and
concrete. These are surfaces that we think of as “matte” or “diffuse”. Such surfaces
tend to appear flat (e.g. the full moon) and contrary to expectations, do not usually
obey Lambert’s law.
63
Extremely Rough Dielectrics
• Retroreflective tendencies caused by foreshortening of shadowed parts of surface when eye is near light
If we imagine each facet being essentially Lambertian (which is a reasonable
approximation to what is going on in these surfaces), then when the light and view
directions are very different the surfaces we can see are the ones which are more
dimly lit.
64
Extremely Rough Dielectrics
• Retroreflective tendencies caused by foreshortening of shadowed parts of surface when eye is near light
When the light and view directions are similar the surfaces we can see are the ones
which are more strongly lit.
65
Extremely Rough Dielectrics
• Lights reflect diffusely
• No environment details
visible
66
Structured Surfaces
• So far the surfaces we have seen have been
smooth or had random, unstructured
microgeometry
• Surfaces with regular, structured
microgeometry exhibit interesting reflective
characteristics
These surfaces may be metallic or dielectric; since the microstructure is the
interesting thing about them we will not distinguish the two cases.
67
Anisotropic Surface
• When the surface
microstructure is
anisotropic (not the
same in all directions),
then the reflectance
exhibits directionality
• Examples: surfaces
such as wood, brushed
metal which have an
oriented structure
PHOTOMICROGRAPH BY T. HIMLAN
One way in which the microgeometry can be structured is if it is not the same in all
directions – if it is anisotropic. This causes the reflection to exhibit directional
behavior.
68
Anisotropic Surface
• Highlights are stretched
• Environment is
directionally blurred
69
Retroreflective Surface
• Microgeometry is
constructed to reflect
most of the light back in
the incident direction
Most strongly retroreflective materials are artificial materials designed for use in
street signs, projection screens, etc. Some rare natural surfaces also exhibit strong
retroreflective behavior (such as cat’s eyes).
70
Other Structured Surfaces
IMAGES BY M. ASHIKHMIN, S. PREMOŽE AND P. SHIRLEY
Fabrics are usually structured surfaces, and many of them exhibit interesting
reflective properties.
71
Reflectance
• Types of Reflectance
• Reflectance Theory
• Reflection Model Foundations
We will now discuss the theory behind quantitative reflection models.
72
Reflectance Theory
• Radiometric Theory
• The BRDF
Here we will lay the theoretical groundwork needed to understand reflectance from
a physical standpoint. First we will discuss radiometric theory, and then the concept
of the BRDF.
73
Radiometric Theory
• Radiometry
– The measurement of radiant energy
– Specifically electromagnetic radiant energy
– In CG, specifically energy in the visible portion of
the EM spectrum (~380nm-780nm)
• This part of the talk will use geometric optics,
not wave optics
As we have seen at the start of the course, omitting wave optics means that we are
not discussing certain phenomena such as thin-film interference and diffraction.
74
Radiometric Quantities
• Radiant Flux Φ
– Total radiant power
– Power
– Watts
In this example, we are looking at a window. The power of all the light pouring
through all parts of the window, in all directions, is radiant flux.
75
Radiometric Quantities
• Radiant Exitance
(Radiosity) B
– Flux area density, exitant
from a point
– Power per surface area
– Watts / meter2
The area density of the power exitant (coming out of) a single point on the window
is radiant exitance (also called radiosity).
76
Radiometric Quantities
• Irradiance E
– Flux area density,
incident to a point
– Power per surface area
– Watts / meter2
Irradiance is very similar to radiosity, but it measures incident (incoming) light rather
than exitant light.
77
Radiometric Quantities
• Radiant Intensity I
– Flux directional density, in a given direction
– Power per solid angle
– Watts / steradian
Radiant exitance was the surface density of flux at a point. Radiant intensity is the
directional density of flux in a given direction.
A solid angle is a 3D extension of the concept of an angle – it is a sheaf of
directions in 3-space. Just as an angle can be measured in radians as the length of
an arc on a unit circle, a solid angle can be measured as the area of a patch on a
unit sphere. Solid angles are measured in steradians, of which there are 4π in a
sphere.
Radiant intensity is particularly relevant when examining a very distant (or very
small) light source, where the variation over direction is more important than over
surface area.
78
Radiometric Quantities
• Radiance L
– Radiant power in a single ray
– Power per (projected) surface area per solid angle
– Watts / (meter² steradian)
Unlike radiosity and irradiance, radiance is not tied to a surface, but to a ray.
Radiance is important because the final pixel RGB color is basically derived directly
from the radiance in the rays through the pixel sample locations.
Flux is over all points of a surface in all directions. Radiosity is the surface flux
density out of a single point in all directions, and radiant intensity is the directional
flux density in a single direction, through all points. Radiance combines the
specificity of radiosity and radiant intensity – it measures light through a single point
in a single direction, in other words a single ray of light.
79
Radiometric Quantities
• Radiance L
– Radiant power in a single ray
– Power per (projected) surface area per solid angle
– Watts / (meter² steradian)
The surface area here (unlike the other quantities) is projected surface area, or
surface area perpendicular to the ray direction.
80
Light Direction
• Until this slide, the light arrows have pointed in the direction the light is going
• From now on, we will use ‘light direction’ vectors which point towards the light
Until now, we have been showing the physics of light bouncing around. From now
on, we are talking about the math and implementation and there it is both customary
and convenient to have the ‘light direction’ pointing TO the light.
81
Radiance and Irradiance
E = ∫_ΩH Li cos θi dωi
dE = Li cos θi dωi
(Figure: an infinitesimal solid angle dωi around the incident direction ωi, at angle θi from the normal N.)
Let us look at a surface point, and how it is illuminated by an infinitesimal patch of
incident directions with solid angle dωi. Since this patch of incident directions is
infinitesimal, it can be accurately represented by a single incident direction ω i and
we can assume that the incident radiance from all directions in this patch is a
constant Li. This patch contributes an amount of irradiance equal to Li times dωi,
times the cosine of the angle θi between the incident direction and the surface
normal. If we integrate this over the hemisphere ΩH (centered on the normal), we
get the total irradiance. Note that for a light source to illuminate a surface (contribute
to its irradiance), it needs to both have a non-zero radiance and subtend a non-zero
solid angle. Finally, note that the cosine here is only valid over the hemisphere and
is assumed to be clamped to 0 outside it. This is true for all cosine terms we will see
in this talk.
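A worked special case (our addition, following directly from this definition): if the incident radiance is a constant L over the whole hemisphere, the integral can be evaluated in closed form:

```latex
E = \int_{\Omega_H} L \cos\theta_i \, d\omega_i
  = L \int_{0}^{2\pi} \!\! \int_{0}^{\pi/2} \cos\theta \, \sin\theta \, d\theta \, d\phi
  = 2\pi L \cdot \tfrac{1}{2} = \pi L
```

This integral of the cosine factor over the hemisphere is where the factor of π (and the ubiquitous 1/π normalization we will see later in the course) comes from.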
82
The (Clamped) Cosine Factor
(Figure: light arriving at angle θi through a perpendicular cross-section of area A cos θi is spread over a surface area A.)
Where did the (clamped) cosine factor come from? The cosine is there because
radiance is defined relative to an area perpendicular to the ray, and irradiance is
defined relative to an area parallel to the surface. Another way of looking at it is that
the same radiance, coming in at a more oblique angle, contributes a smaller amount
to the irradiance because it is ‘spread out’ more. We can also see here why it is
clamped – if the incident direction is under the surface then there is no contribution.
83
Reflectance Theory
• Radiometric Theory
• The BRDF
Now we will discuss the concept of the BRDF.
84
The BRDF
• Bidirectional Reflectance Distribution Function
fr(ωi, ωe) = dLe(ωe) / dE(ωi)
– Ratio of reflected radiance to irradiance
ωi is the direction to the incident irradiance, and ωe is the direction to the exitant
reflected radiance. For every such pair of directions, the BRDF gives us the ratio
between exitant radiance and irradiance. Since the incident direction and the
exitant direction are both 2D quantities (a common parameterization is to use two
angles: elevation θ relative to the surface normal and rotation φ about the normal),
the BRDF is a 4D function.
Since the BRDF is radiance (power/(area x solid angle)) divided by irradiance
(power/area), its units are inverse solid angle, or steradians-1.
85
The BRDF
• Incident, exitant directions defined in the surface’s local frame (4D function)
(Figure: ωi and ωe given by elevations θi, θe from the normal N and azimuths φi, φe about N relative to the tangent T.)
Directions defined relative to the local surface normal and tangent.
86
The BRDF
• Isotropic BRDFs are 3D functions
(Figure: ωi and ωe with elevations θi, θe and a single relative azimuth φ between them; no tangent is needed.)
For most surfaces, the relation of the incident and exitant directions to a local
surface tangent doesn’t matter (these surfaces are isotropic and have no local
preferred direction). So instead of the rotations between each of these two
directions and a tangent vector, the BRDF depends on the rotation between the two
directions, which removes one degree of freedom. Note that the tangent vector is no
longer needed.
87
The Reflection Equation
fr(ωi, ωe) = dLe(ωe) / dE(ωi)
dE = Li cos θi dωi
From the definition of a BRDF and the relation between irradiance and radiance, we
get:
88
The Reflection Equation
fr(ωi, ωe) = dLe(ωe) / dE(ωi)
dE = Li cos θi dωi
Le(ωe) = ∫_ΩH fr(ωi, ωe) Li(ωi) cos θi dωi
The reflection equation. This means to get the exitant radiance in a direction ωe, we
need to integrate the incident radiance, times the BRDF, times the cosine of the
angle with the normal, over all incoming directions in the hemisphere around the
surface normal.
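To make the integral concrete, here is a minimal Monte Carlo sketch (our addition, not from the course) that estimates the exitant radiance for a constant Lambertian BRDF ρ/π under constant incident radiance Li. By the earlier worked example (E = πL), the estimate should converge to ρ·Li; the sampling scheme and the constants are illustrative assumptions.

```cpp
#include <cstdio>
#include <random>

int main() {
    const float pi  = 3.14159265f;
    const float rho = 0.5f;        // assumed albedo (illustrative)
    const float Li  = 2.0f;        // assumed constant incident radiance (illustrative)
    const float fr  = rho / pi;    // constant Lambertian BRDF

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    // Estimate Le = integral over the hemisphere of fr * Li * cos(theta_i) dwi.
    // For directions sampled uniformly over the hemisphere, cos(theta_i) is
    // uniform in [0,1] and the pdf is 1/(2*pi), so each sample is weighted by 2*pi.
    const int n = 1000000;
    double sum = 0.0;
    for (int s = 0; s < n; ++s) {
        float cosTheta = uni(rng);  // z component of a uniform hemisphere direction
        sum += fr * Li * cosTheta * 2.0 * pi;
    }
    double Le = sum / n;

    // Expected: Le = (rho/pi) * Li * pi = rho * Li
    std::printf("estimated Le = %f (expected %f)\n", Le, (double)(rho * Li));
    return 0;
}
```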
89
The BRDF
• The laws of physics impose certain
constraints on a BRDF
• To be physically plausible, it must be:
– Reciprocal
– Energy-Conserving
90
Reciprocity
• More properly, Helmholtz Reciprocity
fr(ωi, ωe) = fr(ωe, ωi)
All this means is that the surface must reflect light the same way in both directions –
if incoming and outgoing directions are changed, the reflectance must remain the
same.
91
Energy Conservation
• Directional-Hemispherical Reflectance
R(ωi) = dB / dE(ωi) = ∫_ΩH fr(ωi, ωe) cos θe dωe
• R(ωi) must be less than or equal to 1 for all ωi
The directional-hemispherical reflectance is the ratio of differential radiant exitance
to differential irradiance. It tells us how much of the incoming radiant energy from a
given direction is absorbed and how much is reflected.
The reflected light energy cannot be more than the incident light energy, which
means that the directional-hemispherical reflectance must be less than or equal to 1.
A value of 1 means a perfectly reflective surface with no absorption at the given incident angle.
92
Bihemispherical Reflectance
• Also known as albedo
ρ = B / E = (1/π) ∫_ΩH R(ωi) cos θi dωi = (1/π) ∫_ΩH ∫_ΩH fr(ωi, ωe) cos θi cos θe dωi dωe
The bihemispherical reflectance (also called albedo) is the ratio between radiant
exitance and irradiance. Like the directional-hemispherical reflectance, it is between
0 and 1, where 1 indicates a surface with no absorption. It is an overall measure of
reflectivity – how much radiant flux striking the material from all angles is reflected
vs. how much is absorbed. It can also be seen as the cosine-weighted average of
the directional-hemispherical reflectance. It is computed by integrating the BRDF
over all incident and exitant directions. The 1/π is a normalization factor; this is the
first of many times we will see it. The reason it is so ubiquitous is that integrating the
cosine factor over the hemisphere yields π.
93
Reflectance
• Reflectance values such as R and ρ are
restricted to a range of 0 to 1
– 0 is perfectly absorbent
– 1 is perfectly reflective
• This does not apply to the BRDF!
– For example, in the center of a tight highlight the
BRDF value will be quite high
A reflectance value is a ratio of reflected light to incident light. By definition, it must
be between 0 and 1 (at least for a non-emissive surface).
The BRDF is a distribution function – if the distribution it describes is highly nonuniform (which it will be for a smooth surface) then it can take arbitrarily high values.
94
Wavelength Dependence
• Note that the BRDF and reflectance values
are wavelength-dependent
• For the purposes of most real-time rendering
this means that they are all RGB triples
• Since reflectance values are between 0 and
1, they can be thought of as RGB colors
In high-fidelity offline rendering many more than three spectral samples are often
used which would require reflectance quantities to be represented as long vectors,
but in real-time rendering RGB triples are almost always used.
95
The Reflection Equation with Point /
Directional Lights
• Important case for real-time rendering
– Lighting from infinitely small / distant light sources
– Ambient / indirect lighting is computed separately
– Point lights characterized by intensity I and position
– Directional lights characterized by direction and I/d2
Le(ωe) = Σl (Il / dl²) fr(ωl, ωe) cos θl
Point and directional lights are abstractions, since they imply infinite energy density.
However, they are useful approximations for highly localized or distant light sources.
In many scenes, these contribute most of the lighting. These lights are best
characterized by radiant intensity rather than by radiant exitance or radiance. A
directional light is a point light which is so distant that its direction and distance can
be taken as constant over the rendered scene.
The result shown here can be derived by considering a small uniformly glowing
sphere with radius rl and flux Φl, illuminating a relatively distant surface point. Since
the subtended solid angle is extremely small, we assume that radiance, BRDF and
cosine factor are constant and can be taken out of the rendering equation integral,
which leaves the solid angle. For a small distant sphere, the solid angle can be
approximated as π(rl/dl)², where dl is the distance from the sphere center to the
surface point. This results in: Le = Ll π(rl/dl)² cos θl fr(ωe, ωl).
For this sphere, Bl = Φl/(4π rl²) = π Ll, from which it follows that Ll = Φl/(4π² rl²).
Combined with the previous result, we get Le = (Φl/(4π dl²)) cos θl fr(ωe, ωl). Since
rl is no longer present in the result, it holds as a limit when rl goes to 0.
For a point light source, Il = Φl/(4π), which combined with the previous result gives
us the result shown in the slide above.
Note: although Il/dl² seems to be power / (area × solid angle), it is actually power /
area, the same as irradiance and radiant exitance (solid angles, being
dimensionless quantities, often confuse unit analysis).
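A direct implementation sketch of this sum (our addition; the Vec3 type, the light parameters, and the Lambertian fr = ρ/π standing in for the BRDF are all illustrative assumptions):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
float length(const Vec3& v) { return std::sqrt(dot(v, v)); }
Vec3 normalize(const Vec3& v) { float l = length(v); return {v.x/l, v.y/l, v.z/l}; }

struct PointLight {
    Vec3 position;
    float intensity;   // radiant intensity I_l (single channel for simplicity)
};

// Le = sum over lights of (I_l / d_l^2) * f_r * cos(theta_l),
// here with a constant Lambertian BRDF f_r = rho / pi.
float shadePoint(const Vec3& p, const Vec3& n, float rho,
                 const std::vector<PointLight>& lights) {
    const float pi = 3.14159265f;
    float Le = 0.0f;
    for (const PointLight& light : lights) {
        Vec3 toLight = light.position - p;
        float d = length(toLight);
        Vec3 wl = normalize(toLight);                   // direction toward the light
        float cosTheta = std::fmax(dot(n, wl), 0.0f);   // clamped cosine factor
        Le += (light.intensity / (d * d)) * (rho / pi) * cosTheta;
    }
    return Le;
}
```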
96
Scale and the BRDF
• Reflectance Models are intimately tied to
scale
• The complexity of many BRDFs originates in
statistically modeled subpixel structure
We’ve seen a little bit of this in the earlier discussion of microscopically rough
materials.
For example, the BRDF of an object like a tree varies depending on whether we are
looking at a single leaf, an entire tree, or a forest. In each case, the structure
covered by a single pixel is very different.
97
Scale and the BRDF
• The same features may be modeled as
– Geometry
– Bump maps
– BRDFs
• Depending on scale
• Smooth transitions between these
representations pose interesting issues
98
Reflectance
• Types of Reflectance
• Reflectance Theory
• Reflection Model Foundations
We will now discuss the building blocks from which most reflection models are built.
99
Reflection Model Foundations
• Lambert
• Fresnel Equations
• Microfacet Theory
In this section, we will discuss the basic building blocks used to build most reflection
models. First we will discuss the Lambert BRDF, then Fresnel’s equations and
finally microfacet theory.
100
Lambert
• Constant BRDF
• Pure Lambert impossible
• Usually used for dielectric body reflectance term
A Lambertian surface reflects the same radiance in all directions. Remember that
the well-known Lambertian cosine factor is actually part of the reflectance equation
and not the BRDF. A perfectly Lambertian surface is an abstraction that does not
exist in nature.
101
Lambert
• The Lambertian BRDF is a constant value over all
incident and exitant directions
• BRDF equal to bihemispherical reflectance (albedo)
over π
ρ = (1/π) ∫_ΩH ∫_ΩH fr,Lambert cos θi cos θe dωi dωe = π fr,Lambert
fr,Lambert = ρ / π
Given that the Lambertian BRDF is a constant, what is its value? We can calculate
the bihemispherical reflectance, or albedo, and see that the BRDF is equal to the
bihemispherical reflectance over π. This is useful, since the bihemispherical
reflectance is an intuitive number, between 0 and 1, which tells us how reflective the
surface is overall. Note that since the bihemispherical reflectance is wavelength-dependent, it is an RGB triple, usually thought of as the material’s diffuse color.
102
Reflection Model Foundations
• Lambert
• Fresnel Equations
• Microfacet Theory
Now we will discuss Fresnel’s equations.
103
Fresnel Equations
• Surface reflectance depends on
– Incidence angle (increases to 100%)
– Refractive index (in turn depends on wavelength)
– Polarization
• Spectral reflectance at θ=0° is a material characteristic
IMAGES BY R. COOK AND K. TORRANCE
This is a recap of a previous slide. Note here only that the main effect is to gradually
change from the spectral reflectance at normal incidence to 100% reflectance at all
wavelengths, as the incidence angle goes from normal to glancing. The shift is not
monotonic however, which causes the blue shift seen on the left just before it goes
to white.
104
Fresnel Equations
• Full equations expensive to compute, need
(possibly complex) refractive index data for
different wavelengths
• Schlick approximation:
RF(θi) = RF(0) + (1 − RF(0)) (1 − cos θi)⁵
• RF(0) is the directional-hemispherical
reflectance at normal incidence
The Schlick approximation is accurate to within a few percent, is much cheaper to
compute, and has a much more intuitive parameter: RF(0), which is the reflectance
at normal incidence. Note that the reflectances here are all directional-hemispherical.
RF(0) is commonly thought of as the material’s specular color. It is relatively high for
metals, and low for dielectrics. The cosine factor (like all others in this course) is
clamped to zero.
This approximation is quite good, though it does miss some effects like color shifts
before glancing angles.
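A one-function sketch of the approximation (our addition; for RGB, evaluate per channel with the material’s RF(0) color, and clamp the cosine before calling):

```cpp
// Schlick's approximation to the Fresnel reflectance:
// R_F(theta_i) = R_F(0) + (1 - R_F(0)) * (1 - cos(theta_i))^5
// rf0 is the reflectance at normal incidence; cosThetaI must be in [0,1].
float fresnelSchlick(float rf0, float cosThetaI) {
    float oneMinusCos = 1.0f - cosThetaI;
    float p5 = oneMinusCos * oneMinusCos;  // (1 - cos)^2
    p5 = p5 * p5 * oneMinusCos;            // (1 - cos)^5
    return rf0 + (1.0f - rf0) * p5;
}
```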
105
Fresnel Equations
• RF(0) is high for metals
– Steel ~0.55, Silver, Aluminum ~0.95
– Colored metals
• Gold: from ~0.6 for blue to ~0.9 for red
• Copper: from ~0.4 for blue to ~0.85 for red
• RF(0) lower for dielectrics
– Water, glass, plastic, etc. ~0.05
– Diamond ~0.15
Colorless metals have RF(0) which is almost constant over the visible spectrum.
Colored metals tend to have higher RF(0) for longer wavelengths. Dielectrics are
usually colorless and have low RF(0).
106
Reflection Model Foundations
• Lambert
• Fresnel Equations
• Microfacet Theory
Finally in this section, we discuss microfacet theory.
107
Microfacet Theory
• Models some effects of microgeometry
– Single-bounce surface reflectance
– Shadowing
– Masking
• Doesn’t model
– Interreflections
– Body reflectance
Microfacet theory is a useful framework for modeling microscopically rough
surfaces. It does not model diffuse effects such as surface interreflections and body
reflectance, and for this reason microfacet BRDFs usually add a separate term
(most often Lambertian) to model the diffuse reflectance.
108
Microfacet Theory
• Surface modeled as flat microfacets
– Each microfacet is a Fresnel mirror
• Normal Distribution Function (NDF) p(ω)
– p(ω)dω is fraction of facets which have normals in
the solid angle dω around ω
– ω is defined in the surface’s local frame
– There are various different options for p(ω)
We will look at various options for modeling the NDF later. As we shall see, the NDF
is the most visually important feature of a microfacet model.
109
Microfacet Theory
• For given ωi and ωe, only facets oriented to reflect ωi into ωe are active
• These facets have normals at ωh
Since each facet is a perfectly smooth mirror, it has to be oriented exactly right to
reflect ωi into ωe to participate in the reflectance at all. ωh is the half-angle vector,
which is the vector exactly half-way between ωi and ωe.
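Computing ωh is just a normalized sum of the two unit direction vectors; a one-line sketch (our addition, reusing the Vec3 and normalize helpers from the earlier point-light sketch):

```cpp
// Half-angle vector: the unit vector exactly half-way between the (unit)
// incident and exitant directions.
Vec3 halfVector(const Vec3& wi, const Vec3& we) {
    return normalize(Vec3{wi.x + we.x, wi.y + we.y, wi.z + we.z});
}
```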
110
Microfacet Theory
• Fraction of active microfacets is p(ωh)dω
• Active facets have reflectance RF(αh)
• αh is the angle between ωi (or ωe) and ωh
Only the microfacets with normals facing along ωh can participate in the reflection.
We can find the proportion of microfacets which have their normals in an
infinitesimal patch of directions (solid angle dω) around ωh by using the NDF. We
assume the microfacets are themselves mirrors, so we can use Fresnel to get their
reflectance.
111
Half-angle Vector
(Figure: ωi, ωe and ωh, with αh the angle between ωh and each of ωi and ωe, and θh the angle between ωh and the normal N.)
Here is an illustration of some of the various angles relating to the half-angle vector
which appear in BRDFs.
112
Half-angle Vector
(Figure: ωh relative to the normal N, tangent T and bitangent B, with azimuth φh and the angles αu, αv used in anisotropic BRDFs.)
Here are some more angles relating to the half-angle vector, these are used in
anisotropic BRDFs. In addition to the surface normal N which we have seen before,
this diagram also includes the surface tangent T and bitangent B.
113
Microfacet Theory
fr(ωi, ωe) = p(ωh) G(ωi, ωe) RF(αh) / (4 Kp cos θi cos θe)
• G(ωi,ωe) (the geometry factor) is the
fraction of microfacets which are not
masked or shadowed
• Kp is a constant with a value dependent on
the microgeometry structure
From the previously seen relations and equations, we can derive this form for the
microfacet BRDF (detailed derivations can be found in the literature). We will see a
few different options for modeling the geometry factor later in this talk. Note that the
geometry factor may contain parts that will cancel out or modify the cosine factors in
the denominator.
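Putting the pieces together, here is a sketch (ours, not the course’s reference implementation) of an isotropic microfacet specular term using the normalized ‘Phong’ NDF from an upcoming slide, the fresnelSchlick function from earlier, and the simplifying assumptions G = 1 and Kp = 1:

```cpp
#include <cmath>

// Assumes the Vec3 helpers (dot, normalize) and fresnelSchlick defined earlier.

// Normalized "Phong" NDF: p(theta_h) = ((n + 1) / (2*pi)) * cos(theta_h)^n
float phongNDF(float cosThetaH, float n) {
    const float pi = 3.14159265f;
    return (n + 1.0f) / (2.0f * pi) * std::pow(std::fmax(cosThetaH, 0.0f), n);
}

// Microfacet BRDF: f_r = p(wh) * G * R_F(alpha_h) / (4 * Kp * cos(theta_i) * cos(theta_e)),
// here with G = 1 and Kp = 1 for simplicity.
float microfacetBRDF(const Vec3& wi, const Vec3& we, const Vec3& n,
                     float rf0, float smoothness) {
    Vec3 wh = normalize(Vec3{wi.x + we.x, wi.y + we.y, wi.z + we.z});
    float cosThetaI = std::fmax(dot(n, wi), 1e-4f);   // avoid division by zero
    float cosThetaE = std::fmax(dot(n, we), 1e-4f);
    float cosThetaH = dot(n, wh);
    float cosAlphaH = std::fmax(dot(wi, wh), 0.0f);   // angle between wi and wh
    float p = phongNDF(cosThetaH, smoothness);
    float F = fresnelSchlick(rf0, cosAlphaH);
    return (p * F) / (4.0f * cosThetaI * cosThetaE);
}
```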
114
Microfacet Theory
• p(ω) is the most important parameter
• Controls roughness, anisotropy, exact shape
of highlight
• A variety of different functions can be used
• It can even be painted into a texture
– Analogous to painting the highlight
The NDF is the most important parameter. We will learn more about hand-painted
NDFs later in the course.
115
Isotropic Normal Distribution
Functions
• “Phong”: pPh(θ) = ((n + 1) / 2π) cosⁿ θ
• Gaussian: pG(θ) = kG e^(−(cG θ)²)
• Beckmann: pB(θ) = (1 / (m² cos⁴ θ)) e^(−(tan θ / m)²)
• Trowbridge-Reitz: pTR(θ) = kTR (cTR² / (cos² θ (cTR² − 1) + 1))²
Here are some examples of isotropic normal distribution functions that appear in the
literature. These include an NDF derived from the Phong reflectance model, the
Gaussian distribution, as well as distributions from papers by Beckmann and by
Trowbridge and Reitz. Since they are isotropic, they are all parameterized by θ, the
angle between the microscopic and macroscopic surface normals. They each have
a parameter which controls the smoothness of the surface (n for Phong, cG for
Gaussian, m for Beckmann and cTR for Trowbridge-Reitz). In addition, the Gaussian
and Trowbridge-Reitz NDFs have normalization factors (kG and kTR respectively)
which are not shown here and need to be computed.
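As a concrete example, the Beckmann NDF from the slide transcribes directly into code (our sketch, using the formula as given above):

```cpp
#include <cmath>

// Beckmann NDF as on the slide: p_B(theta) = exp(-(tan(theta)/m)^2) / (m^2 * cos^4(theta)).
// m controls roughness; theta is the angle between the micro- and macro-normal.
float beckmannNDF(float cosTheta, float m) {
    float c2 = cosTheta * cosTheta;
    float t2 = (1.0f - c2) / c2;           // tan^2(theta)
    return std::exp(-t2 / (m * m)) / (m * m * c2 * c2);
}
```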
116
Microfacet vs. Reflection BRDFs
(Figure: ωi, ωe, the half-angle direction ωh, and the angle θh between ωh and the normal N.)
Some BRDFs use a formulation which seems superficially similar to microfacet
BRDFs, but which is actually significantly different.
Here we show again the half-angle direction ωh. The value of the microfacet normal
distribution function in this direction gives us a first approximation of the reflectance,
and determines the size and shape of the highlight. For an isotropic NDF, this is a
function of the angle θh with the macroscopic surface normal, which is therefore an
important BRDF parameter.
117
Microfacet vs. Reflection BRDFs
(Figure: the reflection direction ωri of ωi about the normal N, and the angle αr between ωri and ωe.)
Reflection BRDFs (such as the Phong BRDF) use the reflection direction ωri, which
is the incident direction ωi reflected about the surface normal. Instead of θh, they are
parameterized by αr (the angle between ωri and the exitant direction ωe). This angle
has no clear physical interpretation.
118
Microfacet vs. Reflection BRDFs
(Figure: equivalently, αr is the angle between ωi and ωre, the reflection of ωe about the normal N.)
Note that αr is also sometimes described as the angle between ωi and ωre, the
reflection of ωe about the surface normal – the two descriptions are equivalent.
119
Microfacet vs. Reflection BRDFs
(Figure: ωh = N when ωri = ωe, i.e. when ωre = ωi.)
Note that θh and αr are equal to 0 in the same situation – when ωe is equal to ωri (or
ωi is equal to ωre), then ωh is at the surface normal. This means that both types of
BRDF will place the center of the highlight at the same location. Physically, this is
when the microfacets which are oriented the same as the overall surface (which is
almost always the peak of the microfacet distribution) are oriented to reflect the
incident light into the exitant direction. Although the center of the highlight will be in
the same location, the two types of BRDF will not have the same shape of highlight.
120
Microfacet vs. Reflection BRDFs
(Figure: exitant directions with equal αr form circular cones around ωri.)
With a reflection BRDF, regardless of the angle of incidence, exitant directions with
equal values of αr will form circular cones around ωri. This will cause the highlights
to be round in shape.
121
Microfacet vs. Reflection BRDFs
(Figure: rotating ωh by θh in the plane formed by ωi and N moves the exitant direction ωe by 2θh.)
With a microfacet BRDF, changing the half-angle direction by θh in the plane formed
by ωi and the surface normal will cause the equivalent exitant directions to change
by 2θh, regardless of the angle of incidence.
122
Microfacet vs. Reflection BRDFs
(Figure: rotating ωh by θh perpendicular to the plane formed by ωi and N moves the exitant direction ωe by less than 2θh.)
However, changing the half-angle direction by θh in a direction perpendicular to the
plane formed by ωi and the surface normal will cause the equivalent exitant
directions to change by a smaller angle. This angle will decrease as the angle of
incidence increases. This can be visualized as rotating the pale green circular arc
around the dashed blue axis (which is collinear with ωi) – the points on the arc
closer to the axis will move less.
123
Microfacet vs. Reflection BRDFs
Since the variation in the angle of exitance remains the same in the plane of
incidence, but decreases perpendicular to it with increasing incidence angle, this
means that the highlights are not round, but narrow.
124
Microfacet vs. Reflection BRDFs
As the angle of incidence increases, the highlights become increasingly narrow.
This is a very different visual prediction than that of a reflection BRDF.
125
Microfacet vs. Reflection BRDFs
As we can see from these photographs, reality matches the prediction of the half-angle BRDFs. Reflections from rough surfaces at glancing angles are narrow and
not round.
126
Physically-Based Reflectance
for Games
9:15 - 10:15: Game Development
Dan Baker & Naty Hoffman
127
Game Development
• Game Platforms (Dan)
• Computation and Storage Constraints (Dan)
• Production Considerations (Naty)
• The Game Rendering Environment (Naty)
In this section, we will first discuss current and next-generation platforms on which
games run. Next we will discuss the computation and storage constraints game
developers face when developing for these platforms, the various production
considerations which affect reflectance rendering in games, and finally the rendering
environment within which reflection models are used in games.
128
Game Platforms
In this section, we discuss the platforms on which games run. First we shall give a
brief overview of the rendering pipeline on the relevant platforms, and then we shall
detail some of the characteristics of current and next-generation game platforms.
129
Modern Game Hardware
• Hardware is different, but trends are similar
– Multiple CPUs
– Custom GPU, but all similar
– Programmable Shading
– Bandwidth is getting pricier
– Computation is getting cheaper
To some extent, the various game platforms are converging. They all use multiple-core central processing units and GPUs with similar architectures.
130
Basic Hardware Architecture
(Block diagram: CPU and GPU, each with its own memory.)
Basic block diagram of the hardware architecture of a modern game platform. On
some systems, like the Xbox 360, the GPU and CPU memory is merged.
131
Shader Model 3 (Xbox360/PS3/D3D9)
(Pipeline diagram: vertex and index buffers feed Vertex Setup → programmable Vertex Shader → Setup/Rasterizer → programmable Pixel Shader → Blend, writing to depth/stencil and render targets in memory; both shader stages read constants and filtered textures; the remaining stages are fixed logic.)
This represents the main functional units in the GPU pipeline as it exists today. The
Xbox 360, the Playstation 3 and current PCs all share this GPU architecture. For
the purposes of this talk, we are most interested in the two programmable shader
units. Currently there is a programmable vertex shader or vertex program, and a
programmable pixel shader or fragment program. The rest of the pipeline is fixed-function (but highly configurable) logic.
132
Shader Model 4 (D3D 10)
(Pipeline diagram: Input Assembler → programmable Vertex Shader → programmable Geometry Shader, with stream out to a stream buffer → Setup/Rasterizer → programmable Pixel Shader → Output Merger, writing to depth/stencil and render targets in memory; all programmable stages read constants and filtered textures; the remaining stages are fixed logic.)
This shader model is the one exposed in Direct3D 10. Hardware supporting this
model should be available later in 2006. Most of the differences with future
hardware are related to memory access and placement; however, the introduction of
the Geometry Shader fundamentally alters the kinds of lighting we may choose to
perform, since we can now light at a triangle level if we choose to.
133
Game Development
• Game Platforms (Dan)
• Computation and Storage Constraints (Dan)
• Production Considerations (Naty)
• The Game Rendering Environment (Naty)
Now we shall discuss the computation and storage constraints game developers
face when developing for these platforms, in particular as they pertain to rendering
reflection models.
134
Computation and Storage
Constraints
Platform        CPU                        GPU       Memory    Bandwidth
Xbox 360        3.2 GHz, 3 cores           shader 3  512 MB    21.2 GB/s (256 GB/s)
Playstation 3   3.2 GHz, 1 core + 7 SPUs   shader 3  512 MB    25 GB/s / 25 GB/s
PC (2007)       3 GHz, 2 cores             shader 4  2048 MB   6 GB/s / 60 GB/s
First, we discuss the computation constraints game developers face when
developing for these platforms, in particular as they pertain to rendering reflection
models. Next, we will discuss the relevant storage constraints.
The PS3 and PC both have two buses, a graphics bus and a CPU bus. The first number is the CPU bus, the second the GPU. The Xbox 360 has one bus, plus high-speed embedded RAM for the back-buffer (for which the bandwidth number is given in parentheses).
Some of these computational resources are reserved for operating system tasks
(some SPUs on the PS3, one thread of one core on the Xbox 360, and a variable
amount on the PC).
135
CPU Differences
• Total CPU power similar, but PCs take a ~30% hit due to the OS
• CPU and system memory are shared by ALL other game systems: physics, AI, sound, etc.
• Typically end up with far less than 50% of the CPU for graphics
Cache behavior is also a big problem with multiple CPUs and other tasks. Specifically, other processes and jobs can pollute the working set, vastly slowing down a processor's performance. Latency varies from platform to platform, but is on the order of 100-500 cycles.
136
Polygon / Topology Shading
• In offline rendering, shading is typically performed
on polygons (micropolygons)
• GPUs currently perform shading on individual
vertices, and then pixel fragments
– Polygon or topology computations done on the CPU
• Shader model 4 adds geometry shaders on GPU
• On PS3, these computations can be done on SPUs
• Performance characteristics are still unclear
The shader model exposed on current GPUs allows for stream computation on
individual vertices or fragments only, so any computations which are performed on
triangles or otherwise require topology or connectivity information must be restricted
to fixed-function hardware, or performed on the general-purpose CPU. In the near
future, GPUs supporting shader model 4 will have the geometry shader which
enables programmable triangle / topology computations. On the Playstation 3, the
SPUs can fulfill a similar role.
137
Vertex Processing
• Most games are not vertex bound
• Much more likely to be memory bound
• Performance can be gained by moving
operations from pixel to vertex shader
It is, however, easy to be vertex bound in high-overdraw scenes. In these situations, a Z prepass blocks pixels from being shaded when they are not visible, but a vertex is always shaded; thus, in the future, expect most vertex shaders to primarily transform geometry.
138
Pixel Pushing
• ~500 instruction pixel shaders are here!
– Well, almost…
• Instructions ~1 cycle, usually 4 way SIMD
– 500 cycles per pixel at 800x600, 60fps = 14.4
billion pixel cycles / sec
• 500 MHz GPU with 32 shader cores
• Predicated on ability to do Z-Prepass
– Not valid for translucent materials
500 instructions is high end, but recall that with flow control, in a typical scenario not
all of these instructions are executed. Still, 500 cycles is feasible on high end
hardware.
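The arithmetic is easy to check. A minimal Python sketch, using the figures from the slide:

width, height, fps = 800, 600, 60
cycles_per_pixel = 500

# Cycles needed to shade every pixel once per frame.
needed = width * height * fps * cycles_per_pixel
print(needed / 1e9, "billion pixel-cycles/sec")      # 14.4

# A 500 MHz GPU with 32 shader cores, assuming one instruction
# per core per cycle.
available = 500e6 * 32
print(available / 1e9, "billion shader-cycles/sec")  # 16.0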
139
But…
• Computation follows Moore’s law, but
memory does not
• Texture loads expensive – bandwidth is high
• Pixel shaders must execute together for
gradients – so a cluster of pixels pays the
price of the most expensive one
• Latency hiding mechanism isn’t free
140
Shader Execution
[Diagram: instance 1 in the shader core; instances 2-4 suspended; each shader program is a run of math instructions, a texture fetch, then more math]
In this theoretical GPU, we have one shader core and 3 suspended pixels. The
shader core starts by executing shader instance #1, which corresponds to a specific
pixel on the screen.
141
Shader Execution
[Diagram: instance 2 in the shader core; instances 1, 3, 4 suspended]
When the GPU hits the texture instruction, it halts the pixel thread and moves it to the suspended set. The program counter and register state are saved at that texture instruction. Shader instance 2 is now loaded into the core.
142
Shader Execution
[Diagram: instance 3 in the shader core; instances 1, 2, 4 suspended]
Now, shader instance 3 is loaded into the shader core, and executed until the
texture instruction.
143
Shader Execution
[Diagram: instance 4 in the shader core; instances 1, 2, 3 suspended]
Finally, the shader core gets to pixel instance #4; at this point we have three partially executed pixels waiting to be completed.
144
Shader Execution
[Diagram: instance 1 back in the shader core; instances 2-4 suspended]
Finally, the shader core reloads pixel #1, and the texture load data should now be ready, having been sent across the bus.
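This latency-hiding behavior can be illustrated with a toy scheduling simulation in Python (our sketch, with an assumed fetch latency; real GPUs schedule many pixels across many cores, in groups):

from collections import deque

TEX_LATENCY = 100  # assumed texture fetch latency, in cycles

def run(programs):
    # programs: one instruction list per pixel, e.g. ['math', 'tex', 'math'].
    ready = deque((pixel, 0) for pixel in range(len(programs)))   # (pixel, pc)
    waiting, cycle = [], 0                                        # (pixel, pc, wake)
    while ready or waiting:
        # Wake pixels whose texture data has arrived.
        ready.extend((p, pc) for p, pc, wake in waiting if cycle >= wake)
        waiting = [(p, pc, wake) for p, pc, wake in waiting if cycle < wake]
        if not ready:
            cycle += 1                        # every pixel is stalled on a fetch
            continue
        pixel, pc = ready.popleft()
        # Run math instructions until a texture fetch or the end of the program.
        while pc < len(programs[pixel]) and programs[pixel][pc] == 'math':
            pc, cycle = pc + 1, cycle + 1
        if pc < len(programs[pixel]):         # hit 'tex': suspend this pixel
            waiting.append((pixel, pc + 1, cycle + TEX_LATENCY))
    return cycle

# Four pixels running the math-tex-math program illustrated above.
print(run([['math'] * 8 + ['tex'] + ['math'] * 8] * 4), "cycles")

With more pixels in flight, the fetch latency is increasingly overlapped with useful math work.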
145
Textures and Data
• Artists like to make very high resolution
textures filled with parameters for each texel
• Bad as GPU costs are, texture costs are
worse
• A character model might have four
2048x2048 textures - 64 MB of RAM!
• Could only have 6 characters at this rate!
Storage constraints are often more important in console development than in PC
development, since there is no guarantee of a hard drive, RAM is limited, and seek
time on a DVD is slow. Additionally, high texture use causes bandwidth loads which
can be a performance bottleneck.
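As a sketch of the arithmetic (uncompressed 32-bit texels; mipmapping and texture compression would change these numbers considerably):

def texture_bytes(size, bytes_per_texel=4, mip_chain=False):
    base = size * size * bytes_per_texel
    # A full mip chain adds roughly one third on top of the base level.
    return base * 4 // 3 if mip_chain else base

per_character = 4 * texture_bytes(2048)   # four 2048x2048 textures
print(per_character / 2**20, "MB")        # 64.0 MB, as on the slide
print((512 * 2**20) // per_character)     # 8 such budgets alone exhaust 512 MB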
146
Typical Texture Memory Budgets
• Last generation, typical character ~1-4 MB
• This generation action game: 10-20 MB (<
20 characters)
• For MMORPG, still about 1-4 MB because of
the large number of unique characters
• For strategy games per-unit budget < 1 MB!
• Terrain – 20-40 MB, often just a window of a
giant texture
These figures are composite numbers, but represent fairly typical scenarios found in real-world situations.
147
Game Development
• Game Platforms (Dan)
• Computation and Storage Constraints (Dan)
• Production Considerations (Naty)
• The Game Rendering Environment (Naty)
Now we shall discuss the various production considerations which affect reflectance
rendering in games.
148
Production Considerations
• The Game Art Pipeline
• Ease of Creation
First we will discuss the production pipeline for creating game art, in particular as it
pertains to reflection models. Then we shall discuss issues relating to reflection
models which affect the ease of art creation.
149
Modern Game Art Pipeline
• Initial Modeling
• Detail Modeling
• Material Modeling
• Lighting
• Animation
This is a somewhat simplified and idealized version of the modern game
development pipeline. In reality, production does not proceed in an orderly fashion
from one stage to the next – there is frequent backtracking and iteration. Also, not
all stages are present for all types of data – the lighting stage is typically performed
for rigid scenery and animation is typically performed for deforming characters, so
both are almost never performed on the same model. We will discuss different types
of game models later in the course.
150
Initial Modeling
• Performed in a general-purpose package
– Maya, MAX, Softimage, Lightwave, Silo
• Modeling geometry to be used in-game
• Creating a surface parameterization for
textures (UV mapping)
The software packages in which the initial modeling is performed are traditionally
the general-purpose workhorses of game art development, although many of their
functions are being taken over by a new class of specialized detail modeling
packages. The in-game geometry (as opposed to detail geometry which is only
used to create textures) is edited here. This geometry needs to be amenable to
level-of-detail (LOD), animation, surface parameterization and efficient in-game
processing and rendering – these all impose constraints on how the geometry can
be modeled.
Creating a surface parameterization is an important part of the artist's task. This
parameterization is used for the various textures which will be attached to the
model. Automatic tools for generating these parameterizations are rarely sufficient.
In the past, these parameterizations included large amounts of repetition to save
texture memory, now the growing emphasis on normal maps (which can rarely be
repeated over a model) means that most parameterizations are unique over the
model (or at least over half a model in the common case of bilaterally symmetrical
models). This may also differ between scenery and character models.
151
Initial Modeling
Here we see an example of initial modeling of a game character’s head in Maya.
The in-game geometry can be seen, as well as the UV parameterization. Note that
the parameterization is highly continuous. This is more important than low distortion,
and is the reason why parameterizations are so commonly generated by hand.
Discontinuities in the UV parameterization will require duplicating vertices along the
discontinuity, and may also cause undesirable rendering artifacts in some cases if
they are too prevalent throughout the model.
As mentioned before, the construction of the in-game model and its
parameterization must obey multiple constraints and requires much skill and
experience to perform well.
152
Modern Game Art Pipeline
• Initial Modeling
• Detail Modeling
• Material Modeling
• Lighting
• Animation
The next step is detail modeling. This has become more important in recent years.
153
Detail Modeling
• In the past this included only 2D texture
painting, usually performed in Photoshop
• Now a new class of specialized detail
modeling packages has arisen
– ZBrush (the pioneer), Mudbox
– Enable powerful sculpting / painting of surface
geometry detail, and 3D painting of colors
• 2D texture painting apps still used
154
Detail Modeling
Here we see the same character, after he has been exported from Maya and
imported into ZBrush. ZBrush was the first detail modeling package and is still by far
the most popular.
155
Detail Modeling
The relatively low-resolution in-game model has its resolution dramatically
increased and smoothed, then fine geometric details are painted / sculpted into the
model. These details will not be present in the in-game geometry, instead they will
be “baked” into textures of various kinds.
156
Detail Modeling
The most common type of texture used to represent the added geometric detail is
the tangent-space normal map. This is a texture where the colors represent the
coordinates of surface normal vectors in the local frame of the in-game (original,
lower-resolution) mesh. This local frame is referred to as a tangent space. Detail
modeling applications have various controls as to how the normal map is created
and in which format it is saved. The generation of such a normal map can be seen
as a process of sampling the surface normals of the high-detail model onto the
surface parameterization of the low-detail model.
Here we see another reason why the surface texture parameterization created by
the artist in the previous step is important. This parameterization is not only used to
map textures such as the normal map onto the surface; it is also used to define the
local surface frame into which the surface normals are resampled.
The normal map extraction is not always performed in the detail modeling
application – sometimes the high-detail geometry is exported, and both it and the
low-detail geometry are imported into a specialized normal map generation
application, such as Melody from NVIDIA. General-purpose modeling packages such
as Maya also provide normal map generation functionality.
157
Detail Modeling
Here we see the resulting normal map texture which was generated from ZBrush. In
this common representation, the red, green and blue channels of the texture
represent the X, Y and Z coordinates respectively of the surface normal vector in
tangent space. The -1 to 1 range of the coordinates has been remapped to the 0 to
1 range of the texture channels. The common purple color we see here is RGB =
{0.5,0.5,1}, which represents a surface normal of {0,0,1} in tangent space, namely a
surface normal which is aligned with that of the underlying low-resolution surface.
Divergences from this color represent areas where the surface normal diverges
from that of the low-resolution surface.
Note that many other representations of the normal map are possible. For example,
recently it has become quite common to only store the X and Y components of the
normal, generating the Z component in the shader based on the fact that the normal
vector is known to lie on the upper hemisphere.
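A minimal Python sketch of both decodings (in practice this runs in the pixel shader; the function names are ours):

import math

def decode_normal(r, g, b):
    # Each channel stores a coordinate remapped from [-1, 1] to [0, 1].
    return (2.0 * r - 1.0, 2.0 * g - 1.0, 2.0 * b - 1.0)

def decode_normal_xy(r, g):
    # Two-channel variant: reconstruct Z from X and Y, relying on the
    # normal being unit length and lying on the upper hemisphere (z >= 0).
    x, y = 2.0 * r - 1.0, 2.0 * g - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

print(decode_normal(0.5, 0.5, 1.0))   # ~(0, 0, 1): the unperturbed normal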
158
Detail Modeling
Many other surface properties besides surface normals are stored in textures. The
most common (and until recently, the only) such texture is the diffuse color or
spectral albedo texture. We see here the color texture in an early state of work in Photoshop (still by far the most common 2D texture editing application).
More recently, many other surface properties are stored in textures. Any BRDF
parameter can be stored in a texture, although if the parameter affects the final
radiance in a highly non-linear manner, the hardware texture filtering will not
produce the correct result. There will be more discussion on this subject later in the
course.
A highly continuous parameterization will make it much easier to paint the texture,
which is yet another reason why this is a desirable property.
159
Detail Modeling
Here is the result of applying the work-in-progress texture we just saw to the model
in ZBrush. It is possible to work closely between the 2D texture painting application
and the 3D detail modeling application, changes in one will be quickly mirrored in
the other so the artist has immediate feedback of the results of their changes.
Detail modeling applications also allow painting surface colors directly onto the
surface of the model, which is starting to become a popular alternative to 2D
painting tools. In most cases, both options are used together since each type of tool
is convenient for performing certain operations.
160
Detail Modeling
Here we see the final painted texture in Photoshop,
161
Detail Modeling
And in ZBrush.
162
Modern Game Art Pipeline
• Initial Modeling
• Detail Modeling
• Material Modeling
• Lighting
• Animation
The next step is material modeling. This is an important step for this course, and
uses the results of the previous step.
163
Material Modeling
• This is usually performed back in the
general-purpose modeling application
• The various textures resulting from the detail
modeling stage are applied to the in-game
model, using its surface parameterization
• The shaders are selected and various
parameters set via experimentation
(“tweaking”)
This is where the artist will choose from the available shaders, so if there are
shaders supporting different reflectance models or BRDFs the artist will decide
which is most applicable to the object. The entire object does not have to use the
same shader, although there are significant performance advantages to maximizing
the amount of geometry which can be drawn with a single shader.
164
Material Modeling
Here we see the in-game model with the normal map generated from ZBrush and
the color map painted in Photoshop applied. The open dialog allows the artist to
select the shaders used, and once selected, to select which textures are used and
various other parameters which are not derived from textures. For example, in this
shader although the diffuse color is derived from a texture, the specular color is a
shader setting. However, this color is multiplied with a per-pixel scalar factor derived
from the Alpha channel of the color texture. The specular power is also a setting
and not derived from a texture. In general, deriving parameters from textures makes
the shader more expressive, enabling the artist to represent different kinds of
materials in a single shader (which as we have mentioned, has some advantages).
On the other hand, increasing the number of textures used consumes more storage,
and may also cause the shaders to execute considerably more slowly.
165
Modern Game Art Pipeline
• Initial Modeling
• Detail Modeling
• Material Modeling
• Lighting
• Animation
The next step is lighting. This refers to “pre-lighting”, which is the pre-computation of
lighting values.
166
Lighting
• Actually pre-lighting
– Pre-computation of the effects of static lights on
static geometry, usually with global illumination
• In the past, just irradiance data “baked” from
a GI renderer or custom tool
• Now more complex data often used
– A simpler example is ‘ambient occlusion’ data
This is only done for certain kinds of geometry and materials, and we will discuss it
further in a later section of the course. One noteworthy detail about lighting tools is
that they are sometimes used to generate “ambient occlusion” factors into vertices
or textures for use in later rendering.
167
Modern Game Art Pipeline
• Initial Modeling
• Detail Modeling
• Material Modeling
• Lighting
• Animation
Finally, we have animation. This is only relevant for certain kinds of models
(scenery is rarely animated). The two most common kinds of animation used in
games are bone deformation and blend shapes (also known as morph targets).
168
Animation
Here we see the final character being animated in Maya, using blend shapes or
morph targets. The animation is most commonly performed in the general-purpose
modeling application.
169
Background and Foreground
• Game scenes are commonly separated into:
• Background
– Mostly static
– Scenery (rooms, ground, trees, etc.)
• Foreground
– Objects which move and / or deform
– Characters, items, (sometimes) furniture, etc.
We have mentioned several times that different kinds of models are sometimes
handled differently in the art pipeline. These are the main types of game models.
Note that which category an object can belong to is somewhat dependent on the
specific game; in one case a chair may be static and a background object, in
another game the chair may be movable by the character and act as a foreground
object.
170
Background
• Background is sometimes further separated:
• Unique, seamlessly connected parts
– Terrain, walls / floors, etc.
• Instanced discrete objects
– Rocks, trees, (sometimes) furniture, etc.
Note that furniture may be foreground or instanced background, it will usually not be
unique background however.
171
Production Considerations
• The Game Art Pipeline
• Ease of Creation
Now we shall discuss issues relating to reflection models which affect the ease of
art creation.
172
BRDF Parameters
• Preferably, BRDF parameters should clearly
control physically meaningful values
– Directional-hemispherical reflectance of surface
reflection (“specular color”)
– Albedo of body reflection (“diffuse color”)
– Surface roughness
173
BRDF Parameters
• Parameters which map to physical quantities
enable artists to explore parameter space
– Changing parameters such as surface reflectance,
body reflectance and roughness independently
– Avoiding inadvertently creating materials which
reflect significantly more energy than they receive
– Mimicking the appearance of actual materials
when desired
174
BRDF Parameters
• In previous game generations, these
features were less important
– Visuals tended to be more stylized
– Lighting environments less complex
• limited to a 0 to 1 range of values
• Simplistic indirect / ambient light models
– Games now use more realistic lighting and scenes,
which requires more care in material creation
We will show specific BRDFs and show how to make their parameters more
physically meaningful in a later section.
175
Shift-Variance
• Also important for ease of use
• Very few game objects are composed of
purely homogeneous materials
• At least some of the BRDF parameters must
be easily derivable from textures
176
Shift-Variance
• It is also important to be able to combine the
BRDF easily with normal mapping
• Anisotropic BRDFs may require even more
control over the local surface frame per-pixel
– “Twist maps” which vary the tangent direction
– More on this later in the course
• Some BRDF rendering methods have
difficulty supporting shift-variance
Since shift-variance is so important for artist control, BRDF rendering methods
which preclude it will have limited applicability in games.
177
Game Development
• Game Platforms (Dan)
• Computation and Storage Constraints (Dan)
• Production Considerations (Naty)
• The Game Rendering Environment (Naty)
Finally in this section, we shall discuss the rendering environment within which
reflection models are used in games.
178
The Game Rendering Environment
• Lighting Environments
• Bump, Twist and Relief Mapping
First we shall discuss the different kinds of lighting environments which are used to
light reflection models in games. Next, we discuss issues relating to per-pixel detail
methods such as bump, twist and relief mapping as they affect the implementation
of reflection models.
179
Lighting Environments
• Game environments are lit by many different
sources of light
– Outdoor, daytime environments by sun and sky
– Indoor environments by a combination of artificial
lighting and sunlight
• Surfaces are lit by indirect as well as direct
lighting
180
Lighting Environments
• The shaders used in games do not support
arbitrary lighting
• Lighting is simplified in some way, usually
into several terms which are handled in
different ways by shaders
• The simplified lighting used by a shader is
called its lighting environment
This is a repeat of a previous slide. We bring it here again because this division is
very significant for how lighting is handled in games.
181
Background and Foreground
• Game scenes are commonly separated into:
• Background
– Mostly static
– Scenery (rooms, ground, trees, etc.)
• Foreground
– Objects which move and / or deform
– Characters, items, (sometimes) furniture, etc.
This is a repeat of a previous slide. We bring it here again because this division is
very significant for how lighting is handled in games. Lighting is often handled quite
differently on background and foreground geometry.
182
Prelighting
• Static and dynamic lights often separated
• Static lighting can be precomputed
– On static (background) geometry: lightmaps or
“pre-baked” vertex lighting
– On dynamic (foreground) geometry: light probes
– Can include global illumination effects
• Dynamic lights can be added later
– Due to the linearity of lighting
A common trend in game development is to move as much computation as possible
to “tools time” – pre-compute data using custom tools which is later used in the
game. This usually involves a computation vs. memory tradeoff which must be
carefully considered, but in many cases it is worth doing.
In many games, much of the lighting is assumed not to change (most games do not
feature changing sun and sky light with time of day, and do not often allow changing
interior lighting during gameplay). The effect of these static lights can be computed
ahead of time and stored in different ways.
183
Prelighting and Global Illumination
• Since prelighting is computed ahead of time,
global illumination is commonly used
• Similar to global illumination for rendering
– Difference is final data generated (surface lighting
information and light probes vs. screen pixels)
184
Prelighting on Background
• Usually (low-resolution) textures or vertex
data
• Combined with (higher-resolution, reused)
texture data
• Effective compression
– Memory limits preclude storing unique high-frequency lighting data over the entire scene
The low-resolution lighting information is usually stored over the scene in a unique
manner (no repetition or sharing, even between otherwise identical objects).
Memory limitations therefore constrain this information to be stored at a low spatial
frequency. For this reason it is very advantageous to combine it with higher-frequency data which can be repeated / reused between different parts of the scene
(such as albedo textures or normal maps).
185
Prelighting on Background
• In the past, irradiance values
– Combined with higher-frequency albedo data
– Diffuse (Lambertian) surfaces only
• More complex lighting is becoming common
– Pioneered by Half-Life 2 (Valve, 2004)
– Directional information allows incorporating
• High-frequency normals
• Other (not general) BRDFs
The use of simple prelighting (irradiance values only) dates back to around 1996
(Quake by id software). The use in games of prelighting incorporating directional
information was pioneered by Valve with the release of Half-Life 2 in 2004. There,
separate irradiance information is stored for each of three surface directions which
form an orthonormal basis. The actual surface normal is combined with this data in
the pixel shader to compute the final lighting. This lighting information is used for
diffuse lighting only in Half-Life 2, however extensions for certain types of specular
reflectance have been suggested. In general, arbitrary BRDFs cannot be rendered
with this type of lighting data.
Directional prelighting is currently an active area of research and development in the
game industry (and hopefully outside it; see “Normal Mapping for Precomputed
Radiance Transfer” by Peter-Pike Sloan, SI3D 2006).
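To make the idea concrete, here is a minimal Python sketch of lighting with three directional irradiance samples. The tangent-space basis vectors below are the ones usually attributed to Half-Life 2, and the squared, clamped dot-product weighting is one common choice; the exact weighting in any given engine may differ:

import math

BASIS = [(-1 / math.sqrt(6),  1 / math.sqrt(2), 1 / math.sqrt(3)),
         (-1 / math.sqrt(6), -1 / math.sqrt(2), 1 / math.sqrt(3)),
         ( math.sqrt(2 / 3),  0.0,              1 / math.sqrt(3))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, irradiance):
    # irradiance: one RGB tuple per basis direction; normal: tangent space.
    weights = [max(0.0, dot(normal, b)) ** 2 for b in BASIS]
    total = sum(weights) or 1.0
    return tuple(sum(w * rgb[c] for w, rgb in zip(weights, irradiance)) / total
                 for c in range(3))

# The unperturbed normal (0, 0, 1) weights all three samples equally.
print(shade((0.0, 0.0, 1.0), [(1, 0, 0), (0, 1, 0), (0, 0, 1)]))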
186
Prelighting: Light Probes
• Approximate light field
• Sampled at discrete
positions in scene
• Radiance or irradiance
as function of direction
– Spherical harmonics,
environment maps, etc.
• Foreground objects
interpolate, then apply
The foreground objects move throughout the scene, are not seamlessly connected
to other objects and are usually small in extent compared to the spatial variations in
the scene lighting. For these reasons, a common approximation is to treat the
prelighting as distant lighting and compute it at the center of the object. This is
usually achieved by interpolating between scattered (regular or irregular) spatial
samples called light probes. Each light probe stores a function over the sphere,
either of incident radiance (as a function of incident direction) or of irradiance (as a
function of surface normal direction). The position of the object center is used to
interpolate these to get an interpolated light probe to use for lighting the object.
Similarly to background prelighting data, this is usually applied to Lambertian
surfaces, with occasional extensions for some limited types of specular reflectance.
Commonly to handle highly specular reflections, high-resolution environment maps
are used, with very sparse spatial sampling (perhaps a single map per scene).
187
Dynamic Lights
• Lights that change in position, direction,
and/or color
– Flashlight, flickering torch on wall, etc.
• In games, almost always point or directional
lights
If a light can change in position or direction (such as a flashlight carried by a character), or perhaps just in its color and intensity (such as a flickering torch on a wall), then it cannot be handled via prelighting.
188
The Reflection Equation with Point /
Directional Lights
• Important case for real-time rendering
– Lighting from infinitely small / distant light sources
– Ambient / indirect lighting is computed separately
– Point lights characterized by intensity I and position
– Directional lights characterized by direction and I/d^2

L_e^{point}(\omega_e) = \sum_l \frac{I_l}{d_l^2} f_r(\omega_l, \omega_e) \cos\theta_l
This is a repeat of a previous slide. We bring it again here to discuss a common
issue with how point / directional light intensity values are commonly represented in
games. Note that arbitrary BRDFs are relatively easy to handle with this kind of
lighting, since the BRDF just has to be evaluated at a small set of discrete
directions.
189
Light Intensities
• Lambertian surface lit by single point light
– Radiometric version
L_e = \frac{I_l}{d_l^2} \frac{\rho}{\pi} \cos\theta_l
– “Game version”
L_e = \frac{i_l}{d_l^2} \rho \cos\theta_l , \quad i_l \equiv \frac{I_l}{\pi}
Before we continue our discussion of point and directional lights, there is an issue
which should be noted. The intensity values commonly used for point and
directional lights (both in game engines and in common DCC applications such as
MAX and Maya) are closely related, but not equal to the radiometric radiant intensity
quantity. The difference will become clear if we examine a Lambertian surface lit by
a single point light, both as a radiometric equation and in the form commonly used
in game engines. Here we see that the light “game intensity” value (here denoted by
il) differs from the light’s radiant intensity value (Il) by a factor of π.
This factor needs to be noted when adapting reflectance models from books or
papers for use in games. If the standard “game intensities” are used, then the BRDF
needs to be multiplied by a factor of π (this multiplication will tend to cancel out the
1/π normalization factors which are common in many BRDFs). Game developers
should also note the presence of this factor when using other kinds of lighting than
point lights, since it may need to be removed in such cases. Another option is for
game developers to use the radiometrically correct quantities at least during
computation (using them throughout the pipeline could be difficult since lighting
artists are used to the “game intensity” values).
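A minimal sketch of the equivalence (per color channel): the two forms of the Lambertian point-light equation above give identical results once i_l = I_l / pi.

import math

def radiometric(I_l, d, albedo, cos_theta):
    return (I_l / d ** 2) * (albedo / math.pi) * cos_theta

def game(i_l, d, albedo, cos_theta):
    return (i_l / d ** 2) * albedo * cos_theta

I_l = 7.0
print(radiometric(I_l, 2.0, 0.5, 0.8))
print(game(I_l / math.pi, 2.0, 0.5, 0.8))   # same value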
190
Point Light Distance Attenuation
• Another divergence from radiometric math
• Unlike the 1/d_l^2 factor, f_d usually clamps to 1.0
• Reasons:
– Aesthetic, artist control
– Practical (0 at finite d_l)
– Simulate a non-point light

L_e = \frac{I_l}{d_l^2} f_r(\omega_l, \omega_e) \cos\theta_l
L_e = i_l f_d(d_l)\, \pi f_r(\omega_l, \omega_e) \cos\theta_l
L_e = i_l(d_l)\, \pi f_r(\omega_l, \omega_e) \cos\theta_l
Another way in which the math typically used in games for lighting with point lights
differs from the radiometric math for point lights is in the distance attenuation
function. A straight inverse square function is rarely used, usually some other
‘distance attenuation’ function is used. These functions are typically clamped to
1 close to the light (unlike the inverse square factor, which can get arbitrarily
large values close to the light) and decrease with increasing distance, usually
reaching 0 at some finite distance from the light (again unlike the inverse square
factor).
Such ‘tweaked’ distance attenuation factors have a long history and are not unique
to games, being in the earliest OpenGL lighting models and being used in DCC
applications as well as other types of production rendering (such as movie
rendering). Note that games will often use these distance attenuation functions
in the prelighting stage as well as for dynamic runtime lights.
There are three main reasons for using a function other than a straight inverse
square falloff:
1) More control by the artist to achieve a certain ‘look’ for the scene
2) Performance – if the light’s effect drops to 0 at a finite distance, then it can be
left out of lighting computations for objects beyond that distance
3) Actual light sources are not zero-size points and have some spatial extent.
These lights will have more gradual falloffs than point lights, and this visual
effect can be simulated by tweaking the falloff function.
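As an illustration, here is one plausible attenuation function of this kind (a sketch, not any particular engine's curve): clamped to 1 inside an inner radius, falling smoothly to exactly 0 at a cutoff radius.

def attenuation(d, r_clamp, r_cutoff):
    if d <= r_clamp:
        return 1.0            # clamped near the light
    if d >= r_cutoff:
        return 0.0            # light can be culled beyond this distance
    # Smoothstep from 1 down to 0 between the two radii.
    t = (d - r_clamp) / (r_cutoff - r_clamp)
    return 1.0 - t * t * (3.0 - 2.0 * t)

for d in (0.5, 2.0, 5.0, 10.0):
    print(d, attenuation(d, 1.0, 8.0))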
191
Spotlights and Projected Textures
• Real light sources usually have some
angular variation in emitted radiance
• This is often simulated by applying an
angular falloff term, modulating the light by a
projected texture, etc.
Usually this will result in the light only emitting radiance within a cone or frustum. A light which illuminates in all directions (also called an omni light) can also project a texture using a cube map; this can be useful for simulating some types of light fixtures.
192
Other Point / Directional Lights
• Many other variations are possible
– Vary the direction to the light to simulate various shapes of
light such as linear lights
– Combine various distance and angular falloff terms
• In the end, these all boil down to different ways of
computing il and ωl for the rendering equation:
L_e = i_l(d_l)\, \pi f_r(\omega_l, \omega_e) \cos\theta_l
For an example of a complex point light variation used in film rendering, see
“Lighting Controls for Computer Cinematography” (Ronen Barzel, Journal of
Graphics Tools 1997).
The computation of the direction and intensity of the light are typically performed in
the pixel shader, and the result is combined with the BRDF to compute the final
lighting result (more details about this later in the course).
193
Shadows
• Dynamic point and directional lights cast
sharp shadows
• To incorporate shadows, modulate il by an
occlusion factor
194
Shadows
• Games usually use some variation on depth
maps for shadows from dynamic lights
– Stencil shadow volumes also (more rarely) used
• To simulate non-point lights, soft shadows
are often desirable
– Usually achieved by multiple depth map lookups
195
Shadows
• Other approaches used for light occlusion
(though not for point / directional lights)
include ambient occlusion terms and
precomputed radiance transfer (PRT)
• Some of these will be discussed later in the
course
196
Dynamic Texture Lighting
• We saw environment maps used as
prelighting
• They can also be rendered dynamically
• Other types of texture-based lighting
possible
– Planar reflection maps
Environment maps are textures parameterized by directions on the sphere and
used for distant lighting. A planar reflection map is a texture which represents the scene reflected about a plane; this would be used to render reflections on a planar surface. Other types of texture-based lighting are also used.
197
Combining Prelighting and Dynamic
Lights
• Sometimes as easy as just adding them
• But there are often complications
– Dynamic objects casting shadows from lights
which were included in prelighting
– Efficiency issues (divergent representations for
prelighting and runtime lights may cause
duplication of computation)
– Different interactions with BRDFs
An example of the first issue is an outdoor scene with sunlight. We would want to
include the sunlight in the prelighting on static objects to take account of indirect
illumination resulting from sunlight, but we also want dynamic objects to cast
shadows from the sun on static objects, which is difficult if the sunlight has already
been accounted for in the prelighting. There are various solutions to this problem,
and similar problems, which are outside the scope of this course.
An example of divergent representations for prelighting and runtime light is a
foreground object lit both by a spherical harmonic light probe and several point
lights. If the object is Lambertian, the spherical harmonic coefficients for the lights
can often be combined with those from the light probe interpolation, thus resulting in
improved efficiency.
An example of prelighting and dynamic lights interacting differently with BRDFs:
again a foreground object lit by a combination of spherical harmonic light probes
and point lights. If the object is not Lambertian, the reflectance models used for the
prelighting and runtime lights will differ.
198
Light Architecture
• Usually games will have a general structure
for the light environments supported
– This will affect shaders, lighting tools, and data
structures and code used for managing lights
199
Light Architecture
• Light architectures vary in the number of
lights supported in a single rendering pass
– Pass-per-light (including a separate pass for
indirect / prelighting)
– All lights in one pass: feasible on newer hardware,
but potential performance and combinatorial issues
– Approximate to N lights: like previous, but excess
lights are merged / approximated
Pass-per-light architectures have the advantage of simplicity. Shaders only have to
be written to support a single light, and only one variation needs to be written for
each type of light. The object has to go through the entire graphics pipeline for each
light affecting it, which can introduce significant overhead. On the other hand,
efficiencies can be gained by only rendering the part of the screen covered by the
current light. Certain shadow approaches (such as stencil shadows) require a pass-per-light approach.
All-lights-in-one pass architectures only process the object once, regardless of
lighting complexity. There are two approaches to writing shaders for such an
architecture: a single shader using dynamic branching (which is slower), or
compiling a shader variation for each possible light set (which may cause a
combinatorial explosion). Limits on the number of textures may restrict the number
of lights casting shadows.
A common approach is to assume a standard light set (for example, an ambient
term and three point lights) and in cases where the actual set of lights affecting an
object is more complex than the standard set, merge the excess lights together, or
discard them, or otherwise approximate the actual light set with a standard light set.
This architecture can yield the highest (and more importantly, the most consistent)
performance, but introduces complexity and possible temporal artifacts (‘pops’) in
the approximation process.
200
The Game Rendering Environment
• Lighting Environments
• Bump, Twist and Relief Mapping
Now we shall discuss issues relating to per-pixel detail methods such as bump,
twist and relief mapping as they affect the implementation of reflection models.
201
The BRDF
• Incident, exitant directions defined in the surface's local frame (4D function)
[Diagram: directions ω_i and ω_e in the local frame defined by N and T, with angles θ_i, φ_i, θ_e, φ_e]
This is a repeat of a previous slide. This will be relevant for the discussion of bump,
twist and relief mapping.
202
Local Frames for BRDF Rendering
• Isotropic shaders only require a surface
normal direction or vector
• In the past, this has usually been stored on
the vertices
• More recently, bump or normal mapping has
become more common
– Storing normal information in textures
– Local frame now varies per-pixel
203
Local frames for BRDF Rendering
• Also, some anisotropic surface shaders
(such as hair) utilize twist maps
– Rotate the local frame per-pixel
• Although normal and twist maps create a
per-pixel local frame, in practice
computations can still occur in other spaces
such as world space or the ‘unperturbed’
tangent frame
204
Twist Maps
• Just as normal maps can have various
representations, so can twist maps
– Angle of rotation of tangent about normal
– Cosine and sine of angle
– Etc.
• Unlike normal maps (which are becoming
ubiquitous) twist maps are relatively rare
205
Relief Mapping
• Recent (and popular) development
• Texture coordinates are perturbed to account
for relief or parallax effects
• Various flavors exist
• Affects the reflectance computation, since all
texture data must use perturbed texture
coordinates
206
Physically-Based Reflectance
for Games
10:15 - 10:30: Break
207
Physically-Based Reflectance
for Games
10:30 - 11:15: Reflectance
Rendering with Point Lights
Naty Hoffman & Dan Baker
208
Reflectance Rendering with Point
Lights
• Analytical BRDFs (Naty & Dan)
• Other Types of BRDFs (Dan)
• Anti-Aliasing and Level-of-Detail (Dan)
In this section, we shall discuss the practical considerations for rendering
reflectance models with point lights. First we will cover the most relevant analytical
BRDF models, discussing implementation and production considerations for
rendering each with point lights. Next we will discuss non-analytical (hand-painted
and measured) BRDF models and their implementation and production
considerations for rendering with point lights. Finally, we shall discuss issues
relating to anti-aliasing and level-of-detail for reflection models, as rendered with
point lights.
209
Analytical BRDFs
• Common Models (Naty)
• Implementation and Performance (Dan)
• Production Issues (Dan)
First, we shall discuss common reflection models used for rendering with point
lights. Then, we discuss implementation and performance issues for these models.
Finally, we discuss specific game production issues relating to these models.
210
Analytical BRDFs with Point Lights
• Typically directly evaluated in shaders
• Requirements for BRDFs are different than
for Monte Carlo ray-tracers
– Which require the BRDF to be strictly energy
conserving and amenable to importance sampling
• Important factors:
– Ease of evaluation, meaningful parameters
Global illumination rendering algorithms have specific requirements from BRDFs.
Real-time rendering, in particular direct evaluation of BRDFs with point lights, has
quite different requirements. We have discussed in previous sections the
importance of having BRDF parameters map to physically meaningful values.
211
Reflectance Models
• Phong
• Blinn-Phong
• Cook-Torrance
• Banks
• Ward
• Ashikhmin-Shirley
• Lafortune
• Oren-Nayar
These are the reflection models we will be discussing. This collage is composed of
pieces of images that we shall see (with attributions) in the following section of the
talk.
212
Phong
• Commonly encountered form (sans ambient term)
• What is the physical meaning and correct range of
parameters ks, kd and n?
L_e = i_l(d_l) \left(k_d \cos\theta_l + k_s (\cos\alpha_r)^n\right)
We write this as a point light lighting equation using ‘game intensity’, which is the
most common usage. The significance and physical interpretation of the parameters
is unclear, which makes them hard to set properly.
213
Phong
• Written as a BRDF
– We can see that the first term is essentially identical to a
Lambert BRDF
f_r(\omega_i, \omega_e) = \frac{k_d}{\pi} + \frac{k_s (\cos\alpha_r)^n}{\pi \cos\theta_i}
Comparing the previous Phong lighting equation to the “game point lights” version of
the rendering equation we saw previously, we can rewrite the Phong reflection
model in BRDF form. In this form, the meaning of kd is clear:
214
Phong
• Written as a BRDF
– We can see that the first term is essentially identical to a
Lambert BRDF, so kd is diffuse albedo
– Could ks be a reflectance quantity also?
f_r(\omega_i, \omega_e) = \frac{\rho_d}{\pi} + \frac{k_s (\cos\alpha_r)^n}{\pi \cos\theta_i}
Comparing the previous Phong lighting equation to the “game point lights” version of
the rendering equation we saw previously, we can rewrite the Phong reflection
model in BRDF form. In this form, the meaning of kd is clear: it is the diffuse albedo.
This diffuse component of reflectance would mostly be due to the body reflections,
although it could also represent multiply-scattered surface reflections, or some
combination of the two. If we can derive a reflectance quantity from ks, it will make
the BRDF more useful for artist control.
215
Phong
• Let’s look at the specular term in isolation
f_{rs}(\omega_i, \omega_e) = \frac{k_s (\cos\alpha_r)^n}{\pi \cos\theta_i}
• Its directional-hemispherical reflectance is
R_s(\omega_i) = \int_{\Omega_H} \frac{k_s (\cos\alpha_r)^n}{\pi \cos\theta_i} \cos\theta_e \, d\omega_e
216
Phong
• Or:
R_s(\omega_i) = \frac{k_s}{\pi \cos\theta_i} \int_{\Omega_H} (\cos\alpha_r)^n \cos\theta_e \, d\omega_e
• Goes to infinity at grazing angles – not good!
• Surfaces using this BRDF will be too bright
and appear to ‘glow’ at grazing angles
For real-time rendering, we don’t need strict conservation of energy, so we don’t
need the directional-hemispherical reflectance to be strictly equal or less than 1.0.
However, a BRDF where the reflectance goes to infinity is going to be noticeably
visually wrong.
217
Phong
• First step in making the Phong specular term
physically plausible, with the goal of more
realistic visuals – remove the cosine factor:
f_{rs}(\omega_i, \omega_e) = \frac{k_s}{\pi} (\cos\alpha_r)^n
• Or add a cosine factor to the original form
L_e = i_l(d_l) \left(k_d \cos\theta_l + k_s (\cos\alpha_r)^n \cos\theta_l\right)
The only reason we had an inverse cosine factor in the BRDF in the first place was
the lack of a cosine factor in the specular term in the original form of the reflectance
model. Once we add that, the inverse cosine factor goes away from the BRDF. This
has several good effects.
218
Phong
• More physically plausible
R_s(\omega_i) = \frac{k_s}{\pi} \int_{\Omega_H} (\cos\alpha_r)^n \cos\theta_e \, d\omega_e
\max(R_s(\omega_i)) = R_s(0) = \frac{2 k_s}{n+2}
The new specular term is more physically plausible. For one thing, it is now
reciprocal. This is not an important thing in itself for real-time rendering (it is for
some ray-tracing algorithms) but it gives us a better feeling about the BRDF being a
good model of physical reality. More importantly, it is now possible to normalize the
BRDF and get one which is energy-conserving, and more importantly still, has
meaningful reflectance parameters.
The directional-hemispherical reflectance of the specular term is now greatest at
normal incidence (when the angle of incidence is 0). If we assume that this is the
Fresnel reflectance of the substance which the material is composed of, then this
makes sense. The decrease of reflectance towards glancing angles could be seen
as a simulation of shadowing and masking effects. Taking this value as being equal
to the Fresnel reflectance at normal incidence RF(0), we get a new form of the
Phong specular term:
219
Phong
f_{rs}(\omega_i, \omega_e) = \frac{n+2}{2\pi} R_F(0) (\cos\alpha_r)^n
• Instead of the mysterious ks, we now have RF(0)
• The (n+2)/2π term is a normalization factor
• n can be seen as controlling surface roughness
• Now artist can control specular reflectance and
roughness separately
Instead of a mysterious ks term, we now have RF(0), the Fresnel reflectance of the
surface boundary at normal incidence, which is physically meaningful. We know
roughly what values of this make sense for various kinds of materials. This form of
the specular term has several advantages: if we expose RF(0) and n as shader
parameters, the artist can control the specular reflectance and roughness directly,
changing one without changing the other. For high values of n, the normalization
term will cause the BRDF to reach large values towards the center of the highlight,
which is correct (the original form of the Phong term never exceeded a value of 1).
220
Phong
• Complete normalized BRDF
f_r(\omega_i, \omega_e) = \frac{\rho_d}{\pi} + \frac{n+2}{2\pi} R_F(0) (\cos\alpha_r)^n
• If ρd + RF(0) ≤ 1 at all frequencies, the BRDF will
conserve energy
Energy conservation can be a good guideline to setting BRDF parameters. In this
case, if the artist wishes to ensure realism, they would not set the reflectance
parameters to add up to more than 1 so the surface would not appear to “glow”
(emit more energy than it receives).
221
Phong
[Images: highlights for n = 25, 50, 75, 100 (left to right); Original Form (top row), Normalized Form (bottom row)]
In this example, a bright red plastic surface is modeled. The normalized form of
Phong (bottom row) uses RF(0) = 0.05 (approximately correct for plastic). The
maximum value of ρd which conserves energy is 0.95, which is set for the red
channel, and 0 for green and blue (maximally bright and red). For comparison, the ks
and kd values used by the original form of Phong (top row) were calculated to yield
the same result as the normalized Phong material for the leftmost images (lowest
cosine power).
n increases from left to right, to model increasing smoothness (same values used in
top and bottom rows). The other BRDF parameters are held constant.
It can be seen that in the bottom images the highlight grows much brighter as it gets
narrower, which is the correct behavior – the exitant light is concentrated in a
narrower cone, so it is brighter. In the top images the highlight remains equally
bright as it gets narrower, so there is a loss of energy and the surface seems to be
getting less reflective.
To model an otherwise unchanging material with increasing smoothness using the
original Phong model, ks would have to be increased with n. This is inconvenient for
an artist using a shader based on this model. Note that for high values of n, ks would
have to be increased past 1, making it difficult to use a simple color-selection UI.
Using the normalized Phong model, it is easy to independently control specular and
diffuse reflectance using color-selection UI, and the smoothness power using a
simple scalar slider for cosine power.
222
BRDF Normalization
• Required for global Illumination convergence
– This is usually not a concern for games
• Important for realism
– For parameters to correctly control the reflectance
• With normalized terms, all parameters can
be varied throughout their range
– With the correct visual result
What we did to the Phong BRDF is an example of BRDF normalization. This is
important for other reasons than GI convergence (even when games use GI-based
pre-computation, this is almost always for Lambertian surfaces). We have seen the
advantages of using a BRDF with normalized terms.
223
BRDF Normalization
• For each un-normalized BRDF term (diffuse,
specular)
– Compute max(R(ωi))
– Divide term by max(R(ωi))
– Multiply by physically meaningful reflectance
parameter: ρd, RF(0), RF (αh)
A value slightly smaller than the maximum directional-hemispherical reflectance can
be used, since we are not striving for exact energy conservation. In many cases,
there is no analytical expression for max(R(ωi)), so an approximation is used.
224
Phong
f_r(\omega_i, \omega_e) = \frac{\rho_d}{\pi} + \frac{n+2}{2\pi} R_F(0) (\cos\alpha_r)^n
• It is energy-conserving, reciprocal, has meaningful
reflectance parameters
• However, the exact meaning of n is not clear
• It is a reflection-vector BRDF
We know that n is related to surface roughness, but how exactly? Also, remember the disadvantages of reflection-vector BRDFs as opposed to microfacet (half-angle-vector) BRDFs.
225
Microfacet vs. Reflection BRDFs
[Diagram: incident direction ω_i, its reflection vector about N, and the angle α_r to the exitant direction]
This is a repeat of a previous slide, as a reminder of the difference between
reflection and microfacet BRDFs: reflection BRDFs have round highlights,
226
Microfacet vs. Reflection BRDFs
while microfacet BRDFs have highlights which get increasingly narrower at glancing
angles.
227
Blinn-Phong
f_r(\omega_i, \omega_e) = \frac{\rho_d}{\pi} + \frac{n+4}{8\pi} R_F(0) (\cos\theta_h)^n
• A microfacet (half-angle) version of Phong
• Now the meaning of n is clear – it is a parameter of
a normal distribution function (NDF)
This BRDF simply replaces the angle between the reflection and exitant vectors
with the angle between the half-angle vector and the surface normal. This is a
physically meaningful quantity – it is the evaluation of the normal distribution
function at the half-angle direction. Now we can see that the cosine raised to a
power is simply a normal distribution function, and the relationship between n and
the surface roughness is clear. Note that we had to re-compute the normalization
factor for this BRDF.
228
Phong vs. Blinn-Phong
• Significance of difference depends on circumstance
– Significant for floors, walls, etc.
– Less for highly curved surfaces
[Images: Phong vs. Blinn-Phong highlight comparison]
For light hitting a flat surface at a highly oblique angle, the difference between
Phong and Blinn-Phong is quite striking. For a light hitting a sphere at a moderate
angle, the difference is far less noticeable. Note that to get a similar highlight, n for
Blinn-Phong needs to be about four times the value of n for Phong.
229
Blinn-Phong
• As a microfacet BRDF, Blinn-Phong can be easily
extended with full Fresnel reflectance
f_r(\omega_i, \omega_e) = \frac{\rho_d}{\pi} + \frac{n+4}{8\pi} R_F(\alpha_h) (\cos\theta_h)^n
• This partially models the surface reflectance / body
reflectance tradeoff at glancing angles
– Surface reflectance increases with angle of incidence, but
body reflectance does not decrease
– BRDF no longer energy conserving
This modification makes the BRDF no longer energy conserving. It is still visually plausible, perhaps more so after this modification, so it might be a good idea for game use (depending on circumstance).
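A sketch of the extended BRDF in Python; the slide leaves R_F(α_h) unspecified, so Schlick's approximation is assumed here as a common, cheap choice:

import math

def schlick(R_F0, cos_alpha_h):
    # Schlick's approximation to Fresnel reflectance at the half-angle.
    return R_F0 + (1.0 - R_F0) * (1.0 - cos_alpha_h) ** 5

def blinn_phong_fresnel(cos_theta_h, cos_alpha_h, rho_d, R_F0, n):
    # rho_d / pi + (n + 4) / (8 pi) * R_F(alpha_h) * cos(theta_h)^n
    diffuse = rho_d / math.pi
    specular = ((n + 4.0) / (8.0 * math.pi)
                * schlick(R_F0, cos_alpha_h)
                * max(0.0, cos_theta_h) ** n)
    return diffuse + specular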
230
Blinn-Phong Conclusions
• Normalized version well-suited for games
– Meaningful parameters
– Reasonably expressive
– Low computation and storage requirements
– Easy to implement
• Probably all that is needed for a high
percentage of materials
– But doesn’t model some phenomena
The basic structure of a normalized Blinn-Phong shader is also easy to extend by
replacing the normal distribution function, adding Fresnel reflectance variation, etc.
However, it is not easy to extend it to model shadowing and masking without
effectively turning it into a different kind of reflectance model.
231
Cook-Torrance
f_r(\omega_i, \omega_e) = (1-s) \frac{\rho_d}{\pi} + s \, \frac{p(\omega_h) \, G(\omega_i, \omega_e) \, R_F(\alpha_h)}{\pi \cos\theta_i \cos\theta_e}
• Full microfacet BRDF w. shadowing / masking term
• Reciprocal
• Partial surface/body reflectance tradeoff
– Surface reflectance increases with angle of incidence, but
body reflectance does not decrease
– Not energy-conserving
• Not well-normalized
– significant energy lost via G(ωi, ωe)
We can see this is in the same form as the microfacet BRDF we saw earlier. s is a
factor between 0 and 1 that controls the relative intensity of the specular and diffuse
reflection.
Quite a bit of energy is lost via the geometry factor which is not compensated for –
the actual reflectance is quite a bit lower than the parameters would indicate. On the
other hand, the fact that the specular reflectance increases to 1 while the diffuse
reflectance remains unchanged may add energy at glancing angles. Overall, this
BRDF is fairly plausible under most circumstances.
Any NDF could be used, but the paper by Cook and Torrance recommended using
the Beckmann NDF.
232
Cook-Torrance
• Geometry term
– Models shadowing and masking effects
– Derived based on an isotropic surface formed of long V-shaped grooves

G(\omega_i, \omega_e) = \min\left\{1, \ \frac{2\cos\theta_h \cos\theta_e}{\cos\alpha_h}, \ \frac{2\cos\theta_h \cos\theta_i}{\cos\alpha_h}\right\}
This is the first analytical shadowing / masking term to be used in the computer
graphics literature, and even now few others have been published. Note that it is
based on a somewhat self-contradictory surface model.
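The geometry term translates directly into code (a minimal sketch; every input is a cosine, i.e. a dot product of unit vectors, and cos α_h is assumed positive):

def geometry_term(cos_theta_h, cos_theta_i, cos_theta_e, cos_alpha_h):
    # min(1, 2 cos(theta_h) cos(theta_e) / cos(alpha_h),
    #        2 cos(theta_h) cos(theta_i) / cos(alpha_h))
    return min(1.0,
               2.0 * cos_theta_h * cos_theta_e / cos_alpha_h,
               2.0 * cos_theta_h * cos_theta_i / cos_alpha_h)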
233
Cook-Torrance
Plastic
Metal
IMAGES BY R. COOK AND K. TORRANCE
234
Cook-Torrance Conclusions
• Models more phenomena than Blinn-Phong
– Shadowing, masking
– However, these cause an implausible energy loss
since interreflections still not modeled
• Parameter effect on reflectance less intuitive
• Computation cost higher
235
Banks
• Anisotropic version of Blinn-Phong
– Surface model composed of threads
– Uses projection of light vector onto plane perpendicular to tangent instead of surface normal
[Diagram: light direction ω_i, tangent T, surface normal N, and projected normal N′]
This is the first anisotropic shader we will discuss. Aside from the new normal
vector, this is almost exactly the same as Blinn-Phong. One more difference is that
when the light is not in the hemisphere about the original surface normal, the
lighting is set to 0 (“self-shadowing” term). This is the only effect the surface normal
has on this model.
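A minimal sketch of the projected-normal construction and the self-shadowing term just described (function names are ours):

import math

def banks_normal(light, tangent):
    # Project the light vector onto the plane perpendicular to the tangent
    # T and renormalize; Banks uses this N' in place of the surface normal.
    d = sum(l * t for l, t in zip(light, tangent))
    p = [l - d * t for l, t in zip(light, tangent)]
    length = math.sqrt(sum(a * a for a in p))
    return [a / length for a in p] if length > 0.0 else list(light)

def self_shadow(light, normal):
    # Lighting is zeroed when the light leaves the hemisphere about the
    # original surface normal.
    return 1.0 if sum(l * n for l, n in zip(light, normal)) > 0.0 else 0.0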
236
Banks Conclusions
• Easy to use in games and cheap to evaluate
• A good fit for modeling certain kinds of
anisotropic materials composed of grooves,
fibers, etc.
237
Ward - Isotropic
f_r(\omega_i, \omega_e) = \frac{\rho_d}{\pi} + R_F(0) \frac{1}{4\pi m^2} \frac{e^{-(\tan\theta_h / m)^2}}{\cos\theta_i \cos\theta_e}

• Well normalized, meaningful parameters, reciprocal, energy-conserving
• No modeling of
– Shadowing and masking effects
– Body / surface reflectance tradeoff at glancing angles
This is another “pseudo-microfacet model”. It uses a modified Beckmann NDF, but
no explicit Fresnel or masking / shadowing. It is well normalized however. Note that
this version includes corrections that were discovered since the Ward model was
first published in 1992. It is physically plausible, not too expensive to compute and
has meaningful parameters. Note that the two physical effects that it does not model
tend to counteract each other.
238
Ward – Anisotropic
f_r(\omega_i, \omega_e) = \frac{\rho_d}{\pi} + R_F(0) \frac{1}{4\pi m_u m_v} \frac{e^{-\tan^2\theta_h \left(\frac{\cos^2\varphi_h}{m_u^2} + \frac{\sin^2\varphi_h}{m_v^2}\right)}}{\cos\theta_i \cos\theta_e}
The only change is in the NDF, which is now anisotropic.
239
Ward - Anisotropic
f_r(\omega_i, \omega_e) = \frac{\rho_d}{\pi} + R_F(0) \frac{1}{4\pi m_u m_v} \frac{e^{-\frac{(\cos\alpha_u / m_u)^2 + (\cos\alpha_v / m_v)^2}{\cos^2\theta_h}}}{\cos\theta_i \cos\theta_e}
This alternative formulation of the math is more implementation-friendly.
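A sketch of this formulation in Python, following the form on the slide (cos α_u and cos α_v are the half vector dotted with the tangent and binormal; cos θ_h with the normal):

import math

def ward_aniso(cos_theta_i, cos_theta_e, cos_theta_h,
               cos_alpha_u, cos_alpha_v, rho_d, R_F0, m_u, m_v):
    if cos_theta_i <= 0.0 or cos_theta_e <= 0.0:
        return rho_d / math.pi
    exponent = -((cos_alpha_u / m_u) ** 2 + (cos_alpha_v / m_v) ** 2) \
               / (cos_theta_h ** 2)
    specular = (R_F0 / (4.0 * math.pi * m_u * m_v)
                * math.exp(exponent) / (cos_theta_i * cos_theta_e))
    return rho_d / math.pi + specular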
240
Ward - Anisotropic
IMAGE BY G. WARD
Although it is missing some of the features of the physically-based BRDFs, the
images still look quite nice.
241
Ward Conclusions
• Reciprocity and precise normalization less
important for game rendering
• Has meaningful parameters
• Expensive to evaluate
• Anisotropic form controls highlight shape well
• Disadvantages seem to outweigh
advantages for game rendering
242
Ashikhmin-Shirley
• Designed to handle tradeoff between surface
and body reflectance at glancing angles
– While being reciprocal, energy conserving and well
normalized
• No attempt to model shadowing and masking
• Sum of diffuse and specular term
– One of few BRDFs which don’t use Lambert
Another ‘pseudo-microfacet’ model. Most BRDFs focus on the specular term and
simply use a Lambertian term for diffuse reflectance. Since the Ashikhmin-Shirley
BRDF was designed to trade off reflectance between the diffuse and specular
terms, a different diffuse term is used.
243
Ashikhmin-Shirley: Diffuse Term
f_{rd}(\omega_i, \omega_e) = \frac{28\rho_d}{23\pi} (1 - R_F(0)) \left(1 - \left(1 - \frac{\cos\theta_i}{2}\right)^5\right) \left(1 - \left(1 - \frac{\cos\theta_e}{2}\right)^5\right)

• Trades off reflectance with the specular term at glancing angles
• Without losing reciprocity or energy conservation
244
Ashikhmin-Shirley: Specular Term
frs(ωi, ωe) = (√((nu+1)(nv+1)) / 8π) · (cosθh)^(nu·cos²φh + nv·sin²φh) / (cosαh · max(cosθi, cosθe)) · RF(αh)
Note max(cosθi,cosθe) term in denominator.
245
Ashikhmin-Shirley: Specular Term
frs(ωi, ωe) = (√((nu+1)(nv+1)) / 8π) · (cosθh)^((nu·cos²αu + nv·cos²αv) / (1 − cos²θh)) / (cosαh · max(cosθi, cosθe)) · RF(αh)

More implementation-friendly formulation, using mostly cosines (dot products).
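A minimal HLSL sketch of this formulation (our code; Schlick's approximation is assumed for RF(αh), and L, V, N, T, B are assumed normalized and in a common space):

float3 AshShirleySpec(float3 L, float3 V, float3 N, float3 T, float3 B,
                      float3 Rf0, float nu, float nv)
{
    const float PI = 3.14159265;
    float3 H = normalize(L + V);
    float HdotN = saturate(dot(H, N));
    float HdotT = dot(H, T);
    float HdotB = dot(H, B);
    float HdotV = dot(H, V);                  // cos(alpha_h)
    float NdotL = saturate(dot(N, L));
    float NdotV = saturate(dot(N, V));
    // Exponent: (nu*cos^2(alpha_u) + nv*cos^2(alpha_v)) / (1 - cos^2(theta_h))
    float e = (nu * HdotT * HdotT + nv * HdotB * HdotB)
            / max(1 - HdotN * HdotN, 1e-4);
    float norm = sqrt((nu + 1) * (nv + 1)) / (8 * PI);
    float3 F = Rf0 + (1 - Rf0) * pow(1 - HdotV, 5);   // Schlick Fresnel
    return norm * pow(HdotN, e) * F
         / max(HdotV * max(NdotL, NdotV), 1e-4);
}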
246
Ashikhmin-Shirley
IMAGE BY M. ASHIKHMIN AND P. SHIRLEY
Here we can see how the Ashikhmin-Shirley BRDF allows for controllability of the
anisotropy and tightness of the highlight.
247
Ashikhmin-Shirley
IMAGE BY M. ASHIKHMIN AND P. SHIRLEY
Here we can see how it models the tradeoff between surface and body reflectance
at glancing angles.
248
Ashikhmin-Shirley Conclusions
• Expressive, but simple parameters
– Anisotropy, body-surface reflectance tradeoff
• Expensive to evaluate
• Model can be simplified for games, and
made cheaper to evaluate
– Reciprocity and strict energy conservation less
important for game rendering
This BRDF is not currently used much for games, but it is definitely worth taking a
closer look at. It has several good properties and it can be modified to be cheaper to
evaluate.
249
Lafortune
• Generalization of the normalized Phong (not Blinn-Phong) specular term
• Normalized Phong in vector form:
frs(ωi, ωe) = ((n+2)/2π) · RF(0) · dot(V, reflect(L))^n
250
Lafortune
• In the local frame, the reflection operator just multiplies the x and y components by −1:
frs(ωi, ωe) = ((n+2)/2π) · RF(0) · ((−1)·Vx·Lx + (−1)·Vy·Ly + (1)·Vz·Lz)^n
This requires that the light and view direction are in the local frame of the surface
(which the BRDF definition assumes anyway).
251
Lafortune
• Generalize to one spectral (RGB) constant Kj and four scalar constants (Cxj, Cyj, Czj, nj) per term, add several terms (lobes):
frs(ωi, ωe) = Σj Kj · (Cxj·Vx·Lx + Cyj·Vy·Ly + Czj·Vz·Lz)^nj
By taking the (−1, −1, 1) that the components were multiplied by as general coefficients, as well as the power term, and generalizing the Fresnel reflectance into a general spectral coefficient, we get a generalized Phong lobe. The Lafortune BRDF is formed by adding several such lobes.
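A minimal HLSL sketch of evaluating such a sum of lobes in the local frame (our code; NUM_LOBES, K and C are assumed material constants, with C[j].xyz holding Cxj, Cyj, Czj and C[j].w the exponent nj):

#define NUM_LOBES 3
float3 K[NUM_LOBES];   // spectral (RGB) lobe scales
float4 C[NUM_LOBES];   // xyz = Cx, Cy, Cz; w = exponent n

float3 Lafortune(float3 L, float3 V)
{
    float3 sum = 0;
    for (int j = 0; j < NUM_LOBES; j++)
    {
        float d = C[j].x * V.x * L.x + C[j].y * V.y * L.y + C[j].z * V.z * L.z;
        sum += K[j] * pow(max(d, 0), C[j].w);
    }
    return sum;
}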
252
Lafortune
• Phong: K = RF(0), −Cx = −Cy = Cz = ((n+2)/2π)^(1/n)
• Lambertian: K = ρd/π, n = 0
• Non-Lambertian diffuse: K = ρd, Cx = Cy = 0, Cz = (n+2)/2π
• Off-specular reflection: Cz < −Cx = −Cy
• Retro-reflection: Cx > 0, Cy > 0, Cz > 0
• Anisotropy: Cx ≠ Cy
Besides the standard Phong cosine lobe, this can handle many other cases. The non-Lambertian diffuse term given here decreases with increasing angle of incidence, which helps to model the surface / body reflectance trade-off at glancing angles.
253
Lafortune
• Reciprocal, energy-conserving
• Very expressive, not too expensive to
compute (unless a lot of lobes are used)
• But has very unintuitive parameters
• Best used to fit measured data or some other
model with more intuitive parameters
254
Lafortune
IMAGE BY D. MCALLISTER, A. LASTRA AND W. HEIDRICH
255
Lafortune Conclusions
• Very expressive and general
– Highly controllable scattering direction
• Data heavy and compute light
– Which is not a good tradeoff given the direction
that GPUs are developing in
• Parameters are hard to set by hand – best to fit them automatically
256
Oren-Nayar
• Microfacet model with Lambertian rather
than mirror microfacets
• σ is the standard deviation of the microfacet
normal angle with the macroscopic normal
fr(ωi, ωe) = (ρd/π) · (CA + CB·cosφ · sin(max(θi, θe)) · tan(min(θi, θe)))

CA = 1 − σ² / (2(σ² + 0.33))
CB = 0.45·σ² / (σ² + 0.09)
σ is a roughness parameter. It is the standard deviation of the angle between the
microfacet normals and the macroscopic normal. 0 is smooth (Lambertian), and
increasing the number increases the roughness. Remember that φ is the relative
azimuth angle between the directions of incidence and exitance.
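A minimal HLSL sketch of the model (our code; straightforward but trig-heavy, and it assumes normalized vectors; a production version would precompute CA and CB from σ in tools):

float3 OrenNayar(float3 L, float3 V, float3 N, float3 Kd, float sigma)
{
    const float PI = 3.14159265;
    float s2 = sigma * sigma;
    float CA = 1 - s2 / (2 * (s2 + 0.33));
    float CB = 0.45 * s2 / (s2 + 0.09);
    float NdotL = dot(N, L);
    float NdotV = dot(N, V);
    float thetaI = acos(NdotL);
    float thetaE = acos(NdotV);
    // cos(phi): relative azimuth of L and V projected onto the surface plane
    // (degenerate when L or V is parallel to N; acceptable for a sketch)
    float3 Lproj = normalize(L - N * NdotL);
    float3 Vproj = normalize(V - N * NdotV);
    float cosPhi = max(dot(Lproj, Vproj), 0);
    float geo = sin(max(thetaI, thetaE)) * tan(min(thetaI, thetaE));
    return (Kd / PI) * (CA + CB * cosPhi * geo) * saturate(NdotL);  // fr * cos(theta_l)
}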
257
Oren-Nayar
• Normalized, reciprocal, physically based
IMAGE BY M. OREN AND S. NAYAR
258
Oren-Nayar Conclusions
• Correctly models an important class of
materials
• Evaluation cost reasonable, light storage
requirements
• Definitely worth looking at for game use
259
IMAGE BY A. NGAN, F. DURAND & W. MATUSIK
We close this discussion of common analytical BRDFs with two charts from recent research
comparing the accuracy to which various models fit measured BRDFs. The first chart is sorted on
the error of the Blinn-Phong BRDF,
260
IMAGE BY A. NGAN, F. DURAND & W. MATUSIK
And the second is sorted on the error of the Ashikhmin-Shirley BRDF. From these charts we can
see that the Ashikhmin-Shirley and Cook-Torrance BRDFs are a good fit to measured BRDFs,
while Blinn-Phong, Ward and Lafortune are less accurate but still reasonably close.
261
Analytical BRDFs
• Common Models (Naty)
• Implementation and Performance (Dan)
• Production Issues (Dan)
Now we shall discuss implementation and performance issues for these models.
262
Implementation
• Real-time graphics is based on rasterization
• Vertices are transformed to make triangles
(Vertex Shading)
• Triangles are transformed into raster
triangles (optional Geometry Shading)
• Pixels are transformed and placed into
render target (Pixel Shading)
263
Implementing a BRDF
• This means writing GPU shader code
• Can write a program for Vertex, Geometry,
or Pixel shader which will evaluate our BRDF
• Most often, evaluation is done at pixel level
Evaluation of BRDFs at vertex level was common several years ago, but with
current GPU hardware pixel-level evaluation is almost universal.
264
Data needed for the BRDF
• At each pixel rendered, need
– The BRDF parameters
• Reflectance parameters (ρd, RF(0))
• Surface smoothness parameters (n, m, σ)
– For each light: direction ωl, intensity il(dl) at pixel
– Direction from pixel to camera ωe
– If ωi, ωe not in local frame then axes (N, T, B)
• Isotropic BRDFs only need N
Various data are needed to evaluate an analytical BRDF. Note that the light
intensity is a spectral quantity, which in practical terms means that R, G and B
channels are specified. The BRDF model is evaluated in the local frame, so either
the incident and exitant directions need to be given in the local frame, or else the
axes of the local frame need to be given (in whatever space the direction vectors
are specified) so they can be transformed into local space. A full frame is not
required unless the BRDF is anisotropic – isotropic BRDFs need only N.
265
Light Intensity
• The computation of il(dl) at each pixel may
be very simple or quite complicated
depending on the light type
– For a directional light, simply a constant
– For an überlight, requires significant computation
– Shadows add even more complexity
• We ignore these complexities for now
– Outside the scope of BRDF computation
We will ignore the complexities of computing the attenuated light intensity to a large
extent in the following discussion (we will touch upon them in the discussion of
shader combinatorial issues later).
266
Spaces / Coordinate Systems
• There are various spaces, or coordinate
systems, in which directions are specified
– Camera space, screen space
– World space
– Object or pose space (rigid or skinned objects)
– Unperturbed tangent space (vertex local frame)
– Perturbed tangent space (pixel local frame)
The directions of incidence and exitance are ultimately needed in the local frame or
coordinate system. There are various possible spaces or coordinate systems used
in game renderers.
Vertex positions are transformed to camera space and screen space to be
rendered, but reflectance computations are rarely performed there (deferred
rendering systems are an exception).
World space is where the object transforms, the camera and light locations are
initially specified, reflectance computations are rarely performed there for point
lights (although environment map lighting is often performed in world space since
that is where the environment map is defined).
Object / pose space is where the object vertex positions are initially specified.
The normals stored in a normal map (normal texture) are most commonly specified
in a per-vertex tangent space. This space is closely related to the texture mapping
coordinates used for the normal map. For this reason most game models have N, T
and B vectors stored at the vertices (specified in object space). Often only two are
stored and the third derived from them.
Finally, the normal map itself (and a twist map if such is used) perturb the local
frame (in which the BRDF is defined) at each pixel, defining yet another space.
Although the BRDF is defined in the perturbed tangent space, it may be computed
in any space if all vectors have been transformed to that space.
267
Spaces for Analytical BRDF
Computation with Point Lights
• One common scheme:
– ωi, ωe computed at vertices, transformed into
vertex tangent space, interpolated over triangle
– Per pixel, BRDF computed in unperturbed tangent
space with:
• Interpolated ωi, ωe
• N read from normal map
• Optionally T read from twist map
There are various possible schemes for computing analytical BRDFs with point
lights, in terms of the spaces in which the computations are performed and where
transformations between spaces occur. We will discuss two of the most common
schemes.
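A minimal vertex-shader sketch of the first scheme (our code; the VSIN/VSOUT fields, worldViewProj, and lightPosO/cameraPosO – the light and camera positions pre-transformed into object space on the CPU – are all assumptions):

VSOUT VertexShader(VSIN In)
{
    VSOUT Out;
    Out.pos = mul(In.pos, worldViewProj);            // clip-space position
    // Rows of this matrix are the object-space tangent-frame axes, so
    // mul(M, v) yields (T.v, B.v, N.v): the vector in tangent space
    float3x3 toTangent = float3x3(In.T, In.B, In.N);
    Out.L = mul(toTangent, lightPosO - In.pos.xyz);  // interpolated, unnormalized
    Out.V = mul(toTangent, cameraPosO - In.pos.xyz);
    Out.texCoords = In.texCoords;
    return Out;
}

The pixel shader then renormalizes the interpolated L and V, as in the examples later in this section.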
268
Spaces for Analytical BRDF
Computation with Point Lights
• Another common scheme:
– World space position, tangent space axes
interpolated from vertices over triangle
– Per pixel, BRDF computed in world space with:
• ωi, ωe computed per pixel
• N read from normal map and transformed to world space
• Optionally T from twist map transformed to world space
Of these two schemes, the first uses less interpolated data if only one light
computed, but more if several lights are computed. The two schemes also represent
a tradeoff of pixel shader vs. vertex shader computations. There may also be a
quality difference between the two. The first scheme tends to show artifacts due to
low vertex tessellation.
The second scheme is theoretically more correct, but paradoxically this can lead to
it producing more artifacts in some cases, since models often simulate curved
surfaces via triangles, and the second scheme may reveal the “sham” due to its
greater accuracy. In the end, which of these schemes is preferable depends on the
specifics of the game environment and models, as well as the characteristics of the
platform used.
269
Interpolators
• Computing or storing a value on a vertex and
interpolating it over the triangle can often
save pixel processing
• However, interpolation is strictly linear, which
may cause problems with
– Directions
– Nonlinear BRDF parameters
• Higher vertex counts help
Interpolating directions is quite common. This causes normalized (length 1) vectors
to become non-normalized, but this is easily remedied in the pixel shader by renormalizing them. Second-order discontinuities may sometimes cause more subtle
artifacts. Values on which the resulting color depends in a highly non-linear fashion
may also introduce problems when linearly interpolated. If the scene is highly
tessellated, then these issues are much reduced, but this may not always be
possible due to performance limitations of the vertex processing.
270
Shift-Variant BRDFs
• Besides normal / tangent perturbation, BRDF
parameters may also vary over the surface
• ρd is most commonly stored in textures,
sometimes modulated by vertex values
• RF(0) usually stored in textures as a scalar
(gloss map) to save storage
• Both modulated by material constants
Real-world surfaces exhibit a lot of fine-grained variation. Normal and (less
commonly) twist mapping help game artists model this variation, but this is not
enough – usually the parameters of the BRDF need to be varied as well. This
variance is often achieved by storing these parameters in textures. They are
sometimes stored on vertices, or vertex and texture values can be combined to
reduce visual repetition from texture tiling. Combining these also with material
constants facilitates reusing the same textures over different objects. This can also
save storage, for example in the common case where a spectral (RGB) value for
specular color (RF(0)) is arrived at by multiplying a scalar value stored in a texture
(gloss map) with an RGB material constant.
271
Shift-Variant BRDFs
• Other BRDF parameters more rarely stored
in textures, due to storage costs
– Also, parameters on which the BRDF depends in a
highly non-linear fashion may cause problems with
hardware texture filtering
• But storing a smoothness parameter such as n in a texture can make a shader significantly more expressive, and helps anti-aliasing
We will discuss issues with non-linearity and anti-aliasing later in the course.
272
Translating BRDF Math to Shaders
• The three-component dot-product is the
workhorse of shader BRDF computation
– Hardware performs it quickly, can be used to
compute cosines between normalized vectors
• Vector normalization used to be expensive,
but is now becoming quite cheap
• pow, sin, tan, exp etc. more expensive
• Texture reads somewhat expensive
Texture reads are particularly notable for their high latency, so multiple dependent texture reads may have a particularly adverse effect on performance.
273
Translating BRDF Math to Shaders
• We use a different notation for shader code:
– L instead of ωi
– V instead of ωe
– H instead of ωh
– R instead of ωri
Since shader languages do not support Greek letters in variable names, we will use
a slightly different notation when writing shader code.
274
Translating BRDF Math to Shaders
• Most BRDF sub-expressions translate well:
– cos θl becomes dot(L,N)
– cos θe becomes dot(V,N)
– cos θh becomes dot(H,N)
– cos αu becomes dot(H,T)
– cos αv becomes dot(H,B)
– cos αr becomes dot(R,V)
– cos αh becomes dot(H,V) or dot(H,L)
Note that we use θl instead of θi here. This is because when evaluating BRDFs with
point lights, ωi is replaced with ωl.
As we saw when discussing common analytical BRDFs earlier, almost any BRDF
can be reduced to some combination of these sub-expressions.
Note that since most of these expressions are dot products with N, T or B, they can
be seen also as the Cartesian coordinates of some vector in the local frame.
Another thing to note is that in many cases, a vector will appear in dot products the
same number of times above and below a division line. In this case the vector does
not need to be normalized since its length will cancel out (this happens with H in the
Ward and Ashikhmin-Shirley BRDFs).
275
Implementing Blinn-Phong
• The Blinn-Phong BRDF:
fr(ωi, ωe) = ρd/π + ((n+4)/8π) · RF(0) · (cosθh)^n
• The game point light reflection equation:
Le = il(dl) · π · fr(ωl, ωe) · cosθl
We will use Blinn-Phong as an example for implementing an analytical BRDF for
rendering with point lights.
First we convert the normalized form of the Blinn-Phong BRDF into the form of the
game point light reflection equation.
276
Implementing Blinn-Phong
Le = il(dl) · (ρd + ((n+4)/8) · RF(0) · (cosθh)^n) · cosθl

• Simplified:
Le = il(dl) · (ρd + n·KS·(cosθh)^n) · cosθl
This can be further simplified in two ways: the division by 8 can be factored into the
tools, thus providing the shader with a specular reflectance constant KS which has
already been divided by 8. Also, since this shader will almost always be used with
quite large values of n, n+4 can be approximated as n.
277
A GPU Function
float3 BlinnPhong(float3 L, float3 V, float3 N,
                  float3 Kd, float3 Ks, float n)
{
    // Assumes L, V, N are already normalized; Ks has the divide
    // by 8 factored in by the tools
    float3 H = normalize(V + L);
    float3 C = (Kd + n * Ks * pow(dot(H, N), n)) * dot(N, L);
    return C;
}

Le = il(dl) · (ρd + n·KS·(cosθh)^n) · cosθl
This is an example of a reflectance function implemented in HLSL or Cg (C-like
languages used to write GPU shaders). Here we see that although the BRDF is
defined in the perturbed tangent space (local frame), the computation occurs in
some other space since N is used (if the vectors were in the perturbed tangent
space, we would just need to take the z coordinate of H rather than performing a
dot-product between it and N). This is typical of BRDF computation in games.
We also can see that this function is written under the assumption that the vectors
passed to it have already been normalized so they are of length 1. Otherwise they
would need to be normalized in the function before performing the dot products.
This function does not perform a multiplication with the light intensity, so it must be
called from some other function which will perform this multiplication, probably after
computing the light intensity value.
278
Pixel Shader (Shift-Invariant BRDF)
float4 PixelShader(VSOUT In) : COLOR
{
    float3 V = normalize(In.V);
    float3 L = normalize(In.L);
    float3 N = tex2D(normalMap, In.texCoords).rgb;
    N = normalize(N * 2 - 1);   // remap [0,1] to [-1,1], then renormalize
    float4 Out;
    Out.rgb = il * BlinnPhong(L, V, N, Kd, Ks, n);
    Out.a = 1;
    return Gamma(saturate(Out));
}
Here we see a simple example of a pixel shader calling the function we have just seen. The input to the pixel shader is the output from the vertex shader; it is passed in as a function argument and is a varying parameter, which means that it is not the same over the material. In this case the vertex shader output is passed in through a structure which contains interpolated view and light directions (which need to be renormalized after interpolation) and texture coordinates (the texture sampler is a global variable).
The normal is read from this texture, and remapped from the 0 to 1 range of the
texture to the -1 to 1 range required for normal vectors. It is then renormalized
(needed since texture filtering will alter the length). Finally the normal is passed in to
the BRDF function together with the view and light directions. The result is
multiplied by the light intensity (which is also a global variable). The resulting color is
clamped to the 0-1 range for display, converted to gamma space by being passed
to another function, and finally output (the output is marked with a binding semantic
of COLOR which indicates that it is written to the output color in the render target.).
In this shader, Kd, Ks and n are accessed as global variables which implies that
they are material constants (per-vertex or per-pixel varying values need to be read
from interpolators or textures). This indicates that it implements a shift-invariant
BRDF. This shader also uses a very simple lighting model (single light, no ambient)
which implies either a simple lighting model in general, or that it is intended to be
used in a pass-per-light lighting setup.
279
Gamma Space
• Gamma is a nonlinear space for radiance
– Historically derived from response of CRT displays
– Roughly perceptually uniform
– Useful for storing display radiance values (in 0 to 1
range) in a limited number of bits
– Render targets usually gamma, 8 bits / channel
– Equivalent linear precision would require 11+ bits
– Transfer function roughly: xLinear = xGamma^2.2
Gamma is the result of a happy coincidence: the nonlinear response of CRT
displays is very close to the inverse of the nonlinear response of the human eye to
radiance values. This yields a convenient, perceptually uniform space for storing
display radiance values which can be sent directly to the CRT. Even now that CRT
displays are no longer in common use, the perceptual uniformity of the space
makes it useful. There are various variants of gamma space, but this transfer
function has been standardized in Recommendation ITU-R BT.709 (the function
shown is in fact slightly simplified from the full Rec. 709 transfer function).
Recent hardware has the ability to support high-dynamic-range (HDR) render
targets. These are usually in linear space, not gamma space so if rendering to them
gamma conversion is not needed.
280
Gamma Space
• Textures use image editing software
– So historically have been stored in gamma space
– Gamma space useful for textures containing non-radiance values such as ρd and RF(0)
• For direct display during editing
• For perceptually uniform low-bit-rate (e.g. 8 bpc) storage
• Most GPUs support conversion from gamma
to linear on texture read
Although textures are not strictly display images and do not contain display radiance
values, they have historically been edited and stored in gamma space, and this is
still useful for certain types of textures. Since most newer hardware can usually be
set up to perform conversion of texture values from gamma to linear on read, this
does not often need to be performed in the shader.
281
Gamma Conversion
float4 Gamma(float4 linear)
{
    return sqrt(linear);
}
• Square-root is cheaper than a power function on
most GPUs
• Not exactly the transfer function, but close enough
Some hardware also has functionality to automatically convert from linear to gamma
on writing to the render target – in this case gamma conversion on output does not
need to be performed in the pixel shader either. In our example, gamma conversion
is performed in the pixel shader.
282
Effect File
float3 Kd <string SasUiControl="ColorPicker">;
…
sampler normalMap;
…
struct VSIN
…
struct VSOUT
…
VSOUT VertexShader(VSIN In)
…
float4 PixelShader(VSOUT In) : COLOR
…
The effect file is at a higher level of organization than a single shader, and contains
several shaders, structure definitions, and definitions of global variables with
additional annotations which define things such as what user interface is exposed to
the artist for setting the value of the variable. Space is too short to show a complete
effect file, so we show some excerpts. The effect file also contains other entities
which are outside the scope of this course.
283
Pixel Shader (Shift-Variant BRDF)
float4 PixelShader(VSOUT In) : COLOR
{
    …
    float4 spec = tex2D(specMap, In.texCoords);
    float n = spec.w;
    float3 Ks = spec.rgb;
    float3 Kd = tex2D(diffMap, In.texCoords).rgb;
    float4 Out;
    Out.rgb = il * BlinnPhong(L, V, N, Kd, Ks, n);
    Out.a = 1;
    return Gamma(saturate(Out));
}
Due to lack of space we skip the beginning of the shader, which is the same as the
shift-invariant one. Here we see that various BRDF properties which used to be
read from global variables, such as Kd, Ks and n, are now read from textures. In
this example all the BRDF parameters are read from textures. It is also common
that some BRDF parameters are read from textures and others are read from global
variables (material constants), or even from vertex values (varying vertex shader
outputs).
284
Anisotropic BRDF with Normal Map
• Normal map perturbs N; T and B are orthogonalized to the new N, so they also vary per-pixel
• Straightforward approach: transform V and L
into per-pixel (perturbed) tangent space
– 3 dot products + 3 more per light
• Computation in unperturbed tangent space
may be cheaper in some cases
– Depends on BRDF and number of lights
It may be tempting to ignore the perturbation in T and B resulting from that of N, but
that yields visually poor results. Examining the exact BRDF math and looking at the
number of lights will indicate which approach is cheaper.
285
Anisotropic BRDF with Normal Map
float4 PixelShader(VSOUT In) : COLOR
{
    …
    float3 T = normalize(In.T);
    T = normalize(T - dot(N, T) * N);            // Gram-Schmidt against N
    float3x3 inv = float3x3(T, cross(N, T), N);  // B derived on the fly
    float3 Lp = mul(L, inv);
    float3 Vp = mul(V, inv);
    float4 Out;
    Out.rgb = il * AnisoBRDF(Lp, Vp, …);
    Out.a = 1;
    return Gamma(saturate(Out));
}
Here we show the straightforward approach for clarity. Gram-Schmidt orthogonalization is used to create an orthonormal frame after N is perturbed. Since B is created on the fly, it doesn't need to be passed in from the vertex shader. inv is the inverse of the transform from perturbed to unperturbed tangent space; since the frame is orthonormal, the inverse is simply the transpose (note that we use pre-multiplication). As in the previous shaders, L and V are interpolated vectors in the unperturbed tangent space; Lp and Vp are in the perturbed tangent space.
286
Anisotropic BRDF with Normal and
Twist Maps
• If the twist map is stored as a 3-vector, then
we no longer need to orthogonalize
– Just read T out of the texture
• However, storage may be a problem
• Naively stored, color + normal + twist will be
12 bytes per texel
• We will discuss texture compression later
12 bytes per texel is quite a lot if we remember the tension between the need for
high-resolution textures and the limited storage budget. Usually, hardware texture
compression is used, which introduces several new issues.
287
Performance
• Different reflectance models, different costs
– Computation
– Parameter storage
– Strong hardware trend: computation getting
cheaper, storage (relatively) more expensive
• And different benefits
– Ease of content creation
– Accuracy, expressiveness
288
Implementation Tradeoffs for
Analytical BRDFs with Point Lights
• Direct Evaluation
– Evaluate with ALU instructions in the GPU
• Texture Evaluation
– Factor into textures, use as lookup table
• A combination of the 2
– Factor some sub-expressions into textures
– Evaluate others with ALU
There are various options for implementing analytical BRDFs with point lights. We
need to evaluate an expression, and this can always be done using ALU
instructions. However, in some cases it may be preferable to factor all or part of the
expression into textures, pre-computing tables of values into them and looking them
up later.
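As a small example of the texture-evaluation option, here is a sketch of Blinn-Phong with the power function baked into a lookup texture (our code; powLookup is a hypothetical 1D texture storing n·x^n for x in [0,1], valid for a fixed, shift-invariant n):

float3 BlinnPhongFactored(float3 L, float3 V, float3 N, float3 Kd, float3 Ks)
{
    float3 H = normalize(V + L);
    // One texture read replaces the expensive pow()
    float spec = tex2D(powLookup, float2(saturate(dot(H, N)), 0.5)).r;
    return (Kd + Ks * spec) * saturate(dot(N, L));
}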
289
BRDF Costs, Shift-Invariant
Model                         Texture costs   ALU costs
Blinn-Phong Direct            0               7
Blinn-Phong Factored          1               2
Banks Direct                  0               12
Banks Factored                1               5
Ashikhmin/Shirley             0               40
Ashikhmin/Shirley Factored    4               10
Lafortune Direct              0               10 + 5·n
Cook-Torrance                 0               35
290
BRDF Costs, Shift-Variant
Model                         Texture costs   ALU costs
Blinn-Phong Direct            1               15
Blinn-Phong Factored          2               10
Banks Direct                  1               25
Banks Factored                2               18
Ashikhmin/Shirley             2               50 (60)*
Ashikhmin/Shirley Factored    6               30
Lafortune Direct              2               30 + 5·lobes
Cook-Torrance                 1               40

*data cost is similar, so Ashikhmin/Shirley looks attractive, and also Blinn-Phong.
291
Analytical BRDFs
• Common Models (Naty)
• Implementation and Performance (Dan)
• Production Issues (Dan)
Now, we shall discuss game production issues relating to these reflection models.
292
Production Issues
• Performance table looks promising, but
things are never this simple…
• Each evaluation handles a single point light
• But, game shaders process multiple lights (of
different types), shadow maps, other content
• Shaders get long fast! The cost of the BRDF must be multiplied by the number of lights to be evaluated.
293
Shader Combinatorics
• Writing a shader for evaluating a BRDF with
a particular light set is straightforward
• Writing a shader which can deal with
different numbers and types of lights is hard
• In Renderman, lighting and material are
distinct shaders – we have no such luxury!
• Solutions: Ubershader, Shader Matrix
Shader combinatorial issues introduce major production problems. Many games
have build processes for the shader system which can take hours to compile. A BRDF GPU function should ideally be written in such a way as to facilitate this.
294
Ubershader
• Build one monolithic shader with all options
• Problem: size and complexity of the shader
• Flow control may be an issue, but static flow
control is getting cheaper
• However, register load is based on the worst-case path, so the number of pixels in flight is reduced to the worst case
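A minimal sketch of what such static flow control can look like (our code; numLights, useShadows, il and the SampleShadow helper are all assumptions, and the uniform loop bound and branch resolve the same way for every pixel):

float3 ShadeAllLights(VSOUT In, float3 V, float3 N,
                      float3 Kd, float3 Ks, float n)
{
    float3 color = 0;
    for (int i = 0; i < numLights; i++)    // uniform (static) loop bound
    {
        float3 L = normalize(In.L[i]);
        float3 c = il[i] * BlinnPhong(L, V, N, Kd, Ks, n);
        if (useShadows)                    // uniform (static) branch
            c *= SampleShadow(i, In.shadowUV[i]);
        color += c;
    }
    return color;
}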
295
Shader Matrix
• Create a Matrix which contains a set of
shaders
• For a given set of lights and light types,
compile offline a shader for that combination
• Works well – but creates thousands of
shaders, which use CPU and memory to
process
296
Hybrid Techniques
• Use Ubershader as fallback
• Shader matrix as a cache for commonly
used shaders
• Get the best of both worlds
297
Texture Management
• Expressive shaders use a lot of texture data
– Multiple texture reads – lowers performance
– Storage requirements high
• Can often pack data into one texture
– Example: store normals as only x and y, pack
power and gloss coefficients in other two channels
– Shader needs to manage unpacking of channels,
making authoring more challenging
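A minimal unpacking sketch for the example above (our code; packedMap and the channel assignment are assumptions): normal x and y in the red/green channels, gloss and specular power in blue/alpha.

float4 packed = tex2D(packedMap, In.texCoords);
float3 N;
N.xy = packed.rg * 2 - 1;
N.z = sqrt(saturate(1 - dot(N.xy, N.xy)));  // reconstruct z for a unit normal
float3 Ks = packed.b * materialKs;          // scalar gloss times material color
float n = packed.a * 255;                   // power, rescaled from [0,1]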
298
Texture Compression
• Texture formats vary considerably
– Bits / pixel, floating / fixed point, compression, etc.
• Hardware compression techniques available
– DXT: 4-8 bits per texel – good for color data, poor
for normals, no high dynamic range support
• New Normal Compression in upcoming
hardware! Not as efficient as DXT, but gives
better results
299
Texture Reparameterization
• Artists often create large textures with considerable
low-detail areas, high percentage of wasted space
• Tools exist to reparameterize texture coordinates
based on texture content, can reduce texture
resolution without reducing quality
• Difficult to use with BRDF parameter data
– Nonlinear error metrics must be used which are tailored to
the specific BRDF
300
Reflectance Rendering with Point
Lights
• Analytical BRDFs (Naty & Dan)
• Other Types of BRDFs (Dan)
• Anti-Aliasing and Level-of-Detail (Dan)
Now we shall discuss non-analytical (hand-painted and measured) BRDF models
and their implementation and production considerations for rendering with point
lights.
301
Other Types of BRDFs
• Hand-Painted BRDFs
• Measured BRDFs
First, we shall discuss various types of hand-painted BRDFs. Then, we discuss
measured BRDFs, with two classes of rendering techniques: factorization and
approximation. Finally, we discuss implementation and production issues for the
previously mentioned techniques.
302
Hand-Painted BRDFs
• We have discussed how sub-expressions of
BRDFs may be put into a texture
• This opens the possibility of having an artist
paint the texture to customize the material
appearance
• There are several possibilities which offer
expressiveness and result in plausible
BRDFs
303
NDF Maps
fr(ωi, ωe) = ρd/π + KN · RF(0) · p(ωh)
• The normal distribution function (NDF) is the most
visually significant component of microfacet-type
BRDFs, controlling the highlight shape and size
• Here we see a generalization of Blinn-Phong to a
general NDF (KN is a normalization factor)
• The NDF can be painted into a 2D texture
– KN computed automatically in tools
The value of KN is automatically calculated from the NDF texture in tools to ensure
that the BRDF is both reciprocal and energy-conserving. Being able to effectively
paint the highlight gives the artist full control over surface smoothness and
anisotropy.
304
NDF Maps
• Other BRDF models may use an NDF map,
the extended Phong is just an example
• NDFs are high dynamic range by nature
– The normalization helps increase range
• NDFs are not spectral quantities
– But a colored NDF map can simulate colored
facets with different normal distributions
– Like some fabrics
305
NDF Maps
• For isotropic BRDFs, the NDF map is one-dimensional: p(θh)
• Opens possibility of combining with other
kinds of hand-painted BRDF information
306
Fresnel Ramps
• As we have seen earlier, the change in color
and magnitude of reflectance with angle of
incidence is sometimes complex
• RF(αh) can be stored in a 1D texture or ramp
– Computed from material properties, or painted by
an artist to achieve interesting visual effects
– Can be combined with many BRDF models
307
NDF + Fresnel Texture
• A 2D texture mapped by θh on one axis and
αh on the other allows hand-painting both
arbitrary 1D NDFs and Fresnel functions.
• As well as interactions between the two
• Texture still needs to be processed for
normalization
308
Other Types of BRDFs
• Hand-Painted BRDFs
• Measured BRDFs
Now we will discuss techniques for rendering measured BRDFs.
309
Measured BRDFs - Factorization
• An isotropic BRDF is a 4D function which
can be measured
• McCool et al’s Homomorphic Factorization
converts this into a product of 2D functions,
each represented by a 2D texture table
f(ωi, ωe) ≈ p(ωi) · q(ωh) · p(ωe)
310
Implementation of Factorization
• In HLSL, factorization is simple:
float3 FactoredBRDF(float3 L, float3 V)
{
    float3 H = normalize(V + L);
    float3 C = texCUBE(textureP, V).rgb *
               texCUBE(textureQ, H).rgb *
               texCUBE(textureP, L).rgb;
    return C;
}
McCool's factorization technique is easy to implement, requiring just a few texture lookups. There are various ways to parameterize a texture by a direction, but the most straightforward one is to use the cube-map support in the GPU, which enables using the direction directly to look up the texture.
311
Measured BRDFs – Approximation
• [McAllister 2002]: measured BRDFs and fit
them to Lafortune model at each texel
– Shift-variant Lafortune model
• Could fit any model in this way with the right
number of lobes
• But… Data heavy, need texture per lobe
312
Why Measured BRDFs Aren’t Used
in Game Rendering
• Shift-invariant
– A measured BRDF may be rich and complex, but it
must be homogeneous over the surface
• Not amenable to artist control
– A serious drawback in game production
Artist productivity is very important, so BRDF models which are easy to explain,
with parameters that are intuitive to manipulate, are preferred for game
development.
313
Reflectance Rendering with Point
Lights
• Analytical BRDFs (Naty & Dan)
• Other Types of BRDFs (Dan)
• Anti-Aliasing and Level-of-Detail (Dan)
Finally in this section, we shall discuss issues relating to anti-aliasing and level-of-detail for reflection models, as rendered with point lights.
314
Anti-Aliasing and Level-of-Detail
The upper left image is what filtering does to the model, while the lower right image is what it should look like. Notice the hatching patterns caused by the linear filtering of BRDF data (in this case Blinn-Phong).
315
Anti-Aliasing and Level of Detail
• BRDF models are designed for an infinitely
small point
• But pixels cover area, sometimes quite large
• Traditionally, problem mitigated by
performing filtering on input data
• Filtering input data for a BRDF isn’t generally
meaningful
316
Sampling woes
• Problem 1: Sample points aren’t stable –
small changes in the reading of a sample
point can result in drastic rendering
differences
• Problem 2: MIP/linear filtering changes the BRDF. A BRDF composed of a large number of smaller BRDFs is a lot more complex than the average of the input values
317
Pixel Level Evaluation
Pixel shader will be evaluated at each one of these points
318
Pixel Level Evaluation, Shift
Shifting the triangle causes the sample points to change – this will result in
different values being passed into the pixel shaders. Because BRDFs are
non-linear, small changes in the input can cause massive difference in the
output.
319
What Does a Sample Point Mean?
What happens when this pixel is evaluated?
320
Texture Filtering Review
Sample Color = A·B·t00 + A·(1−B)·t01 + (1−A)·B·t10 + (1−A)·(1−B)·t11

where A and B are the fractional texel offsets of the sample point and t00…t11 are the four neighboring texels. We don't point sample from our resources; we run through a texture filter to even out the samples.
321
Problem 1: Texture swimming
The color of this pixel changes drastically as the sample points moves
across the texture. By averaging over disparate normals, we end up with an
intermediate vector which aligns to the blue direction vector. When this
happens, the pixel may suddenly illuminate because of the tight lobe of this
BRDF. Thus, here is a situation where a pixel goes from black to white to
black all while the texture moves only 1 pixel. This kind of thing is referred to
as shimmering, because the pixels light on and off much like noise in an
analog TV signal.
322
Problem 2, lower resolutions?
Roughly a quarter as many pixels get executed at this resolution. This has a
significant impact on the overall appearance of the BRDF if the parameters
are not filtered appropriately.
323
A Common Hack: Level of Detail
[Figure: displacement amplitude decreasing with MIP level]
By lowering the amplitude of the displacement, we effectively decrease to a less complex model as the model moves further away. This will prevent aliasing, but isn't accurate.
This is a common trick in many games, which also fade away the BRDF as an object gets further
away.
324
Better Common Hack: Level of Detail
[Figure: MIP levels with specular power decreasing per level: 32, 24, 18, 12, 7]
Decreasing the power for each MIP level by about 30% greatly increases visual quality. This
number, however, is just an average, and varies considerably depending on the content.
325
Scale Independent Lighting
• Ideally, screen size of an object should not
affect its overall appearance
• As screen size decreases
– High frequency detail should disappear
– Global effects should stay the same
• This is the reasoning behind MIP mapping
• Avoid drastic changes in image as sample
points change
326
Scale Independent Lighting
A low resolution rendering of an object should look like a
scaled down version of the high resolution one
Both objects are the same globally, as the image on the right is just a reduced version which is then
rescaled. This property is important, artists like to create very large high resolution maps, which
look good on their high resolution monitors, but may look poor on the actual finished product.
327
MIP Mapping
A pixel on the screen is the sum of all the texels which contribute to it. The red
ellipse in the right image represents the region of the normal map that the pixels in
the red square on the left image on the rendered image map to.
328
MIP mapping for diffuse
• For a simple diffuse case, the lighting calculation can be approximately refactored
• The normal integration can also be substituted by a MIP-map-aware texture filter

Σr (L·Nr)·Ar·Wr ≈ (L · Σr Nr·Wr) · (Σr Ar·Wr) ≈ dot(L, tex2D(normalmap, t)) * tex2D(colormap, t)
Here, we note that for diffuse lighting, or a linear BRDF, the best course of action is to not
normalize the normal from the sampler. Because of the distributive property of dot products, this
calculation ends up being similar to the ‘correct’ method of integrating the sample points. This
technique, however, does not compensate for the clamping to 0 which would take place if each
texel value was rendered independently.
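A minimal sketch of this idea (our code; normalMap and colorMap are assumed samplers): for the linear diffuse term, the filtered normal is deliberately left unnormalized so that the hardware texture filter approximates the texel-level sum.

float3 DiffuseFiltered(float3 L, float2 uv)
{
    float3 Nf = tex2D(normalMap, uv).rgb * 2 - 1;  // filtered, possibly short
    float3 albedo = tex2D(colorMap, uv).rgb;
    return albedo * max(dot(L, Nf), 0);            // note: no normalize(Nf)
}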
329
Nonlinear BRDFs
Σr Wr·(Nr·H)^p ≠ (H · Σr Wr·Nr)^p

Σr Wr·(Nr·H)^p ≠ (H · tex2D(normalmap, t))^p
Blinn-Phong isn’t Linear
Looking at just the specular component of Blinn-Phong with a normal that varies
(the tex2D(r) part), we can see that we cannot use a linear filter. We can think of
this as shading on the texels rather than the pixels.
330
Solutions
• Mipmapping Normal Maps (Toksvig)
• Normal Distribution Mapping (Olano & North)
• Roughness Pyramids (Schilling)
• Multiresolution Reflectance Filtering (Tan et al.)
• Texture Space Lighting, BRDF Mipmap (Baker)
• SpecVar Maps (Conran)
331
Texture Space Lighting
• As in the Reyes algorithm, sampling is
decoupled from rendering
• Render the linear color from the BRDF (Le(ωe)) into the object's atlased texture
• Then create MIP chain
• Render object with this MIP chain
332
Texture Space Lighting
• Triangles rasterized using texture coordinate as
screen position
• Left image shows normal sampled at each point
• Right image shows the computed lighting
Here, we see a lit texture and the normal map which generated it. This texture is a mapping of the
Stanford bunny. As the model moves through the world, the texture on the right will change as it is
illuminated by the lighting environment.
333
Texture Space Lighting
• Easy to Implement
• Can be used with any BRDF – Just drop it in
and it will work
• Performance problems result - each object is
rendered at a locked, often high resolution
• Invisible pixels will be rendered
• Still need a low-resolution substitute
334
Correctly Mipmapping BRDF
Parameters
The images in the middle column were rendered using a BRDF-approximating MIP-map. The more we zoom out, the larger the difference between the correct and incorrect approaches. The Stanford bunny is much too shiny when highly minified, because the specular power is not adjusted appropriately.
335
Creating a BRDF MIP Chain
• Advanced Real-time Reflectance I3D2005
• Idea is to create a lower resolution MIP map
which best represents the top level
• Preprocess, using BFGS to find best results
• Similar in concept to ILM’s SpecVar maps
• Can render only into lowest MIP level visible
336
Multiresolution Reflectance Maps
• EGSR 2005
• Uses Cook-Torrance BRDF
• Pre-filters so that the Gaussian element of
Cook-Torrance can be added by the linear
interpolators
• Heavyweight – 250 instructions, 4 textures
• But good results
While the instruction count is high, each additional light is circa 160 instructions for
Cook-Torrance (4 lobes at 40 cycles each). The main limitation, however, will
probably be the memory requirements, since this takes about 4x the memory of
Blinn-Phong. However, it is a one pass technique, unlike texture space lighting.
337
Multiresolution Reflectance Maps
IMAGES BY P. TAN, S. LIN, L. QUAN, B. GUO AND H-Y SHUM
The image on the left shows the result of the naive approach, while the center is a
reference version. The image on the far right is the multi-resolution reflectance
version. Notice the naive approach is much too shiny, just like the Stanford bunny
was.
338
Physically-Based Reflectance
for Games
11:15 - 12:00: Reflectance Rendering
with Environment Map Lighting
Jan Kautz
339
Reflectance Rendering with
Environment Map Lighting
• Environment Maps
• Filtered Environment Maps
– Diffuse Reflections
– Glossy Reflections
• Anti-Aliasing
• Precomputed Radiance Transfer
In this section, we shall discuss the practical considerations for rendering
reflectance models with more general lighting, such as environment maps. First we
will cover methods for using environment map lighting with the simplest types of
reflection: perfectly diffuse (Lambertian) and perfectly specular (mirror). Next, we
discuss methods for rendering more general types of reflections with environment
maps. Following that, we will discuss issues related to anti-aliasing when rendering
reflection models with environment maps. Finally we shall discuss a special class of
rendering algorithms for general lighting, precomputed radiance transfer, in the
context of various reflection types.
340
Environment Maps
[Figure: specular object reflecting the environment map]
• Definition:
  – 2D texture for directional information
  – Radiance arriving at a single point
• Assumptions:
  – Environment map at infinity ⇒ valid for whole object
Environment maps are essentially 2D textures that store directional information. The
directional information is the incident lighting arriving at a single point. This
information can be used to render reflections off mirroring objects, such as the torus
shown on the upper right corner. To this end, the reflected viewing direction at a
surface point is used to lookup into the environment map. This makes the implicit
assumption that the environment is infinitely far away.
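The lookup itself is a one-liner; a minimal sketch (our code, with envMap an assumed cube-map sampler):

float3 MirrorEnv(float3 V, float3 N)
{
    float3 R = reflect(-V, N);       // reflected view direction
    return texCUBE(envMap, R).rgb;
}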
341
Environment Maps:
Parameterizations
Sphere Map
Cube Map (most common)
Parabolic Map
There are different ways to store this spherical information.
-Cube Maps: a commonly used format nowadays, that is supported by GPUs
-Parabolic Maps: a parameterization that stores two hemispheres separately.
-Sphere Map: original format supported by GPUs (OpenGL). This parameterization
corresponds to the reflection off a metal sphere.
342
Filtered Environment Maps
• Theory
• Diffuse Reflections
– Spherical Harmonics
• Glossy Reflections
– Prefiltering
– On-the-fly Filtering
• Implementation and Production Issues
We will first discuss general reflections with environment maps. Next we will discuss
diffuse reflections and using spherical harmonics to represent lighting on diffuse
surfaces. Glossy reflections will be treated afterwards, and finally we will discuss
implementation and production issues related to environment maps.
343
Filtered Environment Maps
• Goal: different materials
⇒ diffuse and glossy reflections
⇒ Necessary: use of other BRDFs
The goal of filtered environment maps is to produce reflective objects that are not just mirrors. This can include purely diffuse objects reflecting an environment, or glossier objects, as shown here.
344
Filtered Environment Maps
Filtered environment maps:
– Filter equals BRDF
– Filter depends on view direction
[Figure: eye, object, and environment map, with the filter kernel centered on the reflected view direction]
A filtered environment map stores a filtered version of the original map. Glossy
surfaces reflect light from a larger cone of directions, which depends on the material
(BRDF). A filtered environment map applies the BRDF, i.e. it convolves the original
map to get a “blurry” version. In theory the filter (BRDF) depends on the direction of
the viewer.
345
Filtered Environment Maps
• Environment maps:
– Store incident light
• Filtered environment maps:
– Store reflected light for all possible surface
orientations and view directions (prefiltering)
– Index into environment map with e.g.
• Reflected view direction or surface normal direction
• Depends on chosen reflectance model
Filtered environment maps store reflected light instead of incident light.
346
Filtered Environment Maps
• General filtered environment maps:
Le(v, n, t) = ∫Ω Lenv(l) · fr(ω(v,n,t), ω(l,n,t)) · (n·l) dl
(the fr term is the BRDF)
– Depends on (global) view direction, tangent frame
• Output: 5D table ⇒ too expensive!
Reflected radiance is computed by integrating all incident light multiplied by the
BRDF and the cosine between the normal and the lighting. Here we see that the
BRDF depends on the coordinates of the view and lighting direction in the local
coordinate system at the current point (BRDF requires local directions, the function
w() converts from global to local coordinate system). The outgoing radiance
depends on the viewing direction, the surface normal and the tangent (in case of
anisotropic BRDFs). Tabulating this ends up being a 5D table!
347
Filtered Environment Maps
• Goal: avoid high memory consumption
• Solutions:
– E.g. Certain BRDFs ⇒ output only 2D
– E.g. Arbitrary BRDFs ⇒ frequency space filtering
Fortunately, it is possible to avoid the high memory requirements by choosing
appropriate BRDFs or approximating them the right way.
348
Filtered Environment Maps
• Theory
• Diffuse Reflections
– Spherical Harmonics
• Glossy Reflections
– Prefiltering
– On-the-fly Filtering
• Implementation and Production Issues
Now we will discuss using spherical harmonics to represent lighting on diffuse
surfaces (irradiance environment maps).
349
Diffuse Environment Maps
• Diffuse prefiltering [Greene ’86]
• Results in 2D map parameterized over the surface normal:
Le(n) = kd ∫Ω Lenv(l) · (n·l) dl
[Images: original environment map and diffuse-filtered result]
The first material we want to look at are diffuse materials. The BRDF of a diffuse
material is just a constant kd. The integral of the lighting and the cosine between the
normal and integration direction only depends on the surface normal. Hence the
filtered environment map also only depends on the normal, which makes it again a
2D environment map.
Notice how much smoother the diffusely prefiltered environment map looks.
350
Diffuse Environment Maps
• Brute-force filtering is very slow
– Can be tens of minutes for large environment
maps
• Observation:
– Diffuse environment maps are low-frequency
⇒ due to large filter kernel
– Hence, filtering in frequency-space is faster
Filtering can be done brute-force (by applying the cosθi filter at every texel of the
environment map). But that can be slow.
Due to the large filter kernel (the cosine kernel extends over a full hemisphere) and
the resulting low-frequency environment map, filtering is fast in frequency-space.
351
Diffuse Environment Maps
Using Spherical Harmonics
• Proposed by [Ramamoorthi01]
• Project lighting into spherical harmonics:
lenv,k = ∫ Lenv(l) · yk(l) dl
• Convolution with cosine kernel:
Ldiffuse(n) = ∫ Lenv(l) · (n·l) dl
becomes
Ldiffuse(n) = (kd/π) · Σk=0..8 Ak · lenv,k · yk(n)
First proposed by [Ramamoorthi01], frequency-space filtering is performed with
Spherical Harmonics (the Fourier equivalent over the sphere). If the lighting is
represented in SH, then the convolution becomes a simple scaled sum between the
coefficients of the lighting (projection to be explained in a bit) and the coefficients of the cosine (times some convolution constants).
The exact definitions of SH can be looked up on the web. If only the first few bands
are used, it is more efficient to use explicit formulas, which can be derived by using
Maple for example. Here are the exact definitions for the first 25 basis functions.
Input is a direction (in Cartesian coordinates). The evaluated basis function is
written to ylm_array[].
float x = dir[0];
float y = dir[1];
float z = dir[2];
float x2, y2, z2;
352
Diffuse Environment Maps
Using Spherical Harmonics
• Convolution:
Ldiffuse(n) = (kd/π) · Σk=0..8 Ak · lenv,k · yk(n)
• With convolution constants: Ak
• Simple sum instead of integral!
• yk(ω) are simple (quadratic) polynomials
• Only 15 instructions in shader
The constants are:
A_0 = pi;
A_1 = A_2 = A_3 = 2*pi/3;
A_4 = … = A_8 = pi/4;
See previous slides for y_i();
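A minimal sketch of the 9-term evaluation in the shader (our code; Lsh[k] is assumed to hold (kd/π)·Ak·lenv,k premultiplied offline, so only the quadratic basis polynomials are evaluated here):

float3 DiffuseSH(float3 n, float3 Lsh[9])
{
    float3 r = Lsh[0] * 0.282095;
    r += Lsh[1] * 0.488603 * n.y;
    r += Lsh[2] * 0.488603 * n.z;
    r += Lsh[3] * 0.488603 * n.x;
    r += Lsh[4] * 1.092548 * n.x * n.y;
    r += Lsh[5] * 1.092548 * n.y * n.z;
    r += Lsh[6] * 0.315392 * (3 * n.z * n.z - 1);
    r += Lsh[7] * 1.092548 * n.x * n.z;
    r += Lsh[8] * 0.546274 * (n.x * n.x - n.y * n.y);
    return r;
}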
353
Diffuse Environment Maps
Using Spherical Harmonics
• Projection into Spherical Harmonics
– Integrate basis functions against fixed HDR map:
lenv,k = ∫Ω Lenv(l) · yk(l) dl
• 1) Monte-Carlo integration
• 2) Precompute maps in same space (e.g. cube map) that contain the basis functions ⇒ big dot-product for each coeff.
This is a standard scenario for projecting an HDR environment into SH basis
functions.
If the environment map is given as a cube map, it is necessary to know the solid
angle of a texel in the cube map. It is: 4/( (X^2 + Y^2 + Z^2)^(3/2) ), where [X Y Z]
is the vector to the texel (not normalized).
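A minimal sketch of that weight (our code; v is the unnormalized vector to the texel center, and any constant texel-area factor is assumed to be folded into the overall normalization):

float TexelSolidAngleWeight(float3 v)
{
    float d2 = dot(v, v);            // X^2 + Y^2 + Z^2
    return 4.0 / (d2 * sqrt(d2));    // 4 / (X^2 + Y^2 + Z^2)^(3/2)
}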
354
Diffuse Environment Maps
[Images: input environment map, SH-filtered result, full-integral reference]
IMAGES BY R. RAMAMOORTHI
355
Filtered Environment Maps
• Theory
• Diffuse Reflections
– Spherical Harmonics
• Glossy Reflections
– Prefiltering
– On-the-fly Filtering
• Implementation and Production Issues
We will now consider various issues involved with the use of environment maps to
render glossy reflections.
356
Glossy Reflections
• Reminder: store reflected light
Le(v, n, t) = ∫Ω Lenv(l) · fr(ω(v,n,t), ω(l,n,t)) · (n·l) dl
• Table is
– 4D for general isotropic BRDF
– 5D for general anisotropic BRDF
Reminder: Reflected radiance is computed by integrating all incident light multiplied
by the BRDF and the cosine between the normal and the lighting. Here we see that
the BRDF depends on the coordinates of the view and lighting direction in the local
coordinate system at the current point (BRDF requires local directions). The
outgoing radiance depends on the viewing direction, the surface normal and the
tangent (in case of anisotropic BRDFs). Tabulating this ends up being a 5D table in the case of anisotropic BRDFs (2D for viewing direction, 3D for tangent frame), and 4D for isotropic BRDFs (2D+2D).
357
Glossy Reflections
• Intuition: lobe size/shape changes with view (arbitrary BRDF)
• Solution: fixed lobe shape?
– In global coordinate system of the environment map, the shape still changes.
[Figure: BRDF lobes around the reflected view direction rv for several view directions v]
The reason for this is that the lobe size/shape of the BRDF changes with the
viewing direction. For example reflections become sharper at grazing angles.
Even a fixed lobe shape is not sufficient, as the change from the local BRDF
coordinate system to global environment map coordinate system still results in
different shapes (i.e. dimensionality is still high).
358
Glossy Reflections
• Nonetheless, certain BRDFs (or
approximations) reduce dimensionality
• Best example: Phong model
• Reason: its lobe shape is
– fixed
– rotationally symmetric
The solution is to find a BRDF that has a fixed lobe shape, and whose lobe shape is rotationally symmetric (so the change from the local to the global coordinate system doesn't matter).
359
Glossy Reflections: Phong
• Phong prefiltering [Miller84, Heidrich99]
– Outgoing radiance of a Phong material at one point is a 2D function of the reflected viewing direction
– Phong BRDF (global coords):
fr(v, l) = ks · (rv·l)^N / (n·l)
– “Blurred” environment map resulting from shift-invariant Phong filter:
Le(rv) = ks ∫Ω (rv·l)^N · Lenv(l) dl
Ω
The best example is the Phong BRDF, as defined here (original definition using
global coordinate system). Note that the Phong material is physically not plausible,
but results in 2D environment maps.
360
Glossy Reflections: Phong
N=10
N=100
N=1000
perfect
Here we have an example of Phong environment maps, for different
exponents N=10,100,1000.
361
Glossy Reflections: Phong + Diffuse
(Combined with Fresnel)
IMAGES BY W. HEIDRICH
Here we combined Phong filtered environment maps with diffuse environment
maps. The combination is governed by the Fresnel equations. This results in very
realistic looking materials.
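A minimal runtime sketch of this combination (our code; phongEnv, diffuseEnv, Rf0 and Kd are assumed inputs, and Schlick's approximation stands in for the full Fresnel equations):

float3 EnvShade(float3 V, float3 N)
{
    float3 R = reflect(-V, N);
    float3 spec = texCUBE(phongEnv, R).rgb;     // Phong-prefiltered radiance
    float3 diff = texCUBE(diffuseEnv, N).rgb;   // diffuse-prefiltered radiance
    float f = Rf0 + (1 - Rf0) * pow(1 - saturate(dot(N, V)), 5);
    return lerp(Kd * diff, spec, f);
}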
362
Glossy Reflections: Phong Problem
• Phong model is not realistic
• Real materials are sharper at grazing angles:
IMAGES BY E. LAFORTUNE
The Phong model is not realistic, as its lobe shape remains constant for all viewing
directions! Real reflections become sharper at grazing angles; e.g., a sheet of paper viewed at a very grazing angle shows a visible glossy reflection.
363
Glossy Reflections: Real Materials
• Lobe becomes narrower and longer at
grazing angles:
Real BRDF
• For increased realism, want to model that
– Need to use other BRDFs
– Without increasing dimensionality of filtered
environment map
I.e., the lobe shape changes. For more realism, we want to include this effect in our
filtered environment maps.
364
Glossy Reflections: Lafortune Model
• Lafortune model
– Derivative of Phong model
– Keeps lobe shape as in Phong, but modifies
viewing direction to achieve off-specular lobes
– Allows for anisotropic reflections
The first example that allows this (to some limited extent) is the Lafortune model. It
is basically a Phong model, but slightly more realistic, allowing a more realistic class
of materials to be represented.
365
Glossy Reflections: Lafortune Model
• [McAllister02] proposed to use it for environment maps:
Llaf(rw) = (n·rv) · ks ∫Ω (rw·l)^N · Lenv(l) dl
with rw = (Cx·vx, Cy·vy, Cz·vz)
• Note that (n·l) is moved outside the integral as (n·rv), which is an approximation.
– OK for sharp lobes, otherwise inaccurate
Plugging the model into the reflectance equation, we arrive at the above equation.
Note how similar the integral is to the original Phong environment maps.
The main difference is the direction r_w, which isn’t necessarily just the reflected
view direction, it can be any scaled version of the view direction, which allows for
off-specular reflections.
Note further how the term (n*l), which should be inside the integral is simply moved
outside by approximating it with (n*r_v). This is incorrect of course, but for high
exponents N, the influence of (n*l) is rather small anyway (it doesn’t vary much
where (r_w*l)^N != 0), so moving it outside the integral is an ok approximation. For
small N, the approximation is very crude.
366
Glossy Reflection:
Lafortune Example
Additionally:
– Stored several 2D
filtered environment
maps as 3D stack
(varying exponent)
– Per-pixel lookups to
achieve spatial
variation
Courtesy of D. McAllister
Example of rendering with the Lafortune model.
Here the authors have prefiltered an environment map with different exponents and
stored it in a mip-mapped cube map. At each texel, they have a roughness map,
which governs in which level of the mip-map to index. You can also note that the
Lafortune model allows for anisotropies.
367
Glossy Reflections:
BRDF Approximations [Kautz00]
• Approximate BRDF [Kautz00] with
– rotationally-symmetric &
– constant lobe
• E.g:
[Kautz00] proposed to approximate BRDFs with a rotationally-symmetric and fixed
lobe such that filtered environment maps would remain 2D (like the Phong lobe).
368
Glossy Reflections:
BRDF Approximations [Kautz00]
• Given: arbitrary BRDF + environment map
• Fit rotationally symmetric lobes
• Filter environment map with lobes
Given some arbitrarily shaped lobe, you can see here the fitted rotationally-symmetric lobes and the environment maps filtered with them.
You can still see how the width/length of the lobe changes with viewing
direction.
369
Glossy Reflections:
BRDF Approximations [Kautz00]
• Any BRDF can then be used for filtering:
– for good quality: need separate lobe for each view
• results in 3D environment map (stack of 2D maps)
– otherwise: basically like Phong or Lafortune
• Similar approximation as Lafortune needed
– move (n·l) outside integral, or otherwise the lobes
are not rotationally symmetric
For realistic results, it is necessary to use these separate lobes and filter the environment map with them. This results in a stack of 2D environment maps, which are indexed with the elevation angle of the viewing direction (anisotropic BRDFs are not considered).
The authors also propose to approximate the BRDF with one (scaled) fixed lobe,
which then is very similar to the Phong/Lafortune BRDFs (but based on measured
BRDFs, which they approximate).
Again, (n·l) is moved outside the integral.
370
Glossy Reflections:
How to filter a cube map?
[figure: a filter kernel defined over the sphere is mapped into the texture domain and applied to the source environment map to produce the target]
Now we have some idea of how prefiltered environment maps work, but we don't yet know exactly how to filter one.
It is quite simple: for each texel in the target (filtered) environment map, we apply a
convolution filter. This filter needs to be mapped from the sphere (the domain over
which it is defined) into the texture domain of the envmap.
371
Glossy Reflections:
How to filter a cube map?
[figure: the filter kernel's shape varies across the source environment map]
This usually means that the filter is spatially varying in the environment-map domain, as there is no mapping from the sphere to the plane that doesn't introduce distortions.
372
Glossy Reflections:
How to filter a cube map?
• Easy implementation: brute force
– For each target pixel, go over the full input map and compute a weighted sum (taking the solid angle of each pixel into account)
A simple implementation just does brute-force filtering.
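To make this concrete, here is a hedged C++ sketch of brute-force filtering a single target texel with a Phong kernel. The cube-face axis convention and the solid-angle approximation are assumptions of this sketch, and the source-map accessor is left to the caller:

#include <cmath>
#include <algorithm>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  normalize(const Vec3& a) {
    float l = std::sqrt(dot(a, a));
    return { a.x / l, a.y / l, a.z / l };
}

// Direction through the center of texel (s, t) on a cube face; the face
// axis convention below is an assumption for this sketch.
static Vec3 texelToDirection(int face, int s, int t, int size) {
    float u = 2.0f * (s + 0.5f) / size - 1.0f;
    float v = 2.0f * (t + 0.5f) / size - 1.0f;
    Vec3 d = {};
    switch (face) {
        case 0: d = { +1, -v, -u }; break; // +X
        case 1: d = { -1, -v, +u }; break; // -X
        case 2: d = { +u, +1, +v }; break; // +Y
        case 3: d = { +u, -1, -v }; break; // -Y
        case 4: d = { +u, -v, +1 }; break; // +Z
        case 5: d = { -u, -v, -1 }; break; // -Z
    }
    return normalize(d);
}

// Solid angle subtended by texel (s, t): approximately (2/size)^2 / len^3.
static float texelSolidAngle(int s, int t, int size) {
    float u  = 2.0f * (s + 0.5f) / size - 1.0f;
    float v  = 2.0f * (t + 0.5f) / size - 1.0f;
    float l2 = u * u + v * v + 1.0f;
    return 4.0f / (float(size) * float(size) * l2 * std::sqrt(l2));
}

// Filter one target texel (lobe direction r) with a Phong kernel (r . l)^N.
// 'lookup' is caller-provided access to the source map: Color(face, s, t).
template <typename EnvLookup>
Color filterTexel(const Vec3& r, int size, float N, EnvLookup lookup) {
    Color sum = { 0, 0, 0 };
    float wsum = 0.0f;
    for (int face = 0; face < 6; ++face)
        for (int t = 0; t < size; ++t)
            for (int s = 0; s < size; ++s) {
                Vec3  l = texelToDirection(face, s, t, size);
                float w = std::pow(std::max(dot(r, l), 0.0f), N)
                        * texelSolidAngle(s, t, size);
                Color c = lookup(face, s, t);
                sum.r += w * c.r; sum.g += w * c.g; sum.b += w * c.b;
                wsum  += w;
            }
    sum.r /= wsum; sum.g /= wsum; sum.b /= wsum;  // keep constant maps constant
    return sum;
}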
373
Glossy Reflections:
How to filter a cube map
• Faster and less accurate:
– Hierarchically [Kautz00]
– Angular filtering with cut-off [CubeMapGen-ATI05]
– Do each face (cube map) separately, but put
other faces around cube face [Ashikhmin03]
– Do it in frequency domain [code: S2kit]
• Either way:
– Filtering is slow (seconds)
Faster and less accurate filtering can be done hierarchically [Kautz00].
Angular-extent filtering: ATI's CubeMapGen tool does correct filtering, but clamps the filter at a certain threshold and only filters within the bounds of the kernel. This is much faster than brute force.
In the case of cube maps, [Ashikhmin03] filters each face individually, but places the neighboring faces around it to make sure borders are (somewhat) handled. As can be seen in the pictures, there is no way to fill in the corners around the center face.
374
Filtered Environment Maps
• Theory
• Diffuse Reflections
– Spherical Harmonics
• Glossy Reflections
– Prefiltering
– On-the-fly Filtering
• Implementation and Production Issues
We will first consider various issues involved with the use of environment maps to
render mirror and diffuse reflections. Next we will discuss using spherical harmonics
to represent lighting on diffuse surfaces (irradiance environment maps). Finally we
will discuss implementation and production issues related to diffuse and mirror
reflections of environment maps.
375
Filtered Environment Maps:
Filtering Process
Environment map filtering:
⇒ kernel is shift-variant in texture space (for all: cube map, sphere map, etc.) [figure]
⇒ convolution is expensive
As said before, filtering can be expensive, which is mainly due to the shift-variant
filter in texture space.
376
Filtered Environment Maps:
Dynamic Glossy Reflections
• If environment changes, want to create and
filter environment map dynamically.
[Kautz04][Ashikhmin03]
– Render environment into cube-map
– Filter environment using graphics hardware
But if a scene is changing dynamically, we also want dynamic reflections.
Easy solution: render scene into a cube map, and filter it using hardware.
377
Dynamic Glossy Environment Maps
• Hard to tell what kernel is used
– Use a simpler kernel instead: Box-Filtering
[figure: original environment map and box-filtered versions at ×2, ×4, ×8]
A viewer cannot easily tell which filter was actually used on an environment map, so why not just use an inexpensive box filter and store a mip-mapped cube map.
378
Dynamic Glossy Environment Maps
– Box filtering is directly supported by graphics hardware: GL_SGIS_generate_mipmap
– Select glossiness: GL_EXT_lod_bias (or per pixel)
bias: 0.05
0.1
0.15
0.2
The filtering can be done automatically using the auto_mipmap feature of GPUs.
Glossiness can then be selected on a per-object level (EXT_lod_bias) or by
changing the LOD level per-pixel.
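For illustration, the GL-side setup might look like the following sketch (assuming the SGIS_generate_mipmap and EXT_texture_lod_bias extensions are exposed by the driver; envMapTex is a placeholder texture handle):

#include <GL/gl.h>
#include <GL/glext.h>  // for the SGIS/EXT tokens used below

void setupGlossyEnvMap(GLuint envMapTex, float lodBias)
{
    glBindTexture(GL_TEXTURE_CUBE_MAP, envMapTex);
    // Re-generate the mip chain automatically whenever the base level changes:
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
    // Bias mip-level selection towards blurrier levels for a glossier look,
    // e.g. lodBias = 2.25 as in the Phong comparison on the next slide:
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, lodBias);
}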
379
Dynamic Glossy Environment Maps
Phong: N = 225
Box: bias = 2.25
380
Dynamic Glossy Environment Maps:
Naïve implementation
LOD bias = 0.0
LOD bias = 9.4
Problem: cube faces are filtered independently
Solution: filtering with border! → slower again
Unfortunately, there is one big drawback: the filtering is not done with borders taken into account, which means that the face boundaries will show up at blurrier levels of the mip-map!
There is no easy way around this, other than slower filtering with borders again! → Change to a different parameterization.
381
Dynamic Glossy Environment Maps:
Parabolic Maps
• Parabolic maps can include borders
→ GPU filtering is easier
– Here the border is shown, and one
can see that a filter kernel can extend
outside the area of the hemisphere.
– Disadvantages:
• need to write own lookup into parabolic maps
• filter kernel varies considerably across a
parabolic map (fix [Kautz00])
Parabolic maps can be used instead, since they can include a border.
One problem is that one needs to write one's own lookup into the parabolic map, which isn't that complicated, but it does cost a few shader instructions.
The other problem is that when a constant filter kernel is mapped to the center of a parabolic map, it is much smaller than at the boundaries. This can be fixed as in [Kautz00].
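As an illustration of that custom lookup, here is a minimal C++ sketch of the front-hemisphere parabolic parameterization, assuming the Heidrich-Seidel dual-paraboloid convention and a normalized direction d; the Vec types are local to this example:

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Map a direction on the front hemisphere (d.z >= 0) to parabolic-map
// texture coordinates; the back hemisphere uses (1 - d.z) instead.
Vec2 parabolicUV(const Vec3& d)
{
    float u = d.x / (1.0f + d.z);  // projection onto the paraboloid,
    float v = d.y / (1.0f + d.z);  //   lands in [-1, 1]^2
    return { 0.5f * u + 0.5f, 0.5f * v + 0.5f };  // remap to [0, 1]^2
}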
382
Filtered Environment Maps
• Other techniques
– Unified Approach To Prefiltered Environment Maps
(using parabolic maps) [Kautz00]
– Frequency-Space Environment Map Rendering
[Ramamoorthi02]
– Fast, Arbitrary BRDF Shading [Kautz02]
383
Filtered Environment Maps –
Production Issues
• Distant Lighting Assumption
– All techniques assume a distant environment
– Looks odd, when object moves around and
reflections don’t change!
• Solutions
– Static scene:
• Sample incident light on grid (diffuse → SH) and interpolate
• Half-Life 2 uses an artist-defined sampling of incident light
The assumption of distant lighting can be disturbing when an object moves around within an environment.
For static scenes, the incident lighting can be precomputed (e.g. on a grid, e.g. in
SH for diffuse reflections) and interpolated.
Half-Life 2 places samples of environment maps throughout the scene; the locations
are controlled by the artist.
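As a sketch of the grid idea, SH lighting samples can be interpolated trilinearly, since the SH projection is linear in the lighting; the grid layout, coefficient count, and names below are illustrative assumptions, not taken from Half-Life 2 or any specific engine:

#include <algorithm>
#include <vector>

const int NUM_SH = 9; // e.g. 3 SH bands for diffuse lighting

struct SHSample { float coeff[NUM_SH]; };

// Trilinearly interpolate SH lighting samples stored on a regular grid;
// (fx, fy, fz) is the query position in cell units. SH coefficient vectors
// can be blended component-wise because the SH projection is linear.
SHSample interpolateSH(const std::vector<SHSample>& grid,
                       int dimX, int dimY, int dimZ,
                       float fx, float fy, float fz)
{
    int x0 = (int)fx, y0 = (int)fy, z0 = (int)fz;
    int x1 = std::min(x0 + 1, dimX - 1);
    int y1 = std::min(y0 + 1, dimY - 1);
    int z1 = std::min(z0 + 1, dimZ - 1);
    float tx = fx - x0, ty = fy - y0, tz = fz - z0;
    auto at = [&](int x, int y, int z) -> const SHSample& {
        return grid[x + dimX * (y + dimY * z)];
    };
    SHSample out = {};
    for (int k = 0; k < NUM_SH; ++k) {
        // Blend the 8 surrounding samples, one SH coefficient at a time.
        float c00 = at(x0,y0,z0).coeff[k] * (1 - tx) + at(x1,y0,z0).coeff[k] * tx;
        float c10 = at(x0,y1,z0).coeff[k] * (1 - tx) + at(x1,y1,z0).coeff[k] * tx;
        float c01 = at(x0,y0,z1).coeff[k] * (1 - tx) + at(x1,y0,z1).coeff[k] * tx;
        float c11 = at(x0,y1,z1).coeff[k] * (1 - tx) + at(x1,y1,z1).coeff[k] * tx;
        out.coeff[k] = (c00 * (1 - ty) + c10 * ty) * (1 - tz)
                     + (c01 * (1 - ty) + c11 * ty) * tz;
    }
    return out;
}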
384
Filtered Environment Maps –
Production Issues
• Dynamic scenes
– Distant lighting not main issue
• Need to re-generate environment map anyway!
– Sample incident light on-the-fly
• Render scene into cube map
– On older hardware: 6 rendering passes
– With DirectX 10 can do in one pass (geometry shaders)
• How to do filtering, see before…
In dynamic scenes, one would like to regenerate the environment map every frame. In the case of cube maps that requires 6 passes, one for each face! This is very costly, and therefore rarely done on current hardware.
Next-generation hardware will allow rendering this in one pass by using geometry shaders!
385
Filtered Environment Maps –
Production Issues
• Hardware does not filter across faces!
– I.e. a bilinear lookup will not take samples from
adjacent faces
– Idea: use texture borders
• Problem: often not supported by hardware
– Fix: make sure borders of faces are similar…
• ATI’s CubeMapGen does this
Another problem is that current GPUs do not filter across cube map faces. That is, a bilinear lookup at the border of a face will not sample the adjacent face!
Texture borders (GL) are usually not supported by GPUs. ATI's CubeMapGen works around this by making the borders of adjacent faces more similar.
386
Filtered Environment Maps –
Production Issues
• HDR compression
– For realistic effects, environment maps need to be
high-dynamic range
– So far no good compression for float textures
– Solution: see this year’s papers program!
Environment maps should be high-dynamic range to achieve realistic effects, but
there are no compression techniques yet.
See this year’s technical program for a solution.
387
Filtered Environment Maps –
Production Issues
• So, what algorithm should I use then?
→ it depends
388
Filtered Environment Maps –
Production Issues
• On-the-fly vs. Prefiltering
– Prefiltering can achieve higher quality reflections
• But: not quite clear how much better Cosine-lobe filtering
vs. box-filtering is
• Prefiltering is slower
– On-the-fly
• Problem: filtering individual faces when using cube maps
– wrong results
• Parabolic maps: need some more shader instructions
389
Filtered Environment Maps –
Production Issues
• Varying BRDFs?
– Store pre-filtered environment map as mip-mapped
cube map
– Per-pixel roughness → select level l of mip-map
• Want to make sure not to exceed maximum LOD lmax
(determined by hardware to avoid aliasing)
• Clamp mip-level “by hand” and then do a texCUBElod
instruction (pre-PS3.0: use texCUBEbias)
390
Reflectance Rendering with
Environment Map Lighting
• Environment Maps
• Filtered Environment Maps
– Diffuse Reflections
– Glossy Reflections
• Anti-Aliasing
• Precomputed Radiance Transfer
Now we will discuss issues related to anti-aliasing when rendering reflection models
with environment maps.
391
Environment Map Anti-Aliasing
• Aliasing due to curved surfaces
– Envmap may be minified
– Need to anti-alias it
• Done automatically by the GPU!
[figure: a pixel's view-ray spread $d\vec{v}$ reflects to a spread $d\vec{r}_v$ in the environment map]
Minification as well as magnification may occur when rendering with environment
maps. For example, a convex, curved surface can reflect light from a larger area of
the environment map towards a single pixel on the screen. To avoid aliasing
artifacts, the environment map needs to be filtered, just like any other texture needs
to be filtered in this case. GPUs perform this filtering automatically, and the
programmer does not need to deal with it explicitly.
392
Environment Map Anti-Aliasing
• Aliasing due to bump maps
– Multiple bumps within one pixel
– Accurate filtered reflection is nearly impossible (almost random reflected views)
[figure: many bumps under one screen pixel reflect into scattered directions of the environment map]
The situation is more difficult, when rendering environment maps together with
bump maps. Underneath a single pixel, there might be several bumps that are all
reflecting in different directions. Accurate filtering is almost impossible in this
situation (unless very heavy super-sampling is used, which is not an option with
current GPUs).
393
Environment Map Anti-Aliasing
• [Schilling97]
– Use covariance matrix of distribution of normals (for mip-maps of bump map)
– Change derivatives in cube map lookup based on covariance value
– An implementation using TEXDD (supply derivatives by hand) is interesting, but hasn't been tried…
[figure: pixel footprint on a bump-mapped surface and the resulting anisotropic environment-map lookup]
A. Schilling proposed to approximate the variation of normals underneath a screen
pixel and then to use that knowledge when looking up the environment map. He
proposes to store a covariance matrix of the normals underneath each texel of
the bump map in a MIP-mapped fashion (i.e., at coarser resolution, the area
taken into account becomes bigger). The covariance matrix is then used to
perform an anisotropic texture lookup into the environment map. Please see the
paper for details.
It might be feasible now to perform all the necessary calculations in a pixel shader.
The anisotropic lookup can be performed with the TEXDD instruction (explicit
derivatives needed). Nobody has tried this yet, but it does seem interesting.
394
Reflectance Rendering with
Environment Map Lighting
• Environment Maps
• Filtered Environment Maps
– Diffuse Reflections
– Glossy Reflections
• Anti-Aliasing
• Precomputed Radiance Transfer
Finally, we discuss a special class of rendering algorithms for general lighting,
precomputed radiance transfer, in the context of various reflection types.
395
Precomputed Radiance Transfer
• So far: no self-shadowing taken into account
• But it is much more realistic for changing illumination
• Want to do it in real-time
[figure: comparison renderings without and with shadows]
396
Precomputed Radiance Transfer
• Diffuse Reflection [Sloan02]
• Other BRDFs
• Implementation and Production Issues
Another method for rendering environment map lighting, which also takes other
effects such as self-shadowing, interreflections and subsurface scattering into
account, is precomputed radiance transfer (PRT). We will first discuss using PRT on
Lambertian surfaces, followed by more general reflection models, and finally
implementation and production issues.
397
Global Illumination
(Reflectance Equation)
• Integrate incident light * V() * diffuse BRDF
Emitter 1
Emitter 2
Object
To compute exit radiance from a point p, we need to integrate all incident lighting
against the visibility function and the diffuse BRDF (dot-product between the normal
and the light direction).
398
Precomputed Radiance Transfer
Reflectance Equation
• Math:
$L_{e,p}(\vec{v}) = \int_\Omega L_{env}(\vec{l})\, V_p(\vec{l})\, \max(\vec{n}_p \cdot \vec{l},\, 0)\, d\vec{l}$
(reflected light = ∫ incident light × visibility × cosine)
Computation written down more accurately.
399
Reflectance Equation: Visually
[figure: incident light × visibility × cosine = integrand]
Visually, we integrate the product of three functions (light, visibility, and cosine).
400
PRT: Visually
[figure: visibility × cosine are precomputed into one function, which is then integrated against the incident light]
The main trick we are going to use for precomputed radiance transfer (PRT) is to
combine the visibility and the cosine into one function (cosine-weighted visibility or
transfer function), which we integrate against the lighting.
401
PRT
• Questions remain:
– How to encode the spherical functions?
– How to quickly integrate over the sphere?
This is not useful per se. We still need to encode the two spherical functions
(lighting, cosine-weighted visibility/transfer function). Furthermore, we need to
perform the integration of the product of the two functions quickly.
402
PRT
Reflectance Equation: Rewrite
• Math:
$L_{e,p}(\vec{v}) = \int_\Omega L_{env}(\vec{l})\, V_p(\vec{l})\, \max(\vec{l} \cdot \vec{n}_p,\, 0)\, d\vec{l}$
• Rewrite with $T_p(\vec{l}) = V_p(\vec{l}) \max(\vec{l} \cdot \vec{n}_p,\, 0)$
• This is the transfer function
– Encodes:
• Visibility
• Shading
Using some more math again, we get the transfer function $T_p(\vec{l})$.
Note that this function is defined over the full sphere. It also implicitly encodes the normal at the point p! So, for rendering, no explicit normal will be needed.
403
PRT
Reflectance Equation: Rewrite
• Math:
$L_{e,p}(\vec{v}) = \int_\Omega L_{env}(\vec{l})\, V_p(\vec{l})\, \max(\vec{l} \cdot \vec{n}_p,\, 0)\, d\vec{l}$
• Plug the new $T_p(\vec{l})$ into the equation:
$L_{e,p}(\vec{v}) = \int_\Omega L_{env}(\vec{l})\, T_p(\vec{l})\, d\vec{l}$
⇒ project the light function $L_{env}$ and the transfer $T_p$ into SH
Now, when we plug the new $T_p(\vec{l})$ into the rendering equation, we see that we have an integral of a product of two functions. We remember that this special case boils down to a dot-product of coefficient vectors when the two functions are represented in SH.
This is exactly what we will do: we project the incident lighting and the transfer function into SH.
404
Evaluating the Integral
• The integral
$L_{e,p}(\vec{v}) = \int_\Omega L_{env}(\vec{l})\, T_p(\vec{l})\, d\vec{l}$
becomes
$L_{e,p}(\vec{v}) = \sum_k^n l_{env,k}\, t_{p,k}$
("light vector" dotted with "transfer vector")
A simple dot-product!
(All examples use n = 25 coefficients)
Then the expensive integral becomes a simple product between two coefficient
vectors.
405
Precomputed Radiance Transfer
Per object: project lighting, rotate light
Per pixel/vertex: look up $T_p$, compute the integral as a dot-product
$L_{e,p}(\vec{v}) = \int_\Omega L_{env}(\vec{l})\, V_p(\vec{l})\, \max(\vec{n}_p \cdot \vec{l},\, 0)\, d\vec{l}$
This shows the rendering process.
We project the lighting into SH (integral against basis functions). If the object is rotated with respect to the lighting, we need to apply the inverse rotation to the lighting vector (using the SH rotation matrix).
At run-time, we need to look up the transfer vector at every pixel (or vertex, depending on implementation). A vertex/pixel shader then computes the dot-product between the coefficient vectors. The result of this computation is the exitant radiance at that point.
406
Rendering
• Reminder:
$L_{e,p} = \sum_k^n l_{env,k}\, t_{p,k}$
• Need the lighting coefficient vector:
$l_{env,k} = \int L_{env}(\vec{l})\, y_k(\vec{l})\, d\vec{l}$
• Compute every frame (if lighting changes)
• Projection can e.g. be done using Monte-Carlo integration, or on the GPU
Rendering is just the dot-product between the coefficient vectors of the light and the
transfer.
The lighting coefficient vector is computed as the integral of the lighting against the
basis functions (see slides about transfer coefficient computation).
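A minimal CPU-side sketch of this projection via Monte-Carlo integration, written in the style of the precomputation pseudo-code later in these notes; it reuses the uniformly distributed sample[] directions with precomputed per-basis SH values, and lookupEnv() is a hypothetical helper returning the environment radiance for a direction:

// lighting[] has numberCoeff entries and accumulates l_env,i
for(int i = 0; i < numberCoeff; ++i)
    lighting[i] = 0.0;
for(int j = 0; j < numberSamples; ++j) {
    double L = lookupEnv(sample[j].dir);          // L_env(l_j), assumed helper
    for(int i = 0; i < numberCoeff; ++i)
        lighting[i] += L * sample[j].SHcoeff[i];  // L_env(l_j) * y_i(l_j)
}
const double factor = 4.0 * PI / numberSamples;   // dw for uniform directions
for(int i = 0; i < numberCoeff; ++i)
    lighting[i] *= factor;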
407
PRT Rendering –
Dynamic Lighting
• Sample dynamic lighting:
– Render scene from center of object p
– 6 times: for each cube face
– Compute lighting coefficients (SH/Haar):
$l_{env,k} = \int L_{env}(\vec{l})\, y_k(\vec{l})\, d\vec{l}$
– No need to rotate lighting then
408
Rendering
• Work that has to be done per-vertex is easy:
// No color bleeding, i.e. transfer vector is valid for all 3 channels
for(j=0; j<numberVertices; ++j) {      // for each vertex
    for(i=0; i<numberCoeff; ++i) {     // multiply transfer coefficients
        vertex[j].red   += Tcoeff[i] * lightingR[i];  //   with lighting
        vertex[j].green += Tcoeff[i] * lightingG[i];  //   coefficients
        vertex[j].blue  += Tcoeff[i] * lightingB[i];
    }
}
• Only shadows: independent of color
channels ⇒ single transfer vector
• Interreflections: color bleeding ⇒ 3 vectors
So far, the transfer coefficients could be single-channel only (given that the 3-channel albedo is multiplied onto the result later on). If there are interreflections, color bleeding will happen and the albedo cannot be factored out of the precomputation. This makes 3-channel transfer vectors necessary, see next slide.
409
Rendering
• In case of interreflections (and color
bleeding):
// Color bleeding, need 3 transfer vectors
for(j=0; j<numberVertices; ++j) {      // for each vertex
    for(i=0; i<numberCoeff; ++i) {     // multiply transfer coefficients
        vertex[j].red   += TcoeffR[i] * lightingR[i]; //   with lighting
        vertex[j].green += TcoeffG[i] * lightingG[i]; //   coefficients
        vertex[j].blue  += TcoeffB[i] * lightingB[i];
    }
}
410
PRT Results
Unshadowed
Shadowed
411
PRT Results
Unshadowed
Shadowed
412
PRT Results
Unshadowed
Shadowed
413
PRT Results
414
What does this mean?
• Positive:
– Shadow computation is independent of number or
size of light sources
– Soft shadows are cheaper than hard shadows
– Transfer vectors need to be computed (can be
done offline)
– Lighting coefficients computed at run-time (3ms)
This has a number of implications:
Shadow computation/shading is independent of the number or the size of the light
sources! All the lighting is encoded in the lighting vector, which is independent of
that.
Rendering this kind of shadows is extremely cheap. It is in fact cheaper than
rendering hard shadows!
The transfer vectors can be computed off-line, thus incurring no performance
penalty at run-time.
The lighting vector for the incident light can be computed at run-time (fast enough,
takes a few milliseconds).
415
What does this mean?
• Negative:
– Models are assumed to be static
The precomputation of transfer coefficients means that the models have to be static!
Also, there is an implicit assumption that all points on the surface receive the same incident illumination (environment-map assumption). This implies that no half-shadow can be cast over the object (unless it is part of the object in the preprocess).
416
Precomputation
• Integral
$t_{p,k} = \frac{\rho_0}{\pi} \int V(p \to \vec{l})\, \max(\vec{n}_p \cdot \vec{l},\, 0)\, y_k(\vec{l})\, d\vec{l}$
evaluated numerically with e.g. ray-tracing:
$t_{p,k} = \frac{4\pi}{N} \frac{\rho_0}{\pi} \sum_{j=0}^{N-1} V(p \to \vec{l}_j)\, \max(\vec{n}_p \cdot \vec{l}_j,\, 0)\, y_k(\vec{l}_j)$
• Directions $\vec{l}_j$ need to be uniformly distributed (e.g. random)
• Visibility V is determined with ray-tracing
The main question is how to evaluate the integral. We will evaluate it numerically using Monte-Carlo integration. This basically means that we generate a random (and uniform) set of directions $\vec{l}_j$, which we use to sample the integrand. All the contributions are then summed up and weighted by $4\pi$/(#samples).
The visibility $V(p \to \vec{l})$ needs to be computed at every point. The easiest way to do this is to use ray-tracing.
-------------------------------------------------
Aside: uniform random directions can be generated the following way.
1) Generate random points (x, y) in the 2D unit square
2) Map these onto the sphere with:
$\theta = 2 \arccos(\sqrt{1-x})$
$\phi = 2\pi y$
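A small C++ sketch of this mapping (the Vec3 type is local to the example, and θ is measured from the +z axis):

#include <cmath>

const float PI = 3.14159265f;

struct Vec3 { float x, y, z; };

// x and y are independent uniform random numbers in [0, 1).
Vec3 uniformSphereDirection(float x, float y)
{
    float theta = 2.0f * std::acos(std::sqrt(1.0f - x)); // so cos(theta) = 1 - 2x
    float phi   = 2.0f * PI * y;
    return { std::sin(theta) * std::cos(phi),
             std::sin(theta) * std::sin(phi),
             std::cos(theta) };
}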
417
Precomputation – Visually
[figure: SH basis functions (Basis 16, 17, 18, …) on the left; illuminating the model by each basis yields one transfer-coefficient image on the right]
Visual explanation:
This slide illustrates the precomputation for direct lighting. Each image on the right is generated by placing the head model into a lighting environment that simply consists of the corresponding basis function (the SH basis, in this case, illustrated on the left). This just requires rendering software that can deal with negative lights.
The result is a spatially varying set of transfer coefficients, shown on the right.
To reconstruct reflected radiance, just compute a linear combination of the transfer-coefficient images scaled by the corresponding coefficients for the lighting environment.
418
Precomputation – Code
// p: current vertex/pixel position
// normal: normal at current position
// sample[j]: sample direction #j (uniformly distributed)
// sample[j].dir: direction
// sample[j].SHcoeff[i]: SH coefficient for basis #i and dir #j
for(j=0; j<numberSamples; ++j) {
    double csn = dotProduct(sample[j].dir, normal);
    if(csn > 0.0f) {
        if(!selfShadow(p, sample[j].dir)) {         // are we self-shadowing?
            for(i=0; i<numberCoeff; ++i) {
                value = csn * sample[j].SHcoeff[i]; // multiply with SH coeff.
                result[i] += albedo * value;        //   and albedo
            }
        }
    }
}
const double factor = 4.0*PI / numberSamples;       // dw (for uniform dirs)
for(i=0; i<numberCoeff; ++i)
    Tcoeff[i] = result[i] * factor;                 // resulting transfer vec.
Pseudo-code for the precomputation.
The function selfShadow(p, sample[j].dir) traces a ray from position p in direction sample[j].dir. It returns true if the ray hits the object, and false otherwise.
419
PRT – Basis Functions
• Original technique uses Spherical Harmonics
• Wavelets are a good alternative [Ng03]
– Better quality for a given number of coefficients
– [Ng03] cannot be directly done on GPU
– Using compression [Sloan03], most of the
computation can be done on the GPU
420
Precomputed Radiance Transfer
• Diffuse Reflection
• Other BRDFs
• Implementation and Production Issues
We will now discuss methods for using PRT with more general reflection models.
421
Precomputed Radiance Transfer
• General BRDFs
– Ongoing subject of research, see this year’s
papers.
– Difficult due to the additional BRDF term:
$L_{e,p}(\vec{v}) = \int_\Omega L_{env}(\vec{l})\, V_p(\vec{l})\, f(\omega(\vec{l}), \omega(\vec{v}))\, \max(\vec{l} \cdot \vec{n}_p,\, 0)\, d\vec{l}$
– Could say: like envmap rendering, but with visibility
– Won’t go into details in this course
General BRDFs are ongoing research. The difficulty arises from the use of an
arbitrary BRDF f(). As before, the BRDF requires local coordinates, and the function
w() converts from global to local coordinates. The treatment of arbitrary BRDFs is
outside the scope of this course. The audience is referred to last year’s course on
PRT.
422
Precomputed Radiance Transfer
• General BRDFs
– Commonalities of various techniques:
• Need to store more data
• Use compression
• Slower run-time than pure diffuse BRDF
– Problems with these techniques:
• Still too slow for games
– Question: do glossy reflections need self-shadowing?
423
Precomputed Radiance Transfer
• Diffuse Reflection
• Other BRDFs
• Implementation and Production Issues
We will now cover implementation and production issues relating to the techniques
we have just discussed.
424
PRT: Production Issues
• Albedo maps
– Could be stored as part of the transfer coefficients
– But often textures are sampled at higher resolution
– Better: multiply albedo with transfer at run-time
• If inter-reflections are included this is tricky, since they
depend on albedos.
425
PRT: Production Issues
• Normal maps
– If normal maps are included in precomputation
→ works just fine
– But
• Normal maps often at different resolution than PRT maps
– Solution:
• Normal Mapping for Precomputed Radiance Transfer
[Sloan06]
426
PRT: Production Issues
• Should I use PRT?
– If you are using light maps anyway
• PRT is not that different really!
– If you are using dynamic lighting (point lights …)
• More difficult to answer
• Combination of the two is largely unexplored…
427
Physically-Based Reflectance
for Games
12:00 - 12:15: Conclusions / Summary
All Presenters
428
Conclusions
• Open Discussion
Here is where the presenters will discuss the results we have presented.
429
Q&A
Time permitting, we will take questions from the audience here.
430
Course Web Page
• The latest version of the course notes, as
well as other supplemental materials can be
found at
http://www.cs.ucl.ac.uk/staff/J.Kautz/GameCourse/
431
References
• See course notes for references
432
Acknowledgements
• Shanon Drone, John Rapp, Jason Sandlin and John Steed for demo
help
• Paul Debevec for light probes
• Jan’s research collaborators: Fredo Durand, Paul Green, Wolfgang
Heidrich, Jaakko Lehtinen, Mike McCool, Tom Mertens, Hans-Peter
Seidel
• Keith Bruns for game art pipeline screenshots and information
• Firaxis Games for support
• Michael Ashikhmin, Wolfgang Heidrich, Henrik Wann Jensen,
Kazufumi Kaneda, Steve Lin, Eric Lafortune, David McAllister, Addy
Ngan, Michael Oren, Ravi Ramamoorthi, Kenneth Torrance, Gregory
Ward, and Stephen Westin for permission to use their images
433
References
All-Frequency Shadows Using Non-Linear Wavelet Lighting Approximation,
R. Ng, R. Ramamoorthi, P. Hanrahan,
SIGGRAPH 2003, pages 376-381
Frequency Space Environment Map Rendering,
R. Ramamoorthi, P. Hanrahan,
SIGGRAPH 2002, pages 517-526
An Efficient Representation for Irradiance Environment Maps,
R. Ramamoorthi, P. Hanrahan,
SIGGRAPH 2001
Simple Blurry Reflections with Environment Maps,
Ashikhmin, M., Ghosh, A. ,
Journal of Graphics Tools, 7(4), 2003 , pages 3-8
Advanced Environment Mapping in VR Applications
J. Kautz, K. Daubert, H.-P. Seidel
Computers & Graphics, 28(1), February 2004, pages 99-104
Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency
Lighting Environments
P.-P. Sloan, J. Kautz, J. Snyder,
Proc. SIGGRAPH 2002, July 2002, pages 527-536
Fast, Arbitrary BRDF Shading for Low-Frequency Lighting Using Spherical Harmonics
J. Kautz, P.-P. Sloan, J. Snyder
Proc. of the 13th Eurographics Workshop on Rendering, June 2002, pages 301-308
A Unified Approach to Prefiltered Environment Maps
J. Kautz, P.-P. Vázquez, W. Heidrich, H.-P. Seidel
Proc. of the 11th Eurographics Workshop on Rendering, June 2000, pages 185-196
Approximation of Glossy Reflection with Prefiltered Environment Maps
J. Kautz and M. D. McCool
Proceedings Graphics Interface 2000, May 2000, pages 119-126
Half-Life 2 Source Shading
G. McTaggart
GDC 2004
Normal Mapping for Precomputed Radiance Transfer,
Peter-Pike Sloan
Proc. of ACM Symposium on Interactive 3D Graphics and Games 2006, March, 2006
Lighting Controls for Computer Cinematography
Ronen Barzel.
Journal of Graphics Tools. 2(1), pp. 1-20, 1997.
An Improved Normalization for the Ward Reflectance Model
Arne Duer
ACM Journal of Graphic Tools, Vol. 11, Nr. 1 (2006), 51-59.
Notes on the Ward BRDF
Bruce Walter
Technical report PCG-05-06, Program of Computer Graphics, Cornell University, April
2005.
Basic Parameter Values for the HDTV Standard for the Studio and for International
Programme Exchange
Recommendation ITU-R BT.709 [formerly CCIR Rec. 709]
Geneva: ITU, 1990
Principles of Digital Image Synthesis,
A. Glassner
Morgan Kaufmann Publishers Inc., 1994
The Physics and Chemistry of Color, The Fifteen Causes of Color.
Kurt Nassau
John Wiley & Sons, Somerset NJ, 1983
Geometric Considerations and Nomenclature for Reflectance,
Nicodemus, F. E., J. C. Richmond, J. J. Hsia, I. W. Ginsberg, and T. Limperis
National Bureau of Standards (US) Monograph 161, 1977
A Practitioners' Assessment of Light Reflection Models
Peter Shirley, Helen Hu, Brian Smits, Eric P. Lafortune.
Pacific Graphics '97. 1997.
A Survey of Shading and Reflectance Models
Christophe Schlick
Computer Graphics Forum. 13(2), pp. 121-131, 1994.
Illumination for Computer Generated Pictures,
B.-T. Phong
CACM, Vol. 18, pp. 311-317, June, 1975.
Models of Light Reflection For Computer Synthesized Pictures
James F. Blinn
Computer Graphics (Proceedings of SIGGRAPH 77), 11(2), pp. 192-198, 1977.
A reflectance model for computer graphics
Robert L. Cook, Kenneth E. Torrance.
Computer Graphics (Proceedings of SIGGRAPH 81). 15(3), pp. 307-316, 1981.
Measuring and Modeling Anisotropic Reflection
Gregory J. Ward Larson.
Computer Graphics (Proceedings of SIGGRAPH 92). 26(2), pp. 265-272, 1992.
An Anisotropic Phong BRDF Model
Michael Ashikhmin, Peter S. Shirley.
Journal of Graphics Tools. 5(2), pp. 25-32, 2000.
Non-Linear Approximation of Reflectance Functions
Eric P. F. Lafortune, Sing-Choong Foo, Kenneth E. Torrance, Donald P. Greenberg.
Proceedings of SIGGRAPH 97. pp. 117-126, 1997.
Generalization of Lambert's Reflectance Model
Michael Oren, Shree K. Nayar.
Proceedings of SIGGRAPH 94. pp. 239-246, 1994.
Predicting Reflectance Functions From Complex Surfaces
Stephen H. Westin, James R. Arvo, Kenneth E. Torrance.
Computer Graphics (Proceedings of SIGGRAPH 92). 26(2), pp. 255-264, 1992.
A Microfacet-based BRDF Generator
Michael Ashikhmin, Simon Premoze, Peter S. Shirley.
Proceedings of ACM SIGGRAPH 2000. pp. 65-74, 2000.
Illumination in Diverse Codimensions
David C. Banks.
Proceedings of SIGGRAPH 94. pp. 327-334, 1994.
Experimental Analysis of BRDF Models
Addy Ngan, Frédo Durand, Wojciech Matusik.
16th Eurographics Symposium on Rendering. pp. 117-126, 2005.
Normal distribution functions and multiple surfaces
Alain Fournier.
Graphics Interface '92 Workshop on Local Illumination. pp. 45-52, 1992.
Efficient Rendering of Spatial Bi-directional Reflectance Distribution Functions
David K. McAllister, Anselmo A. Lastra, Wolfgang Heidrich.
Graphics Hardware 2002. pp. 79-88, 2002.
The plenoptic function and the elements of early vision
E. H. Adelson and J. R. Bergen,
Computational Models of Visual Processing, pages 3--20,
Cambridge, 1991. MIT Press.
Steerable Illumination Textures
Michael Ashikhmin, Peter Shirley.
ACM Transactions on Graphics. 21(1), pp. 1-19, 2002.
Polynomial Texture Maps
Tom Malzbender, Dan Gelb, Hans Wolters.
Proceedings of ACM SIGGRAPH 2001. pp. 519-528, 2001.
Interactive subsurface scattering for translucent meshes
Xuejun Hao, Thomas Baby, Amitabh Varshney.
2003 ACM Symposium on Interactive 3D Graphics. pp. 75-82, 2003.
Clustered Principal Components for Precomputed Radiance Transfer
Peter-Pike Sloan, Jesse Hall, John Hart, John Snyder.
ACM Transactions on Graphics. 22(3), pp. 382-391, 2003.
All-Frequency Precomputed Radiance Transfer for Glossy Objects
Xinguo Liu, Peter-Pike Sloan, Heung-Yeung Shum, John Snyder.
15th Eurographics Symposium on Rendering. pp. 337-344, 2004.
Triple product wavelet integrals for all-frequency relighting
Ren Ng, Ravi Ramamoorthi, Pat Hanrahan.
ACM Transactions on Graphics. 23(3), pp. 477-487, 2004.
The Irradiance Volume
Gene Greger, Peter Shirley, Philip M. Hubbard, Donald P. Greenberg.
IEEE Computer Graphics & Applications. 18(2), pp. 32-43, 1998.
Normal Distribution Mapping,
M. Olano and M. North
UNC Chapel Hill Computer Science Technical Report 97-041, 1997
Homomorphic Factorization of BRDF-based Lighting Computation
Lutz Latta, Andreas Kolb.
ACM Transactions on Graphics. 21(3), pp. 509-516, 2002.
Mipmapping Normal Maps
Michael Toksvig,
Journal of Graphics Tools, 10(3):65-71, 2005
Multiresolution Reflectance Filtering
Ping Tan, Stephen Lin, Long Quan, Baining Guo, Heung-Yeung Shum.
16th Eurographics Symposium on Rendering. pp. 111-116, 2005.
Toward Real-Time Photorealistic Rendering: Challenges and Solutions
Andreas Schilling.
SIGGRAPH / Eurographics Workshop on Graphics Hardware. pp. 7-16, 1997.
Antialiasing of Environment Maps
Andreas G. Schilling
20(1), Computer Graphics Forum, pp. 5-11, 2001
Antialiasing of Bump Maps
Andreas G. Schilling
TechReport, Wilhelm-Schickard-Institut für Informatik, University of Tübingen,
Germany, 1997
SpecVar Maps: Baking Bump Maps into Specular Response
Patrick Conran
Sketch, SIGGRAPH 2005.