CS488 A5 Project Manual: Enhanced Raytracer
Alex Lake
ajlake 20354795
July 30, 2014
Table of Contents

1 Purpose
  1.1 Topics
  1.2 Statement
  1.3 Objectives Overview
2 User Guide
  2.1 Project Structure
  2.2 Program IO
  2.3 Executable
  2.4 Lua Callbacks
3 Implementation
  3.1 Code Layout
  3.2 Non-Objective Technical Details
    3.2.1 Tools
    3.2.2 Decorator Pattern
    3.2.3 Ray Trees
    3.2.4 Memory Leak Fixes
    3.2.5 Additional Mesh Optimizations
    3.2.6 Additional Optimizations
  3.3 Objective Implementation
    3.3.1 Objective 1: Adaptive Anti-aliasing
    3.3.2 Objective 2: Reflection and Refraction
    3.3.3 Objective 3: Mesh Uniform Space Partitioning
    3.3.4 Objective 4: Phong Normal Interpolation
    3.3.5 Objective 5: Texture Mapping
    3.3.6 Objective 6: Perlin Noise
    3.3.7 Objective 7: Caustics
    3.3.8 Objective 8: Soft Shadows
    3.3.9 Objective 9: Constructive Solid Geometry
    3.3.10 Objective 10: Final Scene
    3.3.11 Objective 11 (Extra): Skybox
    3.3.12 Objective 12 (Extra): GUI Previewer
    3.3.13 Objective 13 (Extra): Fresnel Equations
    3.3.14 Objective 14 (Extra): Revised Final Scene
4 Bibliography
5 Objectives Sheet
1 Purpose
The main purpose of this project was to learn ray tracing techniques and to practice implementing them. The techniques covered both functional improvements and efficiency improvements. The end goal was to have a reasonably photorealistic image. Additionally, some practice was gained building an interactive OpenGL application which interacts with the raytracer code.
1.1 Topics

• Raytracer optimizations
• Raytracer light and shadow techniques
• Texture mapping and Perlin noise
• Constructive solid geometry
• Interactive OpenGL applications using gtkmm
1.2 Statement

The core of this project and all objectives involved extending the ray tracer from A4. The modeling language used to define the scene to render was implemented as a set of Lua callbacks, similar to that in A4 but with extensions and changes. An OpenGL application using gtkmm was also built around the raytracer which allows users to watch the target image be raytraced in front of them in real time.

The original project vision was to have the OpenGL application also preview the scene to be rendered using the OpenGL pipeline (3D to 2D), but be devoid of features such as reflection, refraction, textures, and shadows. The user could then use the virtual trackball interface from A3 to adjust the perspective for the scene and then render from that perspective (and watch it be rendered in real time). Additional extensions to the OpenGL interactions would allow users to add, change, and remove light sources and objects in the scene. Even further extensions would allow users to assign velocities to objects in the scene (including lights), and have the application simulate time and raytrace an image 30 times per emulated second of time. These raytraced images could then be used to produce a short animation. Unfortunately, time constraints limited the OpenGL component of the project to only being able to watch the image be raytraced in real time.

The project was both challenging and interesting. First, the raytracer enhancements proved to be algorithmically interesting and significant efficiency gains were had; this allowed a more complex scene to be rendered. The choice of enhancements was also such that greater photorealism could be achieved (i.e. soft shadows and Phong normal interpolation). The OpenGL application built around the raytracer also proved to be a valuable objective that made the raytracer far more interactive and enjoyable to watch.
1.3 Objectives Overview

• Objective 1: Adaptive Anti-aliasing
• Objective 2: Reflection and Refraction
• Objective 3: Mesh Uniform Space Partitioning
• Objective 4: Phong Normal Interpolation
• Objective 5: Texture Mapping
• Objective 6: Perlin Noise
• Objective 7: Caustics (ABANDONED)
• Objective 8: Soft Shadows
• Objective 9: Constructive Solid Geometry
• Objective 10: Final Scene
• Objective 11 (Extra): Skybox
• Objective 12 (Extra): GUI Previewer
• Objective 13 (Extra): Fresnel Equations
• Objective 14 (Extra): Revised Final Scene
2 User Guide
The entirety of the project is located in the A5 directory.
2.1 Project Structure
Source Code: All source code and the Makefile are located in the A5/src/ directory.

Data Files: All data files, such as demonstration scenes (Lua scripts), textures, and models, will be located in the A5/data/ directory. For each implemented objective, there is a corresponding demonstration scene in the data directory named NNdemo.lua (NN is the objective number) which renders one or more scenes that demonstrate the objective.

Root Directory: The root directory contains:

• rt: The raytracer executable (this should be run in the root level directory with ./rt).
• README: A short guide on the raytracer program execution and the command line options.
• screenshot01.png: The final scene (renderable with ./rt -alsamples 8 -ss 4 data/10demo.lua).
• screenshot02.png: The revised final scene (renderable with ./rt -alsamples 8 -ss 4 data/14demo.lua).
• render_all.sh: Renders all demonstration scenes in order (for objectives 1-14). When the GUI launches, press 'R' to render and 'Q' when the rendered image is complete (or use the Application Menu).
2.2 Program IO
Input:

• Wavefront .OBJ files to describe polygonal mesh objects.
• Lua files to describe the scene to render.

Interaction:

• The raytracer cannot be interacted with unless the GUI is launched.
• The GUI has an Application Menu with two options: Render (shortcut R) and Quit (shortcut Q). While the image is rendering, pressing Q or closing the window will prevent the image from rendering and no PNG file will be written.

Output:

• An image in PNG format; the dimensions and write location are specified by the input Lua file.
2.3 Executable
Invocation: ./rt [options ...] [a-zA-Z0-9]+.lua [ [a-zA-Z0-9]+.lua ... ]

Options:

• -gui: Launch the raytracer GUI (default off).
• -ss N: Use NxN ray samples per pixel for the anti-aliasing pass (default 3).
• -reflect-limit N: Limit the number of times a ray can reflect to N (default 2).
• -refract-limit N: Limit the number of times a ray can refract to N (default 4).
• -write-aa-targets X: For a given render job writing to foo.png, this will cause the raytracer to write to Xfoo.png as well. This image will be the first pass image but with all pixels that have been marked for anti-aliasing in red (default off).
• -write-pre-aa X: For a given render job writing to foo.png, this will cause the raytracer to write to Xfoo.png as well. This image will be the first pass image prior to the anti-aliasing pass (default off).
• -side_tolerance N: Sets the threshold for the colour difference for directly adjacent pixels. This threshold defines which pixels get marked for anti-aliasing (default 0.005).
• -corner_tolerance N: Sets the threshold for the colour difference for diagonally adjacent pixels. This threshold defines which pixels get marked for anti-aliasing (default 0.01).
• -use-tir: Consider total internal reflection when computing refraction (default off).
• -al-samples N: For area lights, sample them using a grid of NxN shadow rays (default 4).

2.4 Lua Callbacks
The set of Lua callbacks is largely the same as in A4 but with many extras. There are four types of objects:
1. Primitives

A primitive describes a 3D solid.

• gr.cube(<Material>): Create a cube primitive from the origin (0,0,0) to (1,1,1) with a particular surface material.
• gr.sphere(<Material>): Create a sphere primitive at the origin (0,0,0) with radius 1 with a particular surface material.
• gr.mesh(<Material>, <table of 3-tuples>, <table of n-tuples>): Create a solid out of a polygon mesh with the first argument designating the surface material, the second argument designating the vertices, and the third argument designating the faces (each tuple is a set of vertices making up the face, specified by their index in the table of vertices).
• gr.fast_mesh(<Material>, <table of 3-tuples>, <table of n-tuples>): Functionally equivalent to gr.mesh but significantly more performant and memory intensive.
• gr.vn_mesh(<Material>, <table of 3-tuples>, <table of 3-tuples>, <table of n-tuples>, <table of n-tuples>): Create a solid out of a polygon mesh with the first argument designating the surface material, the second argument designating the vertices, and the third argument designating the vertex normals. The fourth argument designates the faces and the fifth argument designates the vertex normals for the faces. The fourth and fifth arguments are specified in the same way as in gr.mesh.
• gr.fast_vn_mesh(<Material>, <table of 3-tuples>, <table of 3-tuples>, <table of n-tuples>, <table of n-tuples>): Functionally equivalent to gr.vn_mesh but significantly more performant and memory intensive.
2. Materials

A material describes the surface properties of a primitive.

• gr.material(<3-tuple of [0,1]>, <3-tuple of [0,1]>, <number [0, inf)>): Returns a Phong material whose diffuse colour (RGB) is given by the first argument, specular colour (RGB) given by the second argument, and shininess given by the third argument.
• gr.rr_material(<3-tuple of [0,1]>, <3-tuple of [0,1]>, <number [0, inf)>, <number [0,1]>, <number [0,1]>, <number [1, inf)>): Returns a Phong material whose diffuse colour (RGB) is given by the first argument, specular colour (RGB) given by the second argument, shininess given by the third argument, reflection amount given by the fourth argument, refraction amount given by the fifth argument, and refractive index given by the sixth argument.
• gr.marble_pnd_material(<Material>, <number>, <number>, <number>, <number [0 or 1]>, <3-tuple of [0,1]>): Returns a modified version of the material given as an argument whose diffuse colour is modified to have a marble-like texture. The generated texture is affected by arguments 2-6. The second argument affects the noisiness of the veins, the third argument affects the overall noise, the fourth argument affects the vein frequency, the fifth argument specifies whether to use linear interpolation (0) or cosine interpolation (1), and the sixth argument specifies which RGB colour to use to blend with the underlying colour of the first argument.
• gr.wood_pnd_material(<Material>, <number>, <number>, <number [0 or 1]>, <3-tuple of [0,1]>): Returns a modified version of the material given as an argument whose diffuse colour is modified to have a wood-grain-like texture. The generated texture is affected by arguments 2-5. The second argument affects the noisiness of the pattern, the third argument affects the frequency of the grain, the fourth argument specifies whether to use linear interpolation (0) or cosine interpolation (1), and the fifth argument specifies which RGB colour to use to blend with the underlying colour of the first argument.
• gr.td_cube_material(<Material>, <string>): Returns a modified version of the material given as an argument whose diffuse colour is modified such that the image texture specified by the second argument is mapped onto each face of the cube.
• gr.td_sphere_material(<Material>, <string>): Returns a modified version of the material given as an argument whose diffuse colour is modified such that the image texture specified by the second argument is mapped onto the surface of the sphere.
• gr.difresnel_material(<Material>): Returns a modified version of the material given as an argument that exhibits the behaviour of a dielectric, but otherwise has the same properties.
3. Nodes

• gr.node(<string>): Creates a node at the origin (0,0,0) with the given name.
• gr.geometry(<string>, <Primitive>): Creates a geometry node at the origin (0,0,0) with the given name and with 3D geometry given by the second argument.
• gr.csg_difference(<string>, <GeometryNode>, <GeometryNode>): Creates a geometry node at the origin (0,0,0) with the given name whose 3D geometry is given by subtracting the geometry of argument 3 from that of argument 2.
• gr.csg_union(<string>, <GeometryNode>, <GeometryNode>): Creates a geometry node at the origin (0,0,0) with the given name whose 3D geometry is given by performing the union of the geometry of argument 2 with that of argument 3.
• gr.csg_intersection(<string>, <GeometryNode>, <GeometryNode>): Creates a geometry node at the origin (0,0,0) with the given name whose 3D geometry is given by performing the intersection of the geometry of argument 2 with that of argument 3.
4. Lights

• gr.light(<3-tuple>, <3-tuple of [0,1]>, <3-tuple>): Creates a point light source at the position specified by the first argument, with the RGB colour of the light given by the second argument, and with the attenuation values given by the third argument.
• gr.area_light(<3-tuple>, <3-tuple>, <3-tuple>, <3-tuple of [0,1]>, <3-tuple>): Creates an area light source (rectangle) whose corner is specified by the first argument, with the second and third arguments specifying two vectors such that when both are added to the given corner the opposite corner is produced. The fourth argument specifies the RGB colour of the light and the fifth argument specifies the attenuation values.
Both Node and GeometryNode also support the following member methods:

• :add_child(<Node or GeometryNode>): Adds the argument as a child of the calling node. All transformations (scales, rotations, and translations) performed on the calling node will also be applied to its children (note: the children themselves will not actually be modified). When a node is rendered, itself plus all of its children are rendered. Node objects have no 3D geometry, therefore they simply render their children after applying any local transformations.
• :scale(<number>, <number>, <number>): Scales the node in the x, y, and z dimensions respectively.
• :rotate(<string>, <number>): The first argument specifies an axis (one of 'x', 'y', or 'z'). The second argument specifies the angle in degrees to rotate the node by.
• :translate(<number>, <number>, <number>): Translates the node in the x, y, and z directions respectively.
There are two options for rendering:

1. gr.render(<Node>, <string>, <number>, <number>, <3-tuple>, <3-tuple>, <3-tuple>, <number>, <3-tuple of [0,1]>, <n-tuple of Light>): Renders the scene given by the first argument with the second argument specifying the write destination for the rendered image. The third and fourth arguments are the image width and height in pixels. The fifth argument is the view position, the sixth argument is the view direction, the seventh argument is the Up vector, and the eighth argument is the field of view in degrees. The ninth argument is the ambient light factor, and the tenth argument specifies the lights in the scene.

2. gr.render_skybox(<Node>, <string>, <string>, <number>, <number>, <3-tuple>, <3-tuple>, <3-tuple>, <number>, <3-tuple of [0,1]>, <n-tuple of Light>): Identical to gr.render but with an additional argument inserted after the second argument which specifies a path to a skybox texture.
3 Implementation
The project is written in C++ using gtkmm for the OpenGL interface. Scene information is provided using Lua files. The general technical details resemble those of A4.
3.1 Code Layout
Lua

• lua488.hpp: Lua library header file.
• scene_lua.hpp/scene_lua.cpp: Lua driver.

Scene Specification

• scene.hpp: Node and GeometryNode.
• light.hpp: Lights.
• material.hpp: Materials.
• primitive.hpp: Primitives.

Input/Output

• image.hpp/image.cpp: The code for writing to/from PNG files.
• image_writer.hpp/image_writer.cpp: A wrapper around the Image class.

GUI

• rtgui_window.hpp/rtgui_window.cpp: The main gtkmm GUI window.
• rt_viewer.hpp/rt_viewer.cpp: The OpenGL Viewer.
• gl_image_writer.hpp/gl_image_writer.cpp: Allows the GUI to read the rendered image so far.

Program Init

• main.cpp: Program entry point.
• rtoptions.hpp/rtoptions.cpp: Handles CLI parsing/options.
• rt_bootstrap.hpp/rt_bootstrap.cpp: Bootstraps the raytracer.

Core Raytracer Code (meat and potatoes)

• algebra.hpp/algebra.cpp: Code for using and manipulating 3D matrices, 3D vectors, and RGB colours.
• polyroots.hpp/polyroots.cpp: Polynomial solver.
• raytracer.hpp/raytracer.cpp: The core raytracer algorithm.
• ray.hpp/ray.cpp: The Ray class. This class encapsulates information about a ray.
• collision.hpp: The Collision class. This class encapsulates information about a ray interaction.
• mesh.hpp/mesh.cpp: Implementation of polygon meshes.
• perlin.hpp/perlin.cpp: Implementation of Perlin noise.
• csg.hpp/csg.cpp: The implementation of constructive solid geometry.
• fresnel.hpp/fresnel.cpp: Mostly just a class definition. Implementation of the Fresnel equations is in raytracer.cpp.
• skybox.hpp/skybox.cpp: Implementation of skyboxes.
• solid.hpp/solid.cpp: Implementation of the cube and sphere primitives.
• texture_mapper.hpp/texture_mapper.cpp: Implementation of cube and sphere texture mapping.
• light.cpp: Implementation of lights and area lights.
• scene.cpp: Implementation of scene nodes.
• material.cpp: Implementation of Phong materials.

3.2 Non-Objective Technical Details
3.2.1 Tools
Git was used for source control. gdb was used to debug (often with TUI mode). A typical debugging workflow was identifying artifacting in images and then locating exactly which pixels have artifacts. Then, using gdb, breakpoints were set to trigger only on the particular pixels with artifacts. From there, gdb can be used to identify the cause of the artifacting (often epsilon errors!).
3.2.2 Decorator Pattern
A decorator pattern was used for all materials except for gr.material and gr.rr_material. The idea is that materials expose a getTexture method. Materials using this pattern are constructed by being given a base material in addition to their regular arguments. Their getTexture method follows this pattern:

Texture tex = m_baseMaterial->getTexture(...)
Modify some aspect of tex ...
return tex

The advantage of this is that it allows simple composition of texture behaviours without having combinatorially more commands, classes, code, or member variables. For example, a slightly reflective texture mapped cube can be achieved by decorating gr.rr_material with gr.td_cube_material.
3.2.3 Ray Trees
The Ray class (ray.hpp) encapsulates all pertinent information about a ray (reflect depth, refract depth, colour, ray component, ray children, etc.). When secondary rays are cast, they are added as children of the primary ray with their respective components set. For example, if a material is 50% reflective and 50% refractive, then two secondary rays will be created, their components will be set to 0.5 and 0.5 respectively, they will be added as children of the primary ray, and then pushed onto the queue of rays to process. The usage of queues prevents unnecessary stack frames from being created during raytracing, which saves computational time. When the primary ray is asked what its colour is, it returns its own colour (default black) plus all of the contributions of its child rays (multiplied by their respective components). This has streamlined and simplified a lot of behaviour (a sketch of the colour accumulation follows the list below):

• Reflection and refraction (simply spawn a new ray, add it as a child, and push it onto the ray queue).
• Supersampling (e.g. for 4x supersampling, create 4 rays with components 0.25 and add them as children of a primary ray. Then cast the 4 rays and ask the primary ray what its colour is. The averaging work is done for free!)
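A minimal sketch of the child-accumulation idea, assuming a simplified Colour type and ignoring the queue machinery and specular filter:

#include <vector>

struct Colour {
    double r, g, b;
    Colour operator+(const Colour& o) const { return {r + o.r, g + o.g, b + o.b}; }
    Colour operator*(double s) const { return {r * s, g * s, b * s}; }
};

// Sketch of the ray-tree colour accumulation described above (names are illustrative).
struct Ray {
    Colour ownColour{0.0, 0.0, 0.0};   // default black
    double component = 1.0;            // weight applied by the parent
    std::vector<Ray*> children;

    // A ray's colour is its own colour plus the weighted colours of its children.
    Colour colour() const {
        Colour total = ownColour;
        for (const Ray* child : children)
            total = total + child->colour() * child->component;
        return total;
    }
};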
3.2.4 Memory Leak Fixes
The original Lua driver code in A4 leaked memory. Some cleanup code and containers were added to scene_lua.cpp. Each time an object is allocated by the Lua driver, it adds the pointer to one of the relevant containers. Before returning, the Lua driver deletes all of the allocated objects in the containers. Additionally, significant use was made of auto pointers and stack objects to ensure proper cleanup of memory (see gl_image_writer.hpp as an example).
3.2.5 Additional Mesh Optimizations
First, for reasons explained later, all polygon mesh faces are triangulated. Then, for each face, as much as possible is pre-computed, such as the area of the triangle (used for Phong normal interpolation), the surface normal, the normals of each side, and the edge vectors AB, BC, and CA for the vertices A, B, and C. Coupled with the mesh optimization, this has brought mesh intersection times to approximately the same order as primitive intersections!
3.2.6 Additional Optimizations
• In the Makefile, the -O3 option is used to perform optimizations such as function inlining. This has improved the raytracer performance by a constant factor of around 3 to 5 in most cases.
• The scene hierarchy is flattened before raytracing. This is done to collapse scene transformations into a single matrix multiplication in the GeometryNode object.
3.3 Objective Implementation
3.3.1 Objective 1: Adaptive Anti-aliasing

Code Location: raytracer.cpp
Initially, one ray per pixel is used to generate an image. Then, each pixel in the image is compared against its (up to) 8 neighbouring pixels (4 adjacent, 4 diagonal). A difference value is computed for adjacent and diagonal pixels by treating the two RGB colours of the test pixel and the neighbouring pixel as 3D points and computing the squared distance between them.

If the squared distance is greater than the provided threshold (which can be set by the -side_tolerance and -corner_tolerance CLI options), the pixel is marked for resampling and changed to pure red. This allows the GUI to show viewers which pixels are being resampled.

Pixels marked for resampling are then supersampled using an evenly spaced grid of rays (the number of rays is set by the -ss CLI option). For example, if a pixel was the square (0,0) to (1,1) and the number of samples was 4, the 4 rays would be shot through (0.25, 0.25), (0.25, 0.75), (0.75, 0.25), and (0.75, 0.75).

Also note that the -write-aa-targets and -write-pre-aa CLI options are honoured and the image will be saved accordingly before resampling and pixel flagging respectively.
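A sketch of the neighbour-difference test described above, assuming a flat row-major Colour buffer (the type and function names are illustrative, not the project's actual code):

struct Colour { double r, g, b; };

// Squared RGB distance used as the neighbour-difference metric.
double colourDistanceSquared(const Colour& a, const Colour& b) {
    double dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

// Marks a pixel for resampling if any neighbour differs by more than its threshold.
// sideTol / cornerTol correspond to the -side_tolerance / -corner_tolerance options.
bool needsResampling(const Colour img[], int w, int h, int x, int y,
                     double sideTol, double cornerTol) {
    const Colour& c = img[y * w + x];
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            double tol = (dx != 0 && dy != 0) ? cornerTol : sideTol;   // diagonal vs adjacent
            if (colourDistanceSquared(c, img[ny * w + nx]) > tol) return true;
        }
    }
    return false;
}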
3.3.2 Objective 2: Reflection and Refraction

Code Location: raytracer.cpp
Relevant Lua Commands: gr.rr_material
When a collision occurs, the Collision object contains information about the nature of the surface texture that the ray is interacting with. This includes the reflection and refraction coefficients.

If the refraction coefficient is greater than 0:

First, the refract depth of the current ray is checked to see if the refraction limit has been reached (configurable with the -refract-limit CLI option). If it has not, then the refractive index of the surface texture $r_t$ is retrieved. If the refractive index of the current ray $r_i$ (starts as 1.0) is equal to $r_t$, then it is assumed the ray is exiting the solid and $r_t$ is set to 1.0. Note that this assumption has flaws, as it assumes that air is the medium after exiting an object. It also falls apart when refractive objects are inside of other refractive objects. The ratio $n_r = r_i / r_t$ is then computed. The surface normal $\vec{n}$ and the incident vector $\vec{i}$ are also computed. It is worth noting that $\vec{i}$ points from the point of collision to the ray origin. Let $\theta_t$ be the angle of transmission. The class notes then give $\cos^2\theta_t = 1 - n_r^2 (1 - (\vec{n} \cdot \vec{i})^2)$. If $\cos^2\theta_t < 0$ then we have total internal reflection (however, this must be enabled with the -use-tir CLI option). In this case, the refraction coefficient is added to the reflection coefficient. Otherwise, a secondary ray is cast whose origin is the point of collision and whose direction is given by $\vec{t} = (-n_r (\vec{n} \cdot \vec{i}) - \sqrt{\cos^2\theta_t})\,\vec{n} + n_r\,\vec{i}$ (from class notes). The secondary ray is added as a child of the primary ray with its refract depth equal to its parent's depth minus 1, with its ray component equal to the refraction coefficient, and with its refractive index equal to $r_t$.
If the reflection coefficient is greater than 0:

First, the reflect depth of the current ray is checked to see if the reflection limit has been reached (configurable with the -reflect-limit CLI option). If it has not, then the surface normal $\vec{n}$ and incident vector $\vec{i}$ are computed. It is worth noting that the incident vector points from the point of collision to the ray origin. The reflection vector $\vec{r} = 2(\vec{n} \cdot \vec{i})\,\vec{n} - \vec{i}$ is then computed and a secondary ray is cast in the same way as in the refraction case.
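The two direction computations above can be illustrated with a self-contained sketch (the Vec3 type and function names are illustrative, not the project's algebra classes; both formulas follow the class-notes convention where the incident vector points toward the ray origin):

#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Reflection direction: r = 2(n.i)n - i.
Vec3 reflectDir(const Vec3& n, const Vec3& i) {
    return n * (2.0 * dot(n, i)) - i;
}

// Refraction direction for index ratio nr = ri / rt.
// Returns false on total internal reflection (cos^2(theta_t) < 0).
bool refractDir(const Vec3& n, const Vec3& i, double nr, Vec3& t) {
    double cos2t = 1.0 - nr * nr * (1.0 - dot(n, i) * dot(n, i));
    if (cos2t < 0.0) return false;                           // total internal reflection
    t = n * (-nr * dot(n, i) - std::sqrt(cos2t)) + i * nr;   // points into the surface
    return true;
}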
Additionally, Phong highlights are applied with (1 − refraction coefficient) as a coefficient. Therefore, the more refractive a material, the less visible its specular highlights are. This makes sense since specular highlights are due to reflection. Also, in the reflection and refraction cases, the origin of the secondary Ray that is spawned is nudged along the path of reflection or transmission by a distance of 0.01. This helps prevent artifacting due to Rays colliding with the same surface location multiple times. The 0.01 is arbitrary and loses effectiveness when geometry is very small or very large!

Another feature worth discussing is the specular filter colour (default (1,1,1)) in the Ray class. When a Ray is refracted or reflected, it takes on the specular colour of the object that it interacted with and applies that colour multiplicatively to its final colour. For example, suppose there was a refractive object that only allowed blue light to be transmitted. Its specular colour would be set to (0,0,1), so that when a secondary Ray refracted through it, its specular filter would be set to (0,0,1). When the Ray is then asked what its colour is, it will return its final colour multiplied by (0,0,1), only allowing the blue component of the light to pass through. This feature allows more interesting materials to be expressed, and had the caustics objective been implemented, it would have allowed coloured caustic patterns (e.g. a green glass solid having green caustics).
3.3.3 Objective 3: Mesh Uniform Space Partitioning
Code Location: mesh.cpp
Relevant Lua Commands: gr.fast_mesh, gr.fast_vn_mesh
Given a bounding box for a mesh, it's reasonable to assume that in many cases the distribution of geometry within that box is fairly uniform. In particular, it is probably not the case that the bounding box contains a small cluster where the majority of geometry exists, and everywhere else is empty. Thus, the technique outlined in A Fast Voxel Traversal Algorithm for Ray Tracing performs quite well. This mesh optimization was performed using the technique in this paper.

First, all faces of the polygon mesh are triangulated if they are not already triangles. Then, the bounding box is uniformly subdivided into Voxels (a Voxel is a list of faces). The number of voxels is given by $n^3$ where $n = \sqrt[3]{\textit{numfaces}}$ (the x, y, and z dimensions of the bounding box are partitioned the same number of times). Then, each face is assigned to the voxels that it falls into. This is done by using a 3D triangle scanline algorithm. Beginning at two vertices of the triangle, two points A and B are incrementally advanced towards the third vertex of the triangle (the increment is one half of the minimum voxel dimension). Then, using the same increment value, points are interpolated between A and B. Each point is checked for which Voxel it is in. If that Voxel is not currently in the set of voxels being recorded, it is added to the set. When the scanline algorithm finishes, each Voxel in the set adds the current face to its list of faces.
When performing collision detection, the algorithm from the paper is used. First, it is determined which voxel the ray origin is located in, or at which edge voxel the ray will enter. The voxels are then traversed in a linear fashion according to the direction of the ray (3D DDA algorithm). Each polygon face in the traversed voxel is tested against and added to a set (to prevent repeated testing). If a face is collided with, the voxel containing the intersection point is recorded.
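A simplified sketch of the 3D DDA stepping loop from the paper, assuming the ray origin lies inside a unit bounding box subdivided into n x n x n voxels; visitVoxel stands in for testing the faces stored in each voxel (names and setup are illustrative):

#include <algorithm>
#include <cmath>
#include <limits>

void traverseVoxels(const double orig[3], const double dir[3], int n) {
    int    ix[3], step[3];
    double tMax[3], tDelta[3];
    for (int a = 0; a < 3; ++a) {
        double cell = 1.0 / n;                        // voxel size along this axis
        ix[a]   = std::min(n - 1, std::max(0, int(orig[a] / cell)));
        step[a] = (dir[a] >= 0.0) ? 1 : -1;
        double nextBoundary = (ix[a] + (step[a] > 0 ? 1 : 0)) * cell;
        tMax[a]   = (dir[a] != 0.0) ? (nextBoundary - orig[a]) / dir[a]
                                    : std::numeric_limits<double>::infinity();
        tDelta[a] = (dir[a] != 0.0) ? cell / std::fabs(dir[a])
                                    : std::numeric_limits<double>::infinity();
    }
    while (ix[0] >= 0 && ix[0] < n && ix[1] >= 0 && ix[1] < n && ix[2] >= 0 && ix[2] < n) {
        // visitVoxel(ix[0], ix[1], ix[2]);  // test the faces stored in this voxel
        int a = (tMax[0] < tMax[1]) ? ((tMax[0] < tMax[2]) ? 0 : 2)
                                    : ((tMax[1] < tMax[2]) ? 1 : 2);
        ix[a]   += step[a];                           // step to the neighbouring voxel
        tMax[a] += tDelta[a];                         // advance the boundary-crossing distance
    }
}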
If $n$ is the number of polygon faces in the mesh, then the runtime (in practice) is $O(\sqrt[3]{n})$. This is because there are two cases:

1. No collisions: Since the line is linear, $O(\sqrt[3]{n})$ voxels will be tested. There will be at most approximately 20 faces (in practice) in a given voxel and testing an intersection with a face requires constant time. Therefore, the total complexity is $O(\sqrt[3]{n})$.

2. There is a collision: Same rationale as in case 1, but potentially more performant due to the early exit.

Furthermore, if it is assumed that $n \leq 100000$, then $\sqrt[3]{n} \approx 46$, which can be considered constant time, hence the claim that mesh intersection times are roughly the same complexity as primitive intersection times.
3.3.4 Objective 4: Phong Normal Interpolation
Code Location: mesh.cpp
Relevant Lua Commands: gr.vn_mesh, gr.fast_vn_mesh
First, data/readobj.lua was enhanced with a method to read in vertex normal information in
the same way as other information. The extra information is consequently passed on to the Lua
command.
Implementation-wise, if using Phong Normal Interpolation, a boolean flag is set to true in the mesh object. When returning the collision information, the vertex normals of the triangle face are interpolated in the same way as in Gouraud shading in the course notes (barycentric coordinates and area ratios of triangles) to compute the surface normal.
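A self-contained sketch of the area-ratio interpolation for a point p inside triangle ABC (illustrative names, not the project's mesh code):

#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Barycentric (area-ratio) interpolation of vertex normals nA, nB, nC at point p.
Vec3 interpolateNormal(const Vec3& A, const Vec3& B, const Vec3& C,
                       const Vec3& nA, const Vec3& nB, const Vec3& nC, const Vec3& p) {
    double areaABC = length(cross(B - A, C - A));          // twice the area; factor cancels
    double wA = length(cross(B - p, C - p)) / areaABC;     // sub-triangle opposite A
    double wB = length(cross(C - p, A - p)) / areaABC;     // sub-triangle opposite B
    double wC = length(cross(A - p, B - p)) / areaABC;     // sub-triangle opposite C
    Vec3 n = nA * wA + nB * wB + nC * wC;
    return n * (1.0 / length(n));                          // normalize the interpolated normal
}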
3.3.5 Objective 5: Texture Mapping
Code Location: texture_mapper.cpp
Relevant Lua Commands: gr.td_sphere_material, gr.td_cube_material
Texture mapping was implemented for the sphere and cube primitives (unit cube and unit sphere). The mapping system is straightforward (due to the unit sphere and unit cube) and is as described in class (UV mapping). Acceptable textures are rectangular images in the PNG format. Interpolation and filtering are not used (supersampling ends up doing a lot of that work, although more expensively).

Cubes have the texture applied to each face. The front, back, and sides of the cube have the same orientation (note: the front face is considered to be the one with normal (0,0,1), assuming a right-handed system). Therefore, if the cube is rotated about the y axis by 90 degrees, the face that the viewer sees looks the same. The top is mapped such that if the viewer was directly above the cube with (0,0,-1) as the up vector, the top face looks the same as the front face. The bottom face is mapped in the same way as the top face. This was designed intentionally so that seamless textures would appear more contiguous along the edges (assuming the viewer views the front region of the cube and not the back region).
Spheres were texture mapped by converting x, y, z to spherical coordinates and then using $\phi$ and $\theta$ to determine the UV coordinates:

$\phi = \arccos(y)$
$\theta = \mathrm{atan2}(-z, -x) + \pi$
$u = \theta / 2\pi$
$v = \phi / \pi$

A significant improvement would be to use interpolation (e.g. bilinear or trilinear). This would improve the mapping and reduce supersampling in many cases.
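A sketch of the mapping under these formulas, with an illustrative nearest-texel lookup (no filtering, as noted above); the function name and output convention are assumptions:

#include <algorithm>
#include <cmath>

// UV mapping for a point (x, y, z) on the unit sphere, following the formulas above.
// Writes pixel coordinates into a texWidth x texHeight image.
void sphereUV(double x, double y, double z, int texWidth, int texHeight,
              int& px, int& py) {
    const double PI = 3.14159265358979323846;
    double phi   = std::acos(y);                  // in [0, pi]
    double theta = std::atan2(-z, -x) + PI;       // in [0, 2*pi]
    double u = theta / (2.0 * PI);                // in [0, 1]
    double v = phi / PI;                          // in [0, 1]
    px = std::min(texWidth  - 1, int(u * texWidth));    // nearest texel,
    py = std::min(texHeight - 1, int(v * texHeight));   // no interpolation
}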
3.3.6 Objective 6: Perlin Noise
Code Location: perlin.cpp
Relevant Lua Commands: gr.marble_pnd_material, gr.wood_pnd_material
To simulate grainy wood and marble textures, procedural texturing using Perlin noise was used. The implementation of the noise function was based on Ken Perlin's 2002 SIGGRAPH paper: http://mrl.nyu.edu/~perlin/paper445.pdf. Perlin provides a sample implementation at http://mrl.nyu.edu/~perlin/noise, from which the code is heavily templated. Inspiration for the textures also came from http://freespace.virgin.net/hugo.elias/models/m_perlin.htm.

Perlin noise works by interpolating between noise values. In the 3D case, the idea is that there exists a 3D lattice where each point has a noise value. To determine the value for an arbitrary point, it is determined which cube in the lattice the point is in and then its value is interpolated from the 8 corners across each dimension.
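A sketch of the lattice lookup and interpolation (the latticeNoise hash here is an illustrative integer-noise function in the style of the Hugo Elias reference, not the project's actual Perlin implementation, and linear interpolation is used for brevity):

#include <cmath>

// Deterministic pseudo-random value for a lattice point.
double latticeNoise(int x, int y, int z) {
    int n = x + y * 57 + z * 113;
    n = (n << 13) ^ n;
    return 1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
}

double lerp(double a, double b, double t) { return a + t * (b - a); }

// Interpolate the noise value at (x, y, z) from the 8 corners of its lattice cube.
double interpolatedNoise(double x, double y, double z) {
    int ix = int(std::floor(x)), iy = int(std::floor(y)), iz = int(std::floor(z));
    double fx = x - ix, fy = y - iy, fz = z - iz;    // position inside the lattice cube
    double c00 = lerp(latticeNoise(ix, iy,     iz    ), latticeNoise(ix + 1, iy,     iz    ), fx);
    double c10 = lerp(latticeNoise(ix, iy + 1, iz    ), latticeNoise(ix + 1, iy + 1, iz    ), fx);
    double c01 = lerp(latticeNoise(ix, iy,     iz + 1), latticeNoise(ix + 1, iy,     iz + 1), fx);
    double c11 = lerp(latticeNoise(ix, iy + 1, iz + 1), latticeNoise(ix + 1, iy + 1, iz + 1), fx);
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);   // across x, then y, then z
}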
The wood grain texture is parameterized by four arguments: m_noiseFactor, m_density, m_interpolationMethod, and m_mixColour. The following is pseudo-code for the wood grain (let noise be linearly interpolated noise or cosine interpolated noise depending on whether m_interpolationMethod is 0 or 1):

p = surface point
double totalNoise = noise(m_noiseFactor*p[0], m_noiseFactor*p[1], m_noiseFactor*p[2]);
totalNoise = m_density*totalNoise;
totalNoise = totalNoise - std::floor(totalNoise);          // keep only the fractional part
double mixRatio = c_inter(0.5 + 0.5*totalNoise, 0., 1.);   // map the value into [0, 1]
baseTexture.diffuse_colour = mixRatio*baseTexture.diffuse_colour + (1-mixRatio)*m_mixColour;

The reason m_noiseFactor increases the noise is that it causes the surface points to move around in the lattice more quickly. The reason m_density increases the grain density is that it shortens the interval before the noise value gets floored again.
The marble texture is parameterized by five arguments: m_noiseFactor, m_noiseMultiplier, m_veinFrequency, m_interpolationMethod, and m_mixColour. The following is pseudo-code for the marble (let noise be linearly interpolated noise or cosine interpolated noise depending on whether m_interpolationMethod is 0 or 1):

double totalNoise = 0.;
int numOctaves = 4;
double persistence = 4.;
double currP = 1.;
double basePM = m_noiseFactor;
for (int i = 0; i < numOctaves; i++) {
    totalNoise += (1./currP)*noise(basePM*p[0], basePM*p[1], basePM*p[2]);   // each octave contributes less
    currP *= persistence;
    basePM *= persistence;
}
double mixRatio = c_inter((1. + cos(m_veinFrequency*p[1] + m_noiseMultiplier*totalNoise)) * 0.5, 0., 1.);
baseTexture.diffuse_colour = mixRatio*baseTexture.diffuse_colour + (1-mixRatio)*m_mixColour;

The idea is that cos(y) is used as the base pattern, which yields smooth stripes. By adding noise, a marbly vein texture is produced. Four octaves of noise are added together, with each octave having a smaller amplitude. This makes the distorted veins have additional smaller veins and distortions, adding realism. The reason m_noiseFactor increases the noise is the same as with the wood grain. m_noiseMultiplier makes the marble more noisy by increasing the ratio of noise to stripe pattern. m_veinFrequency increases the vein frequency by reducing the period so that cos(y) repeats more often.
3.3.7 Objective 7: Caustics

This objective was abandoned and is not implemented. Originally, the plan was to use the method described in A Ray Tracing Method for Illumination Calculation in Diffuse-Specular Scenes. An illumination map would have been created for any diffuse surface for which caustics are desired. The illumination map would act as a sort of texture map for these surfaces. To construct the illumination map, rays would need to be cast from each light source, and illumination information would be saved for diffuse surfaces whenever light is reflected or refracted before hitting them. This illumination information is later used to create caustics when rendering the surfaces.
3.3.8 Objective 8: Soft Shadows
Code Location: light.cpp
Relevant Lua Commands: gr.area_light
Support was added for area lights. Instead of casting a single shadow ray as with point light sources, multiple shadow rays (configurable with the -alsamples CLI option) are cast in a grid pattern over the area of the light, with slight jittering. The jittering of the points will never cause the point on the area light to leave its subsection. For example, given a hypothetical area light from (0,0) to (1,1) with 4 samples, the bottom left point will be in the range [0, 0.5] for both x and y. The surface point for which visibility is being tested is also hashed and the RNG is seeded with that hash. This causes the jittering to be reproducibly pseudorandom. The percentage of shadow rays which have no intersections is used as the percentage of the diffuse light component and specular highlight component to apply.
To streamline raytracer logic, the shadow ray logic was delegated to the Light class and its derived class AreaLight. A method computeVisibility is exposed which returns a double between 0 and 1. Point light sources return either 0 or 1, whereas area light sources can return anywhere in that range.
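A sketch of the jittered NxN grid sampling described above. The light is treated as a rectangle parameterized by (s, t) in [0, 1]^2, shadowRayBlocked stands in for the actual shadow-ray intersection test, and the seed would come from hashing the shaded surface point so the jitter is reproducibly pseudorandom:

#include <cstdint>
#include <random>

double computeAreaLightVisibility(int n, std::uint64_t surfaceHash,
                                  bool (*shadowRayBlocked)(double s, double t)) {
    std::mt19937_64 rng(surfaceHash);                     // seeded from the surface point
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    int unblocked = 0;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            double s = (i + jitter(rng)) / n;             // jittered point stays inside
            double t = (j + jitter(rng)) / n;             // its own grid subsection
            if (!shadowRayBlocked(s, t)) ++unblocked;
        }
    }
    return double(unblocked) / double(n * n);             // fraction of visible shadow rays
}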
3.3.9 Objective 9: Constructive Solid Geometry
Code Location: csg.cpp
Relevant Lua Commands: gr.csg_union, gr.csg_difference, gr.csg_intersection
To create more complex objects from simple primitive solids, CSG techniques were used as described in section 18.6 of the course notes. The GeometryNode class was subclassed into three derived classes, CSGUnionNode, CSGDifferenceNode, and CSGIntersectionNode, which support a tree-like structure to enable the union, intersection, and difference of spheres and cubes.

Before computing the union, intersection, or difference of two solids A and B, a series of line segments is computed for A and B which represent the intersection of the ray and the solid. Each intersection is represented as two values $t_1$ and $t_2$ which represent the end points ($p_1$ = ray_origin + $t_1$ * ray_dir; $p_2$ = ray_origin + $t_2$ * ray_dir), as well as the texture information for each end point. A new list of intersections is then produced based on the operation and the lists of intersections for A and B.
Case 1: Union (A OR B): The new list is simply the union of the lists from A and B.

Case 2: Intersection (A AND B): To compute the new list, the following is performed:

• For each line segment a in A
  – For each line segment b in B
    ∗ If a and b have overlap, push the overlapping segment onto the new list
• Return the new list

Case 3: Difference (A AND NOT B): To compute the new list, the following is performed:

• For each line segment a in A
  – For each line segment b in B
    ∗ If b fully contains a, break
    ∗ If b overlaps a on the edge, truncate a
    ∗ If b divides a into two segments, push the 2 new segments to the list in A and break
  – Push a onto the new list if the previous loop did not break early
• Return the new list
The top level CSG Node uses its list of intersections to compute the collision information. It does this by iterating through the intersections and selecting the smallest non-negative t value and then returning that collision information.
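As an illustration, the intersection case can be written over [t1, t2] interval lists like this (a sketch; texture bookkeeping and the project's actual segment type are omitted):

#include <algorithm>
#include <vector>

struct Segment { double t1, t2; };   // one ray/solid intersection interval

// A AND B: keep only the overlapping portions of the two interval lists.
std::vector<Segment> csgIntersect(const std::vector<Segment>& A,
                                  const std::vector<Segment>& B) {
    std::vector<Segment> result;
    for (const Segment& a : A) {
        for (const Segment& b : B) {
            double lo = std::max(a.t1, b.t1);
            double hi = std::min(a.t2, b.t2);
            if (lo < hi) result.push_back({lo, hi});   // overlapping segment survives
        }
    }
    return result;
}

// The top-level CSG node would then pick the smallest non-negative t1 in the result
// as the visible collision.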
3.3.10 Objective 10: Final Scene
Creating the final scene was difficult and was probably the second most time-consuming objective. The goal was a visually pleasing and photo-realistic scene that demonstrated every objective. An indoor scene with a warm and calming theme was selected. Consequently, a light green was selected for the wall colour (there is respectable research indicating green is a calming colour) and wood flooring was selected to complement it. There is also a lot of usage of browns to complete the green and brown nature combination. The tea is warm and calming, and the rug and soccer ball give the room a youthful character. The overhead lights are a slightly orange colour, giving a warm hue to the room (the type of light an incandescent bulb would emit), and the soft shadows complete the scene. Finally, an Easter egg on the left wall was added for a bit of wit and personal touch.
Objectives 1-12 (except 7) are present in the final scene:

• Adaptive Anti-aliasing: 16% of pixels in the final scene were supersampled instead of 100%.
• Reflection and Refraction: The marble table top is slightly reflective, the window panes are refractive and slightly reflective, and the mirror on the right exhibits pure specular reflection.
• Mesh Uniform Space Partitioning: The table base, mirror frame, teapot, and teacups are all meshes. The table base and mirror frame have particularly high polygon counts (71280 and 27278 polygons respectively). The scene would never have rendered in a reasonable amount of time without this optimization.
• Phong Normal Interpolation: The teapot and vase use normal interpolation to give the appearance of a smoother solid.
• Texture Mapping: The rug, the floor, and the note are all texture mapped cubes. The soccer ball is a texture mapped sphere.
• Perlin Noise: The window frame, the panels, and the mirror frame all use a wood grain texture that was procedurally generated with Perlin noise. The marble table top is also procedurally generated with Perlin noise.
• Soft Shadows: All shadows in the scene are soft.
• Constructive Solid Geometry: The mirror in the mirror frame was created using CSG. A sphere was intersected with a cube to create a cylinder approximation. The result was then positioned into place. The light fixture on the ceiling was created by hollowing out a semi-sphere (difference) and placing a light inside the hole.
• Skybox: The skybox texture can be seen refracted through the window and reflected off of the marble table top.
• GUI Previewer: The final scene was rendered in the GUI.
3.3.11 Objective 11 (Extra): Skybox
Code Location: skybox.cpp
Relevant Lua Commands: gr.render_skybox
The idea of the Skybox is that the viewer and the scene are small compared to the world, and regardless of the camera position and the scene position, it can be supposed that they are located at approximately the origin (0,0,0), which can be assumed to be the center of the world. The Skybox class therefore exposes the method:

virtual Colour getSkyboxColour(const Vector3D& dir);

The reason the method only considers the direction the viewer is facing is the assumption that the viewer is located at approximately the center of the world (0,0,0). The default Skybox implementation simply creates a blue/black gradient based on the y-coordinate of the view direction. However, the TexturedSkybox class overrides the default getSkyboxColour method and instead computes the intersection with a cube and performs a texture mapping.

To determine where in the world the viewer is looking, a cube defined from (0,0,0) to (1,1,1) is intersected with the line with origin (0.5, 0.5, 0.5) and direction dir, and the closest intersection point is recorded. From there, standard UV mapping for cubes is used to determine the pixel x and pixel y in the texture image (PNG). The front of the cube is considered to be the face with normal (0, 0, 1), assuming a right-handed system. No filtering or interpolation (e.g. bilinear or trilinear) is used.
Unlike cube texture mapping in Objective 5, a different part of the texture image is mapped to each cube face. The texture image is expected to have a width that is a multiple of 4 and a height that is a multiple of 3. The faces are expected to be aligned in the following way (T = top, B = bottom, F = front, L = left, R = right, K = back):

#T##
LFRK
#B##
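A sketch of a texel lookup into this 4x3 cross layout (the face enumeration and the assumption that per-face (u, v) coordinates are already oriented correctly are illustrative; the project's own conventions live in skybox.cpp):

enum Face { LEFT, FRONT, RIGHT, BACK, TOP, BOTTOM };

// Convert a face plus per-face UV in [0, 1) into pixel coordinates in the cross image.
void crossLayoutPixel(Face face, double u, double v, int texWidth, int texHeight,
                      int& px, int& py) {
    int faceW = texWidth / 4, faceH = texHeight / 3;   // width multiple of 4, height of 3
    int col = 0, row = 1;                              // middle row holds L F R K
    switch (face) {
        case LEFT:   col = 0; row = 1; break;
        case FRONT:  col = 1; row = 1; break;
        case RIGHT:  col = 2; row = 1; break;
        case BACK:   col = 3; row = 1; break;
        case TOP:    col = 1; row = 0; break;
        case BOTTOM: col = 1; row = 2; break;
    }
    px = col * faceW + int(u * faceW);
    py = row * faceH + int(v * faceH);
}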
3.3.12 Objective 12 (Extra): GUI Previewer

Code Location: rt_bootstrap.cpp / rt_viewer.cpp / rtgui_window.cpp / gl_image_writer.cpp
If the -gui option is set, a gtkmm GUI (a Window) is launched. The GUI contains an OpenGL Viewer which is used to display the image as it is being rendered. The constructor for the Viewer takes in a wrapper class (OpenGLImageWriter) around the image being written to, which allows the Viewer to access an array of floats that represent the RGB values of the pixels. This array of floats is passed to the glDrawPixels method to display the image.

The main gtkmm Window's constructor takes in a Raytracer object as well as all of the relevant arguments (such as Lights, a pointer to the SceneNode, etc.). It also starts a timer which executes every 30ms and causes the Viewer to refresh the image. When a user begins rendering, the Raytracer render method is called in a separate thread with all of the relevant arguments. When the rendering is done, the thread exits.

As an improvement, the glDrawPixels method should be replaced as it is deprecated. Instead, a textured quad should be displayed in the Viewer area.
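A rough sketch of this refresh path (the class and member names are illustrative assumptions; the actual wiring lives in rt_viewer.cpp and rtgui_window.cpp):

#include <gtkmm.h>
#include <GL/gl.h>

class RTPreviewSketch {
public:
    void startTimer() {
        // Fire roughly every 30 ms to repaint the viewer with the latest pixels.
        Glib::signal_timeout().connect(sigc::mem_fun(*this, &RTPreviewSketch::onTimeout), 30);
    }
    bool onTimeout() {
        if (m_viewer) m_viewer->queue_draw();   // ask gtkmm to redraw the GL area
        return true;                            // keep the timer running
    }
    void onDraw() const {
        // Raster the partially rendered float RGB buffer (deprecated call, as noted above).
        glDrawPixels(m_width, m_height, GL_RGB, GL_FLOAT, m_pixels);
    }
private:
    Gtk::Widget* m_viewer = nullptr;   // the OpenGL viewer widget
    const float* m_pixels = nullptr;   // fed by the OpenGLImageWriter wrapper
    int m_width = 0, m_height = 0;
};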
3.3.13 Objective 13 (Extra): Fresnel Equations
Code Location: fresnel.cpp / raytracer.cpp
Relevant Lua Commands: gr.difresnel_material
In Physically Based Rendering, p. 434-435, Pharr states that for unpolarized light, a close approximation of the Fresnel reflectance for dielectrics is

$r_\parallel = (\eta_t \cos\theta_i - \eta_i \cos\theta_t) / (\eta_t \cos\theta_i + \eta_i \cos\theta_t)$

$r_\perp = (\eta_i \cos\theta_i - \eta_t \cos\theta_t) / (\eta_i \cos\theta_i + \eta_t \cos\theta_t)$

$F_r = \frac{1}{2}(r_\parallel^2 + r_\perp^2)$

where:

$\theta_i$ = incident angle
$\theta_t$ = transmission angle
$\eta_i$ = refractive index of the incident material
$\eta_t$ = refractive index of the transmission material

For materials that use this behaviour, these equations are used to determine the amount of reflected and refracted light (it is assumed that light is unpolarized). Once the reflection and refraction coefficients are set, the regular code pathways for reflection and refraction are executed.
Note: normally, this objective's functionality would be mostly located in fresnel.cpp, following the pattern of perlin.cpp and texture_mapper.cpp. However, most of the computations that are needed to compute the amount of reflected and refracted light are duplicated by raytracer.cpp (in particular $\theta_t$). To save computational work, raytracer.cpp has a separate branch for the Fresnel equations logic and all of the heavy lifting takes place there.
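A direct transcription of the equations quoted above as a small helper (a sketch; as noted, the project computes these quantities inside raytracer.cpp):

// Fresnel reflectance for a dielectric under the unpolarized approximation.
// cosThetaI / cosThetaT are the cosines of the incident and transmission angles;
// etaI / etaT are the refractive indices of the incident and transmission media.
double fresnelDielectric(double cosThetaI, double cosThetaT, double etaI, double etaT) {
    double rPar  = (etaT * cosThetaI - etaI * cosThetaT) /
                   (etaT * cosThetaI + etaI * cosThetaT);
    double rPerp = (etaI * cosThetaI - etaT * cosThetaT) /
                   (etaI * cosThetaI + etaT * cosThetaT);
    double Fr = 0.5 * (rPar * rPar + rPerp * rPerp);   // fraction of light reflected
    return Fr;                                         // 1 - Fr is transmitted
}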
3.3.14 Objective 14 (Extra): Revised Final Scene
The final scene was revised to incorporate Objective 13 by angling the right shutter. It is otherwise identical.
4 Bibliography
The following references were used.

[1] John Amanatides and Andrew Woo, A fast voxel traversal algorithm for ray tracing, In Eurographics '87, 1987, pp. 3-10.

[2] Hugo Elias, Perlin noise, http://freespace.virgin.net/hugo.elias/models/m_perlin.htm, [Online; accessed 18-July-2014].

[3] School of Computer Science, CS488/688 course notes, 2014.

[4] Ken Perlin, Improved noise reference implementation, http://mrl.nyu.edu/~perlin/noise/, [Online; accessed 4-July-2014].

[5] Ken Perlin, Improving noise, ACM Trans. Graph. 21 (2002), no. 3, 681-682.

[6] Matt Pharr and Greg Humphreys, Physically based rendering, second edition: From theory to implementation, 2nd ed., Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2010.

[7] Peter Shirley, A ray tracing method for illumination calculation in diffuse-specular scenes, In Proceedings of Graphics Interface '90, 1990, pp. 205-212.

[8] Amy Williams, Steve Barrus, R. Keith Morley, and Peter Shirley, An efficient and robust ray-box intersection algorithm, Journal of Graphics Tools 10 (2003), 54.
5 Objectives Sheet
Due: Wednesday, July 23, 2014

Name:
User ID:
Student ID:

1: Adaptive Anti-aliasing: Jaggies and colour edges are demonstrably smoothed by the adaptive anti-aliasing algorithm.

2: Reflection and Refraction: Secondary rays are cast for objects to account for transmission and reflection: Snell's Law is used to compute the angle of transmission and the Law of Reflection is used to compute the angle of reflection.

3: Mesh Uniform Space Partitioning: The efficiency gains using this technique are well demonstrated with before and after timings for a sample mesh.

4: Phong Normal Interpolation: Vertex normal interpolation is performed to compute surface normals for meshes. The result is a demonstrably smoother-looking polygonal mesh.

5: Texture Mapping: At minimum, PNG textures can be applied to the surfaces of spheres and cubes.

6: Perlin Noise: At minimum, a variety of procedurally generated Perlin noise textures can be applied to spheres and cubes.

7: Caustics: Caustics have been demonstrably implemented.

8: Soft Shadows: Area lights and averaging of shadow rays are used to produce soft shadow effects.

9: Constructive Solid Geometry: At minimum, union, difference, and intersection operations are demonstrated with spheres and cubes.

10: Final Scene: A unique final scene is created which demonstrates features of the raytracer.

A4 extra objective: Supersampling

Declaration:
I have read the statements regarding cheating in the CS488/688 course handouts. I affirm with my signature that I have worked out my own solution to this assignment, and the code I am handing in is my own.

Signature: