Shaders and graphic pipeline
Shader and render Pipeline

[001] OPEN TECH - http://www.ontmoeting.nl/opentech/
Publisher: Ontmoeting, the Netherlands - http://www.ontmoeting.nl
Abstract - Shaders give the user the chance to change and model the end result of a rendering in the way he or she wants it to be. A good understanding of the working of the shader process (the pipeline) is therefore necessary. And for making your own shaders, one or more bitmap and Normal/Bump map programs are needed to get them ready for the final goal: the rendering or animation. At first, open source programs can do the trick for you. Shaders give the feeling in games that it is a real world we are looking at and playing in. And parallel to choosing and buying Shaders for your render program, you can make them yourself in order to enhance the quality of the end product.

Keywords: Shader, Normal Map, Color Map, Bump Map, graphic pipeline, Render program, 3D program, tile, seamless, graphic processing, OpenGL, CPU and GPU, pixel shader, vertex shader, textures, materials.
I. INTRODUCTION
Shader
A Shader is an internal or external file that performs the 'shading' of surfaces, producing graphic effects and life-like pictures in render software.
In fact the name tells that something (the surface of the 3D model) is covered up: 'shaded', just like a cloth put on. That cover can be a plain color, a gradient or, for instance, a brick-wall or water texture. In order to start working with Shaders, the program must be capable of working with them. Simple programs work with Materials and Textures; more interesting and natural-looking programs can use Bump, Normal Map and Alpha maps to steer the quality. But there are many more Maps that can interact with render or game programs. The program itself consists of several Shader steps or algorithms, from 3D objects with 3D textures onto 2D textures to the end product: the screen, a picture file or an animation. This processing in the software is called the "render pipeline".
In order to manipulate this part of the rendering program, there are several ways for the user to use his or her own Shaders, with or without Bump, Normal Map, Alpha etc. Shaders are manipulated with Material and Texture files that store all sorts of parameters.
Most of the time the Materials and Textures must be "seamless", that is to say the picture must be capable of extending itself on all four sides. See the picture of the stone, which is repeated on all sides in order to 'smooth' the border differences, so that after the manipulation there is only one picture (most of the time square) that works as a Color or Diffuse Map picture.
Figure: a stone texture with added pictures on all four sides. On the left the differences between one stone side and the other side with the joint. If you do not change the borderlines of this tile, a big transition will be visible between the individual tiles in your rendering or game.
Material
A material could be a piece of steel, gold or silver. A material in CG simulates a physical material, for instance 'glass' or 'wood'. It mimics a group of light equation parameters. A material defines the optical properties of an object: its color and whether it is dull or shiny. A material defines basic properties like shading, specularity, reflectivity, color, transparency etc.
In OpenGL (the most common API in render, game and 3D software) a material is a set of coefficients that define how the lighting model interacts with the surface.
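As a minimal sketch of this idea in GLSL (the OpenGL Shading Language, discussed further on), a material can be written as a handful of coefficients that scale the incoming light; the uniform names here are only illustrative, not part of any fixed API:

#version 330 core
// A material as a set of lighting coefficients (illustrative names).
uniform vec3  Ka;        // ambient reflectivity
uniform vec3  Kd;        // diffuse reflectivity (base color)
uniform vec3  Ks;        // specular reflectivity
uniform float shininess; // size of the highlight

uniform vec3 ambientLight;
uniform vec3 lightColor;
uniform vec3 lightDir;   // normalized, pointing towards the light

in  vec3 normal;         // interpolated surface normal
out vec4 fragColor;

void main() {
    float nDotL = max(dot(normalize(normal), lightDir), 0.0);
    // The coefficients decide how much of each kind of light the
    // surface gives back; Ks and shininess come into play once a
    // specular term is added (see the Lighting step further on).
    vec3 color = Ka * ambientLight + Kd * nDotL * lightColor;
    fragColor = vec4(color, 1.0);
}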
http://wiki.blender.org/index.php/Doc:2.6/Manual/Materials
Introduction to Materials within Blender.
A texture gives you control over these different properties on a finer level of detail, which makes it possible to change the characteristics of a certain
channel of shading or the mesh itself. This is important because in real life, almost nothing (outside of
extremely clean glass) is perfectly smooth / specular
/ uniform color etc. Everything tends to have slight
variations which make it look real. Textures are what
allow us to mimic these variations in 3D.
Texture
In materials science a texture is the distribution of crystallographic orientations, most of the time in a regular order, as in a linen cloth. If these orientations are random, the solid is said to have no texture. The amount of orientation, from weak via moderate to strong, determines the texture. A solid with perfectly random crystallite orientation has no texture at all.
In OpenGL a texture is a set of 2D (sometimes 3D) bitmap image files that are applied and interpolated onto a surface according to the texture coordinates at the vertices. Texture data alters the color and brightness of a surface completely, depending on the lighting applied.
Textures are frequently used to provide a repeating brick or water tile, to simulate a brick wall or a pool of water. Larger surfaces like a grass field, a hill or a lake do not look natural with a single tile, even with great seamless tiles. To make them more natural one has to vary the tiles and work with more than one tile at the same time. All this should be made accessible in the software used.
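How a seamless tile is repeated over a large surface can be sketched in a small GLSL fragment shader; the sampler and uniform names are made up for the example. The texture coordinates are simply scaled, and fract() (or the GL_REPEAT wrap mode of the texture) wraps them so the tile repeats:

#version 330 core
uniform sampler2D tileMap; // one seamless tile, e.g. 512 x 512 pixels
uniform float     tiling;  // number of repetitions across the surface

in  vec2 uv;               // runs 0..1 over the whole surface
out vec4 fragColor;

void main() {
    // fract() wraps the scaled coordinates back into 0..1; with
    // GL_REPEAT set on the texture the wrap would happen by itself.
    vec2 tiledUV = fract(uv * tiling);
    fragColor = texture(tileMap, tiledUV);
}

Any tone difference between opposite borders of the tile shows up immediately as a grid of seams, which is exactly the artifact described above.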
In the classical (fixed-pipeline) OpenGL model, textures and materials are orthogonal. In the new programmable Shader world, the line has blurred quite a
bit. Frequently textures are used to influence lighting
in other ways. For example, Bump maps are textures
that are used to perturb surface normals to affect lighting, rather than modifying pixel color directly as a regular "image" texture would.
A texture is a pattern that breaks up the uniform appearance of the material. In the open source program Blender one must set the material properties first before adding a texture.
Texture mapping is a method to add detail to surfaces by projecting images and patterns onto those
surfaces. The projected images and patterns can be
set to affect not only color, but also specularity, reflection, transparency, and even fake 3-dimensional
depth. Most often, the images and patterns are projected during render time, but texture mapping is
also used to sculpt, paint and deform objects.
In Blender, Textures can be applied to a Material.
Textures can be used to modulate most material properties, including bump mapping and displacement.
(Source: LuxRender Textures).
Image Textures: Image Textures are two-dimensional images that are projected onto 3D objects. These textures can be used in a variety of material channels, for example colour, bump and displacement. Typically, the UV mapping of images onto 3D objects is handled by the 3D application.
What is in the name?
Textures and materials are used interchangeably, and it is common to refer to a bitmap as a texture. A set of Shaders could be addressed as Textures too. Some sources only talk about Materials and sets of Materials, others use the word Textures. And to make it less clear: in the Material Editor one can use Textures to make a Bump map or a Normal Map. In the search engines we see names like "Free 3D Textures" and "Free Textures/Materials" appear.
See also:
http://www.vpython.org/contents/docs/materials.html
The way Texture mapping works can be seen here
and was pioneered by Edwin Catmull in 1974:
http://en.wikipedia.org/wiki/Texture_mapping
3D application and APIs
3D Graphics rendering pipeline
Application
The application directs the camera (user view) and the 3D scene. With animations it also directs the movement of objects, and sometimes even momentum and collision detection tools (games).
Scene-level tasks
Scenery that cannot be seen is skipped; there is no need to send all those data further into the pipeline. At the same time there can be a selection of the right level of detail, where objects further away are left at a lower resolution and objects up front get a higher resolution. Trees in the background are just for the background, so a lower resolution looks alright.
Transformation
The part of the program that transforms and converts the 3D data from one frame of reference to a new frame of reference, by changing the local coordinate system to the 3D world coordinate system. It handles scenes changing from one frame to the next. The conversion must take place before performing the next steps like lighting, triangle setup and rendering. With every redraw of the scene there needs to be a transformation of the shown objects in the scene (and sometimes parts that are not in the view). Moving objects in the scene is called translation, but zooming, scaling and rotation also fall into this category. The math required is that of linear algebra with matrix operations.
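In GLSL vertex shader form this chain of frames of reference is a series of matrix multiplications; the matrix names below are the conventional ones, not mandated by OpenGL:

#version 330 core
uniform mat4 model;      // local -> 3D world coordinates
uniform mat4 view;       // world -> camera coordinates
uniform mat4 projection; // camera -> clip coordinates

layout(location = 0) in vec3 position; // vertex in local coordinates

void main() {
    // Translation, rotation, scaling and zoom are all encoded in
    // these matrices; the chain is evaluated at every redraw.
    gl_Position = projection * view * model * vec4(position, 1.0);
}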
Lighting
This step in the pipeline is very important, because it makes a great impact on the overall quality of the end product. To enhance the realism of the rendering, the lighting tools must calculate carefully.
There are several light sources like the sun, spotlights and indoor lighting. But reflecting surfaces and mirrors also reflect light and act as new light sources. See for instance the Photon Photography & Femto Photography article where you see 'real' photons: http://blog.ontmoeting.nl/renderinfo/photon-photograp.html
The light calculations also make use of matrix manipulations. To further enhance the light engine we make use of diffuse light, for instance the sunlight shining on the ground, or specular light, where the direction of the viewer (camera) is important as well as the direction of the light source itself. Some programs work with different refraction factors for the different RGB colors. Then the orientation of the object/triangles plays a role in the whole calculation process.
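A GLSL sketch of these two kinds of light, with made-up input names: the diffuse (Lambert) term only needs the surface normal and the light direction, while the specular term also needs the viewer direction:

#version 330 core
uniform vec3 lightDir;   // normalized, pointing towards the light
uniform vec3 lightColor;

in  vec3 normal;         // interpolated surface normal
in  vec3 viewDir;        // towards the camera (viewer)
out vec4 fragColor;

void main() {
    vec3 n = normalize(normal);
    vec3 v = normalize(viewDir);
    // Diffuse: depends only on the angle between surface and light.
    float diff = max(dot(n, lightDir), 0.0);
    // Specular (Phong): also depends on the viewer direction.
    vec3  r    = reflect(-lightDir, n);
    float spec = pow(max(dot(r, v), 0.0), 32.0);
    fragColor = vec4((diff + spec) * lightColor, 1.0);
}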
Triangle setup and clipping
Although in the transformation process there already was a shift in viewable objects, here the software clips the scenery for the rendering engine. The parts of the scene that fall completely outside the viewing frustum will not be visible and are discarded at this stage of the pipeline.
The floating point math processor (most of the time the GPU) receives the vertex data and works on the different parameters that are needed.
Rendering - Rasterization
This is the most interesting part of the whole process: the actual calculation of the right color of each pixel on the picture or screen, converting the 2D space representation of the scene into raster format. From this point on the calculations are performed on each single pixel.
Not only the color is important, but also the direction of the light sources hitting the 3D objects, whether the object is transparent or opaque, and the textures that are put onto the objects. The whole rendering process takes place in multiple stages: multi-pass rendering.
Pixel Shader
After the rasterization comes the pixel processing: receiving a pixel as input, calculating the final color of the pixel and passing that on to the output merger. It also determines which pixels are in the polygon and which are not.
Figure: the triangle rasterization process, where the actual pixels are placed on the correct coordinates of the 2D image to be.
Not only the color can be altered; all sorts of extra effects can be added, like reflections, Bump mapping, Normal mapping and much more. And of course it is possible to use post-processing effects over the entire rendered scene, like brightness, sharpness, contrast and color enhancements, saturation and blur. And a depth (Z) buffer value is made for the pixels.
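A post-processing pass is itself just a pixel shader run over the finished frame. A minimal GLSL sketch, assuming the rendered scene has been bound as a texture (all names illustrative):

#version 330 core
uniform sampler2D scene;  // the fully rendered frame
uniform float brightness; // 0.0 = unchanged
uniform float contrast;   // 1.0 = unchanged
uniform float saturation; // 1.0 = unchanged

in  vec2 uv;
out vec4 fragColor;

void main() {
    vec3 c = texture(scene, uv).rgb;
    c += brightness;                                // brightness shift
    c  = (c - 0.5) * contrast + 0.5;                // contrast around mid-grey
    float grey = dot(c, vec3(0.299, 0.587, 0.114)); // luma weights
    c  = mix(vec3(grey), c, saturation);            // saturation blend
    fragColor = vec4(c, 1.0);
}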
http://niul.org/gallery/render-pipeline
Not all render programs work with the GPU as the main calculation source. There could be a big time advantage in letting the GPU do the things that the graphics card is good at.
http://www.opengl.org/wiki/Rendering_Pipeline_Overview
http://en.wikipedia.org/wiki/Rendering_%28computer_graphics%29
http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=%2Fcom.ibm.aix.graPHIGS%2Fdoc%2Fphigsund%2FRendPipeln.htm
Figure below: the GeForce GPU frees up CPU cycles. One of the first GPU cards was the GeForce 256, with only 22 million transistors, capable of processing billions of calculations per second. The CPU is set aside, and life-like images are produced in no time.
Figure: pixels in a 2D matrix. Each pixel can hold only one color (RGB) with one brightness and hue.
And with the ever increasing demand to work with bigger 3D models and correspondingly more polygons, the GPU is the 'only' way to handle that in a very short time. Experts from different backgrounds disagree about this statement.
Here we see the NVIDIA picture of a helicopter: on the left with 998 polygons and on the right the 100,000 polygon representative. It is clear that render companies and gamers liked the right picture much more than the left one; look at the landing gear for instance.
The very old picture with the GeForce GPU shows the offload advantages of shifting the calculations from the CPU to the GPU.
At first the big CPU manufacturers denied such a thing: "the CPU will always be a better and simpler place to do the needed calculations". And even after a while they still denied the advantages of parallel computing with the GPU.
When it was no longer possible to deny the advantages in production time (for the customer), they brought out parallel boards themselves, and reserved a part of the CPU wafer for parallel processing on a smaller scale, to fill up the gap somehow.
After numerous live tests with GPU render programs, with speed differences of 10 to 100 times, an alteration in the thinking slowly took place. But that is not reflected at several 'old school' render firms who prefer the 'big marketplace' of CPU render clients in the world, and who advertise this as their big feature.
See the second PDF, 002, called Render_Principles.pdf, with a time lapse from 2012-2013 where the battle is still going strong.
Why are not all render programs GPU oriented?
That is a historical question, with parameters in the marketing sector and in the software programming department.
First the marketing aspect: potential customers of render software and gamers are not always aware of the hardware requirements when they buy a computer or game console. So to get most users interested, the software manufacturer will see to it that even 'the slowest' hardware can be used, with no special requirements for the GPU. Thereby disregarding the fact that Macintosh users must buy new computers more often than any Windows computer user.
Parallel to this, there is the established mass of computer programmers: "they always programmed for the CPU in the past", and doing 'the same' for the GPU did not work at all. It is the same mass that worked against the introduction of open source Linux software. The IT specialists were used to Microsoft Windows, had followed several courses on the technique, and at the same time embedded marketing thoughts told them 'to look no further... than MS'.
GPU programming is quite different and much more challenging than programming for the CPU. So other, more specialized programmers are needed, fresh from university, to take part in programming a GPU render program. The time lapse and investment from idea to working software for the market is much longer for GPU than for CPU programming, despite all the contradictions spread over this subject.
http://en.wikipedia.org/wiki/Rendering_%28computer_graphics%29
Chronology of important published ideas
Figure: rendering of an ESTCube-1 satellite.
* 1968 Ray casting [3]
* 1970 Scanline rendering [4]
* 1971 Gouraud shading [5]
* 1974 Texture mapping [6]
* 1974 Z-buffering [6]
* 1975 Phong shading [7]
* 1976 Environment mapping [8]
* 1977 Shadow volumes [9]
* 1978 Shadow buffer [10]
* 1978 Bump mapping [11]
* 1980 BSP trees [12]
* 1980 Ray tracing [13]
* 1981 Cook shader [14]
* 1983 MIP maps [15]
* 1984 Octree ray tracing [16]
* 1984 Alpha compositing [17]
* 1984 Distributed ray tracing [18]
* 1984 Radiosity [19]
* 1985 Hemicube radiosity [20]
* 1986 Light source tracing [21]
* 1986 Rendering equation [22]
* 1987 Reyes rendering [23]
* 1991 Hierarchical radiosity [24]
* 1993 Tone mapping [25]
* 1993 Subsurface scattering [26]
* 1995 Photon mapping [27]
* 1997 Metropolis light transport [28]
* 1997 Instant Radiosity [29]
* 2002 Precomputed Radiance Transfer [30]
[source Wikipedia]
Figure: Geometry -> Vertex Shader -> Geometry Shader -> Rasterizer -> Pixel Shader -> Screen. The render or game process runs from the 3D geometry through the Vertex Shader (3D oriented) to the Geometry Shader, Rasterizer and Pixel Shader (2D oriented), up to the last in the chain with screen, buffer and file output.
So a Shader (for the user) is a set of pictures that steer the way the render or 3D program will show the textures in the rendering.
The most simple Shader is a Diffuse or Color Map Shader with a Preview representative (as thumbnail) for the menu. But the Color Map is only one aspect; you can work with a lot more picture files to fulfil the essential goal: a good and lifelike rendering.
A corresponding Alpha map brings some transparency into the Diffuse Map picture, or even an outline of a human form. In an Alpha map picture, black is most of the time the transparent part and white the opaque part.
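In a fragment shader the Alpha map is just another texture; a small GLSL sketch (names illustrative) that drops the fully transparent parts and passes the rest on with their transparency:

#version 330 core
uniform sampler2D colorMap; // Diffuse / Color map
uniform sampler2D alphaMap; // black = transparent, white = opaque

in  vec2 uv;
out vec4 fragColor;

void main() {
    float alpha = texture(alphaMap, uv).r;
    if (alpha < 0.01)
        discard;            // fully transparent: skip this fragment
    fragColor = vec4(texture(colorMap, uv).rgb, alpha);
}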
A Bump map picture is used if one wants to suggest a small height difference in a plane surface. The amplitude of the height suggestion is given by a picture with black and white pixels and all the (256) different greys in between. The black pixels represent the deepest parts, the white pixels the higher parts.
There are a lot of Shaders that can change the way we look and feel and change the end product: Normal Map, Displacement Map, Relief Map, Parallax Map, Shininess Map, Reflection Map, Specular Map and others. If you want to use such a complete Shader set, then the render program you are working with should be ready to incorporate all those features. The 'simpler' render programs only work with a sort of Bump map, or with an equivalent that is automatically calculated from the Color map. In the simpler programs the user has little or no influence to make the rendering more natural.
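How a grayscale Bump map turns into a height suggestion can be sketched in GLSL: the differences between neighbouring grey values give a slope, and that slope tilts the surface normal before lighting. This is one common approach with illustrative names, not the recipe of any particular render program:

#version 330 core
uniform sampler2D heightMap; // bump picture: black = deep, white = high
uniform float     bumpScale; // strength of the height suggestion
uniform vec3      lightDir;  // normalized, pointing towards the light

in  vec2 uv;
out vec4 fragColor;

void main() {
    // Finite differences between neighbouring texels give the slope.
    vec2 texel = 1.0 / vec2(textureSize(heightMap, 0));
    float hL = texture(heightMap, uv - vec2(texel.x, 0.0)).r;
    float hR = texture(heightMap, uv + vec2(texel.x, 0.0)).r;
    float hD = texture(heightMap, uv - vec2(0.0, texel.y)).r;
    float hU = texture(heightMap, uv + vec2(0.0, texel.y)).r;
    // The slope perturbs a flat normal (0, 0, 1), only for lighting;
    // the geometry itself stays flat.
    vec3 n = normalize(vec3((hL - hR) * bumpScale,
                            (hD - hU) * bumpScale,
                            1.0));
    float diff = max(dot(n, lightDir), 0.0);
    fragColor = vec4(vec3(diff), 1.0);
}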
There are two main Shader-type algorithms in the render program:
Vertex Shader
Pixel Shader
each performing in a different part of the render pipeline of the program.
Figure: the Stone 100 Shader from Artlantis, from left to right: Bump.jpg, Color.jpg, Normal.jpg, Reflection.jpg, Shininess.jpg, the Minfo and .xsh information files executed by the program, and the Preview.
The Vertex Shader
The Vertex Shader is the first in the pipeline, with 3D info; the Pixel Shader comes after that, where the pixels become part of a regular 2D picture that can be seen on the screen (Preview) or rendered later on. In the Vertex Shader each vertex is operated on individually. Most of the time the execution is done on the graphics card.
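A minimal GLSL vertex shader as a sketch of that per-vertex work: it runs once for each vertex, independently of all others, transforms the position and hands normal and texture coordinates on to the rest of the pipeline (attribute layout and names are illustrative):

#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 vertexNormal;
layout(location = 2) in vec2 vertexUV;

uniform mat4 mvp;          // combined model-view-projection matrix
uniform mat3 normalMatrix; // brings normals into world space

out vec3 normal; // interpolated on its way to the Pixel Shader
out vec2 uv;

void main() {
    normal = normalMatrix * vertexNormal;
    uv = vertexUV;
    gl_Position = mvp * vec4(position, 1.0);
}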
The Pixel Shader
Fills in the given primitives with colors, based on the textures and the light physics of the model. Pixels are the colored dots on the screen or output picture file. Each pixel can contain only one sort of color and hue.
A Diffuse map is not view dependent of its own; it will look the same from any angle (see the picture gallery at the end). But it does depend on the Normal at each pixel. Diffuse Shaders give a non-shiny look, while Specular Shaders give a mirror-like look.
Figure: the process of working with Shaders (Artlantis Studio 5.1 render program). On the left the thumbnail of the texture, then the Diffuse (Color) map, the Reflection, Shininess, Bump, Normal, Alpha and 3D effect maps.
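The matching GLSL fragment (pixel) shader, as a sketch with illustrative names: it runs once per fragment and combines the texture color with a light factor to fill the primitive:

#version 330 core
uniform sampler2D colorMap;

in  vec2  uv;        // interpolated across the primitive by the rasterizer
in  float lighting;  // precomputed light factor, kept simple here
out vec4  fragColor;

void main() {
    // One color and hue per pixel: the texture color times the light
    // that falls on this spot of the primitive.
    fragColor = vec4(texture(colorMap, uv).rgb * lighting, 1.0);
}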
Specific languages have been developed to program them:
High Level Shader Language (HLSL)
HLSL is a C-like programming language which is used to implement shaders (Pixel Shaders / Vertex Shaders).
C for Graphics (Cg) [Cg = C-language and Graphics]
OpenGL Shading Language (GLSL)
The main Source engine uses HLSL; Cg is similar to it and can be quickly ported to HLSL.
All the languages were issued with a version number. Currently there are five versions of Shader Models: SM 4.0, 3.0, 2.0, 1.4 and 1.1. Shader Model version 2.0 is the most common.
Not all graphics cards with new features work with new versions of the Shader Model just out of the box: the version must be compatible with the card. And older graphics cards must be given a way to fall back (Shader fallbacks) to perform and show the mentioned effects too.
On this page
http://en.wikipedia.org/wiki/High_Level_Shader_Language
you find all kinds of comparisons of Shader Model versions.
Cg programs operate on vertices and fragments (think "pixels" for now, if you do not know what a fragment is) that are processed when rendering an image. Think of a Cg program as a black box into which vertices or fragments flow on one side, are somehow transformed, and then flow out on the other side. However, the box is not really a black box, because you get to determine, by means of the Cg programs you write, exactly what happens inside.
Figure: bump map of a cube with a circle as bump map. A relatively simple touch to mimic a height difference.
Cg supports either programming interface: you can write Cg programs so that they work with either the OpenGL or the Direct3D programming interface. Developers can pair their 3D content with programs written in Cg and then render the content no matter what programming interface the final application uses for 3D rendering, and that is a big plus.
But one also speaks of a Shader as a set of Material files that can be manipulated with parameters.
The most common external Shader files are:
* Preview picture
* Color / Diffuse picture
* Bump map picture
* Normal map picture
* Alpha map picture
and several Specular and ..... map picture files.
Figure: the OpenGL route from the 3D vertices to the end of the chain, the frame buffer of the screen display.
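In GLSL such a set of map files simply becomes a row of texture samplers in one fragment shader. A sketch with illustrative names (the Preview picture stays in the menu and never reaches the shader), assuming the usual normal map color scheme of 0..1 mapping to -1..1:

#version 330 core
uniform sampler2D colorMap;  // Color / Diffuse picture
uniform sampler2D normalMap; // Normal map picture
uniform sampler2D alphaMap;  // Alpha map picture
uniform vec3      lightDir;  // normalized, pointing towards the light

in  vec2 uv;
out vec4 fragColor;

void main() {
    // Normal maps store normals as colors: 0..1 maps to -1..1.
    vec3 n = normalize(texture(normalMap, uv).rgb * 2.0 - 1.0);
    float diff  = max(dot(n, lightDir), 0.0);
    float alpha = texture(alphaMap, uv).r;
    fragColor = vec4(texture(colorMap, uv).rgb * diff, alpha);
}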
http://en.wikipedia.org/wiki/Shader
Geometry Shaders
Geometry Shaders make primitives out of the given vertices and compute extra data attached to the vertices, including clipping, to calculate only what is needed for this scene camera view. Primitives can be a Triangle, Quad, Point or Line.
It is a new type of Shader that came with Direct3D 10 and OpenGL version 3.2 and higher. It first appeared with OpenGL 2.0+ through the use of extensions; from version 3.2 it is part of the standard issue. Herewith new graphic primitives can be generated from the points, lines and triangles at the beginning of the graphics pipeline.
Geometry Shaders fit into the middle of the pipeline: first the Vertex Shader, then the Geometry Shader and last the Pixel Shader. Tessellation, shadow volume extrusion and point sprite generation are possible. A cube map (the preferred way of mapping, for instance for modelling outdoor illumination accurately) can be rendered in a single pass.
Earlier, we told you to think of a fragment as a pixel
if you did not know precisely what a fragment was.
At this point, however, the distinction between a
fragment and a pixel becomes important.
The term pixel is short for "picture element." A pixel
represents the contents of the frame buffer at a specific location, such as the color, depth and any other
values associated with that location. A fragment is the state potentially required to update a particular pixel.
The term "fragment" is used because rasterization
breaks up each geometric primitive, such as a
triangle, into pixel-sized fragments for each pixel
that the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary
(specular) color, and one or more texture coordinate
sets. These various interpolated parameters are derived from the transformed vertices that make up the
particular geometric primitive used to generate the
fragments. You can think of a fragment as a "potential pixel."
Figure: Shader examples to choose from.
If a fragment passes the various rasterization tests
(in the raster operations stage, which is described
shortly), the fragment updates a pixel in the frame
buffer.
The raster operations stage is by itself a series of pipeline stages.
Shaders come in two sorts (or more) in the render or game program, as Vertex Shaders and as Pixel Shaders, as stated before. A Shader's task is to give stunning visual effects to the appearance of surfaces. Shaders have the ability to make any surface look like it is made of some material or substance.
Shaders are based on light too: they determine how to color a surface by looking at how the surface is oriented towards the light source(s) in the scene. There are different shading models for this, for instance the Lambertian or the Phong shading model.
The task of a Shader is not always that difficult; there are simple 'shaders' that just paint every spot on the surface in the same color. They are called 'flat' or 'constant' Shaders.
There are several features that can be added in a program that uses Shaders. Specularity is one of them; it is an extension of the Lambert shading method. The highlights mimic the highlights on shiny objects. They depend on the camera location and the light source location, plus the surface normal, all to calculate the amount of light reflected towards the camera (viewer).
Reflectivity is another feature that can be added to the algorithm of the program. The reflected colors are usually multiplied by the amount of specularity: in fact, the light reflected from the scene is blended into the final shaded color.
And there is the well-known Bump map. It merely distorts the Normal vectors during the Shader process, to make the surface appear to have small variations in shape, like the bump in the picture.
Textures found on the internet
Lots of sites advertise that their "Textures are free". Look carefully whether you are allowed to use such a picture in all sorts of publications; sometimes they are still restricted.
Most of the free textures are not 'seamless', although the site does tell you they are. Many of them are just nice pictures of material that ends nowhere, with directions (wood grain for instance) that could never be used as a Material/Texture in your program. And the least difference in tone between the left and the right side, or the upper and the lower side, shows up as an artifact in the repetition process.
So to work with Shaders you can choose from the library that is incorporated in the software, or buy individual tiles in special shops, or make them yourself, which is a lot of fun, but beware that it takes time. And you must be familiar with a bitmap editor like Photoshop, Photoshop Elements or, in the open source corner, Gimp. Testing such home-made tiles is easy: just write a line in an HTML editor for the background, open up your browser and look what happens:
<html>
<body background="waterDSC00797.jpg">
</body>
</html>
Two publications on working with render programs and Shaders (both in Dutch):
ISBN 978-90-8814-40-2
Tips & Trucs -1, 108 pages
ISBN 978-90-8814-41-9
Tips & Trucs -2, 76 pages.
Publisher Ontmoeting.nl
Figure: just a 'normal' picture of water and the sandy shoreline. Study this one and look for the bright and dark parts, the flowing directions and the structure. Then look at what happens if we want to take this picture as a Color map tile for shading.
Sites for Free Textures/Materials
See the warning under 'Textures found on the internet' above.
http://www.cgtextures.com/
http://www.boulterplywood.com/product_gallery.htm
http://free-3d-textures.com/
SketchUp texture Warehouse
30,000 textures and 3D models
http://www.sketchuptexture.com/
http://www.texturepilot.com/
http://www.plaintextures.com/
http://www.noctua-graphics.de/english/fraset_e.htm
http://3dxtra.cygad.net/index.htm
http://textures.forrest.cz/
http://www.texturewarehouse.com/gallery/index.php
http://www.vray-materials.de/ (You have to register, in
order to download, but it's free)
http://www.rdeweese.com/
http://www.accustudio.com/index.php?...id=1&Itemid=77
http://www.eldoradostone.com/flashsite/
http://www.3dtotal.com/
http://www.sharecg.com/
Literature
Appearance, Material and Textures
X3D - Extensible 3D Graphics
Chapter 05 - Appearance Material and Textures.pdf, 250 slides
X3D Graphics for Web Authors, Chapter 5: Appearance, Material, and Textures.
"Things are not always as they appear."
http://faculty.nps.edu/brutzman/brutzman.html
http://wiki.bk.tudelft.nl/toi-pedia/Texture_positioning__Projection_mapping
Texture positioning - Projection mapping
by TU Delft, the Netherlands.
http://www.3d-animation.com.ar/index.php#
http://www.flatpyramid.com/
http://www.usgdesignstudio.com/TrueWood.asp
ISBN 978-90-8814-037-2
Render principles (English), 270 pages
Dutch:
ISBN 978-90-8814-036-5
Render principes, 277 pages
Program codes
Literature with the original PDFs
The NVIDIA line
Workstations
Publisher: http://www.ontmoeting.nl
Figure: Shaders (color map) are flat by nature, as can be clearly seen in this picture of a cube. The texture is not seamless, and although the bright and darker areas mimic high and low, there is no main direction of the light visible.
Figure: different pictures of a stone wall Shader. From left to right: Bump map, Color Map, Normal Map, Relief map, Shininess Map and Preview.
Figure: modern 3D and render programs can make millions of vertices in order to come close to nature's grass field.
Figure: Normal maps are a special kind of picture; they represent the normals at the surface, translated into a color scheme. On the right: ShaderMap for Windows, 'creating maps in seconds'.
Figure: left, a zoomed-in Shader with Relief function. Bottom left: is this a shader or a 3D scene? Right: water shaders are quite good at showing natural water. Also: a Bump map of stones in Gimp.
Above: the grass shader contains three different color maps, put together in random order to fill in a big area.
Bottom: the render program Artlantis with a choice of shaders beneath.