Virtual Reality - Computer Science
Presentation on
Virtual Reality
By
Nikhilesh Balepur and Sunil Appanaboyina
1. Overview
"Virtual Reality is a way for humans to visualize, manipulate and interact with
computers and extremely complex data."
The visualization part refers to the computer generating visual, auditory or other
sensory outputs to the user of a world within the computer. This world may be a CAD
model, a scientific simulation, or a view into a database. The user can interact with the
world and directly manipulate objects within the world. Some worlds are animated by
other processes, such as physical simulations or simple animation scripts.
The applications being developed for VR run a wide spectrum, from games to
architectural and business planning. Many applications resemble CAD or architectural
modeling. Some provide ways of viewing from an advantageous perspective not possible
in the real world, such as scientific simulators, telepresence systems, and air traffic
control systems. Other applications are quite different from anything we have directly
experienced before. These latter applications may be the hardest, and the most
interesting, systems to build.
2. Types of VR Systems
A major distinction of VR systems is the mode with which they interface to the user.
This section describes some of the common modes used in VR systems.
2.1. Window on World Systems (WoW)
Some systems use a conventional computer monitor to display the visual
world. This is sometimes called Desktop VR or a Window on a World (WoW).
This concept traces its lineage back through the entire history of computer
graphics.
2.2. Video Mapping
A variation of the WoW approach merges a video input of the user’s
silhouette with a 2D computer graphic. The user watches a monitor that shows his
body’s interaction with the world.
2.3. Immersive Systems
The ultimate VR systems completely immerse the user’s personal
viewpoint inside the virtual world. These "immersive" VR systems are often
equipped with a Head Mounted Display (HMD). This is a helmet or a facemask
that holds the visual and auditory displays. The helmet may be free ranging,
tethered, or it might be attached to some sort of a boom armature.
A nice variation of the immersive systems uses multiple large projection
displays to create a 'Cave', or room, in which the viewer(s) stand. An early
implementation was called "The Closet Cathedral" for its ability to create the
impression of an immense environment within a small physical space.
3. 3-D Position Sensors
Ultrasonic sensors can be used to track position and orientation. A set of emitters
and receivers are used with a known relationship between the emitters and between the
receivers. The emitters are pulsed in sequence and the time lag to each receiver is
measured. Triangulation gives the position. Drawbacks to ultrasonics are low resolution,
long lag times and interference from echoes and other noises in the environment.
Logitech and Transition State are two companies that provide ultrasonic tracking
systems.
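To make the triangulation concrete, the following is a minimal Python sketch (a hypothetical illustration, not any vendor's algorithm) of recovering the emitter position from measured time lags at four or more receivers with known positions. Subtracting the first range equation from the others turns the sphere intersection into a linear system.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

    def trilaterate(receivers, time_lags):
        # receivers: (n, 3) known positions, n >= 4; time_lags: seconds.
        r = np.asarray(receivers, dtype=float)
        d = SPEED_OF_SOUND * np.asarray(time_lags, dtype=float)
        # |x - r_i|^2 = d_i^2; subtracting the i = 0 equation from the
        # rest cancels |x|^2 and leaves the linear system A x = b.
        A = 2.0 * (r[1:] - r[0])
        b = (d[0] ** 2 - d[1:] ** 2
             + np.sum(r[1:] ** 2, axis=1) - np.sum(r[0] ** 2))
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x  # estimated emitter position (x, y, z)

With more than four receivers the extra equations are solved in a least-squares sense, which also damps some of the measurement noise mentioned above.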
Magnetic trackers use sets of coils that are pulsed to produce magnetic fields. The
magnetic sensors determine the strength and angles of the fields. Limitations of these
trackers are a high latency for the measurement and processing, range limitations, and
interference from ferrous materials within the fields. However, magnetic trackers seem to
be one of the preferred methods. The two primary companies selling magnetic trackers
are Polhemus and Ascension.
Optical position tracking systems have been developed. One method uses a
ceiling grid of LEDs and a head-mounted camera. The LEDs are pulsed in sequence and
the camera image is processed to detect the flashes. Two problems with this method are
the limited space (grid size) and the lack of full motion (rotations). Another optical
method uses a number of video cameras to capture simultaneous images that are
correlated by high-speed computers to track objects. Processing time (and the cost of
fast computers) is a major limiting factor here. One company selling an optical tracker
is Origin Instruments.
Inertial trackers have been developed that are small and accurate enough for VR
use. However, these devices generally only provide rotational measurements. They are
also not accurate for slow position changes.
4. VR Tools
There are a number of specialized types of hardware devices that have been developed
or used for Virtual Reality applications.
4.1 Image Generators
One of the most time-consuming tasks in a VR system is the generation
of the images. Fast computer graphics opens a very large range of applications
aside from VR, so there has been a market demand for hardware acceleration for a
long while. There are currently a number of vendors selling image generator cards
for PC level machines; many of these are based on the Intel i860 processor. These
cards range in price from about $2000 up to $10,000. Silicon Graphics Inc. has
made a very profitable business of producing graphics workstations. SGI boxes
are some of the most common processors found in VR laboratories and high-end
systems. The simulator market has produced several companies that build special
purpose computers designed expressly for real time image generation. These
computers often cost several hundreds of thousands of dollars.
4.2 Manipulation and Control Devices
One key element for interaction with a virtual world is a means of
tracking the position of a real-world object, such as a head or hand. There are
numerous methods for position tracking and control. Ideally a technology should
provide 3 measures for position (X, Y, Z) and 3 measures of orientation (roll,
pitch, yaw). One of the biggest problems for position tracking is latency: the time
required to make the measurements and preprocess them before input to
the simulation engine.
The simplest control hardware is a conventional mouse, trackball or
joystick. While these are 2D devices, creative programming can use them for 6D
controls. There are a number of 3 and 6 dimensional mice/trackball/joystick
devices being introduced to the market at this time. These add some extra buttons
and wheels that are used to control not just the XY translation of a cursor, but its
Z dimension and rotations in all three directions. The Global Devices 6D
Controller is one such device: you can pull and twist the ball in addition to the
left/right and forward/back motion of a normal joystick. Other 3D and 6D mice,
joysticks and force balls are available from Logitech and Mouse Systems Corp.,
among others.
4.3 Gloves
One common VR device is the instrumented glove. Such a glove is
outfitted with sensors on the fingers as well as an overall position/orientation
tracker. There are a number of different types of sensors that can be used. Several
DataGloves, mostly using fiber optic sensors for finger bends and magnetic
trackers for overall position are available. Mattel manufactured the PowerGlove
for use with the Nintendo game system, for a short time. This device is easily
adapted to interface to a personal computer. It provides some limited hand
location and finger position data using strain gauges for finger bends and
ultrasonic position sensors.
4.4 Body Suits
The concept of an instrumented glove has been extended to other body
parts. Full body suits with position and bend sensors have been used for capturing
motion for character animation, control of music synthesizers, etc. in addition to
VR applications.
Mechanical armatures can be used to provide fast and very accurate
tracking. Such armatures may look like a desk lamp (for basic
position/orientation) or they may be highly complex exoskeletons (for more
detailed positions). The drawbacks of mechanical sensors are the encumbrance of
the device and its restrictions on motion. Exos Systems builds one such
exoskeleton for hand control, which also provides force feedback. Shooting Star
Technology makes a low-cost armature system for head tracking.
4.5 Stereo Vision
Stereo vision is often included in a VR system. This is accomplished by
creating two different images of the world, one for each eye. The images are
computed with the viewpoints offset by the equivalent distance between the eyes.
There are a large number of technologies for presenting these two images. The
images can be placed side-by-side and the viewer asked (or assisted) to cross their
eyes. The images can be projected through differently polarized filters, with
corresponding filters placed in front of the eyes. Anaglyph images use red/blue
glasses to provide a crude (no color) stereo vision.
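The viewpoint offset itself is simple to compute. Below is a short, illustrative Python sketch: the monoscopic camera position is shifted half the interpupillary distance along the camera's right vector for each eye (the 0.064 m figure is a common average, an assumption rather than a requirement).

    import numpy as np

    IPD = 0.064  # average interpupillary distance in meters (assumed)

    def eye_positions(camera_pos, right_vec, ipd=IPD):
        # Shift the camera along its normalized right vector to obtain
        # the left- and right-eye viewpoints for the stereo pair.
        right = np.asarray(right_vec, dtype=float)
        right = right / np.linalg.norm(right)
        half = 0.5 * ipd * right
        return camera_pos - half, camera_pos + half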
The two images can be displayed sequentially on a conventional monitor
or projection display. Liquid Crystal shutter glasses are then used to shut off
alternate eyes in synchronization with the display. When the brain receives the
images in rapid enough succession, it fuses the images into a single scene and
perceives depth. A fairly high display-swapping rate (60 Hz minimum) is required to
avoid perceived flicker.
Another alternative method for creating stereo imagery on a computer is
to use one of several split screen methods. These divide the monitor into two
parts and display the left and right images at the same time. One method places the
images side by side, conventionally oriented. It may not use the full screen or
may otherwise alter the normal display aspect ratio. A special hood viewer is
placed against the monitor to help position the eyes correctly; it may contain a
divider so each eye sees only its own image. Most of these hoods, such as the one
for V5 of Rend386, use Fresnel lenses to enhance the viewing. An alternative
split-screen method orients the images so the top of each points out the side of the
monitor. A special hood containing mirrors is used to correctly orient the images.
A very nice low-cost (under $200) unit of this type is the Cyberscope, available
from Simsalabim.
4.6 Head Mounted Display (HMD)
One hardware device closely associated with VR is the Head Mounted
Display (HMD). HMDs are also commonly classified by their tracking technology,
for example magnetic-sensor HMDs and ultrasound HMDs. These use some sort
of helmet or goggles to place small video displays in front of each eye, with
special optics to focus and stretch the perceived field of view. Most HMDs use
two displays and can provide stereoscopic imaging. Others use a single larger
display to provide higher resolution, but without the stereoscopic vision.
Most lower-cost HMDs ($3,000-$10,000 range) use LCD displays, while
others use small CRTs, such as those found in camcorders. The more expensive
HMDs ($60,000 and up) use special CRTs mounted alongside the head, or optical
fibers to pipe the images from non-head-mounted displays. An HMD requires a
position tracker in addition to the helmet. Alternatively, the display can be
mounted on an armature for support and tracking (a boom display).
4.7 3-D Sound Generators
• Convolvotron:
NASA experiments with virtual sound sources proved the possibility of
synthesizing static 3-D sound. By static we mean that the listener was in a fixed
position and there was no measurement of head motion while hearing the virtual
sounds. In VR simulations, however, users change their head orientation to look
around in the simulated world.
With simple stereo headsets, a virtual violin would appear to turn to the left,
following the user's head motion. If the 3-D sound is instead synthesized using
head-tracker data, the virtual violin remains localized in space, and when the user
turns away its sound appears to move behind the user's head. Additionally, since
the simulation is real time, the virtual sound position must also change
dynamically in real time. This is a computationally intensive task.
Sounds in a real room bounce off the walls, the floor and the ceiling, adding to
the direct sound received from the source. The reflections depend on the room
geometry and on the construction materials used. The realism of the virtual room
therefore requires that the reflected sounds be factored in.
The head-tracking data from a 3-D sensor is sent to the host computer. The
host then calculates the new positions of up to 4 simulated sound sources relative
to the user's head. A processor then uses this data to calculate the new head-related
transfer function for the 4 sound sources, so that the output sound produced is
relative and proportional to the amount of head movement. The convolving engine
applies these filters to the 4 sound inputs. The convolved sound is then converted
to analog signals and sent to the speakers.
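The core signal-processing step is a convolution of each source with a direction-dependent pair of head-related impulse responses (HRIRs). A minimal sketch, assuming the HRIR pair for the source's current direction has already been measured and selected (the function name is illustrative):

    import numpy as np

    def spatialize(mono_signal, hrir_left, hrir_right):
        # Convolving a mono source with the left/right HRIRs chosen for
        # its direction relative to the head yields the binaural output.
        return (np.convolve(mono_signal, hrir_left),
                np.convolve(mono_signal, hrir_right))

    # Each frame, the HRIR pair is re-selected (or interpolated) from the
    # head-tracker data so the source stays fixed in world space.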
• Beachtron:
This allows 2 VR sound sources to be simulated. Tracker data for the user's head
position/orientation, together with simulation data for VR object locations, is sent
to a local geometry-transform stage. This calculates the new 3-D position of each
VR object relative to the user's head, and new HRTFs (Head Related Transfer
Functions) are then interpolated. This system was targeted at PC-based VR systems.
5. The Computing Architecture
The VR I/O tools mediate interactions between the user and the "VR engine". The
I/O tools, together with the VR Engine coordinating the simulation, form the VR system.
The VR Engine first reads task-dependent user input and then accesses task-
dependent databases to calculate the corresponding world instances, or frames. Since it
is not possible to predict all user actions and to store all corresponding frames in
memory, the world is created in real time. Additionally, human-factors studies indicate
that the perception of smooth motion degrades dramatically below 12 frames/second.
For smooth simulations at least 24, or better 30, frames/second need to be displayed.
This process results in a large computational load that needs to be handled by the VR
Engine.
Very important for VR interactivity is total simulation latency (the time between
user action and the VR engine's feedback). Total latency is the sum of the sensor
latency, the transmission delays (to and from the VR engine), and the time it takes to
recompute and display a new frame. Low latency and fast refresh rates require a VR
Engine that has fast CPUs (for modeling world dynamics and for tool input/output) as
well as powerful
graphics accelerators (for fast frame rendering).
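As a worked example of this budget (all figures below are illustrative assumptions, not measurements):

    # Illustrative latency budget, in milliseconds.
    sensor_latency = 10.0        # tracker measurement and filtering
    transmission_delay = 5.0     # to and from the VR engine
    frame_time = 1000.0 / 30.0   # recompute + display one frame at 30 fps

    total_latency = sensor_latency + transmission_delay + frame_time
    print(f"total simulation latency: {total_latency:.1f} ms")  # ~48.3 ms

Even with a fast 30 frames/second engine, the sensor and transmission terms keep the total well above the frame time alone, which is why every stage of the pipeline matters.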
In analyzing the computational load we need to compare dynamic-modeling loads
with graphics loads. If the simulated world has only a moderate number of dynamically
modeled objects, then the dominant load is due to graphics rendering. The
computational load in turn depends on scene illumination, shading, texturing and
graphics complexity. Another factor that influences the computational load, and
therefore the frame rate, is polygon shading. The total number of polygons in a virtual
scene determines its complexity: the greater the scene realism, the larger the number of
polygons it contains.
• PC Based VR Systems:
Due, in large part, to the significant advances in PC hardware that have been
made over the last two years, PC-based VR is approaching reality. While the cost of a
basic desktop VR system has only gone down by a few thousand dollars since that
time, the functionality has improved dramatically, both in terms of graphics
processing power and VR hardware such as head-mounted displays (HMDs). The
availability of powerful PC engines based on such computing workhorses as Intel's
Pentium Pro and Digital's Alpha processors, and the emergence of reasonably priced,
OpenGL-based 3D accelerator cards, allow high-end PCs to process and display 3D
simulations in real time. While a standard 486 PC with as little as 8 MB of RAM can
provide sufficient processing power for a bare-bones VR simulation, a fast
Pentium-based PC with 32 MB of RAM can transport users to a convincing virtual
environment, while a dual-Pentium configuration with OpenGL acceleration and
24 MB of VRAM running Windows NT rivals the horsepower of a graphics
workstation.
• Workstation-based architectures:
After PCs, the next largest computing base in terms of existing units is
the workstations. Their advantage over PCs is greater computing power, larger disk
space and faster communication modalities.
In 1992, SUN introduced the virtual holographic workstation. This system
initially used a SUN SparcStation II with a GT graphics accelerator (100,000
polygons/second). In 1994, it was upgraded to a SUN 10-51 with a ZX accelerator
(125,000 polygons/second).
The Provision-100 workstation, with a parallel architecture, has multiple
processors called the "Director" (for collision detection and temporal
synchronization) and "Actors" (for stereo visual display, 3-D sound, and hand
tracking and gesture recognition). The architecture also has an I/O card and is
scalable, allowing additional I/O processors to be added. A connection with a host
computer such as a 486 PC allows the UNIX-based Provision-100 to work as a
high-end terminal in simulations.
• Highly Parallel VR engines:
The 35,000 polygons/eye/sec that Provision can render are a good start,
but are far from the several million polygons/eye/sec needed for true image realism.
In 1992, Division announced the "SuperVision" engine, which uses a
high-performance parallel architecture to increase rendering power up to 280,000
polygons/eye/sec. The SuperVision architecture has a standard Provision front end
and a multiprocessor cluster. The compute/render cards are connected by a
high-speed, low-latency 200 Mbps link. Each cluster is an autonomous unit with a
40 MHz i860 processor, a T-425 transputer for I/O and up to 16 MB of local
memory. The multi-cluster architecture includes a frame buffer along with a
stereo-video frame grabber that allows graphics to be overlaid in real time over live
video. Communication on the link is on a point-to-point basis. Different distribution
routings could be adopted in order to optimize the solution for different problems.
• Distributed VR:
Another approach to load distribution is to divide the tasks among several
workstations communicating over a LAN or Ethernet. Network distribution has the
advantage that existing computers can be used in the simulation without having to
purchase dedicated VR workstations. Another advantage is the feasibility of remote
computer access and participation in the simulation. This in turn makes possible
multi-user VR simulations.
The similarity between client-server databases and VR distribution is that
both use a client-server approach. The server is the central process coordinating
most of the simulation activities; it has the responsibility of maintaining the state of
all of the virtual objects in the simulation. Client processes manage the local
simulations, interact with the I/O tools and perform the graphics rendering, as
sketched below.
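A minimal sketch of this split (plain in-process Python stand-ins for what would really be network messages; all names are hypothetical):

    class VRServer:
        # Holds the authoritative state of every virtual object.
        def __init__(self):
            self.objects = {}   # object id -> state (position, etc.)
            self.clients = []

        def update_object(self, obj_id, state):
            self.objects[obj_id] = state
            for client in self.clients:   # broadcast the change
                client.on_update(obj_id, state)

    class VRClient:
        # Runs the local simulation, I/O tools and graphics rendering.
        def __init__(self, server):
            self.local_view = {}
            server.clients.append(self)

        def on_update(self, obj_id, state):
            self.local_view[obj_id] = state  # re-render from local copy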
One instance where it is not possible to match the computers participating
in the distributed system is the visualization of supercomputer-sized problems. These
problems involve complex physical modeling and multiple equations whose solution
requires the power of a supercomputer.
There are 3 possible schemes for load distribution between a
supercomputer and a graphics workstation. In the 1st scheme, called post-processing
visualization, the supercomputer calculates the scene geometry, then sends it over
the network to the graphics workstation's renderer. This visualization technique is
well known for fixed-geometry data sets. In the 2nd scheme, called distributed
visualization, the geometry data is continuously sent over the network and rendered
by the graphics workstation. This somewhat improves interactivity, since once the
geometry has been transferred to the front end, rotations/translations can be done
very fast. The 3rd and least common scheme is to use a software renderer on the
supercomputer. In this way only the final image data is sent over the network to a
frame buffer of a workstation for display.
6. The VR Software
A VR software tool must support the user's varied needs. In some cases, the visual
elements consist of the geometric shapes of the objects of the virtual world and the
appearance of texture, color, lighting and other specific characteristics. In other cases
the accent is on the comparison of items or figures that trigger some type of reaction
that is visible to the user "flying over" the area under investigation. In still other cases
the virtual world consists of the exploration of new concepts and ideas that are difficult
to understand. As a starting point in the development of virtual worlds, CAD software
can help create the visual elements; that is, the CAD software begins the process with
the actual design of objects in 2D and 3D space. However, this software does not allow
the user to walk through a 3D model just created and look for further design
improvements. This is where VR software comes into play: the system analyst or
programmer can prototype a number of visual environments until the best one is found
for the situation.
Many functions can act upon objects found in the primary classes of the universe
(the container of all entities and objects): task hierarchies, sensors, appearance and
collision detection. Each object performs a task per frame, and objects can be linked
together in hierarchies, which can be attached to sensors (see the sketch below). The
color, texture or size of an object can be changed in appearance when a collision is
detected between objects and polygons. Polygons can be dynamically created and
texture-mapped using various sources of image data; rendering is performed in
wire-frame, smooth-shade or texture modes. Lights can be dynamically loaded or
created from a file and are updated with every frame. The user can have multiple
viewpoints and can attach to multiple sensors. Sensors such as a 2D mouse can be
connected to lights, objects, viewports and so on. Objects or viewports can follow
predefined paths that can be dynamically created and interpolated. Terrain can be
either randomly generated or based on actual data.
The key elements of applications that use VR software are real-time graphics,
high-level interactivity, simulation of object behaviour and immersion. Although much
of VR technology has been present in training simulators for years, the advances in
computing hardware have brought interactive computer graphics and the simulation of
virtual worlds within the reach of PC and workstation users. Conventional computer
graphics have long been used for CAD and visualization, but virtual reality now offers
users the ability to interact with and immerse themselves within their creations before
anything is constructed.
• An example of VR software is "VR Juggler". VR Juggler provides virtual reality (VR)
software developers with a suite of application programming interfaces (APIs) that
abstract, and hence simplify, all interface aspects of their program including the
display surfaces, object tracking, selection and navigation, graphics rendering
engines, and graphical user interfaces. An application written with VR Juggler is
essentially independent of device, computer platform, and VR system. VR Juggler
may be run with any combination of immersive technologies and computational
hardware.
• WorldToolKit(TM) is a portable, cross-platform development system for visual
simulation and virtual reality applications. Current platforms include Silicon
Graphics, Sun, Hewlett-Packard, DEC, Intel, Evans and Sutherland, and PowerPC.
Irrespective of the system, WorldToolKit has the function library and end-user
productivity tools you need to create, manage, and commercialize your applications.
Because of its high-level API (application programmer’s interface), you can prototype
applications quickly and reconfigure them as required. WorldToolKit also supports
network-based distributed simulations and the largest array of interface devices, such
as head-mounted displays, trackers, and navigation controllers.
7. Rendering Process
The Rendering Processes of a VR program are those that create the sensations that
are output to the user. A network VR program would also output data to other network
processes. There would be separate rendering processes for visual, auditory, haptic
(touch/force), and other sensory systems. Each renderer would take a description of the
world state from the simulation process or derive it directly from the World Database for
each time step.
7.1 Visual Renderer
The visual renderer is the most common process and it has a long history
from the world of computer graphics and animation. The reader is encouraged to
become familiar with various aspects of this technology.
The major consideration of a graphic renderer for VR applications is the
frame generation rate. It is necessary to create a new frame every 1/20 of a second
or faster. 20 frames per second (fps) is roughly the minimum rate at which the
human brain will merge a stream of still images and perceive a smooth animation.
24 fps is the standard rate for film, 25 fps for PAL TV, 30 fps for NTSC TV, and
60 fps is the Showscan film rate. This requirement eliminates a number of rendering
techniques such as raytracing and radiosity. These techniques can generate very
realistic images but often take hours to generate single frames.
Visual renderers for VR use other methods, such as a 'painter's algorithm',
a Z-buffer, or another scanline-oriented algorithm. There are many areas of visual
rendering that have been augmented with specialized hardware. The Painter’s
algorithm is favored by many low-end VR systems since it is relatively fast, easy
to implement and light on memory resources. However, it has many visibility
problems. For a discussion of this and other rendering algorithms, see one of the
computer graphics reference books listed in a later section.
The visual rendering process is often referred to as a rendering pipeline.
This refers to the series of sub-processes that are invoked to create each frame. A
sample rendering pipeline starts with a description of the world: the objects,
lighting and camera (eye) location in world space. A first step would be to
eliminate all objects that are not visible to the camera; this can be done quickly by
clipping each object's bounding box or sphere against the viewing pyramid of the
camera. The remaining objects then have their geometries transformed into the eye
coordinate system (eye point at origin), after which the hidden-surface algorithm
and the actual pixel rendering are done. A sketch of the culling and transform
steps follows.
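An illustrative sketch of those first two stages (the names and the frustum representation, inward-pointing plane normals with offsets, are assumptions):

    import numpy as np

    def sphere_visible(center_eye, radius, planes):
        # The object survives unless its bounding sphere lies entirely
        # outside some frustum plane.
        return all(np.dot(n, center_eye) + d >= -radius for n, d in planes)

    def cull_and_transform(objects, view_matrix, planes):
        for obj in objects:
            # Transform the bounding-sphere center into eye coordinates.
            center_eye = (view_matrix @ np.append(obj.center, 1.0))[:3]
            if not sphere_visible(center_eye, obj.radius, planes):
                continue   # culled: never reaches hidden-surface/shading
            yield obj, center_eye  # passed to the later pipeline stages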
The pixel rendering is also known as the ’lighting’ or ’shading’ algorithm.
There are a number of different methods that are possible depending on the
realism and calculation speed available. The simplest method is called flat
shading and simply fills the entire area with the same color. The next step up
provides some variation in color across a single surface. Beyond that is the
possibility of smooth shading across surface boundaries, adding highlights,
reflections, etc.
An effective short cut for visual rendering is the use of "texture" or
"image" maps. These are pictures that are mapped onto objects in the virtual
world. Instead of calculating lighting and shading for the object, the renderer
determines which part of the texture map is visible at each visible point of the
object. The resulting image appears to have significantly more detail than is
otherwise possible. Some VR systems have special ’billboard’ objects that always
face towards the user. By mapping a series of different images onto the billboard,
the user can get the appearance of moving around the object.
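Keeping a billboard facing the user reduces to one rotation about the vertical axis toward the viewpoint, as in this illustrative sketch (the axis convention is an assumption):

    import math

    def billboard_yaw(object_pos, viewer_pos):
        # Yaw angle (radians) that turns a billboard about the vertical
        # (y) axis so that its face points at the viewer.
        dx = viewer_pos[0] - object_pos[0]
        dz = viewer_pos[2] - object_pos[2]
        return math.atan2(dx, dz)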
7.2 Auditory Rendering
A VR system is greatly enhanced by the inclusion of an audio
component. This may produce mono, stereo or 3D audio. The latter is a fairly
difficult proposition. It is not enough to do stereo-pan effects as the mind tends to
locate these sounds inside the head. Research into 3D audio has shown that there
are many aspects of our head and ear shape that affect the recognition of 3D
sounds. It is possible to apply a rather complex mathematical function (called a
Head Related Transfer Function or HRTF) to a sound to produce this effect. The
HRTF is a very personal function that depends on the individual’s ear shape, etc.
However, there has been significant success in creating generalized HRTFs that
work for most people and most audio placements. There remain a number of
problems, such as the ’cone of confusion’ wherein sounds behind the head are
perceived to be in front of the head.
Sound has also been suggested as a means to convey other information,
such as surface roughness. Dragging your virtual hand over sand would sound
different than dragging it through gravel.
7.3 Haptic Rendering
Haptics is the generation of touch and force feedback information. This
area is a very new science and there is much to be learned. There have been very
few studies done on the rendering of true touch sense (such as liquid, fur, etc.).
Almost all systems to date have focused on force feedback and kinesthetic senses.
These systems can provide good clues to the body regarding the touch sense, but
are considered distinct from it. Many of the haptic systems thus far have been
exo-skeletons that can be used for position sensing as well as providing resistance
to movement or active force application.
8. VR Applications
As the technologies of virtual reality evolve, the applications of VR become almost
unlimited. It is assumed that VR will reshape the interface between people and
information technology by offering new ways for the communication of information,
the visualization of processes, and the creative expression of ideas.
• Military & Training simulations
Because of their focus on new ways of waging war virtually, the military
is also very interested in recruiting video game players to control all of their new
virtual reality tools. So it makes perfect sense to devise a Web3D game to use for
recruitment, public relations and even training. Called "America’s Army:
Operations", it is a squad-based multi-player game that has received a lot of
attention for many reasons, not the least of which is that it is a well-made game
that is fun to play.
The game includes an amazing amount of realism and attention paid to
recreating the way things happen in the Army. New game ’recruits’ attend basic
training and are given standard issue weapons and equipment that they have to
learn about. All of this leads neatly into the virtual training aspect of the game.
True Army recruits who have played the game will not be nearly so ’raw’ as
recruits have been in the past and will be much more ready for life in the Army.
Military organizations seem poised to continue to push the envelope with
regard to VR and Web3D with plans already in place to expand the use of drones,
VR training, Web3D recruitment, as well as virtual reconnaissance and
simulation. Anyone interested in the future of virtual reality would do well to
keep an eye on where the military is going in VR and Web3D.
• Medical
Surgery
- Practice performing surgery
- Perform surgery on a remote patient
Rehabilitation
- Use VR input devices and telepresence to enable handicapped people to do
things that would otherwise be impossible for them to do
- Enable handicapped people to visit/see places that they would otherwise be
unable to experience
- Use VR to teach new skills in a safe, controlled environment
• Design and prototyping
VR can be used to create rapid prototypes rather than make clay models or
full-scale mock-ups, and to simulate assembly lines, for example to evaluate the risk
of interference or collision between robots, cars, and car parts.
9. VR Market
We are witnessing a constant improvement in the market outlook for VR, in both the
quality of applied VR systems and the receptiveness of potential customers. This is due
mainly to three reasons:
(1) the decreasing cost of VR systems and devices,
(2) the constant improvement in the performance and reliability of the technology, and
(3) the extremely valuable economic benefits derived from VR use in its various forms
and purposes (training, simulation, design).
So we can affirm the consolidation of a class of technology that can properly be
called "virtual reality" and appraised like any other high-tech industry. This technology
has been confidently adopted in a number of markets, and has the potential to penetrate
many more.
Virtual Reality Market Forecasts by Application ($ millions, constant 1995 dollars)

Application                        1994   1995   2000   AAGR% 1995-2000
Instructional & Developmental        70     95    355         31
Design & Development VR              25     30    150         40
Entertainment VR                     60    110    500         35
Medical Treatment VR                 10     20     50         20
Total                               165    255   1055         33
Source: Business Communications Company, Inc., GB-175, The Virtual Reality Business, 1996
10. Conclusion
The state of VR is very fluid and nascent; that is, while the technology is ready for
professional applications, it has not yet settled on definite standards and definite
reference points in all respects, including likely leading manufacturers, compatibility
specifications, performance levels, economic costs and human expertise. As it stands,
the situation is heavily characterised by uncertainty.
This uncertainty should not be confused with a lack of confidence in the promising
outcomes of the technology; it instead reflects the rapid mutation and evolution that
characterise all information technology markets. For the project, these reflections
sound a warning against the adoption of solutions that should be considered only a
short-term answer to a contingent problem. A special concern is the continuous chase
after the latest up-to-date technological product release.
Appendix
Painter’s Algorithm:
• The Painter's Algorithm makes use of z-depth sorting. Using the sorted z-depth
information, the algorithm "draws" the objects furthest away first and moves
forward, drawing the closest objects last. It is called the Painter's Algorithm
because this is a common painting method: painting the background first and
then painting the foreground over that. A minimal sketch follows.
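In Python this back-to-front loop is just a sort and a draw (an illustrative sketch; the depth key and the draw callback are assumptions):

    def painters_algorithm(polygons, draw):
        # Sort by depth (here the mean z of each polygon's vertices,
        # assuming larger z means farther from the viewer) and paint
        # farthest first, so nearer polygons overwrite farther ones.
        def mean_depth(poly):
            return sum(v[2] for v in poly) / len(poly)
        for poly in sorted(polygons, key=mean_depth, reverse=True):
            draw(poly)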
Z-Buffering:
• An algorithm used in 3-D graphics to determine which objects, or parts of
objects, are visible and which are hidden behind other objects. With Z-buffering,
the graphics processor stores the Z-axis value of each pixel in a special area of
memory called the Z-buffer. Different objects can have the same x- and
y-coordinate values but different z-coordinate values. The object with the lowest
z-coordinate value is in front of the other objects, and therefore that is the one
that is displayed. A per-pixel sketch follows.
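The per-pixel test is a single comparison (illustrative; buffers are plain nested lists, and smaller z means nearer, matching the convention above):

    def zbuffer_write(frame, zbuf, x, y, z, color):
        # Plot the pixel only if it is nearer than what is already
        # stored at (x, y); otherwise it is hidden and discarded.
        if z < zbuf[y][x]:
            zbuf[y][x] = z
            frame[y][x] = color

    # The z-buffer starts out "infinitely far away":
    # zbuf = [[float("inf")] * width for _ in range(height)]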
• Flat Shading
• Gouraud Shading
• Phong Shading
Gouraud shading:
• Gouraud shading extends the concept of interpolated shading applied to
individual polygons by interpolating polygon-vertex illumination values that
take into account the surface being approximated. The Gouraud shading process
requires that the normal (the perpendicular vector) be known for each vertex of
the polygonal mesh. This process computes these 'vertex normals' directly from
an analytical description of the surface. Alternatively, if the vertex normals are
not stored with the mesh and cannot be determined directly from the actual
surface, we can approximate them by averaging the surface normals of all
polygonal facets sharing each vertex. If an edge is meant to be visible (as at the
joint between a plane's wing and body), then we find two vertex normals, one
for each side of the edge, by averaging the normals of the polygons on each side
of the edge separately.
• The next step in Gouraud shading is to find 'vertex intensities' by using the
vertex normals with any desired illumination model. Finally, each polygon is
shaded by linear interpolation of the vertex intensities along each edge and then
between edges along each scan line, in the same way that z values are
interpolated. The term 'Gouraud shading' is often generalized to refer to
intensity-interpolation shading of even a single polygon in isolation, or to the
interpolation of arbitrary colors associated with polygon vertices. A sketch of
the two steps follows.
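Both steps can be pictured with a brief, illustrative Python sketch (the names and per-span interface are assumptions): averaging facet normals into a vertex normal, then interpolating intensities across one scan-line span.

    import numpy as np

    def vertex_normal(facet_normals):
        # Approximate a vertex normal by averaging the normals of all
        # polygonal facets sharing the vertex.
        n = np.sum(np.asarray(facet_normals, dtype=float), axis=0)
        return n / np.linalg.norm(n)

    def gouraud_span(i_left, i_right, x_left, x_right):
        # Linearly interpolate the intensities found at the span's left
        # and right edge crossings, one value per pixel.
        width = max(x_right - x_left, 1)
        return [i_left + (i_right - i_left) * (x - x_left) / width
                for x in range(x_left, x_right + 1)]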
Phong shading:
• 'Phong shading', also known as 'normal-vector interpolation shading',
interpolates the surface normal vector N, rather than the intensity.
Interpolation occurs across a polygon span on a scan line, between the starting
and ending normals for the span. These normals are themselves interpolated
along polygon edges from vertex normals that are computed, if necessary, just
as in Gouraud shading. The interpolation along edges can again be done by
means of incremental calculations, with all three components of the normal
vector being incremented from scan line to scan line. At each pixel along a
scan line, the interpolated normal is normalized, is mapped back into the
WC (world coordinate) system or one isometric to it, and a new intensity
calculation is performed using any illumination model.
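For contrast with Gouraud shading, the illustrative span loop below interpolates and renormalizes the normal at each pixel and only then evaluates the illumination model (the shade callback and inputs are assumptions; n_left and n_right are NumPy vectors):

    import numpy as np

    def phong_span(n_left, n_right, x_left, x_right, shade):
        # shade(normal) is any illumination model evaluated per pixel.
        width = max(x_right - x_left, 1)
        pixels = []
        for x in range(x_left, x_right + 1):
            t = (x - x_left) / width
            n = (1.0 - t) * n_left + t * n_right
            n = n / np.linalg.norm(n)  # renormalize interpolated normal
            pixels.append(shade(n))
        return pixels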
Books & Links
• Virtual Reality Technology, Grigore Burdea & Philippe Coiffet. New York: Wiley, c1994.
• Virtual Reality Systems for Business, Robert J. Thierauf. Westport, Conn.: Quorum Books, 1995.
• Virtual Reality in Engineering, IEE Computing Series, no. 20. London: Institution of Electrical Engineers, c1993.
• www.isdale.com/jerry/VR/WhatIsVR.html
• www.ia.hiof.no/~michaell/home/vr/vrhiof98/
• www.bubu.com/baskara/baskaravrsoftware.htm