AN EFFICIENT REAL-TIME TERRAIN DATA ORGANIZATION AND
VISUALIZATION ALGORITHM BASED ON ENHANCED
TRIANGLE-BASED LEVEL OF DETAIL TECHNIQUE
MUHAMAD NAJIB BIN ZAMRI
Faculty of Computer Science and Information System
Universiti Teknologi Malaysia
ABSTRACT
The massive volume of data involved in the development of a real-time terrain visualization and navigation system is inevitable. With current off-the-shelf hardware, it is impossible to process this amount of data using the conventional approach, because the amount of data to be processed exceeds the capacity that can be loaded into main memory. The problem is further compounded by other hardware constraints such as memory bus speed and the data transfer bandwidth from main memory to the graphics card; consequently, total system performance suffers. The triangle-based level of detail (LOD) technique was developed to reduce this drawback, but it still suffers from the main memory constraint and slow data loading. The purpose of this research is to design, develop and implement an algorithm for enhancing the rendering efficiency of the triangle-based LOD technique. A prototype system has been developed using digital elevation data for testing purposes. The system was evaluated based on the following criteria: data size, processing time for data partitioning, memory usage, loading time, frame rate, triangle count and geometric throughput. According to the results obtained during pre-processing, partitioning the digital elevation data into tiles successfully reduced the data size, although it required a longer processing time. During run-time processing, the integration of a dynamic tile loading scheme, view frustum culling and an enhanced triangle-based LOD technique produced encouraging results, with significant overall improvement compared to techniques developed earlier. The algorithm proposed in this research is very practical for developing interactive graphics applications that consider real-time rendering one of their important elements.
ABSTRAK
The large volume of data involved in the development of a real-time terrain visualization and navigation system is unavoidable. Based on the capability of current off-the-shelf hardware, it is impossible to process that amount of data using a conventional approach, because the amount of data to be processed exceeds the capacity that can be loaded into main memory. This problem is further burdened by other hardware constraints such as memory bus speed and the data transfer bandwidth from main memory to the graphics card. As a result, overall system performance is affected. The triangle-based level of detail technique was developed in an effort to reduce this weakness, but it still has to contend with the constraints of main memory space and slow data loading. The purpose of this research is to design, develop and implement an algorithm to increase the rendering effectiveness of the triangle-based level of detail technique. A prototype system was developed using digital elevation data for testing purposes. The system was evaluated based on the following criteria: data size, processing time for data partitioning, memory usage, loading time, frame rate, and the number and generation rate of triangles. Based on the results obtained during pre-processing, partitioning the digital elevation data into small tiles successfully reduced the data size, although the process required a longer processing time. During run-time processing, the combination of a dynamic tile loading scheme, a view-frustum-based culling technique and an enhanced triangle-based level of detail technique produced encouraging results, with significant overall improvement when compared with previously developed techniques. The algorithm proposed in this research is very practical for developing interactive graphics applications that consider real-time rendering one of the important elements.
TABLE OF CONTENTS

CHAPTER   TITLE

          TITLE PAGE
          ABSTRACT
          ABSTRAK
          TABLE OF CONTENTS
          LIST OF TABLES
          LIST OF FIGURES
          LIST OF ALGORITHMS
          LIST OF ABBREVIATIONS
          LIST OF APPENDICES

1         INTRODUCTION
          1.1 Introduction
          1.2 Problem Background
              1.2.1 Massive Terrain Dataset
              1.2.2 Geometric Complexity
              1.2.3 Limited Capability of Triangle-Based LOD Techniques
          1.3 Problem Statements
          1.4 Goal
          1.5 Objectives
          1.6 Scopes
          1.7 Organization of the Report

2         LITERATURE REVIEW
          2.1 Introduction
          2.2 Spatial Data Structures
              2.2.1 Kd-Tree
              2.2.2 Quadtree
              2.2.3 R-Tree
              2.2.4 Tiled Block
              2.2.5 Comparative Studies
          2.3 Visibility Culling Methods
              2.3.1 Types of Culling Methods
                    2.3.1.1 Backface Culling
                    2.3.1.2 Small Feature Culling
                    2.3.1.3 Occlusion Culling
                    2.3.1.4 View Frustum Culling
                    2.3.1.5 Comparative Studies
              2.3.2 Intersection Test Techniques
                    2.3.2.1 Axis Aligned Bounding Box
                    2.3.2.2 Oriented Bounding Box
                    2.3.2.3 Bounding Sphere
                    2.3.2.4 k-Discrete Orientation Polytope
                    2.3.2.5 Comparative Studies
          2.4 View-Dependent LOD Techniques
              2.4.1 Terrain Representation
                    2.4.1.1 Regular Grid
                    2.4.1.2 Triangulated Irregular Network
                    2.4.1.3 Advantages and Disadvantages
              2.4.2 Geometric Primitives
                    2.4.2.1 Independent Triangles
                    2.4.2.2 Triangle Fans
                    2.4.2.3 Triangle Strips
                    2.4.2.4 Comparative Studies
              2.4.3 Previous Works
                    2.4.3.1 Real-time Optimally Adapting Meshes
                    2.4.3.2 Real-time Generation of Continuous LOD for Height Fields
                    2.4.3.3 View-Dependent Progressive Meshes
                    2.4.3.4 Out-of-Core Terrain Visualization
                    2.4.3.5 Comparative Studies
          2.5 Summary

3         METHODOLOGY
          3.1 Introduction
          3.2 Partitioning Terrain Data
              3.2.1 Extracting Data
              3.2.2 Splitting Data into Tiles
          3.3 Loading Terrain Tiles
          3.4 Culling Invisible Terrain Tiles
              3.4.1 Plane Extraction
              3.4.2 Intersection Test
          3.5 Simplifying Geometry of Visible Terrain Tiles
              3.5.1 Distance Calculation
              3.5.2 Subdivision Test
              3.5.3 Generation of Triangle Fans
          3.6 Summary

4         IMPLEMENTATION
          4.1 Introduction
          4.2 Pre-Processing
              4.2.1 Reading a DEM File and Creating TXT Files
              4.2.2 Reading TXT Files and Creating TILE Files
          4.3 Run-time Processing
              4.3.1 Creation of Classes
              4.3.2 Initialization
              4.3.3 Rendering
                    4.3.3.1 Configuring Information for Camera
                    4.3.3.2 Loading and Unloading Terrain Tiles
                    4.3.3.3 Calculating Frustum's Planes
                    4.3.3.4 Removing Unseen Terrain Tiles
                    4.3.3.5 Generating Quadtree Traversal
                    4.3.3.6 Generating Triangle Fans
              4.3.4 Interaction
              4.3.5 Shutdown
          4.4 Summary

5         RESULTS, ANALYSIS AND DISCUSSIONS
          5.1 Introduction
          5.2 System Specifications
          5.3 Testing Data
          5.4 Screenshots from the Prototype System
          5.5 Evaluations of the System
              5.5.1 Pre-Processing
              5.5.2 Run-time Processing
                    5.5.2.1 Memory Usage
                    5.5.2.2 Loading Time
                    5.5.2.3 Frame Rate
                    5.5.2.4 Triangle Count
                    5.5.2.5 Geometric Throughput
          5.6 Comparison with Previous Techniques
              5.6.1 Memory Usage
              5.6.2 Loading Time
              5.6.3 Frame Rate
              5.6.4 Triangle Count
          5.7 Summary

6         CONCLUSION
          6.1 Conclusion
          6.2 Recommendations

          REFERENCES
          APPENDICES A - B
          INDEX
LIST OF TABLES

TABLE NO.   TITLE

1.1   Types of busses and the corresponding memory bandwidth and geometric throughput (Hill, 2002; Peng et al., 2004)
2.1   Advantages and disadvantages of visibility culling methods
2.2   Comparison of intersection test techniques
2.3   Comparison among the geometric primitives
2.4   Comparison of view-dependent LOD techniques
3.1   Clipping planes with corresponding co-efficients
4.1   Data structure
4.2   Declaration of variables
5.1   Hardware specifications
5.2   Software specifications
5.3   Testing data
5.4   Environment settings for testing the proposed algorithm
5.5   The results of time required for converting DEM data and the corresponding file size for number of vertices per tile: 129 x 129 and 257 x 257
5.6   Number of generated tiles
5.7   Environment settings for comparison purpose
LIST OF FIGURES

FIGURE NO.   TITLE

1.1    Visualization of full resolution terrain model (number of vertices = 16,848,344,430 and number of triangles = 33,695,786,468)
2.1    Kd-tree
2.2    Quadtree
2.3    R-tree
2.4    Partitioning data into tiles
2.5    Tree structure for tiles
2.6    Vector angle between view direction and polygon's normal
2.7    Small feature culling a) OFF, b) ON
2.8    Occlusion culling of occluded object
2.9    Camera's view frustum
2.10   The red objects lie completely outside the view frustum and will be discarded from rendering when view frustum culling is performed
2.11   AABB of an object
2.12   OBB of a rotated object
2.13   BS of an object
2.14   Aircraft model and the corresponding k-dops a) Level 1, b) Level 2, c) Level 3, d) Level 4
2.15   Variations of grid representations
2.16   TIN representation
2.17   Independent triangles
2.18   Triangle fan
2.19   Triangle strip
2.20   Levels 0-5 of a triangle bintree
2.21   Split and merge operations on a bintree triangulation
2.22   Forced splitting process
2.23   Wedgies in ROAM implementation
2.24   A sample triangulation of a 9 x 9 height field a) quadtree matrix, b) the corresponding quadtree structure
2.25   Measuring surface roughness
2.26   Vertex split and edge collapse operations
2.27   Steps in pre-processing block-based simplification
2.28   Interleaved quadtree structure
2.29   Four levels of longest-edge bisection operation
3.1    Research framework
3.2    Overlapping tile
3.3    Dynamic load management
3.4    Six planes of view frustum
3.5    Plane-sphere intersection test for view frustum culling technique a) sample scene, b) intersection test, c) result after culling
3.6    Distance (l) and edge length (d)
3.7    Generation of triangle fans
3.8    Terrain tiles after applying view-dependent LOD technique
4.1    Menu for partitioning DEM data
4.2    Structure of a DEM file
4.3    First data point for DEM data a) West of the central meridian, b) East of the central meridian
4.4    Parameters after extraction of important header information
4.5    The indices and the corresponding elevation values
4.6    List of generated TILE files
4.7    Structure in a TILE file
4.8    Class diagram
4.9    Sixteen possibilities of generating triangle fans
5.1    Small dataset a) wireframe, b) textured
5.2    Moderate dataset a) wireframe, b) textured
5.3    Large dataset a) wireframe, b) textured
5.4    Total time for converting DEM datasets to TILE files
5.5    Memory usage for the proposed system
5.6    The results of memory usage
5.7    The results of loading time
5.8    The results of average frame rate based on different FOV
5.9    The results of average frame rate based on different length of view frustum
5.10   The results of average triangle count based on different FOV
5.11   The results of average triangle count based on different length of view frustum
5.12   The results of frame rate versus triangle count
5.13   The results of average geometric throughput based on different FOV
5.14   The results of average geometric throughput based on different length of view frustum
5.15   Comparison based on memory usage
5.16   Comparison based on loading time
5.17   Comparison based on frame rate
5.18   Comparison based on triangle count
LIST OF ALGORITHMS

ALGORITHM NO.   TITLE

2.1   Pseudo code for split queue
2.2   Pseudo code for merge queue
2.3   Pseudo code for selective refinement
2.4   Pseudo code for force vsplit
2.5   Pseudo code for mesh refinement (mesh-refine)
2.6   Pseudo code for sub mesh refinement (submesh-refine)
2.7   Pseudo code for generating triangle strips (trianglestrip-append)
4.1   Pseudo code for obtaining maximum number of vertices per side for original terrain data
4.2   Pseudo code for assigning new maximum number of vertices per side for terrain data
4.3   Pseudo code for determining the maximum number of vertices per side for tiles in vertical direction (Vt_v_max)
4.4   Pseudo code for obtaining starting point and existence status for each tile
4.5   Pseudo code for partitioning data and creating TILE files
4.6   Pseudo code for obtaining starting coordinates of existing tile
4.7   Pseudo code for obtaining the current tile index based on camera position
4.8   Pseudo code for obtaining the terrain indices to be loaded
4.9   Pseudo code for detecting the visibility status and assigning flag for loaded tiles
LIST OF ABBREVIATIONS

ABBREVIATION   DESCRIPTION

2D      -   Two-dimensional
3D      -   Three-dimensional
AABB    -   Axis Aligned Bounding Box
API     -   Application Programming Interface
BS      -   Bounding Sphere
CBIS    -   Computer-based information system
CFD     -   Computational fluid dynamics
CPU     -   Central processing unit
DEM     -   Digital Elevation Model
DSS     -   Decision Support System
DTED    -   Digital Terrain Elevation Data
ESS     -   Executive Support System
FOV     -   Field of view
FPS     -   Frames per second
GIS     -   Geographic Information System
GPU     -   Graphics processing unit
GRASS   -   Geographic Resources Analysis Support System
IS      -   Information system
KMS     -   Knowledge Management System
LOD     -   Level of detail
MBB     -   Minimal bounding box
MBR     -   Minimal bounding rectangle
MIS     -   Management Information System
NE      -   Northeast
NW      -   Northwest
OAS     -   Office Automation System
OBB     -   Oriented Bounding Box
OS      -   Operating system
PC      -   Personal computer
RAPID   -   Rapid and Accurate Polygon Interference Detection
ROAM    -   Real-time Optimally Adapting Meshes
SE      -   Southeast
SW      -   Southwest
TIN     -   Triangulated irregular network
USGS    -   United States Geological Survey
UTM     -   Universal Transverse Mercator
VDPM    -   View-dependent progressive meshes
ViRGIS  -   Virtual Reality GIS
LIST OF APPENDICES

APPENDIX   TITLE

A   DEM Elements Logical Record Type A
B   DEM Elements Logical Record Type B
CHAPTER 1
INTRODUCTION
1.1 Introduction
Computer-based information systems (CBIS) have changed the way people work, replacing traditional manual systems with automated systems that solve specific problems more effectively and systematically. In addition, work can be completed much faster than with the previous approach, so time can be used and managed efficiently. This progress is due to the rapid development of computer technology over the last few years. Several types of CBIS have been developed, and these systems have contributed to the evolution of information systems. Among them are the Management Information System (MIS), Office Automation System (OAS), Executive Support System (ESS), Decision Support System (DSS), Knowledge Management System (KMS), Geographic Information System (GIS) and Expert System (Gadish, 2004).
The geographic information system (GIS) is one of the important developments of CBIS. It combines digital maps with geographic data and facilitates assessments of related characteristics of people, property and environmental factors. In general, a GIS is a set of computer tools for collecting, storing, retrieving, transforming and displaying data from the real world for a particular set of purposes or application domains (Burrough and McDonnell, 1998). There are two main elements in developing GIS applications: database and visualization. Such a system provides the opportunity to store both spatial data (location-based data) and non-spatial data (attribute data) in a single system. In current GIS applications, these two types of data are often stored in two separate databases; however, the databases are later linked or joined depending on the particular characteristic.
Computer graphics disciplines are exploited in visualizing GIS data. They are useful for analyzing and verifying related activities in terms of accuracy, efficiency and other factors. For this reason, GIS is becoming increasingly important to a growing number of activities in many sectors, including urban planning, flood planning, oil exploration, and evaluations of vegetation, soil, waterway and road patterns (Fritsch, 1996; Burrough and McDonnell, 1998).

Deficiencies of two-dimensional GIS (2D GIS) in several aspects, such as user interaction (limited to panning and zooming operations), presentation (points, lines and symbols) and interpretation, led to the development of higher-dimensional GIS, especially three-dimensional GIS (3D GIS), which is also known as Virtual GIS (Burrough and McDonnell, 1998). The colors and symbols used in earlier GIS are replaced with 3D photo-textured models, which improves the level of understanding of the system. Moreover, Virtual GIS can present detailed 3D views and provide real-time navigation, whether fly-through or walkthrough, for analyzing large volumes of spatial data and related information. At the same time, the operations used in 2D GIS can be integrated easily into the system (Noor Azam and Ezreen Elena, 2003).
Real-time terrain visualization is one of the subfields of GIS that attracts many researchers to this area. Normally, terrain and other objects such as trees, buildings and roads are managed separately due to their different data representations. Speed and realism are two factors frequently considered in solving terrain-related problems. At present, more than 800 commercial terrain visualization software packages have been developed by different vendors and developers, as reported by the United States Army Topographic Engineering Center (2005). Among the popular software are ESRI® ArcGIS, ERDAS® Imagine VirtualGIS, Geographic Resources Analysis Support System (GRASS) and Manifold System.
In this report, the research focuses on real-time 3D terrain data organization, visualization and on-the-fly navigation. Two key issues have to be confronted in developing terrain-based applications in the GIS field: spatial database management and fast interactive visualization (Kofler, 1998). These two obstacles must be weighed against the capabilities of the current hardware technology that supports the system to be developed. A large terrain dataset needs to be handled efficiently in a real-time environment with restricted main memory capacity, including the management of geometric and texture data. At the same time, an interactive frame rate needs to be achieved consistently even when many polygons are generated for each frame. Detailed information on the problem background is covered in Section 1.2.
Many solutions have been proposed by researchers in the last few years. Among them are the implementation of hierarchical data structures, visibility culling methods and level of detail (LOD) techniques. Briefly, the most common types of data structures that can be employed to manage the data are the Kd-tree (Bentley, 1975; Bittner et al., 2001; Langbein et al., 2003), quadtree (Samet, 1984; Röttger et al., 1998; Pajarola, 1998), R-tree (Guttman, 1984; Kofler, 1998; Yu et al., 1999; Zlatanova, 2000) and tiled block (VTP, 1999; Lindstrom et al., 1996; Pajarola, 1998; Ulrich, 2002). They differ in how the space (data) is divided in a hierarchical manner (Gaede and Günther, 1998).
For visibility culling, four types of methods can be adopted: backface culling (Laurila, 2004), small feature culling (Burns and Osfield, 2001; Assarsson, 2001), occlusion culling (Stewart, 1997; Mortensen, 2000; Martens, 2003) and view frustum culling (Hoff, 1996; Assarsson and Möller, 2000). These are useful for preventing unneeded data from being processed. One of the important components of culling methods is the intersection test. Related research works include the Axis Aligned Bounding Box (Beckmann et al., 1990; Morley, 2000), Oriented Bounding Box (Gottschalk et al., 1996; Assarsson and Möller, 2000), Bounding Sphere (Hubbard, 1995; Dunlop, 2001; Morley, 2000; Picco, 2003) and k-Discrete Orientation Polytope (Klosowski et al., 1998).
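As a concrete illustration of such an intersection test, the plane-sphere test used when combining view frustum culling with Bounding Spheres can be sketched as follows. This is an illustrative Python sketch, not code from this report; the function names and the convention that plane normals are unit length and point into the frustum are assumptions.

```python
def sphere_outside_plane(center, radius, plane):
    """plane = (a, b, c, d) with unit normal (a, b, c) pointing into the
    frustum; True when the sphere lies entirely on the outside."""
    a, b, c, d = plane
    x, y, z = center
    # Signed distance from the sphere's center to the plane.
    return a * x + b * y + c * z + d < -radius

def sphere_in_frustum(center, radius, planes):
    """An object is culled only when its bounding sphere is completely
    outside at least one frustum plane."""
    return not any(sphere_outside_plane(center, radius, p) for p in planes)

# Usage: a single plane x = 0 whose inside half-space is x > 0.
planes = [(1.0, 0.0, 0.0, 0.0)]
culled = not sphere_in_frustum((-5.0, 0.0, 0.0), 1.0, planes)  # sphere culled
kept = sphere_in_frustum((0.5, 0.0, 0.0), 1.0, planes)         # sphere kept
```

The test is conservative: a sphere that merely straddles a plane is kept, which trades a few extra rendered objects for a very cheap per-object check.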
LOD techniques are used to reduce the number of polygons while giving detail to certain areas of a scene. Examples of well-known techniques are Real-time Optimally Adapting Meshes (Duchaineau et al., 1997), Real-time Generation of Continuous LOD (Röttger et al., 1998), View-Dependent Progressive Meshes (Hoppe, 1998) and Out-of-Core Terrain Visualization (Lindstrom and Pascucci, 2001). Although an LOD technique can reduce the problem complexity, many polygons that are out of sight are still rendered, thus deteriorating system performance. Most of the abovementioned researchers have integrated their systems with a culling method at the vertex or polygon level in order to solve the problem. However, many computations are involved in implementing such methods, especially when facing large datasets.
In this report, a set of techniques is designed and proposed for handling a massive terrain dataset at interactive rates. The idea behind this research is to expand the capability of the triangle-based LOD technique and improve the performance of a real-time terrain visualization and navigation system. The triangle-based LOD technique has been chosen for this research because it has the potential to remove much of the terrain geometry and can speed up the system to interactive rates without degrading the visual appearance (Duchaineau et al., 1997; Hoppe, 1998; Pajarola, 1998; Röttger et al., 1998; Lindstrom and Pascucci, 2001).
Two separate phases are involved: pre-processing and run-time processing. In the first phase, the specific terrain data is converted to an internal data structure based on a tile representation; the original terrain data is partitioned into several small tiles, or patches. At run-time, tiles are loaded depending on the camera position. These tiles are then tested to determine their visibility status so that unseen blocks of data can be removed. Finally, geometric simplification of the visible terrain tiles is performed based on the triangle-based LOD technique in order to minimize the number of triangles to be processed and rendered.
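The run-time sequence described above (load tiles around the camera, cull tiles outside the view frustum, then simplify the survivors) can be sketched as follows. This is an illustrative Python sketch; the tile-index representation, the 3 x 3 loading neighbourhood and all names are assumptions for illustration, not taken from this report.

```python
def tiles_to_load(camera_tile, grid_size, radius=1):
    """Dynamic tile loading: indices of the tiles within `radius`
    tiles of the tile the camera currently occupies."""
    cx, cy = camera_tile
    return {(x, y)
            for x in range(max(0, cx - radius), min(grid_size, cx + radius + 1))
            for y in range(max(0, cy - radius), min(grid_size, cy + radius + 1))}

def cull_tiles(tiles, is_visible):
    """Tile-level view frustum culling: drop tiles whose bounding
    volume fails the visibility test."""
    return {t for t in tiles if is_visible(t)}

# Usage: camera in tile (2, 2) of a 5 x 5 grid; pretend the frustum only
# sees tiles with x >= 2. Only the survivors would then be passed to the
# triangle-based LOD simplification stage.
loaded = tiles_to_load((2, 2), grid_size=5)        # 9 tiles around the camera
visible = cull_tiles(loaded, lambda t: t[0] >= 2)  # 6 tiles remain
```

The point of the staging is that each step shrinks the working set before the next, more expensive step runs: loading touches only a neighbourhood, culling touches only loaded tiles, and LOD simplification touches only visible tiles.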
1.2 Problem Background
There are three challenging issues in developing terrain-based applications: massive terrain datasets, geometric complexity and the limited capability of current triangle-based LOD techniques.
1.2.1 Massive Terrain Dataset
Various types of terrain datasets are widely used in GIS applications, including the Digital Elevation Model (DEM), Digital Terrain Elevation Data (DTED), satellite data and aerial photographs. Commonly, terrain consists of geometric and imagery data. Geometric data represents the locations or coordinates of the terrain's vertices, while imagery data serves as a texture map for producing a realistic 3D model. Large amounts of such terrain data have to be faced when dealing with GIS applications; it is not surprising if a dataset occupies Gigabytes or even Terabytes of storage (Gaede and Günther, 1998).

The use of such a massive dataset strains the capability of main memory to load and store data for visualization. A number of researchers have pointed out that it is impossible to hold the entire detailed dataset at once (Kofler, 1998; Pajarola, 1998). Besides that, accessing this data quickly enough to use it in real-time rendering is a difficult problem, which limits the performance of the system during interactive visualization. This is because when a new frame is generated, all polygons will be displayed whether they are within the view frustum or not, and it is time consuming to compute and render the unneeded data continuously in real-time (Hill, 2002).
1.2.2 Geometric Complexity
For any detailed terrain model, the number of triangles that must be rendered is prohibitively high. For instance, a 100 km x 100 km terrain sampled at 1-meter resolution and represented as a regular grid will contain roughly 20,000,000,000 triangles. At present, it is not possible to render anywhere near this full-resolution data in real-time. According to the research done by Hill (2002) on managing large terrains, rendering 20,000,000,000 triangles per second will not be feasible until 2010; furthermore, rendering this many triangles at 60 frames per second will not be possible until 2016. Figure 1.1 shows an example of visualizing an original terrain model with billions of vertices and polygons. The white lines show the triangles generated in wireframe mode and the black color is the background of the scene.

Figure 1.1 Visualization of full resolution terrain model (number of vertices = 16,848,344,430 and number of triangles = 33,695,786,468)
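The 20-billion figure follows from simple arithmetic: a regular grid of (n + 1) x (n + 1) height samples has n x n cells, and each cell is split into two triangles. A small Python check (the function name is illustrative):

```python
def grid_triangle_count(samples_per_side):
    """Triangles in a regular-grid terrain: two per grid cell."""
    cells_per_side = samples_per_side - 1
    return 2 * cells_per_side ** 2

# 100 km at 1-metre spacing gives 100,001 samples per side.
triangles = grid_triangle_count(100_001)  # 20,000,000,000
```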
With current graphics hardware, all communication between the graphics card and the CPU takes place over the memory bus. Presently, the primary bottleneck in rendering performance is the memory bandwidth of the bus: the graphics hardware is capable of rendering substantially more geometry per second than can be read in over the bus. In order to render 60 million triangles per second, over 5 Gigabytes of data would need to be transferred to the card each second. Given the maximum memory bandwidth and the number of triangles per second that can be transferred to the graphics card, as shown in Table 1.1, it is impossible to achieve the maximum rendering speed directly. Moreover, this bandwidth is also used for transferring texture data to the card, so the target of 60 million triangles per second is even further out of reach (Hoppe, 1999; Hill, 2002).
Table 1.1: Types of busses and the corresponding memory bandwidth and geometric throughput (Hill, 2002; Peng et al., 2004)

Type of Bus    Bandwidth Limitation (MB/s)    Geometric Throughput (Million triangles per second)
PCI            133                            1.385
PCI64          533                            5.552
AGP 1x         266                            2.771
AGP 2x         533                            5.552
AGP 4x         1066                           11.104
AGP 8x         2150                           22.396
PCI Express    4300                           44.792
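The 5-Gigabyte estimate above can be reproduced under a simple assumption of three independent (unshared) vertices per triangle and about 32 bytes per vertex for position, normal and texture coordinates; both constants are illustrative assumptions, not figures taken from Hill (2002).

```python
def bus_traffic_gb_per_second(triangles_per_second, bytes_per_vertex=32):
    """Bytes pushed over the memory bus per second, in GB/s, when every
    triangle is sent as three unshared vertices."""
    return triangles_per_second * 3 * bytes_per_vertex / 1e9

traffic = bus_traffic_gb_per_second(60_000_000)  # 5.76 GB/s
```

Indexed vertex buffers and vertex sharing lower this figure, which is one reason strip- and fan-based primitives (reviewed in Section 2.4.2) matter for geometric throughput.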
1.2.3 Limited Capability of Triangle-Based LOD Techniques
In order to deal with so much data, level of detail (LOD) techniques need to be employed. Theoretically, terrain close to the viewer must be presented with more geometry than terrain far away from the viewer, and rough terrain requires more geometry than flat terrain does. Frequently, a triangle-based LOD algorithm is applied to simplify the terrain geometry. Although this technique provides fast visualization through polygon simplification in real-time, it is not practical for very large terrain datasets because the amount of data to be loaded exceeds what can be stored in main memory. Even if the system is able to load the data, waiting for the loading process to complete is expensive (Hill, 2002).
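The distance criterion takes a simple form in, for example, the scheme of Röttger et al. (1998): a quadtree block of edge length d viewed from distance l is refined when l is small relative to d. The sketch below is a simplification; the constant, the threshold form and all names are illustrative, and the published criterion additionally accounts for surface roughness.

```python
def should_subdivide(distance_l, edge_length_d, quality_c=2.0):
    """Refine a terrain block when the viewer is closer than
    quality_c times the block's edge length."""
    return distance_l < edge_length_d * quality_c

# Usage: a block of edge length 10 is refined when viewed from
# distance 15, but rendered coarsely when viewed from distance 50.
near = should_subdivide(15.0, 10.0)  # True: refine
far = should_subdivide(50.0, 10.0)   # False: keep coarse
```

Raising quality_c refines more of the terrain (more triangles, better fidelity); lowering it trades fidelity for speed.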
1.3 Problem Statements
Given the problems mentioned earlier, the following question needs to be answered in this research:

How can a large terrain dataset be stored and displayed efficiently for a real-time visualization and navigation system within the restricted capacity of current main memory and memory bus bandwidth?
1.4 Goal
The research goal is to expand the capabilities of the triangle-based LOD technique by developing a set of techniques, using a hybrid method, that will enhance the efficiency of real-time 3D terrain visualization and navigation for the development of Virtual GIS.
1.5 Objectives
In order to accomplish the main goal, several objectives have to be fulfilled. These include:

(i) To design and construct an efficient data structure that is able to store an arbitrary-sized terrain model.

(ii) To develop an algorithm that provides fast data loading and reduces memory usage at run-time.

(iii) To develop a culling technique that is able to remove a large amount of terrain geometry.

(iv) To integrate the aforementioned data structure, loading algorithm and culling technique with the chosen triangle-based LOD technique into the real-time terrain visualization and navigation system.
1.6 Scopes
The research focuses on managing a large-scale terrain dataset in real-time. Several scopes need to be considered:

(i) The system manages a static terrain dataset; it does not include buildings, trees, rocks and other related objects.

(ii) The system focuses on managing terrain geometry. Terrain imagery is only used as a texture map to enhance the visual appearance.

(iii) Terrain datasets use the regular grid representation.

(iv) The triangle-based LOD technique is based on a view-dependent LOD technique.

(v) The system is developed in a real-time environment, which involves user interaction through the computer's keyboard and mouse on a desktop PC.
1.7 Organization of the Report
The report is organized into six chapters. Chapter 1 gives a general picture of the related research fields, the problem background and the existing solutions involved in developing real-time terrain visualization and navigation in Virtual GIS. Besides that, the goal, objectives and scopes of this research are specified.

Chapter 2 provides surveys of spatial data structures, visibility culling methods, intersection test techniques and view-dependent LOD techniques.

Chapter 3 presents the methodology of this research, which consists of four main steps: partitioning the original terrain data, loading tiles, culling invisible tiles and simplifying the geometry of visible tiles.

Chapter 4 describes the implementation details of the prototype system from pre-processing to run-time processing. For pre-processing, two pairs of steps are explained: reading a DEM file and creating TXT files, and reading TXT files and creating TILE files. For run-time processing, the section is divided into five subsections: creation of classes, initialization, rendering, interaction and shutdown.

Chapter 5 presents and discusses the results obtained from the implementation phase. System specifications and testing data are specified. Subsequently, the evaluations of the system and the comparison with existing techniques are explained in further detail.

Chapter 6 summarizes the research and includes recommendations for future work that can extend the prototype system.
CHAPTER 2
LITERATURE REVIEW
2.1 Introduction
In order to achieve the goal and objectives specified in the previous chapter, a literature review has been carried out. First, spatial data structures are reviewed in detail; the explanations cover the organization of the data and the indexing mechanism.

Then, visibility culling methods are reviewed. The explanations are divided into two parts: types of culling methods and intersection test techniques. The first part describes the culling methods that have been developed by previous researchers; the second part explains the techniques that can be used to detect the intersection between two different objects.

Lastly, triangle-based LOD techniques are reviewed in three main parts: terrain representation, geometric primitives and previous works on view-dependent, triangle-based LOD techniques.

Comparative studies were done for all the abovementioned data structures, culling methods and LOD techniques in order to identify the strengths and weaknesses of those approaches.
2.2 Spatial Data Structures
A spatial data structure is crucial for organizing large data in a systematic way. There are three main approaches that can be adopted: hashing, space-filling curves and hierarchical trees.
Hashing is one of the bucketing methods. In this approach, the records or data are sorted on the basis of the space that they occupy and are grouped into cells of a finite capacity (Hjaltason and Samet, 1995). The drawback of the hashing approach is that performance may degenerate depending on the given input data (Gaede and Günther, 1998). Examples of hashing methods are linear hashing (Larson, 1980; Litwin, 1980), extendible hashing (Fagin et al., 1979), the grid file (Nievergelt et al., 1984) and EXCELL (Tamminen, 1982).
The space-filling curve has been used extensively as a mapping scheme from multidimensional space into one-dimensional space. It is a thread that goes through all the points in the space while visiting each point only once (Mokbel et al., 2002). An advantage of the space-filling curve is that it creates an efficient indexing mechanism for fast data retrieval. However, this approach is difficult to integrate with different types of input data (Gaede and Günther, 1998). Examples of space-filling curves are the row-wise curve (Samet, 1990), Z-order (Orenstein and Merrett, 1984), the Hilbert curve (Faloutsos and Roseman, 1989; Jagadish, 1990) and Gray code (Faloutsos, 1988).
The hierarchical tree is an approach based on spatial decomposition. A tree structure has a root node, internal nodes and leaf nodes. Hierarchical methods are frequently used in many related applications due to their simplicity, scalability, well-organized structure and capability to handle different types and sizes of data (Gaede and Günther, 1998). For this reason, the remainder of this report focuses on hierarchical trees in order to create an efficient data structure.
2.2.1 Kd-Tree
The Kd-tree was introduced by Bentley (1975). It is also called a multidimensional binary tree. It stores data in the same way as a binary search tree: it represents a recursive subdivision in which each current node (parent node) produces two new nodes (child nodes) when it is split. Splitting is parallel to the Cartesian axes (x-axis and y-axis). The root node contains a vertical line through the median x-coordinate of the points, which splits the plane into a left and a right region. The left child node contains a horizontal line through the median y-coordinate of the points in the left region, which splits the left region into an upper and a lower region. Similarly, the right child node contains a horizontal line through the median y-coordinate of the points in the right region, which splits the right region into an upper and a lower region. The process continues, splitting the plane with vertical lines at nodes of even depth and horizontal lines at nodes of odd depth. Figure 2.1 depicts an example of plane subdivision and the corresponding Kd-tree.
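The alternating median splits described above can be sketched as follows. This is a minimal illustrative 2D build, not the thesis prototype code; all names are assumptions.

```python
# Minimal 2-D kd-tree build: split on x at even depth and on y at odd
# depth, always at the median point of the current axis.

def build_kdtree(points, depth=0):
    """Recursively subdivide a list of (x, y) points into a kd-tree."""
    if not points:
        return None
    axis = depth % 2                      # 0: vertical split (x), 1: horizontal (y)
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],             # median splitting point
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
```

The root always holds the x-median of the whole set, its children the y-medians of the two half-planes, mirroring the subdivision in Figure 2.1.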
From this basic idea, Bittner et al. (2001) applied the Kd-tree in their system to organize the whole scene and then integrated it with other data structures to construct a hierarchical visibility algorithm. Besides that, Langbein et al. (2003) used this technique in the visualization of large unstructured grids for computational fluid dynamics (CFD) in industrial applications.
Figure 2.1 Kd-tree
2.2.2 Quadtree
The hierarchical quadtree is based on the principle of recursive decomposition. It was proposed by Samet (1984). In general, the top-level node in this tree represents the area of the entire dataset, and each child in the quadtree represents one-fourth of the region of its parent (the Northwest, Northeast, Southwest and Southeast quadrants). Each node in the tree corresponds to a block of terrain with the same number of data points. The highest-resolution data blocks belong to the deepest level of the tree. Figure 2.2 shows an example of block decomposition with its corresponding tree structure.
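The recursive four-way decomposition can be sketched as below. The node layout and function names are illustrative assumptions, not the thesis implementation.

```python
# Illustrative quadtree decomposition of a square region to a fixed
# depth; each child covers one quarter (NW, NE, SW, SE) of its parent,
# as in Figure 2.2.

def build_quadtree(x, y, size, depth):
    """Return a nested dict covering the square at (x, y) with side `size`."""
    node = {"x": x, "y": y, "size": size, "children": []}
    if depth > 0:
        half = size / 2
        # NW, NE, SW, SE quadrants of the parent block
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            node["children"].append(
                build_quadtree(x + dx, y + dy, half, depth - 1))
    return node

def count_leaves(node):
    if not node["children"]:
        return 1
    return sum(count_leaves(c) for c in node["children"])

root = build_quadtree(0, 0, 256, depth=3)
```

Each extra level multiplies the leaf count by four, so a depth-3 tree over a 256-unit block yields 64 leaf blocks of side 32.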
Due to its simplicity, many researchers have employed this technique in their systems as one of the major components. Röttger et al. (1998) generated a hierarchical quadtree with 2D matrices for handling a multiresolution terrain model. Pajarola (1998) created a restricted quadtree algorithm in order to reduce the terrain geometry in the scene.
Figure 2.2 Quadtree
2.2.3 R-Tree
The R-tree is an alternative hierarchical access method that offers high performance for searching and retrieving spatial data rapidly in real-time. The R-tree is derived from the B+-tree. It was originally created by Guttman (1984). An R-tree comprises index records for each node, where leaf nodes reference the actual data items. The term minimal bounding box (MBB) or minimal bounding rectangle (MBR) is used to represent the search region of a spatial object. Leaf nodes in an R-tree contain index record entries of the form:
(R, data-item-pointer)
where R is an n-dimensional rectangle which is the bounding box of the indexed spatial object, and data-item-pointer refers to a record in the database. Non-leaf nodes, or interior nodes, contain higher-level index entries of the form:
(R, child-pointer)
where child-pointer is the address of a lower node in the R-tree. An R-tree must satisfy the following properties:
(i) Every leaf has between m and M entries, where m refers to the minimum number of entries in a node and M denotes the maximum number of entries in a node.
(ii) Leaf entries (R, data-item-pointer): R is the MBR of the object.
(iii) Every non-leaf has between m and M children.
(iv) Non-leaf entries (R, child-pointer): R is the MBR of the entries in the child node.
(v) The root (if it is not a leaf) has at least two children.
(vi) All leaves are on the same level.
In general, searching an R-tree is done by finding all index records whose MBRs overlap a search rectangle. The algorithm starts at the root and recursively follows all entries in the current node whose MBRs overlap the search rectangle. At the end of each path, a leaf node is processed and the MBRs of all entries in that leaf are tested against the search pattern. The index records found to have an overlapping rectangle are stored as a list of result candidates. Figure 2.3 illustrates an R-tree structure, where N refers to an index record and R refers to a data entry.
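The recursive range search just described can be sketched as follows. The dict-based node layout and names are assumptions made for illustration only.

```python
# Sketch of R-tree range search: descend only into entries whose MBR
# overlaps the query rectangle; at leaves, report overlapping items.

def overlaps(a, b):
    """True if axis-aligned rectangles (xmin, ymin, xmax, ymax) intersect."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def search(node, query, results):
    if "items" in node:                        # leaf: test each data entry
        for mbr, item in node["items"]:
            if overlaps(mbr, query):
                results.append(item)
    else:                                      # interior: recurse on overlapping MBRs
        for mbr, child in node["children"]:
            if overlaps(mbr, query):
                search(child, query, results)
    return results

leaf1 = {"items": [((0, 0, 2, 2), "A"), ((3, 3, 4, 4), "B")]}
leaf2 = {"items": [((8, 8, 9, 9), "C")]}
root = {"children": [((0, 0, 4, 4), leaf1), ((8, 8, 9, 9), leaf2)]}
found = search(root, (1, 1, 3, 3), [])
```

Note that the subtree under leaf2 is never visited because its MBR fails the overlap test at the root, which is the source of the R-tree's search efficiency.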
Figure 2.3 R-tree
Due to its potential use in virtual GIS, the R-tree has been implemented in numerous applications. In the SATURN system, a modified R-tree (a combination of R-tree and Kd-tree) was produced to manage various types of objects in the scene (Yu et al., 1999). Pajarola (1998) used the R-tree for clustering 3D objects based on the viewer position. Kofler (1998) created an R-tree-based LOD for managing the terrain dataset in the Vienna Walkthrough System. Zlatanova (2000) developed a 3D R-tree for LOD and spatial indexing in his 3D urban system.
2.2.4 Tiled Block
Tiled block is a simple data structure that partitions the terrain data into square patches or tiles (Figure 2.4 and Figure 2.5). It can be used for handling large geometry and imagery data. Two approaches are commonly applied: naïve and overlapping tiles (VTP, 1999). The difference between the two is the dimension or size of the data used. The dimension for the naïve approach is a power of two (2^n), while the overlapping tile approach uses a power of two plus one (2^n + 1). The second approach is frequently implemented in the visualization of large-scale data due to its capability to reduce and avoid the occurrence of cracks between the boundaries of tiles.
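A minimal sketch of the overlapping-tile layout follows: tiles of (2^k + 1) points share their boundary rows and columns, which is what prevents cracks between neighbours. The function and names are illustrative assumptions, not the thesis code.

```python
# Overlapping-tile partitioning sketch: a (2^n + 1)-point height grid
# is cut into (2^k + 1)-point tiles whose edges share one row/column
# of points with the next tile.

def tile_origins(grid_size, tile_size):
    """Top-left indices of overlapping tiles covering a square grid.

    grid_size and tile_size are point counts of the form 2^n + 1;
    consecutive tiles overlap by exactly one row/column.
    """
    step = tile_size - 1                  # shared boundary -> step is 2^k
    return [(r, c)
            for r in range(0, grid_size - 1, step)
            for c in range(0, grid_size - 1, step)]

# A 257x257 grid (2^8 + 1) cut into 65x65 tiles (2^6 + 1)
origins = tile_origins(257, 65)
```

A 257-point grid yields a 4x4 arrangement of 65-point tiles; each tile's last row is its neighbour's first row, so their borders always agree.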
Lindstrom et al. (1996) implemented this approach in their block-based simplification for real-time continuous LOD rendering of height fields. Lindstrom et al. (1997) then extended their research by developing a tile data structure in an integrated global GIS and visual simulation system based on a globe or spherical representation. Pajarola (1998) developed a dynamic scene management scheme, which consists of a two-dimensional array of terrain tiles (also known as a scene map) for the visible scene in his ViRGIS. Ulrich (2002) implemented the tile approach for both textures and terrain elevation data in his Chunk LOD.
Figure 2.4 Partitioning data into tiles
Figure 2.5 Tree structure for tiles
2.2.5 Comparative Studies
The comparison is made in order to distinguish the capabilities of each technique described earlier. The advantages and disadvantages of each are stated, and the evaluation is based on the investigations of related techniques by previous researchers.
The Kd-tree is a simple data structure for managing data. Although the structure can minimize main memory usage, it is sensitive to the order in which the points are inserted. Furthermore, data points are scattered all over the tree (Gaede and Günther, 1998). Searching and insertion of new nodes are straightforward, but deletion may cause reorganization of the tree under the deleted node, which makes it more complicated to handle. In the Kd-tree data structure, the decomposition depends on the positions of the points, thus resulting in non-uniform spaces. Consequently, it does not divide the plane at the best possible positions and produces an unbalanced tree (Ahn et al., 2001).
The quadtree is one of the earliest proposed techniques and is still in use today. It has the ability to manage multidimensional data and vast database capacities with its simple representation. However, the resolution of the data used in a quadtree is limited by the size of the basic grid cell (Guttman, 1984). Besides that, each node in a quadtree needs additional space for the pointers to its sons. This space is frequently wasted on null pointers for each invisible node, and consequently overhead will occur (Samet, 1984; Burrough and McDonnell, 1998).
The dynamic data structure of the R-tree (Guttman, 1984) makes it attractive for many fields. It allows efficient dynamic updates, handling of large geographic databases and range searching (Sheng and Liu Sheng, 1990). The disadvantage of this structure is that complex algorithms are used to split nodes, resulting in either slow insertions or poor bounding boxes. Furthermore, the handling of overlapping is difficult to implement (Kofler, 1998).
Tiled block is easy to understand and implement. In general, it is crucial for handling massive data with limited main memory (Peng et al., 2004). Memory can be managed efficiently in combination with existing optimization techniques. However, because the data size needs to be of the form 2^n or 2^n + 1, unnecessary data will be generated, thus consuming a lot of storage space (VTP, 1999). Besides that, when this approach is integrated with a multiresolution technique, cracks will occur automatically (VTP, 1999; Ulrich, 2002), and another algorithm needs to be introduced in order to solve the problem.
2.3 Visibility Culling Methods
Visibility culling is one of the optimization techniques often adopted in computer graphics to enhance system performance. It was developed in order to reduce the complexity of large geometric data, especially in real-time applications that require a high frame rate. In the next subsections, the types of culling methods are reviewed in further detail.
2.3.1 Types of Culling Methods
The main objective of visibility culling methods is to remove as many unneeded objects or data as possible. This accelerates the rendering process at run-time. In general, visibility culling can be categorized into backface culling, detail culling, occlusion culling and view frustum culling.
2.3.1.1 Backface Culling
Backface culling is based on the observation that if all objects in the world are closed, then the polygons that do not face the viewer cannot be seen. Whether a polygon faces toward or away from the viewer can be determined from the angle between the view direction and the normal of the polygon (Laurila, 2004). If the angle is more than 90 degrees, the polygon can be discarded (Figure 2.6). Backface culling is often performed by the rendering Application Programming Interface (API), such as Direct3D or OpenGL.
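A minimal sketch of the test follows. It assumes the angle is measured against the vector from the polygon toward the viewer, so an angle above 90 degrees corresponds to a negative dot product; the vector convention and names are illustrative, not taken from the thesis.

```python
# Minimal backface test: a polygon whose normal makes an angle of more
# than 90 degrees with the polygon-to-viewer vector (negative dot
# product) faces away from the viewer and can be culled.

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_backface(normal, to_viewer):
    """True when the polygon faces away from the viewer."""
    return dot(normal, to_viewer) < 0

front = is_backface((0, 0, 1), (0, 0, 1))    # normal toward viewer: keep
back = is_backface((0, 0, -1), (0, 0, 1))    # normal away from viewer: cull
```

For closed meshes this discards roughly half of the polygons before any further processing; rendering APIs typically implement the equivalent test from the triangle winding order instead of explicit normals.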
Figure 2.6 Vector angle between the view direction and the polygon normals
2.3.1.2 Small Feature Culling
Detail culling, or small feature culling, is a method that sacrifices quality to achieve fast rendering. It is based on the size of the projected bounding volume. If the object appears too small, given the distance between the viewpoint and the position of the object, it can be discarded (Haines, 2001; Shen, 2004).
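The test can be sketched as below: estimate the on-screen size of the object's bounding sphere from its radius and distance, and skip it when the projection falls below a pixel threshold. The perspective-projection factor used here is a common approximation; all parameter values and names are assumptions for illustration.

```python
# Detail (small feature) culling sketch: cull objects whose bounding
# sphere projects to fewer pixels than a chosen threshold.

import math

def projected_size(radius, distance, fov_y, viewport_height):
    """Approximate screen-space diameter (pixels) of a bounding sphere."""
    if distance <= 0:
        return float("inf")               # at or behind the eye: keep it
    return (2 * radius / distance) * viewport_height / (2 * math.tan(fov_y / 2))

def cull_small_feature(radius, distance, fov_y=math.radians(60),
                       viewport_height=768, threshold_px=2.0):
    """True when the object is too small on screen to be worth drawing."""
    return projected_size(radius, distance, fov_y, viewport_height) < threshold_px
```

A 1 cm object 100 m away is culled under these settings, while a 5 m object 10 m away is kept; raising the threshold trades more quality for more speed.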
This culling method was implemented in the development of OpenSceneGraph (Burns and Osfield, 2001) and the ABB Robotics product (Assarsson, 2001). Figure 2.7 depicts the related objects and the implementation of small feature culling by Assarsson (2001). According to the figures, there is not much visual difference, but rendering is actually 80 to 400 percent faster than for the original scene.
Figure 2.7 Small feature culling: a) OFF, b) ON
2.3.1.3 Occlusion Culling
Occlusion culling is a method that discards objects which are occluded. A few previous works have been done on this method. Stewart (1997) integrated hierarchical occlusion culling in his terrain model in order to test the visibility of terrain surfaces based on the camera position. Mortensen (2000) incorporated this method into his real-time height-field rendering system, which consists of different LODs. Martens (2003) developed occlusion culling for the real-time display of complex 3D models. Figure 2.8 shows an example of occlusion culling.
Figure 2.8 Occlusion culling of occluded object
2.3.1.4 View Frustum Culling
View frustum culling is based on the fact that only the objects in the current view volume need to be rendered. The view volume is defined by six planes, namely the near, far, left, right, top and bottom clipping planes, which together form a truncated pyramid (Figure 2.9).
Figure 2.9 Camera's view frustum
If an object is entirely outside the view volume, it cannot be visible and can be discarded, as illustrated in Figure 2.10. If an object is partially inside (it intersects the view frustum), then it is clipped, or a special refinement method needs to be applied to the related planes so that the outside parts are removed.
Figure 2.10 The red objects lie completely outside the view frustum and will be discarded from rendering when view frustum culling is performed
Previous works have been carried out by several researchers in order to achieve higher system performance. Hoff (1996) explored effective intersection mechanisms between the view frustum and an Axis-Aligned Bounding Box (AABB) or Oriented Bounding Box (OBB). Most terrain rendering algorithms (Duchaineau et al., 1997; Röttger et al., 1998) use this method to reduce the number of polygons in a scene. Assarsson and Möller (2000) proposed one basic intersection test and four optimization techniques (the plane-coherency test, octant test, masking, and translation-and-rotation coherency test) related to the fundamental comparison of the view frustum with the bounding volumes of objects.
2.3.1.5 Comparative Studies
The comparison is made in order to identify the strengths and drawbacks of the four visibility culling methods explained earlier. The information has been collected from various researchers (Stewart, 1997; Mortensen, 2000; Assarsson, 2001; Burns and Osfield, 2001; Martens, 2003; CGL, 2004; Laurila, 2004; Shade, 2004). Table 2.1 summarizes the advantages and disadvantages of backface, small feature, occlusion and view frustum culling.
Table 2.1: Advantages and disadvantages of visibility culling methods

Backface culling
  Advantages: fast computation; discards roughly half of the polygons inside the view frustum.
  Disadvantages: not a complete solution if the objects are not convex.

Small feature culling
  Advantages: speeds up the system; efficient when movement is made.
  Disadvantages: decreases the quality of the scene.

Occlusion culling
  Advantages: capable of removing many occluded objects efficiently.
  Disadvantages: not all algorithms work well on all kinds of scenes; many steps and complex calculations are involved.

View frustum culling
  Advantages: easy to implement; easy to integrate with other techniques; effective when only a small part of the scene is inside the view frustum.
  Disadvantages: additional optimization techniques are required for complex environments with a lot of geometry inside the view frustum.
2.3.2 Intersection Test Techniques
The intersection test is a main component of the culling process. In general, it depends on two criteria: the tightness of the input geometric primitives and the speed of the intersection test (Klosowski et al., 1998). An intersection test can be point-based or bounding-volume-based. A point-based intersection test can be performed on a huge polygonal model, but the test itself would probably become slow (Morley, 2000). Therefore, the bounding-volume-based intersection test is the better way to handle complex models.
The purpose of this process is to determine whether an object's location is entirely inside, outside or partially inside/outside. According to recent research works, four types of bounding volumes can be used to enclose objects: the Axis-Aligned Bounding Box (AABB), Oriented Bounding Box (OBB), Bounding Sphere (BS) and k-Discrete Orientation Polytope (k-DOP). Detailed information on these bounding volumes is given in the next subsections.
2.3.2.1 Axis Aligned Bounding Box
The basic idea came from Beckmann et al. (1990) in their attempt to develop an efficient and robust access method using the R*-tree. They manipulated the hierarchical tree in order to minimize the area of each enclosing rectangle (MBR) for spatial objects. In this technique, the box is parallel to the world axes. It needs eight corner points, or one minimum and one maximum point, to form the bounding volume. The main process is to test all eight corners against each plane and count how many corners are in front of the plane. If none of the corners is in front, the entire box is outside the view frustum. If all of the corners are in front of every plane, the entire box is inside the view frustum. Otherwise, an intersection has occurred (Morley, 2000). Figure 2.11 illustrates the bounding box of an object.
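The corner-counting classification can be sketched as follows. Planes are assumed to be (a, b, c, d) with "in front" meaning a·x + b·y + c·z + d ≥ 0; the names and example frustum are illustrative, not from the thesis.

```python
# AABB-vs-frustum test by corner counting: all corners behind any one
# plane -> outside; all corners in front of every plane -> inside;
# anything else -> intersecting.

from itertools import product

def classify_aabb(box_min, box_max, planes):
    """Return 'outside', 'inside' or 'intersect' for an AABB."""
    corners = list(product(*zip(box_min, box_max)))   # the 8 box corners
    fully_inside = True
    for a, b, c, d in planes:
        in_front = sum(1 for x, y, z in corners
                       if a * x + b * y + c * z + d >= 0)
        if in_front == 0:
            return "outside"              # every corner behind this plane
        if in_front < 8:
            fully_inside = False          # plane cuts through the box
    return "inside" if fully_inside else "intersect"

# Example "frustum": the axis-aligned box 0..10, as six inward-facing planes
box_planes = [(1, 0, 0, 0), (-1, 0, 0, 10), (0, 1, 0, 0),
              (0, -1, 0, 10), (0, 0, 1, 0), (0, 0, -1, 10)]
```

Note the known limitation mentioned in Table 2.2: a box outside the frustum but not fully behind any single plane is misreported as intersecting (a false positive), which is conservative but never culls a visible object.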
Figure 2.11 AABB of an object
2.3.2.2 Oriented Bounding Box
The Oriented Bounding Box (OBB) has become a popular bounding volume technique. It surrounds an object with a bounding box whose orientation is arbitrary with respect to the coordinate axes; it is based on a local coordinate system.
One leading publicly available system for performing collision detection among arbitrary polygonal models is the RAPID (Rapid and Accurate Polygon Interference Detection) system, which is based on a hierarchy of OBBs called the OBBTree, implemented by Gottschalk et al. (1996). They introduced a fast box-box intersection test whose main parameters are the radii (half-dimensions of the box), the unit vectors for each radius, and a unit vector parallel to the axis being tested. Given that ai is the length of a radius, Ai is the unit vector of that radius and L is a unit vector parallel to the axis, the radius r of the bounding box is obtained from the formula:

r = Σi ai |Ai • L|        (2.1)
In the view frustum-OBB intersection test, the eight corner points need to be tested against the planes, as with the AABB. If the computed distance is more than r, the object is outside the view frustum; if the distance is more than -r (but not more than r), the object intersects; otherwise, the object is entirely inside the view frustum (Assarsson and Möller, 2000). Figure 2.12 depicts a bounding box enclosing an object.
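Equation (2.1) can be evaluated directly as below. The absolute value keeps the projected radius non-negative regardless of the sign of each dot product; vectors are plain 3-tuples and the axis-aligned example box is an assumption for illustration.

```python
# Projected radius of an OBB onto an arbitrary unit axis L (Eq. 2.1):
# sum each half-dimension a_i times |A_i . L|, where A_i is the unit
# vector of that box axis.

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def obb_radius(half_sizes, axes, L):
    """Projected radius r of an OBB onto unit vector L."""
    return sum(a * abs(dot(A, L)) for a, A in zip(half_sizes, axes))

# Box with half-dimensions (2, 3, 4) whose axes happen to align with the world
axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
r_x = obb_radius((2, 3, 4), axes, (1, 0, 0))    # extent along x
r_z = obb_radius((2, 3, 4), axes, (0, 0, 1))    # extent along z
```

Comparing the signed plane distance of the box centre against r and -r then yields the outside/intersect/inside classification described above.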
Figure 2.12 OBB of a rotated object
2.3.2.3 Bounding Sphere
The sphere is among the simplest geometric structures for representing an object. The sphere-tree was developed by Hubbard (1995) in order to solve collision detection for interactive applications; he integrated a hierarchical octree with bounding spheres. A bounding sphere is defined by a 3D coordinate representing the center of the sphere and a scalar radius that defines the maximum distance from the center of the sphere to any point in the object (Dunlop, 2001).
For the view frustum-bounding sphere intersection test, if the computed distance to a plane is less than the negative radius of the sphere, the sphere is outside the view frustum. Otherwise, it is either entirely inside or partially inside, depending on an intersection counter. If the value of the counter is six, the object is entirely inside the view frustum; otherwise, an intersection has occurred (Morley, 2000; Picco, 2003). Figure 2.13 shows the bounding sphere of an object.
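The counter-based test just described can be sketched as follows. Planes are assumed to be (a, b, c, d) with unit-length inward-facing normals; the example frustum and names are illustrative only.

```python
# Sphere-vs-frustum test: behind any plane by more than the radius ->
# outside; clear of all six planes by at least the radius (counter
# reaches six) -> fully inside; otherwise intersecting.

def classify_sphere(center, radius, planes):
    inside_count = 0
    for a, b, c, d in planes:
        dist = a * center[0] + b * center[1] + c * center[2] + d
        if dist < -radius:
            return "outside"              # completely behind this plane
        if dist >= radius:
            inside_count += 1             # clear of this plane by >= radius
    return "inside" if inside_count == 6 else "intersect"

# Example "frustum": the axis-aligned box 0..10, as six inward-facing planes
box_planes = [(1, 0, 0, 0), (-1, 0, 0, 10), (0, 1, 0, 0),
              (0, -1, 0, 10), (0, 0, 1, 0), (0, 0, -1, 10)]
```

Only one signed distance per plane is needed, which is why Table 2.2 rates the bounding sphere as the fastest of the four tests.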
Figure 2.13 BS of an object
2.3.2.4 k-Discrete Orientation Polytope
The k-Discrete Orientation Polytope (k-DOP) was proposed by Klosowski et al. (1998) to handle complex polygonal convex hulls effectively. It minimizes the bounding volume that encloses the object, depending on the value of k: the higher the value of k, the tighter the fit of the bounding volume. In their research, they produced 6-dops (AABBs), 14-dops, 18-dops and 26-dops. The intersection test can be done by testing the surfaces of the bounding volume against each frustum plane. Figure 2.14 depicts an example of a hierarchical k-DOP with 6-dops, 14-dops, 18-dops and 26-dops.
Figure 2.14 Aircraft model and the corresponding k-dops: a) Level 1, b) Level 2, c) Level 3, d) Level 4
2.3.2.5 Comparative Studies
The comparison of intersection test techniques has been made according to previous works (Gottschalk et al., 1996; Klosowski et al., 1998; Morley, 2000). The information is tabulated in Table 2.2.
Table 2.2: Comparison of intersection test techniques

Accuracy
  AABB: not accurate (returns false positives).
  OBB: not accurate (returns false positives).
  BS: accurate (no false positives).
  k-DOP: accurate (tightly fits the object shape).

Parameters involved
  AABB: eight corner vertices of the box.
  OBB: eight corner vertices of the box.
  BS: center coordinates and radius of the sphere.
  k-DOP: depends on the k value.

Memory requirement
  AABB: takes more memory.
  OBB: takes more memory.
  BS: takes less memory.
  k-DOP: takes more memory.

Object handling
  AABB: not suitable for long, thin oriented objects (produces empty space).
  OBB: suitable for long, thin oriented objects.
  BS: not suitable for long, thin oriented objects (produces empty space).
  k-DOP: suitable for complex convex objects.

Moving object handling
  AABB: needs to redefine the bounding box area.
  OBB: needs to redefine the bounding box area.
  BS: no need to redefine.
  k-DOP: needs to redefine the polytope area.

Speed
  AABB: fast.
  OBB: slow.
  BS: very fast.
  k-DOP: slow.

Implementation complexity
  AABB: simple.
  OBB: simple.
  BS: simple.
  k-DOP: difficult due to its complex structure.
2.4 View-Dependent LOD Techniques
The purpose of a view-dependent LOD technique is to build meshes with a minimum number of generated polygons through an approximation model. This is done continuously on-the-fly, and the viewer's position (viewpoint) plays an important role as an element that influences the updating of the polygon mesh in each frame. The computation of the deviation between corresponding points on the two meshes in object space is needed for measuring the level of terrain roughness (Lindstrom and Pascucci, 2001). It depends on the approach used to represent the terrain data, whether a top-down approach (refinement methods) or a bottom-up approach (decimation methods). The next subsections explain terrain representation, geometric primitives and previous research works carried out over the last few years.
2.4.1 Terrain Representation
In computer graphics, terrain can be represented as a regular grid or a triangulated irregular network (TIN). Each representation has its own capabilities for modeling terrain. Detailed information about these representations is given in the next subsections.
2.4.1.1 Regular Grid
The regular grid is a straightforward method. Fundamentally, it divides the terrain data into several blocks, which are then sub-sampled to obtain lower-resolution models (Blekken and Lilleskog, 1997). It uses an array of height values at regularly spaced x and y coordinates. There are many variations in generating the regular grid depending on the developed system, as depicted in Figure 2.15. For example, this representation was used by Lindstrom et al. (1996), Duchaineau et al. (1997) and Röttger et al. (1998) in testing their proposed techniques.
Figure 2.15 Variations of grid representations
2.4.1.2 Triangulated Irregular Network
A triangulated irregular network (TIN) is a contiguous representation of the terrain consisting of non-overlapping, arbitrarily sized triangles (Blekken and Lilleskog, 1997). It allows variable spacing between vertices. This representation has been adopted in view-dependent progressive meshes by Hoppe (1998) and in multi-triangulation (DeFloriani et al., 2000). Figure 2.16 shows the TIN-based geometric representation of a terrain model.
2.4.1.3 Advantages and Disadvantages
The comparison between the regular grid and the TIN has been made to identify the strengths and weaknesses of each terrain representation. It is compiled from various sources, including Kumler (1994), Blekken and Lilleskog (1997), Ögren (2000) and He (2000).
The regular grid is a simple representation that is easy to store and manipulate. It is easy to integrate with raster databases and file formats such as the DEM, DTED and TIFF formats. Besides that, it requires less storage for the same number of points, because only an array of elevation values needs to be stored rather than full coordinates (x, y, z). It provides fast rendering using triangle meshes (triangle strips or triangle fans). Unfortunately, it tends to be far less optimal than the TIN approach because the same resolution is used across the entire terrain.
On the other hand, a TIN can approximate a surface to a required accuracy with fewer polygons than a regular grid. Furthermore, it offers great flexibility in the range and accuracy of the features that can be modeled, for example ridges, valleys, caves and coastlines. Nevertheless, a TIN makes related functions such as view culling, collision detection, paging and dynamic deformation more complex because of the arbitrary positions or coordinates of the data points. Besides that, it needs a lot of memory to store the data, and the use of independent triangles for tessellating the whole dataset decreases the system performance (rendering speed).
2.4.2 Geometric Primitives
All geometric primitives are eventually described in terms of their vertices, which are the coordinates that define the points themselves, the endpoints of line segments or the corners of polygons (Woo et al., 1999). In order to display 3D objects, the use of these primitives is very important. The explanation focuses on how to construct polygons from the coordinates of 3D geometric objects. The basic representation of 3D objects can be done in triangle form. In current graphics APIs such as OpenGL and DirectX, the triangle representation is utilized in three ways: independent triangles, triangle fans and triangle strips. The following subsections describe these geometric primitives in detail.
2.4.2.1 Independent Triangles
Triangles are formed by drawing the sequence of three vertices v0, v1, v2, then v3, v4, v5, and so on (Figure 2.17). If the number of vertices n is not an exact multiple of three, the final one or two vertices are ignored.
Figure 2.17 Independent triangles
2.4.2.2 Triangle Fans
Triangles are formed by drawing the three vertices v0, v1, v2, then v0, v2, v3, then v0, v3, v4, and so on (Figure 2.18). Vertex v0 is important because it acts as the center of the fan.
Figure 2.18 Triangle fan
2.4.2.3 Triangle Strips
Triangles are formed by drawing the three vertices v0, v1, v2, then v2, v1, v3, then v2, v3, v4, and so on (Figure 2.19). This ordering ensures that the triangles are all drawn with the same orientation, so that the strip can correctly form part of a surface. Preserving the orientation is crucial for some operations, such as culling.
Figure 2.19 Triangle strip
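The three vertex orderings above can be generated explicitly as index lists. This is an illustrative sketch: n vertices yield n//3 independent triangles but n - 2 triangles as a fan or strip, which is why fans and strips are preferred for terrain meshes.

```python
# Triangle index generation for the three primitive layouts.

def independent(n):
    """Independent triangles: (v0,v1,v2), (v3,v4,v5), ...; leftovers ignored."""
    return [(i, i + 1, i + 2) for i in range(0, n - n % 3, 3)]

def fan(n):
    """Triangle fan: (v0,v1,v2), (v0,v2,v3), ... with v0 as the fan center."""
    return [(0, i, i + 1) for i in range(1, n - 1)]

def strip(n):
    """Triangle strip: (v0,v1,v2), (v2,v1,v3), (v2,v3,v4), ...
    Odd-numbered triangles swap their first two vertices to preserve
    a consistent winding orientation."""
    tris = []
    for i in range(n - 2):
        if i % 2 == 0:
            tris.append((i, i + 1, i + 2))
        else:
            tris.append((i + 1, i, i + 2))
    return tris
```

For the same six vertices, the strip and fan emit four triangles while independent triangles emit only two, so strips and fans also send each shared vertex once instead of repeating it.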
2.4.2.4 Comparative Studies
There are many aspects to consider when choosing the type of geometric primitive to use. They include the triangle count, the triangle definition, redundancy, performance and complexity. The comparison is summarized in Table 2.3, where n is the total number of vertices.
Table 2.3: Comparison among the geometric primitives

Triangle count
  Independent triangles: n/3.
  Triangle fan: n - 2.
  Triangle strip: n - 2.

Definition of triangle n
  Independent triangles: vertices 3n-2, 3n-1 and 3n.
  Triangle fan: vertices 1, n+1 and n+2.
  Triangle strip: for odd n, vertices n, n+1 and n+2; for even n, vertices n+1, n and n+2.

Redundant data
  Independent triangles: yes.
  Triangle fan: no.
  Triangle strip: no.

Performance
  Independent triangles: slow.
  Triangle fan: faster.
  Triangle strip: fastest.

Complexity of implementation
  Independent triangles: easy.
  Triangle fan: easy.
  Triangle strip: difficult, because an efficient indexing mechanism is needed.
2.4.3 Previous Works
Rendering algorithms are necessary to reduce the number of generated polygons in a large-scale scene. The purpose of this operation is to obtain a higher frame rate, especially in real-time visualization and navigation. In the next subsections, four popular triangle-based LOD techniques are explained in detail.
2.4.3.1 Real-time Optimally Adapting Meshes
Duchaineau et al. (1997) proposed a technique called Real-time Optimally Adapting Meshes (ROAM). ROAM is commonly used in game development; for example, several game companies such as Trade Marks, Genesis3D and Crystal Space have implemented it (Luebke et al., 2002). ROAM uses an incremental priority-based approach with a binary triangle tree (bintree) structure. A bintree is a tree in which each node is a right triangle (an isosceles triangle with one right angle) and has either zero or two children. The children are formed by splitting a node along the midline of the triangle (the line between the apex and the center of the base). Figure 2.20 shows the steps of building a bintree with a depth of six.
Figure 2.20 Levels 0-5 of a triangle bintree
A continuous mesh is produced using this structure by applying a series of split and merge operations on triangle pairs that share their hypotenuses, referred to as diamonds (Figure 2.21).
Figure 2.21 Split and merge operations on a bintree triangulation
The ROAM algorithm uses two priority queues to drive the split and merge operations. One queue maintains a priority-ordered list of triangle splits, so that refining the terrain simply means repeatedly splitting the highest-priority triangle on the queue. The second queue maintains a priority-ordered list of triangle merge operations to simplify the terrain. This allows ROAM to take advantage of frame coherence. Besides that, ROAM automatically eliminates the cracks between two different blocks of terrain by using a forced split operation which takes the neighboring triangles into consideration. A neighboring triangle must have the same level of resolution as the current triangle; if this condition is not fulfilled, recursive triangle subdivision needs to be done. The forced splitting of a triangle is shown in Figure 2.22.
Figure 2.22 Forced splitting process
The priority of the splits and merges in the queues is determined using a number of error metrics. The principal metric is a screen-based geometric error that provides a guaranteed bound on the error. This is done by using a hierarchy of bounding volumes called wedgies that envelop each triangle. A wedgie covers the (x, y) extent of a triangle and extends over a height range from z - eT to z + eT, where z is the height of the triangle at each point and eT is the wedgie thickness, all in world-space coordinates.
A pre-processing step is performed to calculate appropriate wedgies that are tightly nested throughout the triangle hierarchy, thus providing a guaranteed error bound. Figure 2.23 illustrates a side view of the wedgies in a triangle mesh. At runtime, each triangle's wedgie is projected into screen space, and the bound is defined as the maximum length of the projected thickness segments over all points in the triangle. This bound is used to form the queue priorities and could potentially incorporate a number of other metrics, such as backface culling and view frustum culling.
Figure 2.23 Wedgies in ROAM implementation
Duchaineau et al. (1997) have constructed two algorithms that generate
optimized triangulations: the split queue and the merge queue. The first algorithm is based on a
rough triangulation that is iteratively refined by a sequence of forced splits. A
priority split queue (Qs) contains monotonic priorities (pf(T)) for all triangles in the
bintree to determine the order of these splits. The priority of a triangle (T) is
determined by the error caused by using T instead of the finest triangulation. The
detailed algorithm of the split queue is listed in Algorithm 2.1. Here, |T| denotes the
number of triangles in T and E(T) denotes the maximum error over all triangles in T.
The first two statements in the while loop create the triangulation, while the last two
update the split queue.
Input:
(i)   A base triangulation (T)
(ii)  An empty priority queue (Qs)
(iii) Maximum number of triangles (Nmax)
(iv)  Maximum error (εmax)
Output:
An optimal triangulation
Pseudo code:
1.  for all triangles T ∈ T do
2.      insert T into Qs
3.  end for
4.  while |T| < Nmax or E(T) > εmax do
5.      identify highest-priority T in Qs
6.      split(T)
7.      remove T and other split triangles from Qs
8.      add any new triangles in T to Qs
9.  end while
Algorithm 2.1: Pseudo code for split queue
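The greedy split-queue refinement of Algorithm 2.1 can be sketched in Python using a binary heap as the priority queue. This is an illustrative sketch, not Duchaineau et al.'s implementation: the `priority` and `split` callbacks, the triangle ids and the stopping rule (stop once the triangle budget is reached or the worst remaining error is within tolerance) are simplifying assumptions.

```python
import heapq

def refine(triangles, priority, split, n_max, eps_max):
    """Greedy refinement driven by a split queue (cf. Algorithm 2.1).

    triangles: initial triangle ids; priority(t): error bound for t;
    split(t): the triangles replacing t when it is split.
    """
    # heapq is a min-heap, so priorities are negated to pop the
    # highest-priority (largest-error) triangle first.
    queue = [(-priority(t), t) for t in triangles]
    heapq.heapify(queue)
    mesh = set(triangles)
    # Stop once the triangle budget is reached or the worst remaining
    # error is within tolerance.
    while queue and len(mesh) < n_max and -queue[0][0] > eps_max:
        _, t = heapq.heappop(queue)
        if t not in mesh:
            continue  # stale entry: t was already replaced
        mesh.remove(t)
        for child in split(t):
            mesh.add(child)
            heapq.heappush(queue, (-priority(child), child))
    return mesh
```

A full ROAM implementation would pair this with the merge queue of Algorithm 2.2 so that work from the previous frame can be reused rather than refining from the base mesh each time.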
A second priority queue, the merge queue (Qm), contains the priorities for
all mergeable diamonds in the current triangulation. The priority of a mergeable
diamond (T, TB) is defined as the maximum priority, mp = max{pf(T), pf(TB)}, of the
individual triangles. The detailed algorithm of the merge queue is listed in Algorithm
2.2. Here, Smax(T) denotes the maximum split priority and Mmin(T) denotes the
minimum merge priority. In general, the algorithm computes the minimum number
of split and merge operations; it exploits coherence between consecutive frames to
reduce the time needed to calculate and generate triangles for the next frame.
Input:
(i)   A base triangulation (T)
(ii)  An empty priority split queue (Qs) and merge queue (Qm)
(iii) Maximum number of triangles (Nmax)
(iv)  Maximum error (εmax)
Output:
An optimal triangulation for all frames
Pseudo code:
1.  for all frames f do
2.      if f = 0 then do
3.          compute priorities for T’s triangles and diamonds
4.          insert them into Qs and Qm respectively
5.      else
6.          let T = Tf-1
7.          update the priorities for all elements of Qs and Qm
8.      end if
9.      while |T| > Nmax or E(T) > εmax or Smax(T) > Mmin(T) do
10.         if |T| > Nmax or E(T) < εmax then do
11.             identify the lowest-priority pair (T,TB) in Qm
12.             merge(T,TB)
13.             remove the merged children from Qs
14.             add the merged parents T and TB to Qs
15.             remove (T,TB) from Qm
16.             add all newly mergeable diamonds to Qm
17.         else
18.             identify highest-priority T in Qs
19.             split(T)
20.             remove T and other split triangles from Qs
21.             add any new triangles in T to Qs
22.             remove any diamonds whose children were split from Qm
23.             add all newly mergeable diamonds to Qm
24.         end if
25.     end while
26.     set Tf = T
27. end for
Algorithm 2.2: Pseudo code for merge queue
2.4.3.2 Real-time Generation of Continuous LOD for Height Fields
Röttger et al. (1998) have developed a continuous terrain rendering algorithm
that extends the previous work done by Lindstrom et al. (1996). This algorithm uses
top-down strategy in order to simplify the terrain data. The hierarchical quadtree
data structure has been adopted. This structure is a tree in which each node has either
zero or four children; nodes with zero children are the leaf nodes. To represent the
quadtree structure, a numeric matrix of size W x H is used, where W is the width of
the terrain and H is its height. This matrix is called the quadtree matrix. Each node
in the tree corresponds to one value in the quadtree matrix, which indicates whether
the node should be subdivided. A value of zero represents a leaf node and any other
value represents a parent node with four children. Figure 2.24 shows the matrix
representation of the quadtree; the arrows indicate the parent-child relations in the quadtree.
Figure 2.24 A sample triangulation of a 9 x 9 height field a) quadtree matrix, b) the
corresponding quadtree structure
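The quadtree-matrix encoding can be illustrated with a small sketch: a zero entry marks a leaf, a non-zero entry marks a subdivided node, and the four children of a node centred at (x, y) lie at the centres of its quadrants. The function names and the centre/half-width parameterization are hypothetical, chosen only to mirror the description above.

```python
def is_leaf(qmatrix, x, y):
    # A zero entry marks a leaf; any other value marks a subdivided node.
    return qmatrix[y][x] == 0

def children(x, y, half):
    # Centres of the four child quadrants of the node centred at (x, y),
    # whose quadrant half-width is `half` grid units.
    q = half // 2
    return [(x - q, y - q), (x + q, y - q), (x - q, y + q), (x + q, y + q)]

def leaves(qmatrix, x, y, half, out):
    # Collect the centres of all leaf nodes reachable from (x, y).
    if half == 0 or is_leaf(qmatrix, x, y):
        out.append((x, y))
    else:
        for cx, cy in children(x, y, half):
            leaves(qmatrix, cx, cy, half // 2, out)
    return out
```

For a 5 x 5 grid whose root at (2, 2) is subdivided once, the traversal yields the four quadrant centres (1, 1), (3, 1), (1, 3) and (3, 3).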
This algorithm also deals with mesh discontinuities (cracks) between the
adjacent levels of the quadtree by skipping the center vertex of the higher-resolution
edge. To simplify this solution, Röttger et al. (1998) implemented a bottom-up
process from the smallest existing block to guarantee that the level difference
between adjacent blocks does not exceed one.
In order to generate the triangulation, they introduced a new error metric that
took into consideration the distance from the viewer and the roughness of the terrain
in world space. The formula is as follows:
f = l / (d × C × MAX(c × d2, 1))        (2.2)
Here, l is the distance to the viewpoint, d is the edge length of a quadtree
block, C is a minimum global resolution (a value of 8 was found to provide good
visual results) and c specifies the desired global resolution that can be adjusted per
frame to maintain a fixed frame rate. The quantity d2 incorporates the surface
roughness criteria by representing the largest error delta value at six points in the
quadtree: the four edge midpoints and the two diagonal midpoints. An upper bound
on this component is computed by taking the maximum of these six absolute delta
values (Figure 2.25). This criterion decides the subdivision process: if the value of f
is less than one, the node is subdivided to produce its children; otherwise, the
process stops.
Figure 2.25 Measuring surface roughness
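Equation (2.2) translates directly into a subdivision test. The sketch below assumes scalar inputs and the default C = 8 suggested by Röttger et al. (1998); the function name and default c are hypothetical.

```python
def subdivide(l, d, d2, C=8.0, c=1.0):
    # Röttger's criterion: f = l / (d * C * max(c * d2, 1)).
    # l: distance to the viewpoint; d: edge length of the quadtree block;
    # d2: upper bound on surface roughness (max of the six |delta| values);
    # C: minimum global resolution; c: desired global resolution.
    f = l / (d * C * max(c * d2, 1.0))
    return f < 1.0  # subdivide while f < 1
```

Raising c makes distant or rough blocks subdivide sooner, which is what allows the frame rate to be held fixed by adjusting c per frame.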
2.4.3.3 View-Dependent Progressive Meshes
Hoppe (1998) from Microsoft Research developed the view-dependent
progressive meshes (VDPM) technique, which provides a TIN-based framework. In
general, the terrain model is divided into several blocks, which are then recursively
simplified and merged. An out-of-core capability has been implemented in
his system through the Windows API functions VirtualAlloc() and VirtualFree(). The
core operations in VDPM are the vertex split and the edge collapse.
The vertex split is parameterized as vsplit(vu, vs, vt, fl, fr, fn0, fn1, fn3, fn4) and
edge collapse is parameterized as ecol(vu, vs, vt, fl, fr, fn0, fn1, fn3, fn4). As shown in
Figure 2.26, the vertex split replaces a vertex vu by two other vertices vs and vt. Two
new triangle faces are introduced in this operation, fl = (vl, vs, vt) and fr = (vs, vr, vt),
between two pairs of neighboring faces (fn0, fn1) and (fn3, fn4). The edge collapse
applies the inverse operation. Two vertices vs and vt are merged into a single vertex
vu, and the two faces fl and fr vanish in this process. Meshes with boundaries are
supported by letting the face neighbors fn0, fn1, fn3 and fn4 take special nil
values. A vertex split with fn3 = fn4 = nil creates only a single face fl.
Figure 2.26 Vertex split and edge collapse operations
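The inverse relationship between vsplit and ecol can be illustrated with a deliberately minimal bookkeeping sketch. It tracks only vertex and face sets and ignores the re-attachment of the neighboring faces fn0, fn1, fn3 and fn4 that a real VDPM mesh must perform; the dictionary-based mesh representation is an assumption made purely for illustration.

```python
def vsplit(mesh, vu, vs, vt, vl, vr):
    # vu is replaced by vs and vt; the faces fl = (vl, vs, vt) and
    # fr = (vs, vr, vt) are introduced.
    mesh["verts"].discard(vu)
    mesh["verts"] |= {vs, vt}
    mesh["faces"] |= {frozenset((vl, vs, vt)), frozenset((vs, vr, vt))}

def ecol(mesh, vu, vs, vt, vl, vr):
    # Inverse operation: vs and vt are merged back into vu and the
    # faces fl and fr vanish.
    mesh["faces"] -= {frozenset((vl, vs, vt)), frozenset((vs, vr, vt))}
    mesh["verts"] -= {vs, vt}
    mesh["verts"].add(vu)
```

Applying vsplit followed by ecol with the same arguments returns the mesh to its original state, which is exactly the invariant the progressive-mesh representation relies on.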
Cracks between adjacent blocks are handled by not allowing simplification of
vertices that lie on a block boundary. This restriction produces larger triangle
counts. However, Hoppe (1998) performed a pre-process in which blocks were
hierarchically merged and then resimplified at each step so that the actual polygon
increase was small (Figure 2.27). The error metric used in VDPM is:
δv > k((v − e) · ê),  where k = 2τ tan(φ/2)        (2.3)

Here, δv is the neighborhood’s residual error (i.e. the vertex’s delta
value), e is the current viewpoint, ê is the viewing direction, v is the vertex in world
space, τ is the screen-space error threshold and φ is the field-of-view (FOV) angle.
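The refinement criterion of Equation (2.3) amounts to comparing the residual error against the distance to the vertex along the viewing direction, scaled by k. A sketch follows; the function name qrefine and the tuple-based vectors are assumptions, and Hoppe's actual qrefine additionally tests the view frustum and surface orientation.

```python
import math

def qrefine(delta_v, v, e, view_dir, tau, fov):
    # Refine when the residual error delta_v exceeds k times the
    # distance to v along the viewing direction (Eq. 2.3).
    # tau: screen-space error threshold; fov: field-of-view in radians.
    k = 2.0 * tau * math.tan(fov / 2.0)
    dist = sum((vi - ei) * di for vi, ei, di in zip(v, e, view_dir))
    return delta_v > k * dist
```

Because dist grows with the distance to the viewer, the same world-space error passes the test near the camera but fails it far away, which is the desired view-dependent behaviour.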
Figure 2.27 Steps in pre-processing block-based simplification
Hoppe (1998) has developed an algorithm that incrementally adapts a mesh
for selective refinement. The vertex front (leaf nodes) is stored as a list of active
vertices together with a list of all active triangles. The vertex list is traversed before
each frame is rendered and a decision is made for each vertex whether to leave it as it
is, split it or collapse it. A query function (qrefine) determines whether a vertex (v)
should be split based on the current view parameters. It returns false if v is
outside the view frustum, if it is oriented away from the observer, or if its screen-space
geometric error is below a given tolerance; otherwise, it returns true. The algorithm
for selective refinement is listed in Algorithm 2.3.
Input:
A set of active vertices (V) in a VDPM
Output:
A refined triangulation
Pseudo code:
1.  for each v ∈ V do
2.      if v has children and qrefine(v) then do
3.          force vsplit(v)
4.      else
5.          if v has a parent and edge collapse is legal for v.parent and not qrefine(v.parent) then do
6.              ecol(v.parent)
7.          end if
8.      end if
9.  end for
Algorithm 2.3: Pseudo code for selective refinement
Algorithm 2.3 checks each vertex and splits if necessary. If qrefine(v)
evaluates to true, the vertex v should be split. If the preconditions for splitting v are
not fulfilled, a sequence of other vertex splits is performed in order for vsplit(v) to
become legal. This is performed by Algorithm 2.4 as listed below. If either v has no
children or qrefine(v) returns false, v is checked for a parent. If v has a parent and
the edge collapse of v and its sibling is legal, the edge collapse is performed if
qrefine returns false for the parent of v.
Algorithm 2.4 keeps all vertices that have to be split in a stack. The parent of
each vertex (v) in the stack is pushed onto the stack if v is not active. If v is active
and vsplit(v) is legal, then v is split. This is repeated until the original vertex is split.
Input:
A vertex v in a VDPM
Output:
A refined triangulation
Pseudo code:
1.  push v onto stack
2.  while stack is not empty, with v = stack.top(), do
3.      if v has children and v.fl ∈ F then do
4.          stack.pop()
5.      else
6.          if v ∉ V then do
7.              stack.push(v.parent)
8.          else
9.              if vsplit(v) is legal then do
10.                 stack.pop()
11.                 vsplit(v)
12.             else
13.                 for each i, i = 0,1,2,3 do
14.                     if v.fni ∉ F then do
15.                         stack.push(v.fni.parent)
16.                     end if
17.                 end for
18.             end if
19.         end if
20.     end if
21. end while
Algorithm 2.4: Pseudo code for force vsplit
2.4.3.4 Out-of-Core Terrain Visualization
Lindstrom and Pascucci (2001) have presented a new terrain LOD approach
that is memory efficient and independent of the particular error metric used.
Different from the previous work done by Lindstrom et al. (1996), they used a top-down
approach rather than a bottom-up strategy. Out-of-core view-dependent
refinement of large terrain surfaces is performed via the mmap() function, which is
handled by the operating system. This algorithm builds the terrain in a unique way
compared to the other algorithms described earlier. Instead of formatting the data as
a tree and simply traversing the tree for rendering, the terrain is built into a single
long triangle strip.
In order to build the triangle strip effectively, the actual vertex data is stored
in interleaved quadtrees. Each vertex is labeled as white if it is added to the triangle
strip at an odd level of refinement or black if added at an even level. The sequence
of white vertices forms a white quadtree and the black vertices form a black
quadtree. Figure 2.28 shows the first three levels of the white quadtree (top) and the
black quadtree (bottom).
Figure 2.28 Interleaved quadtree structure
The refinement algorithm recursively subdivides each triangle using longest-edge
bisection (also termed bintree or restricted quadtree triangulation) as
depicted in Figure 2.29. It guarantees that a continuous mesh is formed with no
cracks by enforcing the nesting of error metric terms, hence implicitly forcing parent
vertices in the hierarchy to be introduced before their children. Nested spheres are
used to handle this situation.
Figure 2.29 Four levels of longest-edge bisection operation
Three main procedures have been developed for generating the
triangulation: mesh refinement, submesh refinement and triangle stripping. Pseudo
code for these procedures is listed in Algorithm 2.5, Algorithm 2.6 and Algorithm
2.7 respectively. The refinement procedure builds a generalized triangle strip V =
(v0, v1, v2, …, vn) that is represented as a sequence of vertex indices. The outermost
procedure mesh-refine starts with a base mesh of four triangles and calls
submesh-refine once for each triangle. Here, ic is the vertex at the center of the grid, {isw, ise,
ine, inw} are the four grid corners and {in, ie, is, iw} are the vertices introduced in the
first refinement step. The triangle strip is initialized with two copies of the same
vertex so that the condition on line 1 of trianglestrip-append can be evaluated. The
first vertex, v0, is discarded after the triangle strip has been constructed.
The procedure submesh-refine corresponds to the innermost recursive
traversal of the mesh hierarchy, where cl and cr are the two child vertices of the
parent, j. This procedure is always called recursively with j as the new parent vertex
and the condition on the line 1 is subsequently evaluated twice, once in each subtree.
This is done because the evaluation constitutes a significant fraction of the overall
refinement time, so it is more efficient to move it one level up in the recursion,
thereby evaluating it only once and then conditionally making the recursive calls.
A vertex (v) is appended to the strip using the procedure trianglestrip-append.
Line 5 is used to “turn corners” in the triangulation by effectively swapping
the two most recent vertices, which results in a degenerate triangle that is discarded
by the graphics system. Swapping is done to ensure that the parity (whether
a vertex is on an even or odd refinement level) alternates, which is necessary to
form a valid triangle mesh. To this end, the two-state variable parity(V) records the
parity of the last vertex in V.
Input:
(i)  A triangle strip (V)
(ii) Number of refinement levels (n)
Output:
An optimal triangulation
Pseudo code:
1.  parity(V) = 0
2.  V = (isw, isw)
3.  submesh-refine(V, ic, is, n)
4.  trianglestrip-append(V, ise, 1)
5.  submesh-refine(V, ic, ie, n)
6.  trianglestrip-append(V, ine, 1)
7.  submesh-refine(V, ic, in, n)
8.  trianglestrip-append(V, inw, 1)
9.  submesh-refine(V, ic, iw, n)
10. V = (V, isw)
Algorithm 2.5: Pseudo code for mesh refinement (mesh-refine)
Input:
(i)   A triangle strip (V)
(ii)  Directed edge (i, j), where i is the vertex at the apex of the triangle and j is the midpoint of the triangle’s hypotenuse
(iii) Level of interleaved quadtree (l)
Output:
An optimal triangulation
Pseudo code:
1.  if l > 0 and active(i) then do
2.      submesh-refine(V, j, cl(i,j), l-1)
3.      trianglestrip-append(V, i, l mod 2)
4.      submesh-refine(V, j, cr(i,j), l-1)
5.  end if
Algorithm 2.6: Pseudo code for sub mesh refinement (submesh-refine)
Input:
(i)   A triangle strip (V)
(ii)  Vertex (v)
(iii) Parity of a vertex (p)
Output:
List of triangle strips
Pseudo code:
1.  if v is not equal to vn-1 and vn then do
2.      if p is not equal to parity(V) then do
3.          parity(V) = p
4.      else
5.          V = (V, vn-1)
6.      end if
7.      V = (V, v)
8.  end if
Algorithm 2.7: Pseudo code for generating triangle strips (trianglestrip-append)
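The parity-swapping logic of Algorithm 2.7 can be sketched as follows. The list-based strip and the one-element parity holder are assumptions made so the sketch stays self-contained; in the original, parity(V) is state attached to the strip V itself.

```python
def trianglestrip_append(strip, v, p, parity):
    # strip: list of vertex indices, seeded with two copies of the first
    # vertex; parity: one-element list holding parity(V).
    if v != strip[-2] and v != strip[-1]:
        if p != parity[0]:
            parity[0] = p
        else:
            # Same parity twice in a row: repeat the next-to-last vertex
            # to "turn the corner" (a degenerate triangle that the
            # graphics system discards).
            strip.append(strip[-2])
        strip.append(v)
```

Starting from the seed [0, 0], appending vertices with alternating parity simply extends the strip, while two consecutive vertices of equal parity trigger the duplicate vertex that realizes the corner turn.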
2.4.3.5 Comparative Studies
Ögren (2000) has implemented the ROAM algorithm with some modifications
and compared the results with the VDPM technique. According to him, ROAM
generates a triangle mesh faster than VDPM due to the geomorphing process.
Besides that, ROAM is capable of avoiding sliver (thin) triangles automatically, while
VDPM needs extra computation to guarantee their absence.
Youbing et al. (2001) have pointed out that VDPM demands too much
storage and suffers from many constraints due to its lack of generality. Though it
can produce a triangle mesh with far fewer triangles than a regular grid, it spends too
much time on optimization, as complex calculations are needed.
Bradley (2003) has implemented three terrain rendering algorithms: ROAM
(Duchaineau et al., 1997), real-time generation of continuous LOD (Röttger et al.,
1998) and out-of-core terrain visualization (Lindstrom and Pascucci, 2001). The
evaluation was based on several criteria; a summary of the comparison is displayed
in Table 2.4 below:
Table 2.4: Comparison of view-dependent LOD techniques

Criteria                  | Duchaineau et al. (1997)          | Röttger et al. (1998)          | Lindstrom and Pascucci (2001)
Frame rate (fps)          | Low                               | Medium                         | High
Polygon count             | Fewest triangles generated        | Few triangles generated        | Fewer triangles generated
Terrain accuracy          | Produces optimal meshes           | Produces nearly optimal meshes | Low
Flat area handling        | Low                               | High                           | High
Polygon tessellation      | Independent triangles             | Triangle fans                  | Triangle strips
Implementation complexity | Moderate (dual-queue maintenance) | Easy (quadtree decomposition)  | Difficult (many complex error metrics needed)
2.5 Summary
There are four well-known hierarchical spatial data structures that can be
used to organize data: the Kd-tree, quadtree, R-tree and tiled block. Each
technique has its strengths and weaknesses in terms of implementation complexity,
memory management and computation time. Among them, the tiled block
is the simplest data structure and is capable of providing fast data retrieval as well as
dynamic data loading with a simple indexing scheme.
Visibility culling methods are used to prevent unneeded data from being
processed. Backface, small-feature, occlusion and view frustum culling are examples
of culling methods. Based on the comparison that has been made, this research
focuses on view frustum culling due to its potential to remove large numbers of
polygons outside the viewing volume and its suitability for a real-time visualization
and navigation system. For testing the intersection between the view frustum and the
desired object, four techniques can be adopted: Axis-Aligned
Bounding Box (AABB), Oriented Bounding Box (OBB), Bounding Sphere (BS) and
k-Discrete Orientation Polytope (k-DOP). The BS has better characteristics than
the other techniques due to its simplicity, fast computation and capability to handle
various types of objects efficiently.
The view-dependent LOD technique is developed to reduce the number
of polygons in a scene while continuously giving detail to the desired area in real time.
Terrain can be represented using a regular grid or a TIN approach. A regular grid is easy to
implement and to integrate with other optimization techniques, compared to the complex
operations in a TIN. In general, terrain surfaces can be generated using three
types of geometric primitives: independent triangles, triangle fans and triangle strips.
According to the comparison that has been done, triangle fans can be implemented
easily without requiring a complex indexing mechanism. Examples of view-dependent
LOD techniques are ROAM (Duchaineau et al., 1997), continuous LOD
(Röttger et al., 1998), VDPM (Hoppe, 1998) and out-of-core terrain visualization
(Lindstrom and Pascucci, 2001). Due to its simplicity and capability to give detail
on certain terrain regions, Röttger’s algorithm has been chosen for integration with the
data structure, dynamic data loading and culling method that have been designed and
proposed in this research.
CHAPTER 3
METHODOLOGY
3.1 Introduction
An algorithm based on a hybrid method has been proposed as the
foundation for solving the problems in this research. It is an integration of an overlapping
tile data structure, a dynamic loading scheme, view frustum culling and a triangle-based
LOD technique. In general, the system has two main phases: pre-processing and
run-time processing. Each of these phases is implemented as a separate subsystem. As
shown in Figure 3.1, there are four main steps involved in developing an effective
terrain visualization and navigation system. Each step represents an objective to
be achieved in this research.
In fact, the proposed algorithm can be used to organize and visualize different
formats of terrain datasets. These include procedural data, which is generated using an
algorithm, and data obtained from the real world. Generally, terrain data is represented in
the form of a grid, vector, raster or image. In order to use the data within the proposed
algorithm, the structure of the data format needs to be studied so that important
information such as elevation profiles can be retrieved and processed easily. Since
this research focuses on the GIS field, it is appropriate to use real-world data as the
testing data.
Figure 3.1 Research framework:

PHASE 1 (Pre-Processing)
STEP 1: Partition terrain data (extract data; split data into tiles)

PHASE 2 (Run-time Processing)
STEP 2: Load terrain tiles (identify the tile index of the current camera position; determine the tiles to be loaded)
STEP 3: Cull invisible terrain tiles (extract the frustum’s planes; test the intersection between the view frustum and the tiles)
STEP 4: Simplify the geometry of visible terrain tiles (calculate the distance between the viewpoint and the center of the current quadtree node; refine the node; generate triangle fans)
In this research, a standard digital elevation model (DEM) provided by the
United States Geological Survey (USGS, 1992) is used as the raw terrain data because
the file format is well documented, which facilitates the conversion of DEM data
into the proposed data structure, and because it provides large amounts of
real terrain data at different terrain sizes. This data is based on a regular grid
representation and is in ASCII format. It covers an area of Arizona, uses the
Universal Transverse Mercator (UTM) coordinate system and has 10-meter by
10-meter spacing between elevation samples. A DEM
consists of three data records named Record A, Record B and Record C. Record A
contains the header information of the terrain data; Record B contains the elevation
profiles; and Record C contains statistics on the accuracy of the data.
In pre-processing, the conversion of raw data into the proposed data structure is
done by partitioning the terrain data into small tiles. First, the DEM data is read to
determine the locations of Record A, Record B and Record C. Then, the important
information is extracted and divided into two parts: general information and
elevation profiles. Based on the general information, the elevation profiles are
split into small, tile-based data. Finally, these data are stored in the
database.
In run-time processing, three steps are needed to achieve continuous
real-time terrain visualization. Dynamic load management
is applied, whereby only several terrain tiles are selected and loaded into
main memory based on the current camera position. The purpose of this step is to
reduce the memory used to store data and to provide fast data loading. Then, the
loaded tiles are tested against the viewing volume (view frustum) to remove the
unseen tiles. The final step is to reduce the geometry of the visible terrain tiles before
transferring the data for rendering. The steps in run-time processing are
repeated and updated whenever there is any change of camera position or orientation
within the scene.
3.2 Partitioning Terrain Data
The main purpose of this step is to transform the raw terrain data (DEM) into
the proposed data structure, which serves as the internal representation for the system.
This is done in the pre-processing phase. In order to store terrain data in the data structure,
spatial data retrieval based on a hierarchical method is employed. For this research,
the overlapping tile technique implemented by Pajarola (1998) has been chosen
because it is simple and easy to implement. The difference between
Pajarola’s technique and the proposed technique lies in how the elevation data are
stored. In Pajarola’s technique, the elevation data are stored in a single file, and the
data are retrieved using a complex searching algorithm, thus wasting time in finding the
exact data. The proposed technique instead stores the elevation data in several files
with a simple indexing scheme encoded in the filename. Each file stores the
elevation data for one tile. This step is useful for rapidly accessing data
during the run-time process. At the same time, the proposed technique aims to
support arbitrary sizes and shapes of terrain models.
This step consists of two sequential processes, extracting data and splitting data into tiles,
which are covered in detail in the next subsections.
3.2.1 Extracting Data
The process is divided into two parts: the extraction of general information
and the extraction of elevation profiles. General information is obtained by reading
Record A in the DEM data format, which contains header information about the related
DEM data. The important parameters are extracted from this record. These include
ground coordinates for the corners; minimum and maximum elevation values; and
column and row data. The ground coordinates consist of four control points
(Northwest, Northeast, Southwest and Southeast) that serve as the boundaries of the data.
The minimum and maximum elevation values give the range of z coordinates
representing the terrain height; these are useful for normalizing the elevation values
within a certain extent. The column and row data give the number of data points, or
vertices, for determining the terrain size in the vertical and horizontal directions
respectively. All this information is stored in the database in the form of flat text files.
Then, the elevation profiles are read one by one from Record B according to
the general information obtained earlier. An index for each elevation value is created
based on a one-dimensional array, which means the index will be in the range 0
to [(2n+1) x (2n+1)-1]. This approach is applied instead of a two-dimensional
array in order to reduce the memory used for holding data when running the
program, especially for very large datasets. Each elevation value and its index
are stored in another text file. Information in Record C is ignored because it contains
no information useful to this research.
3.2.2 Splitting Data into Tiles
This process is the core component of the pre-processing phase. In this
research, the overlapping tile technique is adopted due to its simplicity and indexing
efficiency. However, the terrain data must have (2n+1) vertices per side in both the
vertical and horizontal directions (Duchaineau et al., 1997; Röttger et al., 1998; Pajarola,
1998; Lindstrom and Pascucci, 2001; Kim and Wohn, 2003).
First, the terrain size needs to be determined in order to ensure that it
conforms to the above rule. There are two suggested steps:
(i)  Find the maximum number of vertices per side of the terrain data (Vmax).
(ii) Assign an appropriate power-of-two-plus-one (2n+1) value to Vmax.
The first step is done by comparing the number of vertices in the vertical
direction (Vv) with the number of vertices in the horizontal direction (Vh). To
ensure that Vmax is of size (2n+1), the second step is required: a new value
is assigned to Vmax according to the range in which the Vmax obtained from the
first step falls.
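The two sizing steps above can be sketched as a single helper that returns the smallest (2n+1) value covering both directions; the function name is hypothetical.

```python
def grid_size(v_v, v_h):
    # Smallest (2**n + 1) that covers both the vertical (v_v) and
    # horizontal (v_h) vertex counts.
    v_max = max(v_v, v_h)
    n = 0
    while 2 ** n + 1 < v_max:
        n += 1
    return 2 ** n + 1
```

For example, a 1000 x 800 vertex dataset would be expanded to a 1025 x 1025 grid (n = 10), with the surplus covered by non-existing ("dummy") tiles as described below.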
After calculating the new number of vertices per side (Vmax) for the
terrain data, it is necessary to manually specify the tile size per side (the number of
vertices per side of a tile). Choosing a proper tile size depends on the data density, CPU
speed and available memory; the size must be neither too small nor too big, for
efficiency (Peng et al., 2004). The tile size per side (Vt) must also be of
size (2n+1). Vmax and Vt are used to calculate the number of generated tiles
per side.
Number of tiles per side = (Vmax − 1) / (Vt − 1)        (3.1)
Then, the starting point of each tile is determined by computing the index of the
terrain vertices. The calculation is made column by column, beginning from the left
column, and each tile is read from bottom to top. For this reason, Vmax and Vt are
required to calculate the number of tile vertices per column for obtaining the
starting point of each lowest tile in the vertical direction. The formula is:

Number of tile vertices per column = Vmax × (Vt − 1)        (3.2)
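Equations (3.1) and (3.2) can be sketched as follows, assuming the elevation data are stored in a one-dimensional array walked column by column with each column's tiles read bottom to top. This indexing convention is a plausible reading of the description, not the thesis's exact code.

```python
def tiles_per_side(v_max, v_t):
    # Eq. (3.1): overlapping tiles share one row/column of vertices.
    return (v_max - 1) // (v_t - 1)

def tile_start_index(col, row, v_max, v_t):
    # 1-D starting index of tile (col, row): columns are walked left to
    # right with a per-column stride of v_max * (v_t - 1) (Eq. 3.2),
    # and each column's tiles are read bottom to top.
    return col * v_max * (v_t - 1) + row * (v_t - 1)
```

For a 9 x 9 terrain with 3 x 3 tiles this gives four tiles per side and a per-column stride of 18 vertices.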
At the same time, a test is made to check the existence status of each tile.
This is done because when the terrain size is expanded to (2n+1) x (2n+1), the
possibility of generating non-existing (‘dummy’) tiles is high (Lindstrom and
Pascucci, 2001). The purpose of this step is to reduce the amount of hard disk
storage used after creating the tile files, which is achieved by not storing the
unnecessary (non-existing) tiles. Figure 3.2 shows the calculations
involved in implementing the overlapping tile technique.
The maximum numbers of vertices per side of existing tiles in both the vertical
(column) and horizontal (row) directions are needed before each tile is assigned a
flag value. Existing tiles are assigned 1 (true flag), while non-existing tiles
are assigned 0 (false flag).
After all basic requirements are obtained, the elevation values are read from the
previously created output text file by searching for and comparing the starting point of each
existing tile with the indices in the output file. Subsequently, the elevation data of
each existing tile are normalized so that the elevation values fall in
the range 0 to 255, facilitating computation during the run-time process. The
minimum and maximum elevation values from the general information are required
for this computation. The formula is:
Elevation value after normalization = ((Elevation value − Minimum elevation) / (Maximum elevation − Minimum elevation)) × 255        (3.3)
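Equation (3.3) is a straightforward linear rescaling; a minimal sketch follows (the function name is hypothetical).

```python
def normalize(elevations, z_min, z_max):
    # Eq. (3.3): linearly rescale raw elevations into the range 0..255.
    span = z_max - z_min
    return [(z - z_min) * 255.0 / span for z in elevations]
```

A tile spanning 100 m to 200 m would thus map 100 m to 0, 150 m to 127.5 and 200 m to 255.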
Each tile’s starting coordinate (based on its column and row location) and
elevation values are stored in a separate output file (*.tile file). This approach is used
to achieve fast searching and retrieval of terrain tiles, rather than using a
single output file with a complicated searching mechanism that would slow down the
whole system (Lindstrom et al., 1997; Pajarola, 1998).
Figure 3.2 Overlapping tile
3.3 Loading Terrain Tiles
The idea behind this step is to load only several terrain tiles instead of the
whole dataset, which reduces the amount of data stored in main memory. The tiles
to be loaded depend only on the current camera position. The total number of
loaded tiles is M x M, where M refers to the number of tiles per side that is
manually set for a scene. There are two steps:
(i)  Identify the tile index corresponding to the current camera position.
(ii) Determine the tiles to be loaded.
The first step is done by testing the current camera coordinates against the
bounding square of the corresponding tile. After the desired tile index is obtained, the
indices of the M x M tiles to be loaded are determined, beginning from the lower-left
side.
After that, the tile indices are compared with the names of the text files created in
pre-processing, in which the elevation values are stored, and the M x M tiles are loaded
into main memory. When the camera moves to another tile, the process is
repeated: several old tiles are discarded from main memory, the remainder are
retained, and new tiles are loaded to replace the discarded ones. This approach
reduces data loading time. Figure 3.3 illustrates this updating process.

Figure 3.3 Dynamic load management
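The two loading steps can be sketched as below, assuming square tiles of a known world-space size laid out on a regular grid and an M x M window clamped to the terrain boundary; all names and the clamping policy are assumptions made for illustration.

```python
def tile_of(camera_x, camera_z, tile_size):
    # Tile index (col, row) whose bounding square contains the camera.
    return int(camera_x // tile_size), int(camera_z // tile_size)

def tiles_to_load(camera_x, camera_z, tile_size, m, n_tiles):
    # Indices of the M x M block around the camera's tile, starting
    # from its lower-left corner and clamped to the terrain boundary.
    col, row = tile_of(camera_x, camera_z, tile_size)
    c0 = max(0, min(col - m // 2, n_tiles - m))
    r0 = max(0, min(row - m // 2, n_tiles - m))
    return [(c, r) for c in range(c0, c0 + m) for r in range(r0, r0 + m)]
```

Comparing the index sets produced for the old and new camera positions yields exactly the tiles to discard and the tiles to load, which is the updating process of Figure 3.3.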
3.4 Culling Invisible Terrain Tiles
The view frustum culling technique is utilized to remove unseen terrain tiles
based on the viewing volume, thereby reducing the number of polygons to be computed in the
next step. This technique involves two main operations: plane extraction
and intersection testing.
3.4.1 Plane Extraction
The view frustum consists of six planes: left, right, top, bottom, near and far, as shown in Figure 3.4. Each plane has its own plane equation. The general plane equation is:

Ax + By + Cz + D = 0    (3.4)

Figure 3.4 Six planes of view frustum
The coefficients A, B, C and D need to be defined in order to complete each frustum plane's equation. This involves three steps:
(i) Get the clipping matrix.
(ii) Extract the plane.
(iii) Normalize the plane.
In order to obtain the clipping matrix (Cl), the model view matrix (M) is multiplied by the projection matrix (P) as indicated below:

\[
Cl = M \times P =
\begin{bmatrix}
M_{11} & M_{12} & M_{13} & M_{14} \\
M_{21} & M_{22} & M_{23} & M_{24} \\
M_{31} & M_{32} & M_{33} & M_{34} \\
M_{41} & M_{42} & M_{43} & M_{44}
\end{bmatrix}
\times
\begin{bmatrix}
P_{11} & P_{12} & P_{13} & P_{14} \\
P_{21} & P_{22} & P_{23} & P_{24} \\
P_{31} & P_{32} & P_{33} & P_{34} \\
P_{41} & P_{42} & P_{43} & P_{44}
\end{bmatrix}
=
\begin{bmatrix}
Cl_{11} & Cl_{12} & Cl_{13} & Cl_{14} \\
Cl_{21} & Cl_{22} & Cl_{23} & Cl_{24} \\
Cl_{31} & Cl_{32} & Cl_{33} & Cl_{34} \\
Cl_{41} & Cl_{42} & Cl_{43} & Cl_{44}
\end{bmatrix}
\quad (3.5)
\]
Plane extraction is done by obtaining the coefficients A, B, C and D of each plane equation. Gribb and Hartmann (2001) derived the extraction of the view frustum's planes; the coefficients for the plane equations are listed in Table 3.1.
Table 3.1: Clipping planes with corresponding coefficients

Clipping plane | Coefficients for the plane equation
Left   | A = Cl41 + Cl11, B = Cl42 + Cl12, C = Cl43 + Cl13, D = Cl44 + Cl14
Right  | A = Cl41 - Cl11, B = Cl42 - Cl12, C = Cl43 - Cl13, D = Cl44 - Cl14
Bottom | A = Cl41 + Cl21, B = Cl42 + Cl22, C = Cl43 + Cl23, D = Cl44 + Cl24
Top    | A = Cl41 - Cl21, B = Cl42 - Cl22, C = Cl43 - Cl23, D = Cl44 - Cl24
Near   | A = Cl41 + Cl31, B = Cl42 + Cl32, C = Cl43 + Cl33, D = Cl44 + Cl34
Far    | A = Cl41 - Cl31, B = Cl42 - Cl32, C = Cl43 - Cl33, D = Cl44 - Cl34
Thus, the plane equations for the frustum's planes are:

Cl_L = (Cl41 + Cl11)x + (Cl42 + Cl12)y + (Cl43 + Cl13)z + (Cl44 + Cl14)    (3.6)
Cl_R = (Cl41 - Cl11)x + (Cl42 - Cl12)y + (Cl43 - Cl13)z + (Cl44 - Cl14)    (3.7)
Cl_B = (Cl41 + Cl21)x + (Cl42 + Cl22)y + (Cl43 + Cl23)z + (Cl44 + Cl24)    (3.8)
Cl_T = (Cl41 - Cl21)x + (Cl42 - Cl22)y + (Cl43 - Cl23)z + (Cl44 - Cl24)    (3.9)
Cl_N = (Cl41 + Cl31)x + (Cl42 + Cl32)y + (Cl43 + Cl33)z + (Cl44 + Cl34)    (3.10)
Cl_F = (Cl41 - Cl31)x + (Cl42 - Cl32)y + (Cl43 - Cl33)z + (Cl44 - Cl34)    (3.11)
After that, all plane equations need to be normalized. Normalizing a plane equation Ax + By + Cz + D = 0 means scaling it so that its normal (A, B, C) becomes a unit vector. This is done by dividing each coefficient by the magnitude of the normal:

(Ax + By + Cz + D) / sqrt(A² + B² + C²) = 0    (3.12)
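As a concrete illustration, the left-plane row of Table 3.1 together with Equation 3.12 can be written in C++ as below. This is a sketch with hypothetical names (Plane, extractLeft, normalizePlane); the clipping matrix is assumed to be stored as cl[row][col], with the text's 1-based subscripts shifted to 0-based (Cl41 becomes cl[3][0]). The other five planes follow the same pattern with the signs from Table 3.1.

```cpp
#include <cmath>

struct Plane { double a, b, c, d; };

// Left plane (Table 3.1): A = Cl41 + Cl11, B = Cl42 + Cl12,
// C = Cl43 + Cl13, D = Cl44 + Cl14.
Plane extractLeft(const double cl[4][4]) {
    Plane p = { cl[3][0] + cl[0][0], cl[3][1] + cl[0][1],
                cl[3][2] + cl[0][2], cl[3][3] + cl[0][3] };
    return p;
}

// Equation 3.12: divide every coefficient by |(A, B, C)| so the
// plane normal becomes a unit vector.
Plane normalizePlane(Plane p) {
    double m = std::sqrt(p.a * p.a + p.b * p.b + p.c * p.c);
    p.a /= m; p.b /= m; p.c /= m; p.d /= m;
    return p;
}
```

Normalization matters for the intersection test in Section 3.4.2, because only with a unit normal does the plane equation evaluate to a true signed distance.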
3.4.2 Intersection Test
The intersection test is done in order to check the visibility status of the terrain tiles that have been loaded. In this research, the plane-sphere intersection test has been selected. The key factors behind this selection are its simplicity and fast computation, as mentioned in Section 2.5.2.5. Each frustum plane is tested against the bounding spheres that represent the terrain tiles (Figure 3.5).
Each terrain tile is assigned a flag: ALL-OUTSIDE, ALL-INSIDE or INTERSECT. This is done by calculating the distance between each plane and the center of the tile's bounding sphere. Terrain tiles with the status ALL-INSIDE or INTERSECT proceed to the next step; otherwise, the terrain tiles are excluded from further processing.
Figure 3.5 Plane-sphere intersection test for view frustum culling technique: a) sample scene, b) intersection test, c) result after culling
3.5 Simplifying Geometry of Visible Terrain Tiles
A view-dependent triangle-based LOD technique is used to simplify the polygons of the visible terrain tiles. Röttger's algorithm is chosen for its simple spatial subdivision of the scene and its ability to remove a large number of polygons per frame while preserving visual quality (Röttger et al., 1998; Kim and Wohn, 2003). A hierarchical quadtree with a top-down approach is the core of this technique. The important steps in developing the hierarchical quadtree are described in the next subsections.
3.5.1 Distance Calculation
The distance between the center of a node and the viewpoint is calculated using the L1-norm. It is a faster version of the distance formula, being linear rather than quadratic like the L2-norm (Polack, 2003). The L1-norm distance is:

l = |x2 - x1| + |y2 - y1| + |z2 - z1|    (3.13)

where l is the distance parameter. Figure 3.6 illustrates the related distance in general.
Figure 3.6 Distance (l) and edge length (d)
3.5.2 Subdivision Test
The subdivision test is needed to refine the mesh to the optimal triangulation. The decision variable f is computed based on Equation 2.2 and is used to decide whether or not to subdivide a node.
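The two quantities above can be combined into a small decision routine. The sketch below is illustrative; since Equation 2.2 is not reproduced in this chapter, the f formula shown is the published form of Röttger's criterion, f = l / (d · C · max(c · d2, 1)), with the node subdivided when f < 1.

```cpp
#include <algorithm>
#include <cmath>

// L1-norm distance (Equation 3.13): cheaper than the Euclidean L2
// norm because it avoids the square root.
double l1Distance(double x1, double y1, double z1,
                  double x2, double y2, double z2) {
    return std::fabs(x2 - x1) + std::fabs(y2 - y1) + std::fabs(z2 - z1);
}

// Subdivision decision after Roettger et al. (1998):
//   f = l / (d * C * max(c * d2, 1))
// where d is the node edge length, C the global minimum resolution,
// c the desired resolution and d2 the surface-roughness term.
// Subdivide the node when f < 1.
bool shouldSubdivide(double l, double d, double C, double c, double d2) {
    double f = l / (d * C * std::max(c * d2, 1.0));
    return f < 1.0;
}
```

Because l grows with viewer distance, nearby nodes yield small f and are subdivided further, while distant nodes stop early, which is exactly the view-dependent behaviour the technique relies on.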
3.5.3 Generation of Triangle Fans
The main elements in generating triangle fans are the center of the terrain block (node) and the order of the fan. A test is essential for determining what kind of triangle fan (full or partial) is used for triangulation.
Firstly, the node is checked to see whether it is at the highest LOD. If it is and the adjacent nodes are at the same LOD, a full triangle fan can be generated. If an adjacent node is at a lower LOD, the vertex on the shared edge of the fan on that side is skipped (Figure 3.7). This automatically eliminates the cracks that can occur at the boundaries of terrain blocks. If the node is not at the highest LOD, a special calculation needs to be made to adapt the current triangle mesh to the adjacent triangulations. A sample output after this step is shown in Figure 3.8.
Figure 3.7 Generation of triangle fans
Figure 3.8 Terrain tiles after applying view-dependent LOD technique
3.6 Summary
The research methodology has been discussed in detail in this chapter. The proposed algorithm, which is based on a hybrid method, has been designed. There are two main phases involved, pre-processing and run-time processing, with four important steps. A spatial data structure based on overlapping tiles has been designed and adopted in order to convert standard USGS DEM data into tiles and then store them in a database (text files). This is done in the pre-processing phase.
During run-time processing, dynamic load management is employed to minimize main memory usage and thereby provide fast data loading for the proposed system. View frustum culling based on the plane-sphere intersection test is used to remove unseen terrain tiles, whereas view-dependent LOD based on Röttger's algorithm is used to reduce the geometric complexity of the visible terrain tiles. The integration of all these techniques is necessary to provide effective real-time rendering in the prototype system.
CHAPTER 4
IMPLEMENTATION
4.1 Introduction
A prototype has been developed in order to test the efficiency of the proposed technique. This chapter is divided into two parts: pre-processing and run-time processing. In general, the first part converts the original terrain data format (DEM) into the specific data structure, while the second part performs real-time rendering and visualization. This research was implemented using Microsoft Visual C++ 6.0 in a Windows environment.
For pre-processing, a simple program was developed as a Win32 console application. For run-time processing, the OpenGL graphics library has been used to assist the rendering of the terrain model. The reasons for using this library are its simplicity, its ability to improve rendering speed using available graphics processing hardware, and its platform independence compared to Microsoft DirectX (Hawkins and Astle, 2002).
4.2 Pre-Processing
Original USGS DEM data was used to extract the important parameters, which were then stored in an internal representation based on the overlapping tile technique. Two steps were carried out sequentially to complete the pre-processing part:
(i) Read a DEM file and create TXT files.
(ii) Read the TXT files and create TILE files.
In order to facilitate these steps, a simple menu was created in the program, as shown in Figure 4.1.
Figure 4.1 Menu for partitioning DEM data
4.2.1 Reading a DEM File and Creating TXT Files
An original USGS DEM file (*.dem) is read according to the file format described in the National Mapping Program Technical Instructions (USGS, 1992). This ASCII data format can be viewed in any text editor. Figure 4.2 illustrates an example DEM file of the Adams Mesa area, which consists of Record A (header information), Record B (elevation profiles) and Record C (accuracy statistics of the data). Record C was ignored because it contains no information needed for the implementation phase. For more detailed information about Record A and Record B of the DEM data, please refer to Appendix A and Appendix B.
Figure 4.2 Structure of a DEM file
Firstly, the data structure was constructed in order to define the input fields for holding DEM information. The variables with the corresponding data types are listed in Table 4.1. The first sixteen variables come directly from the DEM file, while the remaining variables are used to compute and store related information that is not provided in the DEM file. In addition, file-pointer variables were declared for reading the existing DEM file and creating the new output text files.
Table 4.1: Data structure

No | Data Type | Name of variable | Description
1  | char   | cMapLabel[145]        | DEM quadrangle name
2  | int    | iDEMLevel             | DEM level code
3  | int    | iElevationPattern     | Elevation pattern
4  | int    | iGroundSystem         | Ground planimetric reference system
5  | int    | iGroundZone           | Zone in ground planimetric reference system
6  | double | dProjectParams[15]    | Parameters for various projections
7  | int    | iPlaneUnitOfMeasure   | Unit of measurement for ground planimetric reference system
8  | int    | iElevUnitOfMeasure    | Unit of measurement for elevation coordinates throughout the file
9  | int    | iPolygonSizes         | Number of sides in the polygon which defines the coverage of the DEM file
10 | double | dGroundCoords[4][2]   | Ground coordinates for all four corners (SW, NW, NE and SE)
11 | double | dElevBounds[2]        | Minimum and maximum elevation for the DEM
12 | double | dLocalRotation        | Counterclockwise angle in radians
13 | int    | iAccuracyCode         | Accuracy code for elevations
14 | double | dSpatialResolution[3] | DEM spatial resolution for x, y and z
15 | int    | iProfileDimension[2]  | Number of rows and columns of profiles in the DEM
16 | float  | fElevation[]          | Temporary variable for storing elevation values per column of data
17 | double | dEastMost, dWestMost  | Furthest east/west easting value
18 | double | dSouthMost, dNorthMost | Furthest south/north northing value
19 | int    | iEastMostSample       | Furthest east sample point
20 | int    | iWestMostSample       | Furthest west sample point
21 | int    | iSouthMostSample      | Furthest south sample point
22 | int    | iNorthMostSample      | Furthest north sample point
23 | int    | iRows, iColumns       | Regular elevation grid dimensions
24 | FILE*  | fDEMFile, fOutputHeader, fOutputProfiles | File pointers
Before reading a DEM file, all variables in the data structure must be initialized according to their data types. After that, the DEM data was read, starting with Record A. Each parameter in Record A was stored in its matching variable. Then, according to the header information, comparisons were made among the four corner ground coordinates (SW, NW, NE and SE) to obtain the furthest east/west easting value and the furthest south/north northing value. This step was done because most DEM data is not originally square or rectangular, and it is therefore important to detect where the first data point needs to be read (Figure 4.3).
Figure 4.3 First data point for DEM data: a) West of the central meridian, b) East of the central meridian
Then, a calculation was made in order to obtain the number of vertices in the column and row directions. All this information was stored in an output text file named "OutputHeader.txt" (Figure 4.4).
Figure 4.4 Parameters after extraction of important header information
After gaining the general information about the DEM data, the elevation profiles (Record B) were read column by column (left to right), and each data point was read from bottom to top. At the same time, a TXT file for storing the elevation data was created and named "OutputProfiles.txt". A floating-point array (fElevation[]) was used to store the elevation data one column at a time instead of storing the whole elevation data set. This was done in order to reduce memory usage when running the program, especially when dealing with massive data. The variable temporarily stored the elevation data for the current column, from bottom to top. Each index and its corresponding elevation value were written to the output text file as shown in Figure 4.5. For the next column, the new elevation data were assigned to the same variable and the process was repeated until the end of the elevation data.
Figure 4.5 The indices and the corresponding elevation values
4.2.2 Reading TXT Files and Creating TILE Files
The output files (OutputHeader.txt and OutputProfiles.txt) created earlier were used as input files in this step. Firstly, several variables were declared. These include the variables for holding general DEM properties and tile information, as well as for reading the input files (*.txt) and creating the output files (*.tile). Table 4.2 lists the related variables for creating the TILE files.
Table 4.2: Declaration of variables

Data Type | Name of variables | Description
int   | iColumns, iRows     | Original number of vertices in the vertical and horizontal directions
float | fMinElev, fMaxElev  | Minimum/maximum elevation value for the USGS DEM
int   | iTerrainSize        | Number of vertices per side for terrain data
int   | iVertTile           | Number of vertices per side for a tile
int   | iNumTiles           | Number of tiles per side
int   | iMaxColumn, iMaxRow | Maximum number of vertices for a specific column and row
int   | iTilesVertPerColumn | Number of tiles' vertices per column
int*  | iPStart             | Dynamic array for storing the starting point for each tile
int*  | iIsExist            | Dynamic array for storing the flag (0 or 1) indicating the status of existing tiles
int   | iStartingPointTile  | Starting point (X, Y) of a tile
FILE* | fInputHeader, fInputProfiles, fTile | File pointers
All variables were initialized except those for storing the starting point of each tile (iPStart) and the status of existing tiles (iIsExist), because they rely on the number of tiles per side (iNumTiles). The number of vertices per side for a terrain tile (iVertTile) must be assigned first because it is used as a parameter for splitting the DEM data into small square tiles.
Then, OutputHeader.txt was read in order to obtain the general information about the DEM data. After reading and storing the DEM information in the corresponding variables, the maximum number of vertices per side for the terrain data (iTerrainSize) was determined, which must be of size (2^n + 1).
Algorithm 4.1 and Algorithm 4.2 present the pseudo code for obtaining the maximum number of vertices per side for the terrain data (Vmax), as suggested in Section 3.2.2. Subsequently, the number of tiles per side and the number of tiles' vertices per column were calculated according to Equation 3.1 and Equation 3.2 respectively. This is crucial for identifying the starting point of each tile.
Input:
(i) Number of vertices per side for vertical direction (Vv)
(ii) Number of vertices per side for horizontal direction (Vh)
Output: Maximum number of vertices per side for terrain data (Vmax)
Pseudo code:
1. if (Vv is larger than Vh) then do
2.     Vmax = Vv
3. else
4.     Vmax = Vh
5. end if

Algorithm 4.1: Pseudo code for obtaining maximum number of vertices per side for original terrain data

Input: Maximum number of vertices per side for original terrain data (Vmax)
Output: New maximum number of vertices per side for terrain data (Vmax), of size (2^n + 1)
Pseudo code:
1. initialize counter, c = 0
2. while Vmax is not in the range of (2^c + 1) and (2^(c+1) + 1) do
3.     c = c + 1
4. end while
5. Vmax = 2^(c+1) + 1

Algorithm 4.2: Pseudo code for assigning new maximum number of vertices per side for terrain data
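Algorithms 4.1 and 4.2 can be rendered compactly in C++ as follows. This is an illustrative sketch (maxVerticesPerSide is a hypothetical name, and 2^c is written as a bit shift); it replicates the pseudo code as given and assumes the input sizes are at least 2.

```cpp
// Take the larger of the two DEM side lengths (Algorithm 4.1) and
// round it up to the (2^n + 1) grid size required by the quadtree
// triangulation (Algorithm 4.2).
int maxVerticesPerSide(int vVertical, int vHorizontal) {
    int vMax = (vVertical > vHorizontal) ? vVertical : vHorizontal;
    int c = 0;
    // Find c such that vMax lies between 2^c + 1 and 2^(c+1) + 1.
    while (!(vMax >= (1 << c) + 1 && vMax <= (1 << (c + 1)) + 1))
        ++c;
    return (1 << (c + 1)) + 1;
}
```

For a standard 1201 x 1201 USGS DEM, the larger side 1201 falls between 2^10 + 1 = 1025 and 2^11 + 1 = 2049, so the terrain grid is padded up to 2049 vertices per side.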
This was followed by calculating the maximum number of vertices per side for the existing tiles in both the vertical and horizontal directions. The detailed algorithm for determining the maximum number of vertices per side for tiles in the vertical direction (Vt_v_max) is shown in Algorithm 4.3. The same algorithm was used for the horizontal direction (Vt_h_max).
Input:
(i) Number of vertices per side for vertical direction (Vv)
(ii) Number of vertices per side for tile (Vt)
Output: Maximum number of vertices per side for tiles in vertical direction (Vt_v_max)
Pseudo code:
1. initialize temporary variable, temp = 0
2. while Vv is not in the range of temp and (temp + Vt - 1) do
3.     temp = temp + (Vt - 1)
4. end while
5. Vt_v_max = temp + (Vt - 1)

Algorithm 4.3: Pseudo code for determining the maximum number of vertices per side for tiles in vertical direction (Vt_v_max)
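Algorithm 4.3 can be transcribed directly into C++. This is an illustrative sketch (maxTileVertices is a hypothetical name); like the pseudo code, it steps through bands of (Vt - 1) vertices, reflecting the one-vertex overlap between adjacent tiles, and returns temp + (Vt - 1) for the band containing Vv.

```cpp
// Round the DEM side length vSide up to a whole number of tile
// strides, where each tile is vTile vertices wide and shares one
// edge vertex with its neighbour (stride = vTile - 1).
int maxTileVertices(int vSide, int vTile) {
    int temp = 0;
    while (!(vSide >= temp && vSide <= temp + vTile - 1))
        temp += vTile - 1;
    return temp + (vTile - 1);
}
```

For example, a 1201-vertex side split into 129-vertex tiles (stride 128) lands in the band starting at 1152, giving Vt_v_max = 1280.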
Then, the two remaining variables, iPStart and iIsExist, were initialized. Depending on the total number of tiles, the first dynamic array was initialized to 0 and the second to 1, which assumes that all tiles exist at first.
The next process is to identify the starting point and the existence status of each tile. To determine the starting points, the calculations depicted in Figure 3.4 were applied, beginning from the lower-left tile. The number of vertices per side for a tile (Vt) and the number of tiles' vertices per column (Vt_c) are used to assign the right index to each tile's starting point. At the same time, the existence status of each tile was checked one by one; iIsExist was set to 0 to represent a non-existing ('dummy') tile. Algorithm 4.4 shows the pseudo code for this process.
Input:
(i) Number of vertices per side for a tile (Vt)
(ii) Number of tiles' vertices per column (Vt_c)
(iii) Maximum number of vertices per side for terrain data (Vmax)
(iv) Maximum number of vertices per side for tiles in vertical direction (Vt_v_max)
(v) Maximum number of vertices per side for tiles in horizontal direction (Vt_h_max)
Output:
(i) Starting point for each tile (iPStart)
(ii) Existence status for each tile (iIsExist)
Pseudo code:
1.  initialize counter, c = 0
2.  for each of i, i=0,Vt_c,…,[(Vt_h_max-1)*Vmax] do
3.      for each of j, j=0,(Vt-1),…,[(Vmax-1)-(Vt-1)] do
4.          iPStart[c] = i + j
5.          if iPStart[c] is larger than [(Vt_v_max-1)+i] and less than [(Vmax-1)+i] then do
6.              iIsExist[c] = 0
7.          end if
8.          if iPStart[c] is larger than [(Vt_h_max-1)*Vmax] then do
9.              iIsExist[c] = 0
10.         end if
11.         c = c + 1
12.     end for
13. end for

Algorithm 4.4: Pseudo code for obtaining starting point and existence status for each tile
Then, the partitioning of the DEM data was done by creating the TILE files (*.tile). The starting coordinate and elevation data of each tile were stored in OutputN.tile, where N is the index of the tile being created, as shown in Figure 4.6 and Figure 4.7. The detailed processes of partitioning the DEM data are listed in Algorithm 4.5 and Algorithm 4.6.
Input:
(i) Number of vertices per side for a tile (Vt)
(ii) Existence status for each tile (iIsExist)
(iii) Starting point for each tile (iPStart)
(iv) Maximum number of vertices per side for terrain data (Vmax)
(v) Number of tiles per side (Nt)
Output: TILE files (*.tile)
Pseudo code:
1.  for each of c, c=0,1,…,(Nt²-1) do
2.      if iIsExist[c] is not equal to 0 then do
3.          open file: "OutputProfiles.txt"
4.          create new TILE file: "Outputc.tile"
5.          get starting coordinates of current tile
6.          store starting coordinates into "Outputc.tile"
7.      end if
8.      for each of i, i=0,Vmax,…,[Vmax*(Vt-1)] do
9.          for each of j, j=0,1,…,(Vt-1) do
10.             CalculatedIndex = iPStart[c] + i + j
11.             if iIsExist[c] is not equal to 0 then do
12.                 while not end of file "OutputProfiles.txt" do
13.                     read index and elevation data
14.                     if CalculatedIndex is equal to index from "OutputProfiles.txt" then do
15.                         normalize elevation data
16.                         store elevation data into "Outputc.tile"
17.                         exit loop
18.                     end if
19.                 end while
20.             end if
21.         end for
22.     end for
23.     close file: "OutputProfiles.txt"
24.     close file: "Outputc.tile"
25. end for

Algorithm 4.5: Pseudo code for partitioning data and creating TILE files
Input:
(i) Maximum number of vertices per side for terrain data (Vmax)
(ii) Starting point for each tile (iPStart)
Output: Starting coordinates of current tile (TSCoordx, TSCoordy)
Pseudo code:
1.  for each of i, i=0,1,…,(Vmax-1) do
2.      for each of j, j=0,1,…,(Vmax-1) do
3.          CalculatedPStart = i * Vmax + j
4.          if CalculatedPStart is equal to iPStart then do
5.              TSCoordx = i
6.              TSCoordy = j
7.              exit loop
8.          end if
9.      end for
10. end for

Algorithm 4.6: Pseudo code for obtaining starting coordinates of existing tile
Figure 4.6 List of generated TILE files
Figure 4.7 Structure in a TILE file
4.3 Run-time Processing
For this part, an object-oriented programming approach has been chosen and implemented, because it provides systematic management of entities or objects in the form of classes, which eases the development of this research.
4.3.1 Creation of Classes
In this research, the classes can be divided into two types:
(i) Main classes, consisting of terrain (CTERRAIN), quadtree (CQUADTREE) and camera (CCAMERA).
(ii) Base classes, the entities that support the main classes: OpenGL application (CGL_APP), mathematical operations (CVECTOR), image (CIMAGE), timer system (CTIMER) and log (CLOG).
The relationships among the classes, with their corresponding attributes (variables) and behaviors (functions) in C notation, are depicted in Figure 4.8.
Figure 4.8 Class diagram. The attributes and functions shown are:

CLOG: m_szFileName, m_bEnabled; Init(), Write(), Enable(), Disable(), IsEnabled()
CTIMER: m_fTime1, m_fTime2, m_fDiffTime, m_fFPS, m_iFramesElapsed; Init(), Update(), GetTime(), GetFPS()
CVECTOR: m_fVec[]; Transform(), Normalize(), Length(), DotProduct(), CrossProduct(), Negate(), Get(), Set()
CTERRAIN: m_heightData, m_texture, m_bTextureMapping, m_iTrisPerFrame, m_iSize; LoadDEM(), UnloadDEM(), GetTextureCoords(), SetHeightAtPoint(), GetTrueHeightAtPoint(), GetNumTrisPerFrame(), LoadTexture(), UnloadTexture()
CIMAGE: m_ucpData, m_uiWidth, m_uiHeight, m_uiBitsPerPixel, m_iID, m_bIsLoaded; LoadData(), LoadTGA(), Create(), Unload(), GetData(), GetWidth(), GetHeight(), GetBitsPerPixel(), GetID(), SetID(), IsLoaded()
CQUADTREE: m_ucpQuadMatrix, m_pCamera, m_fDetailLevel, m_fMinResolution; Init(), Render(), Update(), Shutdown(), PropagateRoughness(), RefineNode(), RenderNode(), RenderVertex(), GetMatrixIndex(), SetDetailLevel(), SetMinResolution(), GetQuadMatrixData()
CCAMERA: m_vecEyePos, m_vecLookAt, m_vecUp, m_vecForward, m_vecSide, m_fYaw, m_fPitch, m_viewFrustum[][]; ComputeViewMatrix(), SetViewMatrix(), CalculateViewFrustum(), SetPosition(), SphereInFrustum()
4.3.2 Initialization
This process is performed only once, before the rendering process, and does not need to be updated. Three components are initialized. The first is the configuration of the OpenGL application environment, which includes:
(i) Initiating the timer system
(ii) Initiating the log file
(iii) Customizing the window's parameters
(iv) Creating the window
(v) Setting up the perspective view
(vi) Initializing the movement speed and the mouse sensitivity
The timer system was used for calculating the frame rate of the system. The log file was created in order to report the status of certain processes that are involved when running the program. The parameters involved in creating the window include the window's position, width and height. The movement speed and the mouse sensitivity were initialized for navigation purposes.
The second component is setting up the camera position and orientation (pitch and yaw) for navigation. The last component is setting up the desired resolution (detail level) and the minimum resolution for generating the hierarchical quadtree-based view-dependent LOD.
4.3.3 Rendering
The rendering process is performed continuously for each frame. It consists of six main steps, each explained in further detail in the following subsections.
4.3.3.1 Configuring Information for Camera

This step involves determining the position of the eye point (camera position), the reference point (camera look-at) and the direction of the up vector. These parameters were gathered and passed to the OpenGL function "gluLookAt()". In general, this function creates a viewing matrix for the system: the matrix maps the reference point to the negative z-axis and the eye point to the origin, and the direction described by the up vector is mapped to the positive y-axis.
4.3.3.2 Loading and Unloading Terrain Tiles

In this research, a 5 x 5 block of tiles has been used because it is neither too small nor too big for loading purposes. Firstly, the current camera position is obtained to determine which tiles to load. Then, the algorithms for obtaining the current tile index and the set of tiles to be loaded are applied, as shown in Algorithm 4.7 and Algorithm 4.8. The TILE files (OutputN.tile) are retrieved and loaded according to the tile indices obtained from those algorithms.
When the camera moves to another tile, the system needs to be updated. Several tiles are unloaded in order to allow the loading of new tiles. The number of tiles that need to be loaded depends on two conditions:
(i) if the camera moves to the left, right, upper or lower tile, only five tiles will be loaded
(ii) if the camera moves to the upper-left, upper-right, lower-left or lower-right tile, only nine tiles will be loaded
Input:
(i) Camera position (CamPosx, CamPosy)
(ii) Number of tiles per side (Nt)
(iii) Number of vertices per side for tile (Vt)
Output: Current tile index (It)
Pseudo code:
1.  initialize temporary variables, temp1 = 0, temp2 = 0
2.  for each of i, i=0,1,…,(Nt-1) do
3.      if CamPosx is in the range of temp1 and [temp1+(Vt-1)] then do
4.          for each of j, j=0,1,…,(Nt-1) do
5.              if CamPosy is in the range of temp2 and [temp2+(Vt-1)] then do
6.                  It = Nt * i + j
7.                  exit loop
8.              end if
9.              temp2 = temp2 + (Vt-1)
10.         end for
11.     end if
12.     temp1 = temp1 + (Vt-1)
13.     temp2 = 0
14. end for

Algorithm 4.7: Pseudo code for obtaining the current tile index based on camera position
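Algorithm 4.7 can be transcribed into C++ as follows. This is an illustrative sketch (currentTileIndex is a hypothetical name); tiles overlap their neighbours by one vertex, hence the (Vt - 1) stride of each tile's bounding square.

```cpp
// Find which tile the camera is standing on by comparing its (x, y)
// position against each tile's bounding square on an Nt x Nt grid.
int currentTileIndex(double camX, double camY, int nTiles, int vTile) {
    for (int i = 0; i < nTiles; ++i) {
        double x0 = i * (vTile - 1.0);
        if (camX < x0 || camX > x0 + (vTile - 1)) continue;
        for (int j = 0; j < nTiles; ++j) {
            double y0 = j * (vTile - 1.0);
            if (camY >= y0 && camY <= y0 + (vTile - 1))
                return nTiles * i + j;
        }
    }
    return -1;  // camera is outside the terrain extent
}
```

For example, with 4 tiles per side of 129 vertices each (stride 128), a camera at (200, 300) lies in the bounding squares of tile row 1 and column 2, giving index 4 * 1 + 2 = 6.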
Input:
(i) Current tile index (It)
(ii) Number of tiles per side (Nt)
(iii) Number of loaded tiles per side (Nl_t)
Output:
(i) Starting tile index (Is_l_t)
(ii) Array of tile index (I[Nl_t * Nl_t])
Pseudo code:
1.  Is_l_t = It - [(Nl_t-1)/2] * (Nt+1)
2.  initialize temporary variable, temp = 0
3.  for each of i, i=0,1,…,(Nl_t-1) do
4.      for each of j, j=0,1,…,(Nl_t-1) do
5.          I[Nt*i+j] = Is_l_t + j + temp
6.      end for
7.      temp = temp + Nt
8.  end for

Algorithm 4.8: Pseudo code for obtaining the terrain indices to be loaded
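A C++ sketch of Algorithm 4.8 is given below. It is illustrative (tilesToLoad is a hypothetical name); note that the output array here is indexed by the window size Nl rather than the pseudo code's Nt, so that the Nl x Nl results are stored contiguously.

```cpp
#include <vector>

// Given the camera's tile index on an Nt x Nt grid, list the
// nLoaded x nLoaded block of tile indices to load, starting from the
// lower-left tile of the window. Subtracting half the window size
// times (Nt + 1) steps back both one row and one column per step.
std::vector<int> tilesToLoad(int current, int nTiles, int nLoaded) {
    int start = current - ((nLoaded - 1) / 2) * (nTiles + 1);
    std::vector<int> out(nLoaded * nLoaded);
    int rowOffset = 0;
    for (int i = 0; i < nLoaded; ++i) {
        for (int j = 0; j < nLoaded; ++j)
            out[nLoaded * i + j] = start + j + rowOffset;
        rowOffset += nTiles;
    }
    return out;
}
```

On a 10 x 10 grid with a 3 x 3 window centred on tile 55, the starting index is 55 - 1 * 11 = 44 and the window covers indices {44, 45, 46, 54, 55, 56, 64, 65, 66}.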
4.3.3.3 Calculating Frustum's Planes

Firstly, the current projection matrix and model view matrix are obtained through the OpenGL function "glGetFloatv()". The results from this function (16 floating-point values each) are stored separately in two one-dimensional arrays. Then, those two matrices are multiplied in order to get the clipping matrix based on Equation 3.5. From the clipping matrix, the coefficients of the six planes (A, B, C and D) are calculated as indicated in Table 3.1 and saved in one-dimensional arrays of four values each. Lastly, all planes are normalized based on Equation 3.12.
4.3.3.4 Removing Unseen Terrain Tiles

Each loaded tile has its own bounding sphere. This bounding volume is tested against the six planes of the view frustum in order to determine the visible tiles. Algorithm 4.9 presents the pseudo code for detecting the visibility status and assigning a flag to each loaded tile. A terrain tile with the ALL-INSIDE or INTERSECT flag proceeds to the next step.
Input:
(i) Center of bounding sphere (Sc)
(ii) Radius of bounding sphere (Sr)
(iii) Number of view frustum planes (Nv_f_p)
(iv) Coefficients of view frustum planes
Output:
(i) Distance between bounding sphere and plane (D)
(ii) Flag representing the visibility status of loaded tiles
Pseudo code:
1.  for each of i, i=0,1,…,Nv_f_p do
2.      D = Sc(x)*Ai + Sc(y)*Bi + Sc(z)*Ci + Di
3.      if D is less than -Sr then do
4.          return ALL-OUTSIDE
5.      end if
6.      if D is less than Sr then do
7.          return INTERSECT
8.      end if
9.  end for
10. return ALL-INSIDE

Algorithm 4.9: Pseudo code for detecting the visibility status and assigning flag for loaded tiles
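A C++ sketch of this test is shown below, assuming the planes have been normalized (Equation 3.12) so that the plane equation yields a signed distance. One small deviation from the pseudo code above: instead of returning INTERSECT as soon as one plane is crossed, the flag is remembered until all planes have been checked, so a sphere that is fully behind a later plane is still classified as ALL-OUTSIDE. Names are illustrative, not from the prototype.

```cpp
struct Plane { double a, b, c, d; };  // normalized: (a, b, c) is unit length

enum Visibility { ALL_OUTSIDE, INTERSECT, ALL_INSIDE };

// Signed distance from the sphere centre to each frustum plane decides
// whether the tile's bounding sphere is outside, inside or crossing
// the viewing volume.
Visibility sphereInFrustum(const Plane planes[6],
                           double cx, double cy, double cz, double r) {
    bool intersects = false;
    for (int i = 0; i < 6; ++i) {
        double dist = planes[i].a * cx + planes[i].b * cy +
                      planes[i].c * cz + planes[i].d;
        if (dist < -r) return ALL_OUTSIDE;   // fully behind one plane
        if (dist < r) intersects = true;     // crosses this plane
    }
    return intersects ? INTERSECT : ALL_INSIDE;
}
```

The test costs only six dot products per tile, which is the "fast computation" property cited for this technique in Section 3.4.2.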
4.3.3.5 Generating Quadtree Traversal

Each tile is represented by a single quadtree structure. Two main processes need to be done. First, the quadtree is initialized. This includes:
(i) Creating and allocating memory for the quadtree matrix in the form of a one-dimensional dynamic array.
(ii) Initializing the quadtree matrix to 1 for all possible nodes.
(iii) Propagating the roughness of the elevation data in the terrain tiles, using a bottom-up approach, so that more triangles can be applied to rough spots of the terrain surface.
Next, the quadtree nodes are refined. Using a top-down approach, the root node (the original tile) is split into four small tiles and the process is repeated until a certain condition is met. Firstly, the distance between the current camera position and the current quadtree node (the center of the tile) is calculated using the L1-norm (Equation 3.13). Then, the f value (decision variable) is computed based on Equation 3.14 for the subdivision test. The current node is subdivided if the f value is less than 1; otherwise, the subdivision stops.
4.3.3.6 Generating Triangle Fans

The purpose of this step is to render the triangle fans that represent the nodes in the quadtree. According to the explanation in Section 3.5.3, two conditions need to be considered. If the current node is the smallest node in the quadtree, a triangle fan is rendered around the center vertex, which is the middle of the current node. During rendering, the lower-middle, right-middle, upper-middle and left-middle vertices are compared with the adjacent nodes. If an adjacent node is at a lower detail level, the triangle that uses the shared vertex is skipped. This is done in order to avoid cracks between the two boundaries.
If the current node is not the smallest node, there are 16 possibilities for generating and rendering the triangle fans, as illustrated in Figure 4.9. These involve rendering complete fans and partial fans, as well as nodes that need to be traversed further down. To implement this, a bit code for the fan arrangement, based on a binary representation, was calculated.
Four calculations were made sequentially, as suggested by Polack (2003):
(i) For the upper right of the current node, the quadtree matrix data was multiplied by 8.
(ii) For the upper left of the current node, the quadtree matrix data was multiplied by 4.
(iii) For the lower left of the current node, the quadtree matrix data was multiplied by 2.
(iv) For the lower right of the current node, the quadtree matrix data was multiplied by 1.
Figure 4.9 Sixteen possibilities of generating triangle fans (indicator: render the node, or recurse down to the node's children)
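The four weighted multiplications listed above amount to packing the four children's quadtree-matrix entries (each 0 or 1) into a 4-bit code. The sketch below is illustrative (fanBitCode is a hypothetical name); which of the sixteen fan arrangements in Figure 4.9 each code selects is resolved by the comparison step described next.

```cpp
// Pack the quadtree matrix entries of a node's four children into a
// single bit code: upper-right contributes 8, upper-left 4,
// lower-left 2 and lower-right 1, giving values 0 through 15.
int fanBitCode(int upperRight, int upperLeft, int lowerLeft, int lowerRight) {
    return upperRight * 8 + upperLeft * 4 + lowerLeft * 2 + lowerRight * 1;
}
```

Encoding the four children as one small integer allows the sixteen fan arrangements to be dispatched with a single comparison or table lookup per node, rather than four separate conditionals.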
Then, the calculated bit code was compared with the declared bit codes in order
to determine which fans to render. Lastly, the triangle fans were rendered using
the OpenGL primitive GL_TRIANGLE_FAN to generate the desired triangle meshes.
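The bit-code calculation above can be sketched in a few lines. The helper name and the 0/1 convention (1 = render the child as a fan, 0 = recurse into it) are assumptions for illustration:

```cpp
// Sketch of the fan-arrangement bit code described above. Each child
// quadrant contributes its weight (8, 4, 2 or 1) when it is to be
// rendered as a fan rather than traversed further. The resulting value
// 0..15 selects one of the 16 cases in Figure 4.9.
int fanBitCode(int upperRight, int upperLeft, int lowerLeft, int lowerRight) {
    return upperRight * 8 + upperLeft * 4 + lowerLeft * 2 + lowerRight * 1;
}
```

Under this convention, code 15 corresponds to rendering one complete fan for the node, while code 0 means all four children must be traversed down.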
4.3.4 Interaction
The keyboard and the mouse were used as input devices for interaction between
the user and the system. The functions of the keyboard are to:
(i) Move the camera forward and backward
(ii) Strafe the camera left and right
(iii) Enable or disable texture mapping
(iv) Show the wireframe model or the shaded model
(v) Quit the system
On the other hand, the mouse was used to change the orientation of the camera
about the related axes (pitch and yaw).
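As a minimal sketch of how mouse-driven pitch and yaw determine the camera's viewing direction, the forward vector can be derived as below. The axis convention (yaw about the Y axis, pitch about the X axis, forward = −Z at zero rotation) is an assumption; the actual system may use a different convention:

```cpp
#include <cmath>

// Derive the camera's forward vector from yaw and pitch angles (radians).
struct Vec3 { float x, y, z; };

Vec3 forwardFromYawPitch(float yaw, float pitch) {
    Vec3 f;
    f.x = -std::sin(yaw) * std::cos(pitch);  // turn left/right (yaw)
    f.y =  std::sin(pitch);                  // look up/down (pitch)
    f.z = -std::cos(yaw) * std::cos(pitch);  // -Z is forward at rest
    return f;
}
```

Each frame, the mouse movement deltas scaled by the mouse sensitivity (Table 5.4) would be accumulated into the yaw and pitch angles before recomputing this vector.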
4.3.5 Shutdown
The purpose of the shutdown procedure is to free the memory allocated while the
system is running. It is executed when the user exits the system, and involves
unloading the quadtree matrices for each terrain tile, the elevation data for
each tile, the imagery data (texture map) and the application's configuration.
4.4 Summary
The implementation can be broken down into two important steps: pre-processing
and run-time processing. The pre-processing step focuses on splitting the DEM
data into patches for efficient data retrieval, while the run-time processing
performs the real-time visualization and navigation tasks that need to be
maintained continuously.
In pre-processing, the original DEM data was converted into TXT files in order
to extract the general information and elevation profiles. Then, based on the
TXT files, the TILE files were generated.
In run-time processing, an object-oriented approach was exploited in this
research to facilitate the integration of various methods into the system. The
classes created were categorized into two groups: main and base classes. Four
major processes were implemented: initialization, rendering, interaction and
the shutdown procedure. The combination of all these processes is crucial for
producing an effective real-time terrain visualization and navigation system.
CHAPTER 5
RESULTS, ANALYSIS AND DISCUSSIONS
5.1 Introduction
In this chapter, the results from the implementation of the prototype system
are presented. Three different sizes of terrain dataset were used to assess the
efficiency of the proposed algorithm and its effect on the performance of the
system. The terrain size is determined by the number of vertices of the
original DEM dataset in both the vertical and horizontal directions.
The discussions are then divided into two parts: (i) evaluation of the
prototype system and (ii) comparison with previous techniques. For the first
discussion, the system was evaluated using two different numbers of vertices
per tile to represent the terrain. For the second discussion, the comparison
was made among four techniques: brute force, triangle-based LOD, Pajarola's
algorithm and the proposed algorithm. Several criteria were taken into
consideration, namely the time for partitioning the data, the file size of the
converted data, memory usage, loading time, frame rate, triangle count and
geometric throughput.
5.2 System Specifications
The prototype system was developed on a specific hardware and software
configuration. The detailed hardware and software specifications are listed in
Table 5.1 and Table 5.2 respectively.
Table 5.1: Hardware specifications

  Hardware            Specification
  Motherboard         Gigabyte GA-74400-L1 (AGP 8x)
  Processor           AMD Athlon XP 1800+ (1.53 GHz)
  Main memory (RAM)   1 GB DDR400 Kingston
  Graphics card       Asus NVIDIA GeForce FX 5700 (256 MB, 128-bit, AGP 8x)
  Hard disk           40 GB Maxtor (7200 rpm)
Table 5.2: Software specifications

  Software                            Specification
  Operating system                    Microsoft Windows XP Professional (Service Pack 1)
  Development tools                   Microsoft Visual C++ 6.0
  Graphics library                    OpenGL 1.2
  DEM viewer and texture generation   Global Mapper v6.03
  Image editor                        Adobe Photoshop CS
5.3 Testing Data
Original DEM datasets were used in this research to test and evaluate the
performance of the proposed algorithm as well as the prototype real-time
terrain visualization and navigation system. All of them are 7.5-minute DEMs
(Arizona area). Each dataset corresponds to a USGS 1:24,000-scale topographic
quadrangle map, is represented in the Universal Transverse Mercator (UTM)
coordinate system and has a grid or data spacing of 10 meters in both
directions. The size of a terrain dataset is determined by its total number of
vertices. According to the research carried out by Duchaineau et al. (1997),
Pajarola (1998), Röttger et al. (1998) and Lindstrom and Pascucci (2001) in
developing terrain rendering algorithms, one million vertices is sufficient to
be classified as a large dataset. To facilitate the explanations and
discussions in the next sections, the testing data were categorized into three
types: SMALL, MODERATE and LARGE. Table 5.3 shows the details of the testing
data for this research.
Table 5.3: Testing data

  Feature                           SMALL         MODERATE      LARGE
  Number of vertices                1167 x 1393   3992 x 4032   10115 x 11082
  Minimum elevation value (meter)   461.40        947.00        192.00
  Maximum elevation value (meter)   1122.80       2722.00       1467.00
  Texture data                      (texture images, not reproduced here)
5.4 Screenshots from the Prototype System
The prototype system was run with certain settings; Table 5.4 lists the
environment settings. The first two parameters define the desired window. The
movement speed and mouse sensitivity were used for navigation purposes. The
field of view (FOV), near clipping plane and far clipping plane constitute the
camera information. As listed in Table 5.4, three different FOVs and far
clipping planes were needed to test the impact of those parameters on the
system performance. The remaining parameters are the constant values used for
determining the level of detail for specific terrain tiles.
Table 5.4: Environment settings for testing the proposed algorithm

  Parameter                            Description
  Viewport resolution (column x row)   1024 x 768
  Bits per pixel                       16
  Movement speed                       5.0
  Mouse sensitivity                    1.0
  Near clipping plane                  0
  Far clipping plane                   500, 750, 1000
  Field of view (FOV)                  30°, 45°, 60°
  Detail level                         10.0
  Minimum resolution                   8.0
The following figures show some screenshots obtained from the implementation of
the proposed technique. The outputs comprise wireframe and textured models.
Figure 5.1 Small dataset: a) wireframe, b) textured
Figure 5.2 Moderate dataset: a) wireframe, b) textured
Figure 5.3 Large dataset: a) wireframe, b) textured
5.5 Evaluations of the System
Each type of DEM dataset was partitioned into TILE files with two different
tile sizes (numbers of vertices per tile): 129 x 129 and 257 x 257. This was
done in order to evaluate how the number of vertices per tile affects the
measurement criteria mentioned earlier. The evaluations were made on both the
pre-processing and the run-time processing phases, as described in the next
subsections.
5.5.1 Pre-Processing
Two criteria were considered in this phase: the time for converting DEM data
into tiles and the file size of each data format. Table 5.5 and Table 5.6 show
the results for these two criteria based on the types of data mentioned
earlier and the number of generated tiles respectively.
Table 5.5: Time required for converting DEM data and the corresponding file
size for 129 x 129 and 257 x 257 vertices per tile

  Type of    File     Time Required (seconds)     File Size (MB)
  Data       Format   129 x 129    257 x 257      129 x 129   257 x 257
  SMALL      DEM      -            -              10.1        10.1
             TXT      9.672        9.671          44.8        44.8
             TILE     194.171      62.016         7.1         7.4
  MODERATE   DEM      -            -              93.5        93.5
             TXT      50.063       49.734         205.0       205.0
             TILE     10197.969    2632.375       72.8        75.2
  LARGE      DEM      -            -              657.0       657.0
             TXT      711.172      710.594        3342.8      3342.8
             TILE     704280.112   47160.645      453.3       515.4
Table 5.6: Number of generated tiles

  Type of Data   129 x 129   257 x 257
  SMALL          110         30
  MODERATE       1024        256
  LARGE          6960        1760
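The tile counts in Table 5.6 follow directly from the terrain dimensions in Table 5.3: with adjacent tiles sharing a one-vertex border (each tile holding (2^n + 1) vertices per side), each axis yields ceil((V − 1) / (tileSize − 1)) tiles. A minimal sketch, with function names chosen for illustration:

```cpp
// Number of tiles along one axis when adjacent tiles overlap by one
// vertex row/column: each tile covers (tileSize - 1) new vertices.
// Equivalent to ceil((vertices - 1) / (tileSize - 1)) in integer math.
int tilesAlongAxis(int vertices, int tileSize) {
    return (vertices - 2) / (tileSize - 1) + 1;
}

// Total tiles for a terrain of vv x vh vertices.
int totalTiles(int vv, int vh, int tileSize) {
    return tilesAlongAxis(vv, tileSize) * tilesAlongAxis(vh, tileSize);
}
```

For example, the SMALL dataset (1167 x 1393 vertices) gives 10 x 11 = 110 tiles at 129 x 129 and 5 x 6 = 30 tiles at 257 x 257, matching Table 5.6; the "dummy" border vertices padding the last partial tile per axis are written during conversion.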
Figure 5.4 Total time for converting DEM datasets to TILE files (bar chart of
conversion time in seconds for the 129 x 129 and 257 x 257 tile sizes across
the three datasets; values as in Table 5.5)
According to the results, the time required for converting a DEM dataset into
tile files depends on the terrain resolution (number of vertices): the higher
the resolution, the longer the conversion takes. This can be seen from the
datasets above; for example, generating the 129² TILE files for the LARGE
dataset took around 8 days. Based on Figure 5.4, it can be concluded that the
time required for converting a DEM dataset to TILE files grows far faster than
linearly with the terrain resolution.
Less time is needed to create the TXT files than to generate the TILE files.
This is because creating the TXT files involves only extracting the general
information and elevation profiles, while generating the TILE files involves
many steps and tests before writing them to hard disk (storage).
Besides that, the number of vertices per tile (tile size) also affects the
conversion time. As indicated in Table 5.5, the time required to generate the
129² TILE files is three to four times longer than the creation of the 257²
TILE files (and nearly fifteen times longer for the LARGE dataset). This is
because many more testing procedures, as well as read and write operations,
need to be performed for the smaller tile size.
However, the evaluation based on the file size of each data format shows the
opposite trend. Although less time is required to generate the TXT files, they
consume a lot of storage: between two and five times more than the original
DEM data. This is due to the creation of additional non-existing data to pad
each tile to (2^n + 1) x (2^n + 1) vertices, plus extra indices for the
elevation profiles. In contrast, the total file size of the generated tiles is
smaller than the file size of the DEM data; it is reduced by around 15 to 30
percent compared to the original data, because no "dummy" tiles were created
for the final output.
5.5.2 Run-time Processing
For this phase, a simple navigation path consisting of 1000 frames was created
in the scene for a fair evaluation of the system. This path includes locations
that are close to and far from the terrain surface, as well as flat and rough
regions. Five criteria were taken into account in evaluating the effectiveness
of the proposed algorithm in real time.
5.5.2.1 Memory Usage
This criterion refers to the amount of main memory consumed while the prototype
system is running. This includes the variables used to store the elevation data
and the quadtree matrices of the terrain tiles. Memory usage was obtained from
the Windows Task Manager, which shows the currently running programs with their
memory usage. Figure 5.5 shows an example of the memory usage of the prototype
system (TileLOD.exe). The results of memory usage for all types of data with
the two different tile sizes are shown in Figure 5.6.
Figure 5.5 Memory usage for the proposed system
Figure 5.6 The results of memory usage

  Tile size   SMALL      MODERATE   LARGE
  129 x 129   9237 KB    9213 KB    9240 KB
  257 x 257   12850 KB   12581 KB   12881 KB
Based on the results above, there is not much difference in the amount of
memory used among the different types of data when the same tile size is used.
This is because only 5 x 5 tiles are loaded at once, which means the total
number of vertices to be loaded is fixed while the system is running.
However, the difference between tile sizes is obvious. According to the
results in Figure 5.6, more memory is used for loading the 257² TILE files
(approximately 12 MB) than the 129² TILE files (approximately 9 MB). This is
due to the larger number of vertices that need to be loaded into the system.
5.5.2.2 Loading Time
This refers to the time needed to load the terrain tiles into the system for
the first time. It was obtained using C functions that capture the time before
and after the loading process. Figure 5.7 presents the results.
Figure 5.7 The results of loading time (129 x 129 tiles: 0.028 to 0.041
seconds; 257 x 257 tiles: 0.147 to 0.171 seconds across the three datasets)
As the results show, there is no big difference in the time needed to load
tiles among the different types of data when the same tile size is used. Like
memory usage, the loading time depends on the number of vertices involved.
However, the loading time can clearly be distinguished by the tile size used
for each dataset: the larger the tile size, the longer it takes to load and
store the elevation data in the corresponding variables.
5.5.2.3 Frame Rate
This parameter is used to evaluate the overall performance of the system based
on rendering speed. The frame rate was obtained by calculating the number of
frames per second (fps). The equation is:

  FPS = FramesElapsed / (NewTime − PreviousTime)        (5.1)
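Equation 5.1 translates directly into code; the function name is an illustrative assumption:

```cpp
// Frame-rate counter (Equation 5.1): frames elapsed divided by the
// time interval (in seconds) over which they were counted.
double framesPerSecond(int framesElapsed, double previousTime, double newTime) {
    return framesElapsed / (newTime - previousTime);
}
```

For instance, if the 1000-frame navigation path took 12.5 seconds, the average would be 80 fps.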
The results are shown in Figure 5.8 and Figure 5.9, based on three different
fields of view (FOV) and view frustum lengths.
Figure 5.8 The results of average frame rate based on different FOVs (30°, 45°
and 60°) for each dataset and tile size; the averages range from 41.682 to
89.426 fps
Figure 5.9 The results of average frame rate based on different lengths of
view frustum (500, 750 and 1000) for each dataset and tile size; the averages
range from 39.647 to 87.165 fps
The minimum and maximum frame rates obtained from the above tests across all
datasets are 33.267 fps and 119.919 fps respectively. In general, the
prototype system has successfully surpassed the interactive rate (more than
20 fps), which means it qualifies as a 'real-time' application (Lindstrom et
al., 1996; Duchaineau et al., 1998; Pajarola, 1998; Hill, 2002).
According to the experiments, adjusting the FOV and the length of the view
frustum influences the frame rate: larger values for both parameters decrease
the number of frames per second, thus reducing the rendering speed. Another
element that needs to be considered is how the tile size affects the frame
rate. Based on the results, a higher fps is obtained when a smaller tile size
is adopted, owing to the smaller number of vertices that need to be processed
per frame.
5.5.2.4 Triangle Count
This parameter is the number of triangles generated per frame. It indicates
how much detail can be represented in a frame. Figure 5.10 and Figure 5.11
illustrate the results.
Figure 5.10 The results of average triangle count based on different FOVs
(30°, 45° and 60°); the averages range from 14479 to 28133 triangles per frame
Figure 5.11 The results of average triangle count based on different lengths
of view frustum (500, 750 and 1000); the averages range from 14748 to 26865
triangles per frame
Based on the results, the 257² TILE data generated more triangles than the
smaller tiles. This is because more vertices are involved in forming the
triangle meshes that represent the terrain surface, especially in rough and
curvy areas. Besides that, the number of triangles increases when a larger FOV
or view frustum length is used; more terrain tiles need to be covered and
rendered, so generating more triangles per frame is inevitable.
In fact, there is a strong relationship between the triangle count and the
frame rate, as shown in Figure 5.12. The number of triangles per frame is
inversely related to the number of frames per second: if the triangle count is
high, a low frame rate is obtained, and vice versa.
Figure 5.12 The results of frame rate versus triangle count (for each dataset
and tile size, a higher triangle count per frame coincides with a lower frame
rate)
5.5.2.5 Geometric Throughput
Geometric throughput refers to the number of triangles that can be rendered
per second. It gives an indication of how close the system comes to the
theoretical geometric throughput of the graphics card. It is obtained by
applying the following formula:

  Triangles Per Second = Triangles Per Frame × FPS        (5.2)
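As a worked example of Equation 5.2, pairing the SMALL dataset averages reported earlier (15933 triangles per frame, 75.441 fps) is illustrative; the pairing of these two particular averages is an assumption:

```cpp
// Geometric throughput (Equation 5.2): triangles rendered per second.
double geometricThroughput(double trianglesPerFrame, double fps) {
    return trianglesPerFrame * fps;
}
```

This gives 15933 × 75.441 ≈ 1.20 million triangles per second, within the 1.0 to 1.3 million range reported below.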
The results of geometric throughput are summarized in Figure 5.13 and Figure
5.14. According to the results, the number of polygons generated is between
1.0 and 1.3 million triangles per second. Therefore, around 94 percent of the
triangles per second have been eliminated relative to the maximum geometric
throughput for AGP 8x. In addition, comparing these results with the maximum
number of triangles that can be transferred to the graphics card, as shown in
Table 1.1 (Section 1.2.2), indicates that all types of graphics card can be
adopted. Thus, the bandwidth bottleneck can be overcome.
Figure 5.13 The results of average geometric throughput based on different
FOVs; the averages range from about 1.17 to 1.39 million triangles per second
Figure 5.14 The results of average geometric throughput based on different
lengths of view frustum; the averages range from about 1.07 to 1.34 million
triangles per second
5.6 Comparison with Previous Techniques
The proposed algorithm was compared with previous techniques: brute force,
triangle-based LOD and Pajarola's algorithm. The brute force technique is the
conventional way of rendering terrain; it provides the highest amount of
detail by rendering the full-resolution mesh. The triangle-based LOD technique
used for comparison is based on Röttger's algorithm (Röttger et al., 1998), as
described in Section 2.6.2.2 and Section 3.5. The last technique is a
combination of a tile-based data structure and a triangle-based LOD algorithm
(restricted quadtree) developed for ViRGIS (Pajarola, 1998).
Four evaluation criteria were used to measure the performance of each
technique: memory usage, loading time, frame rate and triangle count. For the
first two criteria, all datasets were used. The remaining criteria used only
the SMALL dataset, because similar results would be obtained with the other
datasets. The predefined navigation path was used to fairly assess the
performance over 1000 consecutive frames. For the proposed algorithm and
Pajarola's algorithm, 5 x 5 tiles with 129 x 129 vertices per tile were used.
The environment settings for running all of the techniques are presented in
Table 5.7 below:
Table 5.7: Environment settings for comparison purposes

  Parameter                            Description
  Viewport resolution (column x row)   1024 x 768
  Movement speed                       5.0
  Mouse sensitivity                    1.0
  Near clipping plane                  0
  Far clipping plane                   750
  Field of view (FOV)                  45°

5.6.1 Memory Usage
According to the results in Figure 5.15, the triangle-based LOD technique
consumed the highest amount of memory of all the techniques. This is due to
the data redundancy involved in implementing it: all the vertices and the
corresponding quadtree matrices need to be stored in main memory at once. If a
larger terrain were employed with this technique, insufficient memory might
occur, where the amount of memory required to store the data exceeds the
memory available in the system. Furthermore, care needs to be taken when
allocating main memory for the system, because this memory is also shared with
other applications running on the operating system.
The brute force technique has the second highest memory usage. Unlike the
triangle-based LOD technique, it loads only the data of the original terrain
size, consuming less than half of the triangle-based LOD technique's memory.
In contrast, Pajarola's algorithm and the proposed algorithm show excellent
memory management for the system, because the amount of loaded data is smaller
than for the other techniques.
Figure 5.15 Comparison based on memory usage

  Technique              SMALL (KB)   MODERATE (KB)   LARGE (KB)
  Proposed Algorithm     9237         9213            9250
  Pajarola's Algorithm   9369         9297            9344
  Triangle-based LOD     17292        54296           542808
  Brute Force            8044         50893           220052
5.6.2 Loading Time
Comparing the techniques in terms of loading time, the results in Figure 5.16
show that the brute force and triangle-based LOD techniques spend a lot of
time loading data, because they need to store the complete large dataset in
memory at once.
On the other hand, there is no big difference in loading time between
Pajarola's algorithm and the proposed algorithm. Both techniques take little
time to load parts of the terrain (tiles), because only 25 tiles are loaded
for each type of data; for the LARGE dataset this involves 416,025 vertices
(25 x 129²), compared to 112,094,430 vertices (10115 x 11082) for the brute
force technique and 268,468,225 vertices (16385 x 16385) for the
triangle-based LOD technique.
Figure 5.16 Comparison based on loading time

  Technique              SMALL (s)   MODERATE (s)   LARGE (s)
  Proposed Algorithm     0.028       0.041          0.031
  Pajarola's Algorithm   0.042       0.048          0.05
  Triangle-based LOD     1.797       17.75          153.793
  Brute Force            0.109       12.765         96.218
5.6.3 Frame Rate
The drawback of the brute force technique can clearly be observed in Figure
5.17. The average frame rate achieved is lower than 1 fps, which means it is
not suitable for a real-time application. It took several seconds to render a
single frame, making navigation jerky and slowing the system down, because too
many triangles must be sent to the graphics card, which has limited
capability.
The triangle-based LOD technique also performs poorly when dealing with large
terrain data. Although it improves rendering speed compared to the brute force
technique, it does not surpass the interactive rate, achieving an average
frame rate of only 15.699 fps. This is because the polygon simplification in
this technique takes into account all the vertices and triangles, whether they
are inside or outside the view frustum; as a result, much processing is spent
on unneeded data.
Pajarola's algorithm surpassed the real-time rate but could not beat the
results obtained with the proposed algorithm. The number of vertices to be
processed and the number of triangles to be rendered determine these results:
Pajarola's algorithm simplified and rendered triangles for all 5 x 5 tiles,
whereas the proposed algorithm processed the data only for the visible loaded
tiles. In this respect, the proposed algorithm showed the best performance of
all the techniques.
Figure 5.17 Comparison based on frame rate (frames per second over the 1000
frames of the navigation path for the proposed algorithm, Pajarola's
algorithm, triangle-based LOD and brute force)
5.6.4 Triangle Count
The brute force technique generated the highest number of triangles, which
conforms to the formula 2 x (Vv − 1) x (Vh − 1), where Vv is the number of
vertices in the vertical direction and Vh is the number of vertices in the
horizontal direction. As mentioned earlier, this technique generates the
full-resolution mesh to represent the terrain surface.
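Applying this formula to the SMALL dataset (1167 x 1393 vertices, Table 5.3) reproduces the brute force figure shown in Figure 5.18:

```cpp
// Brute force triangle count: a regular grid of vv x vh vertices has
// (vv - 1) x (vh - 1) quads, each split into two triangles.
long long bruteForceTriangles(long long vv, long long vh) {
    return 2 * (vv - 1) * (vh - 1);
}
```

Here bruteForceTriangles(1167, 1393) yields 3,246,144 triangles per frame, which is exactly the brute force value in Figure 5.18.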
Based on Figure 5.18, the triangle-based LOD technique successfully eliminated
approximately 97.62 percent of the triangles generated by the brute force
technique. This is because more triangles are assigned only to the rough and
close regions, rather than to the far and flat areas. However, this number of
polygons is still large and can reduce the rendering speed, because triangles
are rendered every frame whether they are inside or outside the view frustum.
The situation is similar for Pajarola's algorithm: although its average number
of triangles rendered per frame is lower than with the triangle-based LOD
technique, out-of-sight triangles are still rendered.
In contrast, the proposed algorithm generated and rendered triangles only for
the visible tiles. Around 99.51 percent of the triangles were eliminated
compared to the brute force technique; the unseen terrain tiles were removed
using the view frustum culling method, which is part of the proposed
algorithm.
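The reduction percentages quoted above follow directly from the average triangle counts in Figure 5.18 (brute force 3246144, triangle-based LOD 77145, proposed algorithm 15933 triangles per frame):

```cpp
// Percentage of triangles eliminated relative to the brute force count.
double reductionPercent(double technique, double bruteForce) {
    return (1.0 - technique / bruteForce) * 100.0;
}
```

Evaluating reductionPercent(77145, 3246144) gives about 97.62 and reductionPercent(15933, 3246144) gives about 99.51, matching the figures in the text.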
Figure 5.18 Comparison based on triangle count

  Technique              Average Triangles Per Frame
  Brute Force            3246144
  Triangle-based LOD     77145
  Pajarola's Algorithm   26037
  Proposed Algorithm     15933
5.7 Summary
In general, the number of loaded tiles and the tile size (number of vertices
per tile) influence the performance of the prototype system. The right
decision needs to be made in determining a tile size that can handle large
terrain data: it cannot be too small or too large. If a larger number of
vertices per tile is used, more data needs to be processed, and this slows the
system down. Besides that, there is a strong relationship between the frame
rate and the triangle count: more triangles generated per frame decrease the
frame rate (rendering speed) of the system, and vice versa.
The results of the prototype system show that the proposed algorithm is better
than the previous techniques in many aspects. Dynamic memory management was
achieved by loading only parts of the tiles instead of the entire dataset.
View frustum culling and a view-dependent LOD technique were adopted to
optimize the rendering speed of the system, as can be seen in the results for
frame rate, triangle count and geometric throughput. The proposed system
achieved the lowest memory usage, loading time and triangle count, as well as
the highest frame rate, when compared with the three other techniques: brute
force, triangle-based LOD and Pajarola's algorithm.
There are two disadvantages of this algorithm. The first is the storage
required for creating the TXT files: although these files are only used
temporarily, they consume more than three times the original DEM's file size.
The second is that partitioning the USGS DEM data into TILE files during
pre-processing is time-consuming, especially when converting very large data
of more than 500 MB. However, this process is done only once for each dataset.
CHAPTER 6
CONCLUSION
6.1 Conclusion
In this research, a set of techniques has been developed to organize and
visualize large-scale terrain within the capability of current hardware
technology. The proposed algorithm is an extended version of the existing
triangle-based LOD technique, intended to overcome the problems that the
triangle-based LOD technique cannot handle, especially the memory constraint
and the limit on the geometric throughput that can be transferred to the
graphics card. The main contributions of this research are:
(i) Developing a simple and efficient data structure based on an overlapping tile approach.
(ii) Developing an algorithm that can manage memory usage and reduce loading time in real time.
(iii) Developing an effective algorithm to cull and reduce the amount of terrain geometry based on the view frustum.
(iv) Integrating the data structure and algorithms with the present triangle-based LOD technique in the implementation of the prototype system.
For the first contribution, the original terrain data was converted into tiles
with a simple indexing scheme based on the filename (OutputN.tile). This was
done in the pre-processing phase. According to the results produced, the total
size of the generated tile files is smaller than the original DEM data, thus
reducing storage utilization. Two drawbacks were identified in this approach.
Firstly, the generated TXT files consumed three times more storage than the
original terrain's file size; however, those files were used temporarily and
deleted after the TILE files were completed. Secondly, the partitioning
process took a long time, especially for a very large terrain. Therefore, a
tradeoff needs to be made in order to gain high performance for real-time
visualization and navigation.
For the second contribution, loading a few tiles (5 x 5) into the system
instead of all tiles is a better approach for addressing the memory constraint
and the time required to load terrain data. Loading and unloading tiles based
on the camera position is adopted to reduce the memory usage of the system in
real time. At the same time, straightforward tile searching and file retrieval
were implemented to avoid long waits during the loading process.
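A minimal sketch of this dynamic loading scheme follows; the world-to-tile mapping, the structure and function names, and the parameter tileWorldSize (the width of one tile in world units) are assumptions for illustration:

```cpp
#include <cmath>

// Sketch of selecting the 5 x 5 window of tiles around the camera.
// Resident tiles that fall outside the window can be unloaded, and
// tiles inside it that are not yet resident are loaded from their
// TILE files.
struct TileWindow { int minCol, maxCol, minRow, maxRow; };

TileWindow windowAround(float camX, float camZ, float tileWorldSize) {
    int col = static_cast<int>(std::floor(camX / tileWorldSize));
    int row = static_cast<int>(std::floor(camZ / tileWorldSize));
    TileWindow w = { col - 2, col + 2, row - 2, row + 2 };
    return w;
}
```

As the camera crosses a tile border, the window shifts by one tile: a single row or column of tiles is loaded while the opposite one is unloaded, keeping memory usage fixed.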
The polygon reduction at tile level using the view frustum culling technique
is the third contribution of this research. This culling technique
successfully reduced the number of triangles to be processed and rendered per
frame and per second, as can be clearly observed from the triangle count
results for the proposed technique (with culling) versus Pajarola's algorithm
(without culling). Besides that, the bandwidth bottleneck and the limit on the
geometric throughput that can be sent to the graphics card were effectively
tackled by keeping the number of rendered triangles per second below the
maximum amount of data that the memory bus can handle.
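A hedged sketch of tile-level view frustum culling is given below: each tile's bounding box is tested against the six frustum planes, and tiles entirely outside any plane are skipped. The plane representation (a·x + b·y + c·z + d >= 0 for points inside, normals pointing into the frustum) is a common convention, and the structure names are assumptions:

```cpp
// Tile-level view frustum culling sketch. A tile is culled when all
// eight corners of its axis-aligned bounding box lie outside at least
// one frustum plane.
struct Plane { float a, b, c, d; };   // a*x + b*y + c*z + d >= 0 inside

bool boxOutsidePlane(const Plane& p, float minX, float minY, float minZ,
                     float maxX, float maxY, float maxZ) {
    for (int i = 0; i < 8; ++i) {
        float x = (i & 1) ? maxX : minX;
        float y = (i & 2) ? maxY : minY;
        float z = (i & 4) ? maxZ : minZ;
        if (p.a * x + p.b * y + p.c * z + p.d >= 0.0f)
            return false;             // at least one corner is inside
    }
    return true;                      // all corners outside this plane
}

bool tileVisible(const Plane frustum[6], float minX, float minY, float minZ,
                 float maxX, float maxY, float maxZ) {
    for (int i = 0; i < 6; ++i)
        if (boxOutsidePlane(frustum[i], minX, minY, minZ, maxX, maxY, maxZ))
            return false;             // completely outside one plane: cull
    return true;                      // potentially visible: render the tile
}
```

Only tiles for which tileVisible returns true are passed on to the triangle-based LOD stage, which is what keeps the rendered triangle count per second within the memory bus limit.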
Lastly, it was the right decision to integrate the tile data structure, the
dynamic loading scheme, view frustum culling and the triangle-based LOD
technique into the system. According to the analysis carried out, the proposed
technique produced better results than the other techniques, achieving high
frame rates for all the test datasets. Although only one triangle-based LOD
technique was used in this research, the proposed technique can in fact be
combined with any triangle-based LOD technique.
Although this research focuses on the GIS field, the use of the proposed
technique is not limited to that area. It can be applied in the following fields:
(i)
Virtual tourism
Terrain can be combined with other objects such as buildings, roads
and trees in the system. Virtual tourism needs an interactive system
that allows people or tourists to take a smooth virtual walkthrough of
the desired places.
(ii)
Military
Terrain can be applied in battlefield simulations and flight
simulators. These are useful for training new soldiers before they
face the real environment and situation.
(iii)
Games development
Terrain models are frequently used in outdoor-based games,
especially first-person and third-person games. The most important
requirement in developing such games is achieving a high frame rate
(frames per second). Given the encouraging results obtained in this
research, the proposed technique is well suited for games development.
(iv)
3D modeling software
Most conventional 3D modeling software tools, such as 3D Studio
Max, Maya and Softimage, have very poor support for terrain models.
The only way to represent terrain in them is with a very large number
of small triangles, as in the brute-force technique. Incorporating the
proposed technique into these tools would improve their performance.
6.2 Recommendations
In order to enhance the performance of real-time terrain visualization and
navigation, several suggestions can be made for future work. These include:
(i)
Applying occlusion culling
In this research, only view frustum culling is adopted. Although this
technique removes many triangles per frame, triangles that are hidden
behind the terrain surface are still rendered. An occlusion culling
technique could be added to the proposed system to remove even more
triangles from a scene.
(ii)
Integrating with an out-of-core algorithm
The technique developed in this research can be classified as an
in-core technique, meaning that all the terrain data resides in main
memory. In an out-of-core technique, only part of the data is kept in
main memory and the rest is left on secondary storage (hard disk).
This approach can therefore overcome the memory constraint when
dealing with massive terrain data. However, the technique depends on
the operating system (OS) being used: the developer needs to identify
the specific OS functions that allocate and activate the use of
virtual memory (hard disk) for the system.
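On POSIX systems, a typical OS facility for this is memory mapping (`mmap`), which hands paging between disk and main memory to the virtual memory system. The sketch below is illustrative only: the raw 32-bit elevation-file layout and the names used are assumptions for the example, not the prototype's actual data format.

```cpp
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map a raw file of 32-bit integer elevations into the address space.
// The operating system then pages data in from disk only as it is touched,
// so the whole terrain never has to be read into main memory at once.
struct MappedElevations {
    const int* data = nullptr;  // nullptr when the mapping failed
    size_t count = 0;           // number of elevation values in the file
};

MappedElevations mapElevations(const char* path) {
    MappedElevations m;
    int fd = open(path, O_RDONLY);
    if (fd < 0) return m;
    struct stat st;
    if (fstat(fd, &st) == 0 && st.st_size > 0) {
        void* p = mmap(nullptr, static_cast<size_t>(st.st_size),
                       PROT_READ, MAP_PRIVATE, fd, 0);
        if (p != MAP_FAILED) {
            m.data = static_cast<const int*>(p);
            m.count = static_cast<size_t>(st.st_size) / sizeof(int);
        }
    }
    close(fd);  // the established mapping remains valid after close
    return m;
}
```

With this approach the tile loader reads elevations through `data` as if the whole dataset were in memory, while physical memory holds only the pages of the tiles actually visited.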
(iii)
Using the graphics processing unit (GPU)
The GPU can be exploited instead of the central processing unit (CPU)
power used in this research. The GPU has been designed for fast
display of geometric primitives. By using it, the bandwidth bottleneck
(the memory bus limitation) in transferring geometric primitives to
the graphics card can be overcome, since the data is sent directly to
the GPU for processing and rendering. Currently, the capability of the
GPU still falls short of the CPU's performance. However, the peak
performance of GPUs has been increasing at a rate of 2.5 - 3.0 times a
year, much faster than Moore's law for CPUs. Thus, implementing a
real-time terrain system on the GPU should be well suited for the next
few years.
(iv)
Implementing parallel processing
The prototype system developed in this research can be extended to a
networked environment. By adopting client-server communication,
parallel processing can be performed to obtain a higher level of
system performance. A load balancing algorithm can be used to
distribute jobs efficiently based on priority and processing load.
REFERENCES
Ahn, H.K., Mamoulis, N. and Wong H.M. (2001). A Survey on Multidimensional
Access Methods. COMP630c - Spatial, Image and Multimedia Databases:
Technical Report UU-CS-2001-14.
Assarsson, U. (2001). Detail Culling for Robotic Products. ABB. http://www.abb.com/global/abbzh/abbzh251.nsf!OpenDatabase&db=/global/seitp/seitp161.nsf&v=17ec2&e=us&c=76133777536BC959C12569CE006C6EE3
Assarsson, U. and Möller, T. (2000). Optimized View Frustum Culling Algorithms
for Bounding Boxes. Journal of Graphics Tools. 5(1): 9-22.
Beckmann, N., Kriegel, H.P., Schneider, R. and Seeger, B. (1990). The R*-tree: an
efficient and robust access method for points and rectangles. Proceedings of
ACM SIGMOD International Conference on the Management of Data. May
23-25. New York, USA: ACM Press, 322-331.
Bentley, J.L. (1975). Multidimensional binary search trees used for associative
searching. Communications of the ACM. 18(9): 509-517.
Bittner, J., Wonka, P. and Wimmer, M. (2001). Visibility Preprocessing for Urban
Scene using Line Space Subdivision. Proceedings of Pacific Graphics 2001.
October 16-18. Tokyo, Japan, 276-284.
Blekken, P. and Lilleskog, T. (1997). A Comparison of Different Algorithms for
Terrain Rendering. Spring semester project. CS Dept., Norwegian U. of
Science and Technology. unpublished.
Bradley, D. (2003). Evaluation of Real-time Continuous Terrain Level of Detail
Algorithms. Carleton University: COM4905 Honours Project.
Burns, D. and Osfield, R. (2001). Introduction to the OpenSceneGraph. OSG
Community. http://www.openscenegraph.com
Burrough, P.A. and McDonnell, R.A. (1998). Principles of Geographical
Information Systems. New York: Oxford University Press Inc.
CGL (2004). Backface Culling. CS488/688: Introduction to Interactive Computer
Graphics. Computer Graphics Lab, University of Waterloo.
http://medialab.di.unipi.it/web/IUM/waterloo/node66.html
DeFloriani, L., Magillo, P. and Puppo, E. (2000). VARIANT: A System for Terrain
Modeling at Variable Resolution. GeoInformatica. 4(3), 287-315.
Duchaineau, M., Wolinsky, M., Sigeti, D.E., Miller, M.C., Aldrich, C. and
Mineev-Weinstein, M.B. (1997). ROAMing Terrain: Real-time optimally adapting
meshes. Proceedings of the 8th Conference on Visualization ’97. October 18-24.
Los Alamitos, CA, USA: IEEE Computer Society Press, 81-88.
Dunlop, R. (2001). Collision Detection, Part 1: Using Bounding Spheres. Microsoft
DirectX MVP. http://www.mvps.org/directx/articles/using_bounding_spheres.htm
Fagin, R., Nievergelt, J., Pippenger, N. and Strong, R. (1979). Extendible Hashing: A
Fast Access Method for Dynamic Files. ACM Transactions Database
Systems. 4(3): 315-344.
Faloutsos, C. (1988). Gray Codes for Partial Match and Range Queries. IEEE
Transactions on Software Engineering. 14(10): 1381-1393.
Faloutsos, C. and Roseman, S. (1989). Fractals for Secondary Key Retrieval.
Proceedings of the 8th ACM SIGACT-SIGMOD-SIGART Symposium on
Principles of Database Systems. March. New York, USA: ACM Press, 247-252.
Fritsch, D. (1996). Three-Dimensional Geographic Information Systems – Status and
Prospects. Proceedings of International Archives of Photogrammetry and
Remote Sensing. July 12-18. Vienna, Austria: ISPRS, 215-221.
Gadish, D. (2004). Information System for Management. CIS 500 Summer 2004:
Slide show.
Gaede, V. and Günther, O. (1998). Multidimensional Access Methods. ACM
Computing Surveys. 30(2): 170-231.
Gottschalk, S., Lin, M.C. and Manocha, D. (1996). OBBTree: A Hierarchical
Structure for Rapid Interference Detection. Proceedings of the 23rd Annual
Conference on Computer Graphics and Interactive Techniques. New York,
USA: ACM Press, 171-180.
Gribb, G. and Hartmann, K. (2001). Fast Extraction of Viewing Frustum Planes from
the World-View-Projection Matrix. unpublished.
Guttman, A. (1984). R-trees: A Dynamic Index Structure for Spatial Searching.
Proceedings of SIGMOD ‘84. Jun 18-21. Boston, MA: ACM Press, 47-57.
Haines, E. (2001). 3D Algorithmic Optimizations. California, USA: SIGGRAPH
2001 Course.
Hawkins, K. and Astle, D. (2002). OpenGL Game Programming. Ohio, USA:
Premier Press.
He, Y. (2000). Real-time Visualization of Dynamic Terrain for Ground Vehicle
Simulation. University of Iowa: Ph.D. Thesis.
Hill, D. (2002). An Efficient, Hardware-Accelerated, Level-of-Detail Rendering
Technique for Large Terrains. University of Toronto: M.Sc. Thesis.
Hjaltason, G.R. and Samet, H. (1995). Ranking in Spatial Databases. Proceedings of
the 4th International Symposium on Advances in Spatial Databases. August 6-9.
London, UK: Springer-Verlag, 83-95.
Hoff, K. (1996). A Fast Method for Culling of Oriented-Bounding Boxes (OBBs)
Against a Perspective Viewing Frustum in Large Walkthrough Models.
http://www.cs.unc.edu/hoff/research/index.html
Hoppe, H. (1998). Smooth View Dependant Level-of-Detail Control and its
Application to Terrain Rendering. Proceedings of IEEE Visualization 1998.
July 29-31. Los Alamitos, CA, USA: IEEE Computer Society Press, 35-42.
Hoppe, H. (1999). Optimization of mesh locality for transparent vertex caching.
Proceedings of the 26th Annual Conference on Computer Graphics and
Interactive Techniques. New York, USA: ACM Press, 269-276.
Hubbard, P.M. (1995) Collision Detection for Interactive Graphics Applications.
IEEE Transactions on Visualization and Computer Graphics. 1(3): 218-230.
Jagadish, H.V. (1990). Linear Clustering of Objects with Multiple Attributes.
Proceedings of the ACM SIGMOD International Conference on Management
of Data. May 23-26. New York, USA: ACM Press, 332-342.
Kim, S.H. and Wohn, K.Y. (2003). TERRAN: out-of-core TErrain Rendering for
ReAl time Navigation. Proceedings of EUROGRAPHICS 2003. September 1-6.
New York, USA: ACM Press.
Klosowski, J.T., Held, M., Mitchell, S.B., Sowizral, H. and Zikan, K. (1998).
Efficient Collision Detection Using Bounding Volume Hierarchies of k-DOPs.
IEEE Transactions on Visualization and Computer Graphics. 4(1): 21-36.
Kofler, M. (1998). R-trees for Visualizing and Organizing Large 3D GIS Databases.
Technischen Universität Graz: Ph.D. Thesis.
Kumler, M.P. (1994). An Intensive Comparison of Triangulated Irregular Networks
(TINs) and Digital Elevation Models (DEMs). Cartographica Monograph 45.
31(2): 1-99.
Langbein, M., Scheuermann, G. and Tricoche, X. (2003). An Efficient Point
Location Method for Visualization in Large Unstructured Grids. Proceedings
of 8th International Fall Workshop on Vision, Modeling and Visualization
2003. November 19-21. Munich, Germany: Aka GmbH, 27-35.
Larson, P.A. (1980). Linear Hashing with Partial Expansions. Proceedings of the 6th
International Conference on Very Large Data Bases. October 1-3.
Washington, DC, USA: IEEE Computer Society, 224-232.
Laurila, P. (2004). Geometry Culling in 3D Engines. GameDev.net, LLC.
http://www.gamedev.net/reference/programming/features/culling/default.asp
Lindstrom, P. and Pascucci, V. (2001). Visualization of large terrains made easy.
Proceedings of IEEE Visualization 2001. October 24-26. Washington, DC,
USA: IEEE Computer Society Press, 363–371.
Lindstrom, P., Koller, D., Ribarsky, W., Hodges, L.F., Bosch, A.O. and Faust, N.
(1997). An Integrated Global GIS and Visual Simulation System. Georgia
Institute of Technology: Technical Report GIT-GVU-97-07.
Lindstrom, P., Koller, D., Ribarsky, W., Hodges, L.F., Turner, G.A. and Faust, N.
(1996). Real-time, continuous level of detail rendering of height fields.
Proceedings of the 23rd Annual Conference on Computer Graphics and
Interactive Techniques. New York, USA: ACM Press, 109-118.
Litwin, W. (1980). Linear Hashing: A New Tool for File and Table Addressing.
Proceedings of the 6th International Conference on Very Large Data Bases.
October 1-3. Washington, DC, USA: IEEE Computer Society, 212-223.
Luebke, D., Reddy, M., Cohen, J.D., Varshney, A., Watson, B. and Huebner, R.
(2002). Level of Detail For 3D Graphics. San Francisco: Morgan Kaufmann
Publishers.
Martens, R. (2003). Occlusion Culling for the Real-time Display of Complex 3D
Models. Limburgs Universitair Centrum, Belgium: M.Sc. Thesis.
Mokbel, M.F., Aref, W.G. and Kamel, I. (2002). Performance of Multi-Dimensional
Space-Filling Curves. Proceedings of the 10th ACM International Symposium
on Advances in Geographics Information Systems. November 8-9. New York,
USA: ACM Press, 149-154.
Morley, M. (2000). Frustum culling in OpenGL. http://www.markmorley.com
Mortensen, J. (2000). Real-time rendering of height fields using LOD and occlusion
culling. University College London, UK: M.Sc. Thesis.
Nievergelt, J., Hinterberger, H. and Sevcik, K. (1984). The Grid File: An Adaptable,
Symmetric Multikey File Structure. ACM Transactions Database Systems.
9(1): 38-71.
Noor Azam, M. S. and Ezreen Elena, A. (2003). 3D Virtual GIS Architecture.
Proceedings of Advanced Technology Congress 2003. May 20-21. Putrajaya,
Selangor: The Institute of Advanced Technology (ITMA).
Ögren, A. (2000). Continuous Level of Detail In Real-Time Terrain Rendering.
Umea University: M.Sc. Thesis.
Orenstein, J.A. and Merrett, T.H. (1984). A Class of Data Structures for Associative
Searching. Proceedings of the 3rd ACM SIGACT-SIGMOD-SIGART
Symposium on Principles of Database Systems. April 2-4. New York, USA:
ACM Press, 181-190.
Pajarola, R. (1998). Access to Large Scale Terrain and Image Databases in
Geoinformation Systems. Swiss Federal Institute of Technology (ETH)
Zurich: Ph.D. Thesis.
Peng, W., Petrovic, D. and Crawford, C. (2004). Handling Large Terrain Data in
GIS. Proceedings of the XXth ISPRS Congress – Commission IV. XXXV(B4):
281-286.
Picco, D. (2003). Frustum Culling. FLIPCODE.COM, INC.
http://www.flipcode.com/articles/article_frustumculling.shtml
Polack, T. (2003). Focus on 3D Terrain Programming. Ohio, USA: Premier Press.
Röttger, S., Heidrich, W., Slusallek, P. and Seidel, H.P. (1998). Real-Time
Generation of Continuous Levels of Detail for Height Fields. Proceedings of
WSCG ‘98. February 9-13. Plzen, Czech Republic: University of West
Bohemia, 315-322.
Samet, H. (1984). The Quadtree and Related Hierarchical Data Structure. ACM
Computing Survey. 16(2): 187-260.
Samet, H. (1990). The Design and Analysis of Spatial Data Structures. Reading,
MA: Addison-Wesley.
Shade, J.W. (2004). View-Dependent Image-Based Techniques for Fast Rendering of
Complex Environments. University of Washington: Ph.D. Thesis.
Shen, H.W. (2004). Spatial Data Structures and Culling Techniques. Slide Show CIS 781.
Sheng, J.M. and Liu Sheng, O.R. (1990). R-trees for large geographic information
systems in a multi-user environment. Proceedings of the 23rd Annual Hawaii
International Conference on System Sciences. January 2-5. Washington, DC,
USA: IEEE Computer Society Press, 10-17.
Stewart, A.J. (1997). Hierarchical Visibility in Terrains. Proceedings of the
Eurographics Workshop on Rendering Techniques ’97. June 16-18. London,
UK: Springer-Verlag, 217-228.
Tamminen, M. (1982). The Extendible Cell Method for Closest Point Problems. BIT
22. 27-41.
Ulrich, T. (2002). Rendering Massive Terrains using Chunked Level of Detail
Control. Course Notes of ACM SIGGRAPH 2002. July 21-26. Texas, USA:
ACM Press, Course 35.
United States Army Topographic Engineering Center (2005). Survey of Terrain
Visualization Software. Alexandria (U.S.): Technical Report.
USGS (1992). Standards for Digital Elevation Models. U.S. Department of the
Interior, United States Geological Survey, National Mapping Division, USA:
National Mapping Program Technical Instructions.
VTP (1999). Applying Ground Textures. Virtual Terrain Project.
http://www.vterrain.org/Textures/ground_textures.html
Woo, M., Neider, J., Davis, T. and Shreiner, D. (1999). OpenGL Programming
Guide. Canada: Addison-Wesley Longman, Inc.
Youbing, Z., Ji, Z., Jiaoyang, S. and Zhigeng, P. (2001). A Fast Algorithm for Large
Scale Terrain Walkthrough. Proceedings of the International Conference on
CAD & Graphics. August 20-22. Kunming, China.
Yu, H., Mehrotra, S., Ho, S.S., Gregory, T.C. and Allen, S.D. (1999). Integration of
SATURN System and VGIS. Proceedings of the 3rd Symposium on the
Federated Lab on Interactive and Advanced Display. February 2-4.
Aberdeen, MD: Army Research Lab, 59-63.
Zlatanova, S. (2000). 3D GIS for Urban Development. Graz University of
Technology: Ph.D. Thesis.
APPENDIX A

DEM Elements Logical Record Type A

Each entry below gives the data element, its type (FORTRAN notation), its position in the physical record (starting-ending byte), and its description.

1. File name (ALPHA, bytes 1-40): The authorized digital cell name followed by a comma, space, and the two-character State designator(s) separated by hyphens. Abbreviations for other countries, such as Canada and Mexico, shall not be represented in the DEM header.

Free format text (ALPHA, bytes 41-80): Free format descriptor field, contains useful information related to digital process such as digitizing instrument, photo codes, slot widths, etc.

Filler (bytes 81-109): Blank fill.

SE geographic corner (INTEGER*2, REAL*8, bytes 110-135): SE geographic quadrangle corner ordered as: x = Longitude = SDDDMMSS.SSSS; y = Latitude = SDDDMMSS.SSSS (neg sign (S) right justified, no leading zeroes, plus sign (S) implied).

Process code (ALPHA, byte 136): 1=Autocorrelation RESAMPLE Simple bilinear; 2=Manual profile GRIDEM Simple bilinear; 3=DLG/hypsography CTOG 8-direction linear; 4=Interpolation from photogrammetric system contours DCASS 4-direction linear; 5=DLG/hypsography LINETRACE, LT4X Complex linear; 6=DLG/hypsography CPS-3, ANUDEM, GRASS Complex polynomial; 7=Electronic imaging (non-photogrammetric), active or passive, sensor systems.

Filler (byte 137): Blank fill.

Sectional indicator (ALPHA, bytes 138-140): This code is specific to 30-minute DEM's. Identifies 1:100,000-scale sections.

2. Origin code (ALPHA, bytes 141-144): Free format Mapping Origin Code. Example: MAC, WMC, MCMC, RMMC, FS, BLM, CONT (contractor), XX (state postal code).

3. DEM level code (INTEGER*2, bytes 145-150): Code 1=DEM-1; 2=DEM-2; 3=DEM-3; 4=DEM-4.

4. Code defining elevation pattern (INTEGER*2, bytes 151-156): Code 1=regular; 2=random, reserved for future use.

5. Code defining ground planimetric reference system (INTEGER*2, bytes 157-162): Code 0=Geographic; 1=UTM; 2=State plane. Code 0 represents the geographic (latitude/longitude) system for 30-minute, 1-degree and Alaska DEM's. Code 1 represents the current use of the UTM coordinate system for 7.5-minute DEM's.

6. Code defining zone in ground planimetric reference system (INTEGER*2, bytes 163-168): Codes for State plane and UTM coordinate zones. Code is set to zero if element 5 is also set to zero, defining data as geographic.

7. Map projection parameters (REAL*8, bytes 169-528): Definition of parameters for various projections. All 15 fields of this element are set to zero and should be ignored when geographic, UTM, or State plane coordinates are coded in data element 5.

8. Code defining unit of measure for ground planimetric coordinates throughout the file (INTEGER*2, bytes 529-534): Code 0=radians; 1=feet; 2=meters; 3=arc-seconds. Normally set to code 2 for 7.5-minute DEM's. Always set to code 3 for 30-minute, 1-degree, and Alaska DEMs.

9. Code defining unit of measure for elevation coordinates throughout the file (INTEGER*2, bytes 535-540): Code 1=feet; 2=meters. Normally code 2, meters, for 7.5-minute, 30-minute, 1-degree, and Alaska DEM's.

10. Number (n) of sides in the polygon which defines the coverage of the DEM file (INTEGER*2, bytes 541-546): Set to n=4.

11. A 4,2 array containing the ground coordinates of the quadrangle boundary for the DEM (REAL*8, bytes 547-738): The coordinates of the quadrangle corners are ordered in a clockwise direction beginning with the southwest corner. The array is stored as pairs of eastings and northings.

12. A two-element array containing minimum and maximum elevations for the DEM (REAL*8, bytes 739-786): The values are in the unit of measure given by data element 9 in this record and are the algebraic result of the method outlined in data element 6, logical record B.

13. Counterclockwise angle (in radians) from the primary axis of ground planimetric reference to the primary axis of the DEM local reference system (REAL*8, bytes 787-810): Set to zero to align with the coordinate system specified in element 5.

14. Accuracy code for elevations (INTEGER*2, bytes 811-816): Code 0=unknown accuracy; 1=accuracy information is given in logical record type C.

15. A three-element array of DEM spatial resolution for x, y, z (REAL*4, bytes 817-852): Only integer values are permitted for the x and y resolutions. For all USGS DEMs except the 1-degree DEM, z resolutions of 1 decimal place for feet and 2 decimal places for meters are permitted. Some typical arrays are: 30, 30, 1 and 10, 10, .1 for 7.5-minute DEM; 2, 2, 1 and 2, 2, .1 for 30-minute DEM; 3, 3, 1 for 1-degree DEM; 2, 1, 1 and 2, 1, .1 for 7.5-minute Alaska DEM; 3, 2, 1 and 3, 2, .1 for 15-minute Alaska DEM.

16. A two-element array containing the number of rows and columns (m,n) of profiles in the DEM (INTEGER*2, bytes 853-864): When the row value m is set to 1 the n value describes the number of columns in the DEM file.

17. Largest primary contour interval (INTEGER*2, bytes 865-869): Present only if two or more primary intervals exist (level 2 DEM's only).

18. Source contour interval units (INTEGER*1, byte 870): Corresponds to the units of the map largest primary contour interval: 0=N.A.; 1=feet; 2=meters (level 2 DEM's only).

19. Smallest primary contour interval (INTEGER*2, bytes 871-875): Smallest or only primary contour interval (level 2 DEM's only).

20. Source contour interval units (INTEGER*1, byte 876): Corresponds to the units of the map smallest primary contour interval: 1=feet; 2=meters (level 2 DEM's only).

21. Data source date (INTEGER*2, bytes 877-880): "YYYY" 4 character year, e.g. 1975, 1997, 2001, etc. Synonymous with the original compilation date and/or the date of the photography.

22. Data inspection and revision date (INTEGER*2, bytes 881-884): "YYYY" 4 character year. Synonymous with the date of completion and/or the date of revision.

23. Inspection flag (ALPHA*1, byte 885): "I" indicates all processes of part 3, Quality Control, have been performed.

24. Data validation flag (INTEGER*1, byte 886): 0=no validation performed; 1=RMSE computed from test points (record C added), no quantitative test, no interactive DEM editing or review; 2=batch process water body edit and RMSE computed from test points; 3=review and edit, including water edit, no RMSE computed from test points; 4=level 1 DEM's reviewed and edited, includes water body editing, RMSE computed from test points; 5=level 2 and 3 DEM's reviewed and edited, includes water body editing and verification or vertical integration of planimetric categories (other than hypsography or hydrography if authorized), RMSE computed from test points.

25. Suspect and void area flag (INTEGER*1, bytes 887-888): 0=none; 1=suspect areas; 2=void areas; 3=suspect and void areas.

26. Vertical datum (INTEGER*1, bytes 889-890): 1=local mean sea level; 2=National Geodetic Vertical Datum 1929 (NGVD 29); 3=North American Vertical Datum 1988 (NAVD 88).

27. Horizontal datum (INTEGER*1, bytes 891-892): 1=North American Datum 1927 (NAD 27); 2=World Geodetic System 1972 (WGS 72); 3=WGS 84; 4=NAD 83; 5=Old Hawaii Datum; 6=Puerto Rico Datum.

28. Data edition (INTEGER*2, bytes 893-896): 01-99. Primarily a DMA specific field. (For USGS use, set to 01.)

29. Percent void (INTEGER*2, bytes 897-900): If element 25 indicates a void, this field (right justified) contains the percentage of nodes in the file set to void (-32,767).

30. Edge match flag (INTEGER, bytes 901-908): Edge match status flag, ordered West, North, East, and South.

31. Vertical datum shift (REAL*8, bytes 909-915): Value is in the form of SFFF.DD and is the average shift value for the four quadrangle corners obtained from program VERTCON. Always add this value to convert to NAVD 88.
APPENDIX B

DEM Elements Logical Record Type B

Each entry gives the data element, its type (FORTRAN notation), its starting-ending bytes in the physical record, and its description.

1. A two-element array containing the row and column identification number of the DEM profile contained in this record (INTEGER*2, bytes 1-12): The row and column numbers may range from 1 to m and 1 to n. The row number is normally set to 1. The column identification is the profile sequence number.

2. A two-element array containing the number (m,n) of elevations in the DEM profile (INTEGER*2, bytes 13-24): The first element in the field corresponds to the number of rows of nodes in this profile. The second element is set to 1, specifying 1 column per B record.

3. A two-element array containing the ground planimetric coordinates (xgp, ygp) of the first elevation in the profile (REAL*8, bytes 25-72).

4. Elevation of local datum for the profile (REAL*8, bytes 73-96): The values are in the units of measure given by data element 9, logical record type A.

5. A two-element array of minimum and maximum elevations for the profile (REAL*8, bytes 97-144): The values are in the units of measure given by data element 9 in logical record type A and are the algebraic result of the method outlined in data element 6 of this record.

6. An m,n array of elevations for the profile (INTEGER*4, 6 x (146 or 170) bytes; 146 = maximum for the first block, 170 = maximum for subsequent blocks): A maximum of six characters is allowed for each integer elevation value. A value in this array is multiplied by the z spatial resolution (data element 15, record type A) and added to the elevation of local datum for the profile (data element 4, record type B) to obtain the elevation for the point.
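The elevation computation described for data element 6 of record type B can be shown concretely. This is a small worked illustration of that formula; the function name and sample values are made up for the example.

```cpp
#include <vector>

// Convert the raw integer values of a type-B profile into elevations:
//   elevation = raw value * z spatial resolution (element 15, record A)
//             + elevation of local datum (element 4, record B).
std::vector<double> profileElevations(const std::vector<int>& raw,
                                      double zResolution,
                                      double localDatum) {
    std::vector<double> out;
    out.reserve(raw.size());
    for (int v : raw)
        out.push_back(v * zResolution + localDatum);
    return out;
}
```

For example, raw values {10, -5} with a z resolution of 1.0 meter and a local datum of 100.0 meters give elevations of 110.0 and 95.0 meters.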
INDEX

14-dops 30
18-dops 30
26-dops 30

A
AABB (Axis Aligned Bounding Box) 3, 25-8, 30, 54
AGP 7, 99, 113
Alaska DEMs 134
Application Programming Interface (API) 21
automation system 1
Average Frame Rate 109-110, 117
Average Geometric Throughput 113-114
Average Triangle Count 111, 119

B
backface culling 3, 21, 40, 127
bandwidth bottleneck 113, 122, 124
bilinear 133
bintree 38, 41, 49
blocks 32, 44-45, 138
  adjacent 44-45
  tiled 3, 18, 20, 53
bounding box 16, 27-28, 126
Bounding Sphere (BS) 3, 26, 29, 54, 67, 93, 127
bounding volumes 25-27, 30, 40, 93, 129

C
camera 24, 63, 71, 87, 90, 96
camera position 4, 23, 57, 89-91, 122
central processing unit (CPU) 7, 124-125
CFD (computational fluid dynamics) 13
child nodes 13, 16
child-pointer 16
classes 87, 97, 130
clipping plane 66, 101, 115
co-efficients 65
complex algorithms 20, 53
complex searching algorithm 58
computational fluid dynamics (CFD) 13
Computer-based information system 1
computer graphics 20, 32, 127-129
coordinates 5, 29, 32, 34-35, 58, 135
  ground 58, 75-76, 135
corner points 27-28
CPU (central processing unit) 7, 124-125
cracks 20, 39, 43, 45, 49, 70, 94
culling 10, 34, 36, 68, 122, 128
culling methods 3-4, 11, 20-22, 54
culling technique 9, 122, 131

D
data element 134-135, 138
data format 55, 58, 103, 105
data structure 3-4, 9, 11, 13, 18-19, 54, 57-58, 72, 74-76, 121, 130
Data Tiles 18
data types 74, 76
databases 1-2, 16, 57, 59, 71
datasets 4-5, 8, 15, 98, 100, 102-104, 108, 110, 114, 120
Decision Support System (DSS) 1
DEMs (Digital Elevation Models) 5, 129
detail culling 21-22
diamonds 38, 42
Digital Elevation Models (DEMs) 5, 129
Digital Terrain Elevation Data (DTED) 5
dimension 18
direction 21, 46, 60, 90
Discrete Orientation Polytope (DOP) 3, 26, 30, 54
distance 22, 44, 56, 67, 69, 93-94
distance calculation 28-29, 69
DSS (Decision Support System) 1
DTED (Digital Terrain Elevation Data) 5
Dynamic array 80, 82

E
edge 45, 47
Elements Logical Record Type 133, 138
elevation data 57-59, 61, 78, 83-84, 94, 96, 100, 106, 108
Elevation of local datum 138
elevation profiles 55, 57-59, 74, 78, 97, 105
elevation values 34, 58, 61, 63, 79, 86
elevations 61, 75, 135, 138
error metrics 40, 44, 46, 48, 53
ESS (Executive Support System) 1
Executive Support System (ESS) 1
existence status 60, 82-84
existing tiles 60-61, 82, 85
Expert System 1
extraction 58, 65, 78, 105

F
file pointers 74, 76, 80
file size 98, 103, 105
FOV 46, 101, 109-113, 115
fps 53, 109-110, 112, 117
frame rate 20, 37, 53, 89, 98, 109-110, 112, 114, 117-118, 120
frames 3, 5-6, 32, 41-42, 44, 46, 68, 89, 105, 109-112, 117-120, 122-124
frustum 5, 24-29, 46, 54, 56-57, 64, 93, 109-112, 114, 117, 119, 121
frustum culling 3, 21, 24-26, 40, 54-55, 71, 120, 122, 124, 130
frustum culling technique 64, 68, 122
frustum planes 30, 93
Furthest east/west easting value 76
Furthest south/north northing value 76

G
Geographic Resources Analysis Support System (GRASS) 2
geometric primitives 11, 26, 32, 35, 37, 54, 124
geometric throughput 7, 98, 113-114, 120-121
geometry 5, 7-8, 26
geometry of visible terrain tiles 56-57
Gigabytes 5, 7
GIS (Geographic Information System) 1-2
GIS applications 2, 5
GPU (graphics processing unit) 124
graphics card 7, 113, 117, 122, 124
graphics hardware 7
graphics system 50
grid, regular 6, 32, 34, 52, 54
ground planimetric reference system 75, 134
Günther 3, 5, 12, 19, 127
Guttman 3, 16, 20, 128

H
Hard disk 99, 124
Hardware specifications 99
hashing 12, 127
height fields 18, 43, 129-131
horizontal direction 59, 80-83, 98, 118
horizontal line 13

I
imagery data 5, 18, 96
index 11, 16-17, 56, 59, 60-61, 63, 78, 82, 90-92
indices 50, 63, 79, 90, 92, 105
initialization 10, 80, 89, 94, 97
input data 12
input files 79
interaction 2, 9-10, 96-97
interleaved quadtrees 49
intersection test 3, 10-11, 25-26, 28-31, 64, 67-68, 71

K
k-Discrete Orientation Polytope 3, 26, 30, 54
Kd-tree 13-14, 19, 53
Knowledge Management System (KMS) 1

L
Latitude 133-134
leaf nodes 12, 16, 43, 46
length 28, 40, 44, 69, 109-112, 114
loaded tiles 57, 92-93, 118, 120
Longitude 133-134

M
main memory 3, 5, 8, 19-20, 57, 63, 71, 99, 106, 115, 124
Management Information System (MIS) 1
matrix 43, 65, 90, 92, 94-95
maximum elevation values 58, 61
maximum error 41, 42
MBB (minimal bounding box) 16
MBR (minimal bounding rectangle) 16-17, 27
memory bandwidth 7
memory bus 7-8, 122, 124
memory constraint 121-122, 124
memory usage 9, 19, 57, 59, 71, 78, 98, 106-108, 114-116, 120-122
mesh refinement 50-51
Meshes 4, 32-34, 38, 40, 45, 53, 96, 112
MIS (Management Information System) 1
multidimensional 12-13, 20

N
navigation system 4, 8-9, 54-55, 97, 100
nodes 12-13, 16, 19, 20, 43-44, 46, 70, 94, 137-138

O
OAS (Office Automation System) 1
OBB (Oriented Bounding Box) 25-28, 31, 54
occluded objects 26
occlusion culling 3, 21, 23, 26, 124
OpenGL application 87, 89
OpenGL function 90, 92, 96
optimization techniques 20, 25, 26, 54
Orientation 3, 26-27, 30, 36, 54, 57, 89, 96
Out-of-core algorithm 45, 124
output files 79
output text file 61, 74, 78
overlapping tile technique 58-60, 73

P
Physical Record Format 133, 138
plane equation 64-67
Plane Extraction 64-65
pre-processing 4, 10, 40, 46, 55-59, 71-73, 97, 103, 122
profile 55, 57-59, 74-75, 78, 97, 105, 133, 135, 138
prototype system 10, 71, 98-99, 101, 106, 110, 120-121, 125

Q
quadtree 3, 15, 20, 43-44, 49, 51, 53, 56, 68, 87, 89, 94-96, 106, 114-115
quadtree matrix 43, 94-95
quadtree structure 43, 49, 94

R
R-tree 3, 16-17, 20, 53
radians
radii 28
real-time 2-6, 8-10, 18, 20, 23, 37-38, 43, 53-54, 57, 71, 97, 100, 105, 110, 117-118, 121-122, 124-125
refinement algorithm 49
refinement levels 51
rendering speed 7, 34, 72, 109-110, 117, 119-120
resolutions 135
restricted quadtree algorithm 15
ROAM (Real-time Optimally Adapting Meshes) 38-40, 52-54
root node 12-13, 94
run-time processing 10, 56-57, 71-72, 87, 97, 103, 105

S
simplicity 12, 15, 54, 59, 67-68, 72
space-filling curves 12
spatial data 1-2, 10-12, 16, 53, 58, 71
split queue 40-42
square tiles 80
subdivision 13, 39, 44, 69, 94

T
terrain blocks 70
terrain data 3-5, 8-10, 18, 32, 43, 55-60, 72, 80-81, 83-85, 104, 117, 120, 122, 124
terrain elevation data 5, 18
terrain geometry 4, 8-9, 15, 121
terrain model 6, 9, 15, 23, 32-33, 45, 58, 72, 123
terrain representation 11, 32, 34
terrain resolution 104
terrain size 57, 59-60, 98, 115, 122
terrain surfaces 23, 48, 54, 94, 105, 112, 118, 124
terrain tiles 4, 18, 56-57, 61, 63-64, 67-68, 71, 90, 93-94, 101, 106, 108, 112, 119
terrain vertices 60
text files 59, 63, 71, 74
tile index 56, 63, 90-92
tile size 60, 103, 105-108, 110, 120
TIN (triangulated irregular network) 32-34, 45, 54
tree structure 12, 15, 19
triangle-based LOD technique 4-5, 8-9, 11, 37, 55, 68, 114-117, 119, 121-123
triangle bintree 38
triangle count 37, 45, 98, 111-112, 114, 118-119, 120, 122
triangle faces 45
Triangle Fans 36, 54, 56, 70, 94-96
triangle meshes 34, 40, 96, 112
triangle pairs 38
triangle strip 34-37, 49-54
triangulation 33, 39, 40-44, 47-51, 69-70
TXT files

U
Universal Transverse Mercator (UTM) 57, 100
USGS 57, 71, 73, 80, 100, 120, 135, 137

V
VDPM (view-dependent progressive meshes) 45-48, 52, 54
vector angle 21
vertex split 45, 47
view-dependent LOD techniques 10, 32, 53
Viewer 8, 17, 21, 32, 44, 99
viewpoint 22, 32, 44, 46, 56, 69
Virtual GIS 2, 8, 10, 17
visibility culling methods 3, 10-11, 20-21, 25
visual simulation system 18
visualization 1-6, 8-10, 13, 18, 37, 48, 53-55, 57, 72, 97, 100, 122, 124

W
wireframe 6, 96, 101-102