Photogrammetry Reconstruction Systems Master Thesis
UNIVERSITY OF ZIELONA GÓRA, POLAND
Faculty of Electrical Engineering, Computer Science and Telecommunications
MASTER THESIS
PHOTOGRAMMETRIC RECONSTRUCTION SYSTEMS
FOR PURPOSES OF VIRTUAL REALITY SYSTEMS
MAREK KUPAJ
Supervisor:
Dr inż. Sławomir Nikiel
Zielona Góra, June 2005
Abstract
Keywords: photogrammetry, 3D reconstructions, image-based reconstruction, parallel reconstruction, lens distortion, radial distortion, perspective distortion, virtual reality, low-polygon modelling, error analysis
Virtual reality environments rely on methods for viewing and processing three-dimensional graphics. Owing to the speed required in a given environment, the whole system must be efficient and its resources should be of compact size. For 3D graphics this means that low-polygon models should be used. Low-polygon models consist of a very small number of vertices and polygons (triangles – surface elements). They can be created in various applications, ranging from very simple generators up to very complicated and complex systems.
Unfortunately, present systems do not make low-polygon modelling easy. There are a few applications (e.g. PhotoModeler) that support relatively fast methods of model reconstruction, but even these methods do not offer full modelling possibilities.
In this thesis, the author introduces an interesting approach to the geometry reconstruction problem. Through the use of photogrammetry, it is possible to make reconstruction easier, faster and more efficient. Photogrammetry describes various ways of measuring objects by non-invasive methods. It supports procedures that are useful for low-polygon models. The thesis describes a few methods that enable not only geometry reconstruction but also the preparation of data for reconstruction. With the accuracy estimation described in the thesis, the user can calculate the appropriate errors, which allows the final precision of the reconstruction to be enhanced.
The BluePrint Modeler application, developed by the author, supports the described methods and makes the reconstruction process easy and efficient. The application is described together with practical samples.
Contents
List of Figures .......................................................................................... v
List of Tables ......................................................................................... viii
1. Introduction ...................................................................................... 1
1.1. Master Thesis Definition .............................................................. 1
1.2. Thesis Outline ............................................................................ 1
1.3. Fundamentals ............................................................................. 3
1.3.1. 3D Model................................................................................. 3
1.3.2. 3D Scene................................................................................. 4
1.3.3. 3D Projection ........................................................................... 4
1.4. VR Languages as a way to describe the virtual reality ................... 5
1.4.1. Virtual Reality languages review ............................................... 5
1.4.2. Virtual Reality formats review ................................................... 6
1.5. Virtual Reality reconstruction systems .......................................... 7
1.6. Using photogrammetry in the reconstruction process .................... 7
1.7. Methods and techniques used in the reconstruction process .......... 9
1.7.1. Reconstruction from photographs ............................................. 9
1.7.2. Reconstruction from orthophotographs ..................................... 9
1.7.3. Reconstruction from blueprints ............................................... 10
1.7.4. Procedural and parametric modelling ...................................... 10
1.7.5. CSG modelling ....................................................................... 10
1.7.6. Manual entering of 3D model data .......................................... 11
1.8. Restrictions for the modelling .................................................... 11
1.9. Targets and applications of the reconstruction system ................ 11
1.10. Requirements of the reconstruction systems ............................. 14
1.11. Review of existing applications for objects reconstruction .......... 15
2. Reconstruction process of 3D objects for the VR environment ............ 16
2.1. Selection of the object to reconstruct and the reconstruction aims
arrangement ............................................................................ 16
2.2. Definition of object features and choice of methods for data
acquisition ............................................................................... 17
2.3. Data acquisition – collecting detailed info about
the chosen object ..................................................................... 18
2.3.1. Photogrammetric methods ..................................................... 18
2.3.2. Blueprints and orthogonal projections ..................................... 19
2.3.3. Manual measurement of the object details .............................. 19
2.4. Selection of the obtained data ................................................... 20
2.5. Processing of the selected information ...................................... 20
2.5.1. Removal of the spherical distortion from photographs .............. 21
2.5.1.1. Idea of the spherical distortion ............................................ 21
2.5.1.2. Spherical distortion model ................................................... 22
2.5.1.3. Correction and calibration of radial distortion ........................ 23
2.5.2. Summary .............................................................................. 25
2.5.3. Perspective correction for perspective photographs .................. 26
2.5.3.1. The need for perspective correction ..................................... 26
2.5.3.2. Perspective correction ......................................................... 27
2.5.3.3. Summary ............................................................................ 28
2.6. Object texture creation ............................................................. 29
2.7. Object modelling based on the processed information ................ 29
2.7.1. Reconstruction from blueprints and orthophotographs ............. 30
2.7.1.1. Parallel (orthogonal) camera representation ......................... 31
2.7.1.2. Orthogonal camera calibration ............................................. 32
2.7.1.3. Determining geometry of the model ..................................... 36
2.7.2. Reconstruction from perspective photographs.......................... 40
2.7.2.1. Perspective camera representation ....................................... 41
2.7.2.2. Perspective camera calibration ............................................. 42
2.7.2.3. Determining geometry of the model ..................................... 42
2.8. Export of the model .................................................................. 43
3. BluePrint Modeler program as the sample reconstruction system ...... 44
3.1. History ..................................................................................... 44
3.2. Requirements ........................................................................... 44
3.3. System architecture .................................................................. 45
3.4. Interface and application features.............................................. 48
3.5. BluePrint Modeler applications ................................................... 62
3.6. Development and future directions ............................................ 63
4. Lens distortion correction – experiments and practical applications ... 64
4.1. Experiments ............................................................................. 66
4.1.1. Tests of cameras ................................................................... 66
4.1.2. Full calibration profiles ........................................................... 68
4.1.3. Incomplete calibration profiles ................................................ 72
4.1.4. Comparison of selected cameras ............................................. 79
4.2. Calibration ................................................................................ 80
4.3. Correction ................................................................................ 81
5. Practical use of the BluePrint Modeler application ........................... 83
5.1. Lens distortion removal from perspective photographs ................ 83
5.2. Texture creation (extraction) ..................................................... 86
5.3. Blueprints calibration ................................................................ 89
5.3.1. Necessity for blueprints correction .......................................... 89
5.3.2. Correction of the blueprints .................................................... 91
5.3.3. Blueprints calibration .............................................................. 96
5.4. Orthophotographs .................................................................... 99
5.5. Modelling from parallel projections............................................101
5.6. Export model to other applications ............................................104
6. 3D objects reconstruction examples ..............................................105
6.1. First reconstruction example – Project „BDHPUMN”....................105
6.1.1. Guidelines and measurement.................................................106
6.1.2. Selection of taken materials and data processing ....................107
6.1.3. The second measurement .....................................................108
6.1.4. Orthophotograph creation .....................................................108
6.1.5. Calibration of the images .......................................................110
6.1.6. Main reconstruction of the object ...........................................111
6.1.7. Export to the other VR modelling environment........................113
6.1.8. Model statistics .....................................................................114
6.2. Second reconstruction example – Project „GDDKiA” ...................114
6.2.1. Reconstruction of the object ..................................................115
6.2.2. Model statistics .....................................................................116
6.3. Summary ................................................................................116
7. Accuracy of the reconstruction process ............................................117
7.1. Accuracy for the perspective projection .....................................118
7.2. Accuracy for the parallel projection ...........................................120
7.3. Determining the interior camera parameters .............................122
7.3.1. Results of research ...............................................................124
7.3.2. Summary .............................................................................128
7.4. Sample height calculations .......................................................128
7.4.1. Height calculations for perspective projection .........................129
7.4.2. Height calculations for parallel projection ...............................131
7.5. Relationship between modelling errors and camera orientation
for the parallel projection .........................................................132
7.6. Summary ................................................................................134
8. Conclusions, observations and future development ...........................135
8.1. Conclusions on the data collecting ............................................135
8.2. Conclusions on the data processing ..........................................137
8.2.1. Conclusion on lens distortion correction ..................................137
8.2.2. Conclusion on perspective correction......................................138
8.3. Conclusions on modelling process .............................................140
8.3.1. Conclusion on modelling from parallel projections ...................141
8.3.2. Conclusion on modelling from perspective projections .............141
8.4. Future developments ...............................................................142
9. Acknowledgements.........................................................................143
10. Literature .......................................................................................144
List of Figures
Figure 1-1. 3D model example .................................................................. 3
Figure 1-2. 3D Projection example (perspective projection) ......................... 4
Figure 2-1. Various kind of the spherical distortions (positive/negative) ..... 21
Figure 2-1. Photograph with spherical distortion effect.............................. 21
Figure 2-2. Photograph after the lens distortion correction ........................ 24
Figure 2-3. Photographs before and after perspective correction ............... 26
Figure 2-4. Parallel projection with use of the vector camera model ........... 31
Figure 2-5. Orthogonal camera calibration................................................ 33
Figure 2-6. Determining the point nearest to the two lines in space ........... 37
Figure 2-7. Perspective projection with use of the vector camera model ..... 41
Figure 3-1. The BluePrint Modeler architecture ......................................... 45
Figure 3-2. The BluePrint Modeler architecture (main classes) ................... 46
Figure 3-3. The Photogrammetric Module class hierarchy .......................... 47
Figure 3-4. The Model Editor in the BluePrint Modeler application .............. 48
Figure 3-5. The Photogrammetric Unit with the Photoflow architecture ....... 56
Figure 3-6. The Lens Distortion Correction Module.................................... 57
Figure 3-7. The Perspective Correction Module ......................................... 59
Figure 3-8. The Orthocamera Calibration Module ...................................... 61
Figure 4-1. Extracted texture loaded with radial distortion ......................... 64
Figure 4-2. Extracted texture with removed radial distortion...................... 64
Figure 4-3. Zoom of selected region from Figure 4-1. ............................... 65
Figure 4-4. Zoom of selected region from Figure 4-2. ............................... 65
Figure 4-5. Sample lens profile calibration graph ...................................... 67
Figure 4-6. Relationship between focal length and lens distortion level
            for Canon EOS 300D ............................................................. 68
Figure 4-7. Relationship between focal length and lens distortion level
            for Olympus C-2 Zoom .......................................................... 69
Figure 4-8. Relationship between focal length and lens distortion level
            for Olympus C-765 UltraZoom ............................................... 70
Figure 4-9. Relationship between focal length and lens distortion level
            for Konica Minolta Dynax 7D ................................................. 71
Figure 4-10. Relationship between focal length and lens distortion level
            for Sony Cybershot DSC-P100 ............................................... 72
Figure 4-11. Relationship between focal length and lens distortion level
            for Nikon Coolpix 4600 ......................................................... 73
Figure 4-12. Relationship between focal length and lens distortion level
            for Nikon Coolpix 5200 ......................................................... 74
Figure 4-13. Relationship between focal length and lens distortion level
            for Kodak EasyShare CX7525................................................ 75
Figure 4-14. Relationship between focal length and lens distortion level
            for Kodak EasyShare Z700 .................................................... 76
Figure 4-15. Relationship between focal length and lens distortion level
            for Konica Minolta Dimage Z3 ............................................... 77
Figure 4-16. Relationship between focal length and lens distortion level
            for Canon PowerShot A510 ................................................... 78
Figure 4-17. Comparison of selected cameras (relationship between focal
            length and lens distortion level) ............................................ 79
Figure 4-18. BPM: Lens Distortion Correction Module with active
calibration tab ..................................................................... 80
Figure 4-19. BPM: Lens Distortion Correction Module with active
correction tab ...................................................................... 81
Figure 5-1. Recognized calibration pattern ............................................... 84
Figure 5-2. First image corrected with usage of calibration pattern ............ 85
Figure 5-3. Finished lens distortion removal process for sample
photographs ......................................................................... 85
Figure 5-4. Source photographs for texture creation example .................... 86
Figure 5-5. Flow diagram for texture creation example ............................. 86
Figure 5-6. Setting the reference points for the distorted photo in the first
perspective correction module (reference points are indicated
by the red circles) ................................................................. 87
Figure 5-7. Setting the reference points for the corrected photo in the second
perspective correction module (reference points are indicated
by the red circles) ............................................................... 87
Figure 5-8. Perspective correction result for texture creation example........ 88
Figure 5-9. Visualisation of the model with textures extracted by the
BluePrint Modeler application ................................................. 88
Figure 5-10. Reconstruction from non-corrected blueprints (only the
blueprints calibration was done) .......................................... 89
Figure 5-11. Reconstruction from corrected blueprints (the blueprints
correction had been done before the proper calibration) ....... 90
Figure 5-12. Photogrammetric data flow chart for the blueprints
correction purpose ............................................................... 91
Figure 5-13. The sample blueprints before their correction. ....................... 91
Figure 5-14. Distorted image of the „left” blueprint (help points
are marked with red color) ................................................... 92
Figure 5-15. Entering the help points in order to set the corrected main
points (usage of the „Calculate” method) .............................. 93
Figure 5-16. Main points after their calculation with the described method . 94
Figure 5-17. Wrong set of the destination main points. ............................. 94
Figure 5-18. Aligned main points just before performing correction ........... 95
Figure 5-19. The blueprints from Fig. 5-13 after the perspective
correction process ............................................................... 96
Figure 5-20. Photogrammetric flow chart for the calibration purposes ........ 96
Figure 5-21. Calibration of the left-oriented blueprint ................................ 97
Figure 5-22. The blueprints after the calibration process (view from the
Model Editor) ...................................................................... 98
Figure 5-23. Photoflow diagram used in the orthophotograph creation
process ............................................................................... 99
Figure 5-24. Corrected image - result of the orthophotograph creation
process ..............................................................................100
Figure 5-25. First stage of point reconstruction (indicating point
on the first plane) ...............................................................102
Figure 5-26. Second stage of point reconstruction (indicating point
on the second plane) ..........................................................102
Figure 6-1. The BDHPUMN reduction model (scale 1:87)..........................105
Figure 6-2. The BDHPUMN measurement
with the Olympus C-765 UltraZoom camera ...........................106
Figure 6-3. Object’s drafts with plotted dimensions and local axis .............106
Figure 6-4. Two extreme photographs for left projection ..........................108
Figure 6-5. Orthophotograph of the model’s front side .............................109
Figure 6-6. Stitch artefact of the result image .........................................109
Figure 6-7. The Photoflow diagram created
for purposes of the calibration ..............................................110
Figure 6-8. Calibration reference points arrangement (left side) ...............111
Figure 6-9. Beginning of the BDHPUMN scale model reconstruction ..........112
Figure 6-10. The BDHPUMN’s modelling results .......................................113
Figure 6-11. Finished low-polygon train car scale model...........................113
Figure 6-12. The GDDKiA building in Zielona Góra ....................................114
Figure 6-13. Modelling the GDDKiA building from the parallel projections ..115
Figure 6-14. Finished outlook model of the GDDKiA building ....................115
Figure 7-1. Perspective projection...........................................................118
Figure 7-2. Parallel projection .................................................................120
Figure 7-3. Photographed calibration plane with marked measurement
quantities ............................................................................122
Figure 7-4. Parallel reconstruction made with right angle .........................133
Figure 7-5. Parallel reconstruction made with sharp angle ........................133
Figure 8-1. Difference between proper and improper lighting
(at left – photo with flash, at right – without flash
but with additional lighting) ..................................................136
Figure 8-2. Texture created from one perspective photograph taken
with sharp angle ..................................................................138
Figure 8-3. Texture created from two perspective photographs taken
with sharp angles .................................................................138
Figure 8-4. Visible stitch of the result image ............................................139
List of Tables
Table 1-1. Combination of the formats used to describe virtual reality
scenes..................................................................................... 6
Table 1-2. Quality selection criteria in the reconstruction process
for selected issues ................................................................. 13
Table 3-1. List of the main Model Editor functions .................................... 48
Table 3-2. Edition modes of the Model Editor ........................................... 54
Table 3-3. Edition modes of the Photogrammetric Unit ............................. 54
Table 3-4. Processing modules used in the Photogrammetric Unit ............. 55
Table 3-5. The most significant buttons in the Lens Distortion
Correction Module .................................................................. 58
Table 3-6. The most significant buttons in the Perspective
Correction Module .................................................................. 60
Table 3-7. The most significant buttons in the Orthocamera
Calibration Module ................................................................. 61
Table 4-1. Index of tested cameras with division according
to created profiles .................................................................. 67
Table 4-2. Specifications for Canon EOS 300D .......................................... 68
Table 4-3. Survey statistics of measuring lens distortion
for Canon EOS 300D .............................................................. 68
Table 4-4. Specifications for Olympus C-2 Zoom ....................................... 69
Table 4-5. Survey statistics of measuring lens distortion
for Olympus C-2 Zoom ........................................................... 69
Table 4-6. Specifications for Olympus C-765 UltraZoom ............................ 70
Table 4-7. Survey statistics of measuring lens distortion
for Olympus C-765 UltraZoom................................................. 70
Table 4-8. Specifications for Konica Minolta Dynax 7D .............................. 71
Table 4-9. Survey statistics of measuring lens distortion
for Konica Minolta Dynax 7D ................................................... 71
Table 4-10. Specifications for Sony Cybershot DSC-P100 .......................... 72
Table 4-11. Survey statistics of measuring lens distortion
for Sony Cybershot DSC-P100 ............................................... 72
Table 4-12. Specifications for Nikon Coolpix 4600 ..................................... 73
Table 4-13. Survey statistics of measuring lens distortion
for Nikon Coolpix 4600 ......................................................... 73
Table 4-14. Specifications for Nikon Coolpix 5200 ..................................... 74
Table 4-15. Survey statistics of measuring lens distortion for Nikon Coolpix
5200 .................................................................................... 74
Table 4-16. Specifications for Kodak EasyShare CX7525 ............................ 75
Table 4-17. Survey statistics of measuring lens distortion
for Kodak EasyShare CX7525 ................................................ 75
Table 4-18. Specifications for Kodak EasyShare Z700................................ 76
Table 4-19. Survey statistics of measuring lens distortion
for Kodak EasyShare Z700 .................................................... 76
Table 4-20. Specifications for Konica Minolta Dimage Z3 ........................... 77
Table 4-21. Survey statistics of measuring lens distortion
for Konica Minolta Dimage Z3................................................ 77
Table 4-22. Specifications for Canon PowerShot A510............................... 78
Table 4-23. Survey statistics of measuring lens distortion
for Canon PowerShot A510 ................................................... 78
Table 7-1. Results of camera image plane dimensions measurement for
Olympus C-2 Zoom camera without lens distortion correction ..124
Table 7-2. Results of camera image plane dimensions measurement for
Olympus C-2 Zoom camera with lens distortion correction .......125
Table 7-3. Comparison between measurement with and without lens
distortion correction for Olympus C-2 Zoom ............................125
Table 7-4. Results of camera image plane dimensions measurement
for Olympus C-765 UltraZoom camera without lens distortion
correction .............................................................................126
Table 7-5. Results of camera image plane dimensions measurement for
Olympus C-765 UltraZoom camera
with lens distortion correction ................................................127
Table 7-6. Comparison between measurement with and without lens
distortion correction for Olympus C-765 UltraZoom .................127
Table 7-7. Sample height calculations for perspective projection ..............129
Table 7-8. Sample height calculations for parallel projection.....................131
1. Introduction
Virtual Reality Systems (VR systems) increasingly influence the field of multimedia. This is, of course, associated with the progress made over recent years. The rapid development of the Internet and associated techniques created a demand for methods that support sharing not only still images or movies, but also three-dimensional objects. From 3D objects it is a small step to whole scenes and interactive environments that make interaction with the user possible (VE – Virtual Environment).
The above-mentioned interactions may only exist if the basic requirements are fulfilled. The most important of these is to show the virtual scene as fast as possible. Therefore the concept of a real-time virtual reality system was introduced, where the main emphasis is put on system efficiency. Since image quality cannot be neglected either, a compromise has to be reached.
For fast display of the resulting image it is necessary to work with objects (models) whose meshes consist of a low number of points. Creating these objects (so-called low-polygon models) is the domain of applications for geometric reconstruction. Such applications use specific procedures in order to reconstruct the geometry and texture of the model.
1.1. Master Thesis Definition
The thesis presents efficient methods for the reconstruction of three-dimensional objects to be used in virtual reality environments, with special emphasis on real-time purposes. The theoretical results presented in the dissertation are supported by the BluePrint Modeler application, which makes reconstruction possible and easier.
1.2. Thesis Outline
The structure of this dissertation is as follows:
The First Chapter presents the fundamentals of the Thesis. The Chapter describes issues concerning the object structure and the reconstruction process. It also describes practical applications of the reconstructed models in various fields.
In the Second Chapter the author introduces the reconstruction process for 3D objects in detail. It begins with data collection, continues through data selection and the actual reconstruction (from the selected data and materials), and ends with the export of the resulting model.
The Third Chapter is a presentation of BluePrint Modeler, the application illustrating the dissertation. The application is used for the object reconstructions shown in this work. The chapter describes the features and interface of the program.
In the Fourth Chapter the lens distortion correction problem is presented, together with practical experiments (it is important to skilfully remove the distortions that are often observed in photographs).
The Fifth Chapter presents practical use of the BluePrint Modeler application for selected cases that illustrate the reconstruction process.
The Sixth Chapter describes sample reconstructions of selected objects. In contrast to the previous chapter, the whole process is described: from introducing the sample object, through the data processing, up to exporting the model and analysing its statistics.
The Seventh Chapter describes the accuracy of the whole reconstruction process, starting with determining the accuracy of the parallel and perspective projections and ending with the accuracy of sample height calculations.
The author's conclusions are presented in the Eighth Chapter. The whole reconstruction process is analysed, and the conclusions and observations made during the experiments are presented.
1.3. Fundamentals
1.3.1. 3D Model
Figure 1-1. 3D model example
In computer graphics, a 3D model is an object which consists of one or more elements. The model can be of the following types:
scatter/wireframe (the model consists of points/lines built on the points),
surface (the model consists of surfaces built on the points),
volumetric (the model consists of a cubic array of points).
For the purposes of this thesis and virtual reality applications, only scatter and surface models can be used. Because scatter models do not ensure the required realism, the focus was first and foremost on aspects concerning the development of surface models.
Each model consists of a number of surfaces, also known as polygons (sets of triangles that create the surface of a given element). A polygon is a closed planar path based on three points (vertices) that create the edges of the polygon. A polygon can have assigned texture coordinates and a material, which allows correct display of the texture of the object.
The object might have its own structure1 in order to describe its spatial properties. Thanks to that, the object might consist of other objects (based on the vertices, edges and polygons). A hierarchical structure makes both skeletal animation2 and the modelling process easier. A minimal sketch of such a structure is given below.
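As a minimal sketch (illustrative only, not taken from the BluePrint Modeler sources; all names are assumptions), such a model structure could look as follows:

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0                                  # absolute XYZ position

@dataclass
class Polygon:
    vertices: tuple = (0, 1, 2)                     # indices of three vertices (a triangle)
    uv: tuple = ()                                  # optional texture (UV) coordinates
    material: str = ""                              # optional material name

@dataclass
class Model:
    vertices: list = field(default_factory=list)    # Vertex instances
    polygons: list = field(default_factory=list)    # Polygon instances
    children: list = field(default_factory=list)    # sub-objects (tree hierarchy)
```

The children list corresponds to the tree structure mentioned above, which is what makes skeletal animation and hierarchical modelling easier.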
1.3.2. 3D Scene
A set of objects (3D models) that creates a specific segment of virtual reality is known as a 3D scene. The scene might be interactive, when particular objects have assigned methods and allow the user to manipulate them and receive feedback. In order to describe the scene with its objects, a special format or language is needed.
1.3.3. 3D Projection
Figure 1-2. 3D Projection example (perspective projection)
The position of a given object is described by absolute coordinates, represented by an XYZ vector. Displaying a 3D point on a 2D plane is impossible without a special transformation, so a three-dimensional point has to be mapped onto the two-dimensional screen (image plane). This process produces XY coordinates for each XYZ vector.
Performing this transformation is called projection. A projection might be perspective or parallel (isometric, without the typical perspective distortion). A minimal sketch of both projections is given below.
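As an illustration (a minimal sketch of the usual pinhole relations, not the thesis's camera model, which is developed in Chapter 2; the camera is assumed to look along the Z axis and f is an assumed focal length):

```python
def perspective_project(x: float, y: float, z: float, f: float = 1.0):
    """Perspective projection: x' = f*X/Z, y' = f*Y/Z (the image shrinks with depth)."""
    return (f * x / z, f * y / z)

def parallel_project(x: float, y: float, z: float):
    """Parallel (orthogonal) projection: the depth coordinate is simply dropped."""
    return (x, y)

# A point twice as far away appears half the size under perspective:
print(perspective_project(2.0, 2.0, z=2.0))   # (1.0, 1.0)
print(perspective_project(2.0, 2.0, z=4.0))   # (0.5, 0.5)
print(parallel_project(2.0, 2.0, 4.0))        # (2.0, 2.0) regardless of depth
```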
1 in the form of a tree structure (the natural representation of a hierarchy)
2 a kind of animation where a character consists of a surface model and a set of bones (a skeletal system) that allows free manipulation of particular parts of the given object
1.4. VR Languages as a way to describe the virtual reality
When the model is chosen, it is necessary to describe its properties in a formal way. Additionally, if the model is to be used in an interactive presentation, it is required to reduce its size. Formalization and unification of the methods for describing virtual reality resulted in virtual reality languages. The most popular of these are VRML and X3D (VRML's successor).
1.4.1. Virtual Reality languages review
VRML (Virtual Reality Modelling Language) has been developed since 1994. One of its tasks was to minimize the amount of (3D) data sent over the Internet. VRML is a result of the simplification of the Open Inventor language (known from Silicon Graphics computers). It supports three types of commands (nodes) that allow creating three-dimensional objects, manipulating them (rotation, translation, scaling, etc.) and grouping them in a hierarchical structure.
Aside from the possibility of storing geometry and texturing data, it is possible to define information about lighting, cameras, animations and interactions. Since the VRML language has a text form, one can create simple scenes even with a notepad. Bigger and more complex objects are created with specific reconstruction systems that support the VRML format.
X3D (Extensible 3D) is the next generation of the open standard for describing VR scenes. It is an open language (currently under development) with the following features:
backward compatibility with VRML, browsers and other visualization tools. Compatibility means that scenes created in VRML have to be properly processed by software designed for X3D,
support for XML decoders,
extended mechanisms that allow new properties to be introduced, their usefulness to be quickly assessed, and their proper execution.
1.4.2. Virtual Reality formats review
Apart from the universal languages for describing virtual reality, geometry and other model data might be written in a specific (often binary) format. In principle, every application for 3D processing has its own format (3DStudio – the ASC, 3DS and MAX formats; Cinema4D – the C4D format, etc.), which is used exclusively with the given system. There are also special conversion tools (converters) that allow translating object data to other formats. Obviously, a specific format might store not only geometry and texturing data but also information about animation, lights, cameras and other features.
The table below lists selected formats used to describe VR scenes:
Table 1-1. Combination of the formats used to describe virtual reality scenes

Format     | Description
3DS        | Binary format; supported by most of the applications; allows storing data about animation, lighting, cameras, etc.
ASC        | Text format; a simplified description for saving the geometry and texture coordinates
C4D        | Binary format; supported by Cinema4D; allows storing data about animation, lighting, cameras, etc.
MAX        | Binary format; supported by 3D Studio MAX; allows storing data about animation, lighting, cameras, etc.
LWO        | Binary format; supported by Lightwave; allows storing data about animation, lighting, cameras, etc.
VRML (WRL) | Text or binary format (packed text); stores scenes in the VRML language
X3D        | Text or binary format (packed text); stores scenes in the X3D language
1.5. Virtual Reality reconstruction systems
Methods and procedures of data acquisition, feature extraction and object construction form the concept of VR reconstruction systems. The procedures concern a software realization, where the right interface allows the user to pass through the reconstruction process easily and effectively.
A given reconstruction system can have a software form, but it is also possible to use a hardware implementation. In a hardware realization, the system has a specific kit of sensors which are used for shape detection of the given object (e.g. with the use of a laser beam). The typical result is a point cloud that creates the object geometry (unfortunately, this way creates a model which takes huge disk space – several gigabytes of data per model).
In a software realization, the user inputs the proper data into the computer (coordinates of given points, photographs, geometric relationships). Next, the data is used in a specific process (the reconstruction) to determine the wanted shape. The degree of automation of the system depends on the given application.
The advantages of a good reconstruction system include the automation of particular processes (e.g. concerning geometry reconstruction). This is possible when one uses the right methods for reconstruction. For the purpose of this Thesis, selected photogrammetric techniques are described.
1.6. Using photogrammetry in the reconstruction process
Photogrammetry is based on measuring and interpreting objects from photographs that can be taken on the ground or in the air (see [7], [8]). Photogrammetry comprises remote sensing (teledetection) and photointerpretation, which employ many techniques used to collect photos of objects and information about them.
Because photographs are only projections of the given object, some information about the object (its depth) is lost while the photos are taken. The object cannot be reconstructed from one photograph (without other measurements); one has to acquire more information about (and photos of) the object.
To solve this problem one has to collect such a quantity of photographs, taken under suitable conditions (redundancy must occur), as will permit the geometry to be reconstructed. For reconstruction purposes, a minimum of two photographs needs to be taken, but this is really the bare minimum – a better effect will be observed when one uses more than two photographs3. Of course, the technique of taking the photos is important: the photos cannot be blurred, they should have good exposure and be taken at the proper angles4.
Even in home conditions a proper photogrammetric survey is still possible. It is possible to get good results, which makes the photogrammetric methods available in the field of reconstruction.
Digital cameras used for the measurements (in this case casual cameras, not specialist devices) impose some limits. For example, the lens system is loaded with specific radial distortions that cause, sometimes considerable, image distortion. This effect can be removed, and some methods for its correction are described in the Thesis.
The other issue is to cope with the projection known from the photographs: the perspective, which causes the depth effect (in cameras where a one-point perspective is used). This means that the size of the projected object decreases with the increasing distance of the object from the camera.
3 This can be observed in the PhotoModeler application when one tries to reconstruct an object first from two and then from four photos.
4 It is recommended to set the camera in the main directions (front, left, rear, etc. – at right angles).
Because the reconstruction methods that work with perspective photographs are complex, the perspective photos can be converted into orthophotographs (similar to parallel projections5). Reconstruction from orthophotos allows reasonable results to be obtained in a short time. The two-ray idea behind reconstruction from multiple photographs is sketched below.
1.7. Methods and techniques used in the reconstruction process
The following methods are used for object modelling:
reconstruction from photographs,
reconstruction from orthophotographs,
reconstruction from blueprints,
procedural and parametric modelling,
CSG modelling,
manual entering of the 3D model data,
1.7.1. Reconstruction from photographs
When one has photographs of a given object (free-positioned photographs), he/she might extract geometry with a specific precision. In this case, it is required to mark reference points on the photographs. With the right calibration procedures it is possible to obtain the geometry structure and the positions of the cameras at the time the shots were taken. The textures might be extracted as well.
1.7.2. Reconstruction from orthophotographs
The orthophoto is a special case of the parallel projection. Thanks to the parallel projection it is possible to obtain the geometry easily. It is usually hard to get orthophotos directly (because a casual camera is used, not a highly specialized measuring device like a photogrammetric camera). Because of that, orthophotos have to be prepared from normal perspective photographs. Telephoto images might also be used as orthophotos (they are almost parallel).
5 known as orthogonal projections
After the preparation of the orthophotos, reconstruction of the object is possible. The user is required to indicate corresponding points on the orthophotos. Textures might be extracted as well.
1.7.3. Reconstruction from blueprints
Similarly to orthophotos, blueprints (architectural plans) are parallel projections. Thanks to that, the geometry extraction is simplified (compared to perspective photographs) and similar to the reconstruction from orthophotos. Unfortunately, texture extraction is usually impossible (in fact, blueprints consist only of edges and frames).
1.7.4. Procedural and parametric modelling
These modelling methods allow creating objects by the use of proper mathematical formulas. Usually, procedurally created objects are regular and it is possible to control their level of detail. The regularity of their structure facilitates the texturing process. A short sketch of a procedural generator is given below.
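As a minimal illustration (a sketch assuming a simple parametric shape; it is not a generator taken from the thesis), a cylinder side surface can be produced procedurally, with the segment count controlling the level of detail:

```python
import math

def make_cylinder(radius: float, height: float, segments: int):
    """Procedurally generate the side surface of a cylinder as a triangle mesh.

    `segments` controls the level of detail: more segments, more polygons.
    Returns (vertices, triangles), with triangles given as vertex-index triples.
    """
    vertices, triangles = [], []
    for i in range(segments):
        angle = 2.0 * math.pi * i / segments
        x, z = radius * math.cos(angle), radius * math.sin(angle)
        vertices.append((x, 0.0, z))      # bottom ring vertex
        vertices.append((x, height, z))   # top ring vertex
    for i in range(segments):
        b0, t0 = 2 * i, 2 * i + 1                       # current edge
        j = (i + 1) % segments
        b1, t1 = 2 * j, 2 * j + 1                       # next edge (wraps around)
        triangles.append((b0, t0, b1))                  # two triangles per quad
        triangles.append((t0, t1, b1))
    return vertices, triangles

verts, tris = make_cylinder(1.0, 2.0, segments=8)
print(len(verts), "vertices,", len(tris), "triangles")  # 16 vertices, 16 triangles
```

Doubling the segment count doubles the polygon count, which is exactly the level-of-detail control mentioned above; the regular ring structure also makes texturing straightforward.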
1.7.5. CSG modelling
CSG modelling is a way to create an object with the use of CSG (Constructive Solid Geometry) methods. In short, CSG consists of building solid objects from other solids by the use of Boolean operators. Boolean operators permit making the union, intersection and difference of the given solids. This allows "carving" – one might cut some shapes out of a solid to get the wanted geometry. The texturing process is normally automated (mapping coordinates are generated automatically). This way is rather distant from the proper reconstruction methods (but it might be used to assist the main reconstruction process). The Boolean operators are sketched below.
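As an illustration of the Boolean operators (a common signed-distance formulation, assumed here for brevity; this is not how any particular CSG package from the text implements them):

```python
# Solids as signed distance functions (SDFs): f(p) < 0 means p is inside.
def sphere(cx, cy, cz, r):
    return lambda p: ((p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2) ** 0.5 - r

def union(a, b):        return lambda p: min(a(p), b(p))
def intersection(a, b): return lambda p: max(a(p), b(p))
def difference(a, b):   return lambda p: max(a(p), -b(p))   # "carve" b out of a

# Carve a small sphere out of a big one, then test two points:
solid = difference(sphere(0, 0, 0, 2.0), sphere(1.5, 0, 0, 1.0))
print(solid((0.0, 0.0, 0.0)) < 0)   # True  - still inside the big sphere
print(solid((1.8, 0.0, 0.0)) < 0)   # False - this region was cut away
```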
1.7.6. Manual entering of 3D model data
This method allows creating any object by manually entering information about the model geometry and texturing. In the case of geometry, the data takes the form of a series of vertices described by XYZ coordinates. For texturing, the description relies on UV (texture) coordinates.
1.8. Restrictions for the modelling
Every reconstruction method has its own limits. Additionally, modelling for real-time purposes has some specific limits of its own. The following things have to be considered:
size of the scene in memory,
quality of textures,
speed of projection (every frame) within the target environment.
Since the virtual reality languages6 have been created also for Internet purposes, low complexity of the models is strongly recommended. Needless to say, the right compromise has to be reached between image quality and low complexity, since the more complicated the object is, the more memory has to be used for its storage and the longer it takes to display it.
This should not be treated as a disadvantage; it is recommended to treat every case individually, according to the assumed reconstruction target.
1.9. Targets and applications of the reconstruction system
The main target of every reconstruction is to recreate the geometric structure. It is often required to reconstruct as much detail as possible (a very precise reconstruction), but for real-time purposes it is common to extract only the main framework of the object (an outlook reconstruction).
6 such as: VRML 1.0, VRML 2.0, X3D and others
For the sake of modelling, several types might be mentioned:
outlook modelling, where the result is a model with a low count of polygons, usually without texturing or with low-quality textures. The precision of the produced model is not required,
simplified modelling, where the result of the process is an object with a low or medium count of polygons. Texturing is not necessary, but high-quality textures might be used. Model precision is important,
precise modelling, where the resulting model has a big count of polygons, the modelling precision is high, and the textures are usually of high quality; however, the complexity of the model disqualifies it for real-time purposes.
Features that should be taken into account during the realization of the reconstruction process:
geometric complexity – the more polygons an object has, the more complicated and realistic it is, but the more time is needed for its construction and display. Memory demands also increase,
precision of the geometry extraction – most of the modelled objects are regular and have symmetric elements; what matters is the precision of reconstructing particular vertices of the model (note that an object might be precisely reconstructed without having to be more complex),
displaying time – if the objects are going to be used in an interactive presentation, what matters is the time of display (frame rate). Therefore the object should not be too complex,
presence of animation or interaction – in that case, besides the geometry and texturing data, there is information about the animation. For animation, the data has to describe the motion of the object (translation, rotation and scaling). For interaction, the data should describe the actions which the object might take,
texture quality and texture mapping precision – depending on the requirements, the textures might be of low or of high quality. The precision of the assignment of the mapping coordinates has to be considered as well.
Table 1-2 collects various issues with the particular criteria that concern the reconstruction purposes. The data is only an overview of the given problems (because each reconstruction has to be treated separately).
Table 1-2. Quality selection criteria in the reconstruction process for selected issues

Fields / Issues | Geometry complexity (polygon count) | Precision of geometry reconstruction | Displaying time | Animation | Texture quality
Simulations and visualizations (industrial processes, car accidents, medical simulations) | medium | high | medium | present | medium
Virtual presentations of the cities | high | medium | medium | n/a | medium
Planning and spatial economy | low | medium | medium | n/a | low
Photogrammetry and measuring | high | high | long | n/a | high
Modelling for movies | high | high | long | present | high
Modelling for game development | low | medium | short | present | medium
Architectural cataloguing | high | high | long | n/a | high
Reconstruction of the static scenes | high | high | long | n/a | high
Interaction systems and graphical 3D interfaces | low | medium | short | present | low
Preface objects modelling | medium | medium | medium | n/a | low
Interactive presentations | medium | medium | short | present | medium

The green colour marks issues concerning real-time purposes. For the rest of the issues it is hard to display their 3D graphics fast.
1.10. Requirements of the reconstruction systems
The reconstruction system used for purposes of virtual reality should fulfil the following conditions7:
flexibility – meaning simplicity of introducing new elements into the system and of modifying the existing ones (e.g. video drivers or plug-ins as DLL libraries),
reliability – during the reconstruction process some errors might occur. The system should be stable, respond correctly to these situations and avoid data loss,
efficiency – one should have the possibility to create a model of good quality in a relatively short time (with a given precision, etc.). In order to help the user with this, the system should support a combination of methods adapted to handle the various kinds of objects that the user wants to reconstruct. These methods should increase the reconstruction speed, too,
intuitiveness and ease of handling – first of all, the system should ensure easy ways to pass through the reconstruction process; it should have intuitive icons and a user-friendly interface. A short time to learn particular parts of the application is also required.
7 these concern especially software reconstruction systems
1.11. Review of existing applications for objects reconstruction
Among the existing applications for reconstruction purposes one can find the following programs:
3D Studio MAX 7, © Autodesk (http://www.discreet.com), price ~ $3500,
AutoCad 2006, © Autodesk (http://www.autodesk.com), price ~ $3700,
Cinema4D, © Maxon (http://www.maxon.net), price ~ $700 – 3000,
Maya, © Alias (http://www.alias.com), price ~ $2200 – 7200,
Lightwave, © NewTek (http://www.newtek.com), price ~ $1700,
PhotoModeler, © Eos Systems Inc. (http://www.eossystems.com), price ~ $900,
Worldcraft/Hammer (dedicated to the Half-Life game, but it is possible to make low-polygon models with it),
and the BluePrint Modeler application, developed by the author.
PhotoModeler has the capability of making objects from photographs (the produced objects are low-polygon models), as does BluePrint Modeler, which supports procedures specially dedicated to working with low-polygon models (reconstruction, modelling and texturing).
Reconstructions made with the applications written for general modelling usually require a long time to find out the program's abilities (at first there is a need for help files and tutorials before the actual reconstruction process starts).
In that case, the author proposes using a tool of the PhotoModeler or BluePrint Modeler kind to reconstruct the main shapes of the given object. The reconstructed shapes might then be exported to a virtual reality format and used in a presentation or a game. They might also be edited with specialized applications (Cinema4D, 3DStudio, etc.) to increase their level of detail8.
8 For real-time purposes this stage is omitted, because increasing the level of detail is strictly connected with growth of the object's size on disk and slows down the display of the model.
2. Reconstruction process of 3D objects for the VR
environment
The three-dimensional object reconstruction is a complex problem. While the reconstruction of simple objects (consisting of a few dozen polygons) is relatively easy, the reconstruction of complex objects is far more complicated. That is why proper methods for the data acquisition should be established first.
2.1. Selection of the object to reconstruct and the reconstruction
aims arrangement
In this stage the object selection is done. The target object might be a full-scale object or a reduction model. It might also exist only on architectural plans (blueprints). At the same time the user has to choose the reconstruction aims. Additionally, the following questions have to be considered:
will the object be used as a background,
will the object be used as a foreground (where the observer's attention will be focused first and foremost on the given model),
will the model be used for measuring purposes,
will the model be used for animation,
the level of detail of the model (which details have to be texture elements, and which have to be modelled),
the significance of the reconstruction precision,
the complexity of the model (estimated polygon count).
Thanks to these questions, one might determine the modelling type (outlook, simplified or precise reconstruction).
2.2. Definition of object features and choice of methods for data
acquisition
The user has to prepare for the data acquisition. One has to carry out the right observations. This is done by examining some features of the given object:
symmetry / regularity,
details, their count and exposure (in order to decide which details have to be treated as part of the texture),
the possibility of a direct survey.
After the observations, the user has to choose the right acquisition methods that will be used for data collection. The most significant methods are:
photogrammetric methods – taking perspective photographs of the given object. The photos might be free-positioned (taken at any angles) or oriented – e.g. perpendicular to the building facade. This method allows collecting materials that are used for geometry reconstruction as well as for texture extraction,
architectural plans or orthogonal (parallel) projections of the object – the availability of architectural plans (blueprints) allows for a more precise reconstruction of the given object (than from photographs). Owing to the lack of texture on the blueprints, this way is mostly used to reconstruct geometry only. Of course, if full orthogonal projections9 or orthophotographs are available, it is possible to extract both geometry and textures,
measurement – a manual survey of the object details. If the object exists physically, it is a good idea to collect the total object dimensions, or at least the dimensions of the given details. This makes the reconstruction process easier, especially the orientation of photos and the estimation of the object scale. This method is used for small objects (e.g. reduction models) or details of bigger objects (e.g. a window in a building), where the measurement is possible. When it is necessary to measure distant details (especially those inaccessible to survey), one has to use the photogrammetric methods described above.
9 an orthogonal projection with texture (e.g. a coloured blueprint)
2.3. Data acquisition – collecting detailed info about the chosen
object
When the proper methods have been chosen, the user can start the data acquisition process.
2.3.1. Photogrammetric methods
Photogrammetric methods are part of the remote sensing (teledetection) methods and are based on data interpretation. The data is the result of recording the electromagnetic radiation reflected or emitted by various objects. The result of this process is a photograph. Interpretation (so-called photointerpretation) of the obtained image leads to the extraction of particular details from the given photo. Of course, the way the photographs are taken is important. First of all, the photos have to fit each other by common details (known also as control points). It follows that the photos have to mutually overlap on selected details which are easy to measure. Additionally, the photos should not be taken at too sharp angles relative to the selected details. First and foremost, one should collect as much information about the object as possible.
Undoubtedly, the equipment used for the acquisition is not free from disadvantages. Before the data collection, one has to perform the calibration procedures. For digital cameras, the main problem is the lens distortion effect (also known as radial distortion), one of the most onerous. Reduction of the above-mentioned distortion leads to a significant increase in the precision of the reconstructed geometry and to correct quality of the extracted textures. A sketch of a typical radial distortion model is given below.
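For orientation only (a sketch of the commonly used polynomial radial model; the thesis's own distortion model is given in Section 2.5.1.2, and the coefficients below are assumed values):

```python
def radial_distort(x: float, y: float, k1: float, k2: float):
    """Polynomial radial distortion around the principal point.

    (x, y) are normalized coordinates relative to the image centre;
    the radius is rescaled by powers of r^2:  r_d = r * (1 + k1*r^2 + k2*r^4).
    """
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * factor, y * factor)

# The centre stays fixed, a border point moves noticeably (barrel, k1 < 0):
print(radial_distort(0.0, 0.0, k1=-0.15, k2=0.05))   # (0.0, 0.0)
print(radial_distort(0.8, 0.6, k1=-0.15, k2=0.05))   # (0.72, 0.54), pulled inward
```

Calibration estimates the coefficients (here k1 and k2) from photographs of a known pattern; correction then inverts the mapping for every pixel.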
2.3.2. Blueprints and orthogonal projections
If the given object and photos of it do not exist, one may use the blueprints (of course, if they are available).
A blueprint is a parallel (orthogonal) projection, and it allows for an easier reconstruction than modelling from a perspective projection. Parallel projections might have an assigned texture, which facilitates the subsequent texturing of the object. For a correct reconstruction, the number of planes has to be greater than or equal to two (and the planes have to be mutually perpendicular). The best case is when the plans depict each side of the object (the front, rear, left, right, top and bottom sides). Additionally, the plans should be scaled for arrangement with each other (relative orientation).
A situation is possible where one does not have any blueprints, but only orthophotos which have been transformed from casual perspective photographs. Then the measurement of selected details is necessary in order to scale the orthophotos.
2.3.3. Manual measurement of the object details
In many cases it is necessary to take additional measures that describe the shape of the model. Unfortunately, the details cannot always be measured directly. Consider, for example, the measurement of a building. The orthophotos might be prepared from perspective photos, but it is still necessary to obtain the proper dimensions. Since the dimensions of the building are too big for a direct survey, one has to use specialized geodesic devices (like theodolites). But it is still possible to measure selected details, like windows or doors. It follows that one has to find some reference points (not lying on a single line in space) whose coordinates might be set or calculated. Additionally, measures of the remaining details can be taken in order to increase the redundancy.
2.4. Selection of the obtained data
The collected data might take up a lot of memory space, and some of the information might be unsuitable for use in the reconstruction process (for example, blurred photos). That is why selection of the obtained data is needed. It is good practice to create documentation of the materials that describes each source. This means that one has to note which material is related to a given part of the geometry or texture. If the selection process is well organized, the reconstruction process will be considerably easier and additional measurements won't be necessary.
The following things have to be considered:
the destination of the given source (photo, blueprint/plan, measured value), e.g. whether the source is to be used in the blueprint orientation,
the position of the object's main axis (the local reference system) – which determines how the sources are positioned relative to the chosen system,
the way to extract the wanted data from the given source – for example, the necessity of making orthophotographs in order to create the geometry (one might use the photogrammetric method only for perspective photos),
the way to process the rest of the data.
2.5. Processing of the selected information
After the data selection, the proper materials should be prepared. This is connected with the necessity of improving their interpretability. First of all, one has to process the photographs and blueprints in order to use them in the later reconstruction. If a photo is to be used as an orthophoto, one has to perform the perspective correction. Additionally, for photos taken with cameras, the lens correction process must be executed.
For the blueprints, one should straighten plans that are parallel to each other (e.g. front with rear, left with right, etc.). This is necessary because blueprints are often digitized with scanners, and the scanned image of a blueprint is sometimes slightly rotated.
2.5.1. Removal of the spherical distortion from photographs
2.5.1.1. Idea of the spherical distortion
No photograph made with a digital or analog camera is free from distortions. Because of the various lens systems, one might observe a geometric distortion in the resulting photograph – the radial image distortion. It causes an uneven magnification of the central part of the image relative to its borders. Typically, this effect might be observed in cheap digital cameras and web cameras. In specialized (photogrammetric) devices the distortions are, as a rule, not so arduous, so their rectification is not necessary.
The distortion might have a positive or a negative sign. Depending on the sign, the image looks like a pincushion or a barrel.
Figure 2-1. Various kinds of spherical distortion (positive/negative)
Figure 2-1. Photograph with the spherical distortion effect
2.5.1.2. Spherical distortion model
Several models describing the geometric distortion can be found, and they are used in distortion correction procedures.
In the research, the author suggests the following equations (based partially on the equations from [5]):

$$x_{src} = x_{dst} - (x_{dst} - c_x) \cdot \left[ r_0 + r_1 (r_d d_w)^2 + r_2 (r_d d_w)^4 \right]$$
$$y_{src} = y_{dst} - (y_{dst} - c_y) \cdot \left[ r_0 + r_1 (r_d d_w)^2 + r_2 (r_d d_w)^4 \right] \quad (2.1)$$

where $c_x = \frac{W}{2} + \Delta x_0$, $c_y = \frac{H}{2} + \Delta y_0$, $r_d = \sqrt{(x_{dst} - c_x)^2 + (y_{dst} - c_y)^2}$ and $d_w = \frac{2000}{\sqrt{W^2 + H^2}}$.
Symbols:
$x_{src}, y_{src}$ – source image coordinates
$x_{dst}, y_{dst}$ – destination image coordinates
$c_x, c_y$ – transformation centre
$r_d$ – radial distance (from the point to the transformation centre)
$d_w$ – radial distance scaling factor
$r_0, r_1, r_2$ – internal distortion correction coefficients
$k_0, k_1, k_2, \Delta x_0, \Delta y_0$ – external distortion correction coefficients ($\Delta x_0, \Delta y_0$ – translation of the transformation centre)
$W \times H$ – source/destination image size [width x height]
The inverse transformation (mapping destination coordinates into source coordinates) is important because it allows obtaining an image without artefacts (discontinuities in the processed image). These artefacts might be observed when the forward transformation (mapping source coordinates onto destination coordinates) is used.
The coefficients used in the transformation ($r_0, r_1, r_2$) are essential. Because the value ranges of these coefficients aren't normalized, the author introduced their normalization – by using percentage coefficients ($k_0, k_1, k_2 \in [-100, +100]$) that are then transformed into the non-linear values.
The percentage coefficients are converted into their non-linear equivalents as follows:

$$r_0 = \frac{k_0}{100}, \qquad r_1 = \mathrm{sgn}(k_1) \cdot 10^{-8 + \frac{k_1}{50}}, \qquad r_2 = \mathrm{sgn}(k_2) \cdot 10^{-14 + \frac{k_2}{50}} \quad (2.2)$$

where

$$\mathrm{sgn}(v) = \begin{cases} +1 & v > 0 \\ -1 & v < 0 \\ 0 & v = 0 \end{cases}$$

The parameter $d_w$ used in equation (2.1) makes the coefficients independent of the image size. Thanks to that, it is possible to correct an image of any resolution when one has only coefficients obtained at a single resolution. This is an advantage over previous works, where this problem was omitted.
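To make the above concrete, a minimal Python sketch of this inverse mapping is given below. It only illustrates equations (2.1) and (2.2); it is not the author's C++ Builder implementation, and the function and variable names are illustrative:

import numpy as np

def coeffs_from_percent(k0, k1, k2):
    # Equation (2.2): percentage coefficients -> non-linear coefficients.
    r0 = k0 / 100.0
    r1 = np.sign(k1) * 10.0 ** (-8.0 + k1 / 50.0)
    r2 = np.sign(k2) * 10.0 ** (-14.0 + k2 / 50.0)
    return r0, r1, r2

def undistort(image, k0, k1, k2, dx0=0.0, dy0=0.0):
    # Inverse mapping (2.1): for every destination pixel find its source
    # pixel, which avoids holes (discontinuities) in the corrected image.
    H, W = image.shape[:2]
    r0, r1, r2 = coeffs_from_percent(k0, k1, k2)
    cx, cy = W / 2.0 + dx0, H / 2.0 + dy0
    dw = 2000.0 / np.sqrt(W**2 + H**2)   # makes coefficients size-independent
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    rd = np.sqrt((xs - cx)**2 + (ys - cy)**2)
    s = r0 + r1 * (rd * dw)**2 + r2 * (rd * dw)**4
    x_src = xs - (xs - cx) * s
    y_src = ys - (ys - cy) * s
    # Nearest-neighbour resampling for brevity; interpolation methods are
    # discussed in Section 2.5.1.3.
    xi = np.clip(np.round(x_src).astype(int), 0, W - 1)
    yi = np.clip(np.round(y_src).astype(int), 0, H - 1)
    return image[yi, xi]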
2.5.1.3. Correction and calibration of radial distortion
Given a distortion model, it must be remembered that the most significant issue in the lens distortion correction process is finding the proper coefficients; only with them can the distortion be removed. Because photos can be taken at any focal length (practically, an optical zoom option is available in most cameras), it is necessary to find coefficients for every focal length.
This process is called lens calibration and requires using an appropriate calibration plane. The plane has to be photographed at various focal lengths, and then the images have to be interpreted. The data obtained from the interpretation has to be optimized to find the correct coefficients.
The plane recognition is automated and relies on filtering the image and searching for given points (a regular grid).
The author, together with Maciej Gaweł (a student of the University of Zielona Góra), proposed the use of a genetic algorithm as a fast and efficient way to calibrate the lens distortion. Owing to the automation of the recognition as well as of the calibration process, the whole task is automated and doesn't require any help from the user.
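The thesis does not reproduce the algorithm's internals at this point, but the idea can be sketched as follows: each candidate coefficient triple is scored by how straight the recognized grid rows become after an (approximate) undistortion, and a simple mutation-selection loop refines a population of candidates. This is a hypothetical illustration of the approach only, not the actual algorithm developed with Maciej Gaweł:

import numpy as np

def straightness_error(rows, k, W, H):
    # Fitness for one candidate k = (k0, k1, k2): after an approximate
    # undistortion, the detected grid points of each row should be collinear.
    k0, k1, k2 = k
    r0 = k0 / 100.0                                   # equation (2.2)
    r1 = np.sign(k1) * 10.0 ** (-8.0 + k1 / 50.0)
    r2 = np.sign(k2) * 10.0 ** (-14.0 + k2 / 50.0)
    cx, cy = W / 2.0, H / 2.0
    dw = 2000.0 / np.sqrt(W**2 + H**2)
    err = 0.0
    for pts in rows:                                  # pts: (N, 2) array
        rd = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
        s = r0 + r1 * (rd * dw)**2 + r2 * (rd * dw)**4
        # One fixed-point step inverting (2.1) - adequate for small distortions.
        x = pts[:, 0] + (pts[:, 0] - cx) * s
        y = pts[:, 1] + (pts[:, 1] - cy) * s
        A = np.column_stack([x, np.ones_like(x)])     # best-fit line residual
        err += np.linalg.lstsq(A, y, rcond=None)[1].sum()
    return err

def calibrate(rows, W, H, pop=40, gens=150, seed=0):
    # Minimal evolutionary loop: keep the better half, mutate it, repeat.
    rng = np.random.default_rng(seed)
    P = rng.uniform(-100, 100, (pop, 3))
    for _ in range(gens):
        scores = np.array([straightness_error(rows, ind, W, H) for ind in P])
        best = P[np.argsort(scores)[:pop // 2]]
        P = np.clip(np.vstack([best, best + rng.normal(0, 5, best.shape)]),
                    -100, 100)
    return min(P, key=lambda ind: straightness_error(rows, ind, W, H))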
When one has a given profile (obtained from the calibration process), it is possible to perform the right correction. The algorithm used by the author reads the EXIF format (the photo description placed in the JPEG header) to recognize the focal length and the camera model. The connection of the EXIF data with the calculated coefficients is used later for automatic image correction (the application recognizes the camera model and focal length and uses the proper coefficients).
During the correction process, the application automatically matches the best parameter profile. This means that the whole process is easy, automated and, consequently, effective. A bonus is the possibility of correcting movie sequences, which is useful when one collects data for reconstruction from medium-quality web cameras.
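Such a lookup can be illustrated with a short sketch (assuming the Pillow imaging library; the profile dictionary and its keying by (model, focal length) are the sketch author's assumption, not the application's actual storage format):

from PIL import Image

def lookup_profile(jpeg_path, profiles):
    # Read the camera model (IFD0 tag 0x0110) and the focal length
    # (Exif IFD tag 0x920A) from the JPEG EXIF header, then return the
    # matching (k0, k1, k2) coefficients, if any were calibrated.
    exif = Image.open(jpeg_path).getexif()
    model = exif.get(0x0110)
    focal = exif.get_ifd(0x8769).get(0x920A)
    if model is None or focal is None:
        return None
    return profiles.get((str(model).strip(), float(focal)))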
For the image resampling one may use the following interpolation methods:
the nearest neighbour method – the X, Y values are rounded, so only one pixel is used. In many cases this leads to non-smooth images. It is a fast method which doesn't require many calculations, but the quality of the visual result is poor,
the bilinear method – the X, Y values are averaged over a 2x2 pixel window of neighbours of the given point. This produces a better quality image but requires more calculations (see the sketch below).
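A bilinear sampler might look as follows (a sketch for a single-channel image; the names are illustrative):

import numpy as np

def sample_bilinear(image, x, y):
    # Average the 2x2 neighbourhood around the real-valued point (x, y).
    H, W = image.shape
    x = np.clip(x, 0.0, W - 1.001)
    y = np.clip(y, 0.0, H - 1.001)
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = image[y0, x0] * (1 - fx) + image[y0, x0 + 1] * fx
    bottom = image[y0 + 1, x0] * (1 - fx) + image[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bottom * fy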
Figure 2-2. Photograph after the lens distortion correction
2.5.2. Summary
The described solution is not the only one – other ways to solve this problem might be found in the following publications: [2], [5], [6] and [13]. Complex distortion correction tools are hard to come by: in most correction applications, the user has to perform the process manually (by comparing the equalized image). The application written by the author is based on the above-mentioned procedure. Thanks to that, an increase in the efficiency of the calibration and correction process is obtained. Automation of the whole process makes this processing stage easy, so the solution might be used instead of more complicated methods.
2.5.3. Perspective correction for perspective photographs
2.5.3.1. The need for perspective correction
Photographs taken for modelling purposes usually have one fundamental feature – perspective: objects that appear farther from the lens are smaller, while objects lying closer are bigger. This effect is especially visible at small focal lengths (for a telephoto lens with a long focal length the resulting image is practically flat10). If one uses the right modelling methods, working from perspective photos is not overly complicated; even so, modelling from perspective projections causes great difficulties and the precision of that process is not always satisfactory.
There is an alternative way: removing the depth effect from the perspective photo in order to create an orthophotograph. The orthophoto might be used later in the reconstruction (with use of a parallel projection, which simplifies the restitution process). Of course, perspective correction is not limited to modelling goals; it is also useful for extracting textures from the photos.
Figure 2-3. Photographs before and after perspective correction
10 the image seems devoid of the depth effect (perspective)
2.5.3.2. Perspective correction
In the imposed (perspective) projection it is hard to measure some details, so it is necessary to correct the photo. The concept of the correction is to use the proper transformation, which converts the source image into the destination image in order to remove the perspective for the selected details. The following camera model was proposed [1]:

$$\begin{bmatrix} XW \\ YW \\ W \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (2.3)$$

where $W = gx + hy + 1$.
Transforming this equation into fractional form (as in [11]), one has:

$$X = \frac{ax + by + c}{gx + hy + 1}, \qquad Y = \frac{dx + ey + f}{gx + hy + 1} \quad (2.4)$$

where $X, Y$ are the destination coordinates and $x, y$ the source coordinates (of the image with perspective).
Further transformation gives:

$$\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 & -X_1 x_1 & -X_1 y_1 \\
0 & 0 & 0 & x_1 & y_1 & 1 & -Y_1 x_1 & -Y_1 y_1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
x_4 & y_4 & 1 & 0 & 0 & 0 & -X_4 x_4 & -X_4 y_4 \\
0 & 0 & 0 & x_4 & y_4 & 1 & -Y_4 x_4 & -Y_4 y_4
\end{bmatrix}
\cdot
\begin{bmatrix} a \\ b \\ c \\ d \\ e \\ f \\ g \\ h \end{bmatrix}
=
\begin{bmatrix} X_1 \\ Y_1 \\ \vdots \\ X_4 \\ Y_4 \end{bmatrix} \quad (2.5)$$

From the above equation form it is observed that it is necessary to have 4 control (reference) points on the source and the destination images.
The above matrix form of equation (2.3) allows the simple calculation of the correction coefficients (because the system has the form $A \cdot x = b$).
If the matrix equation is treated as $A \cdot x = b$, it is possible to bring it to the normal form:

$$A \cdot x = b$$
$$(A^T A) \cdot x = A^T \cdot b \quad (2.6)$$
$$x = (A^T A)^{-1} \cdot (A^T b)$$

Thanks to that, it is possible to calculate the coefficients quickly and efficiently. One should remember about the transformation method (similarly to the lens distortion case) – it is necessary to use the inverse transformation to avoid image discontinuity.
The inverse transformation in this case relies on assigning the source image values to the $X, Y$ coordinates and the destination image values to the $x, y$ coordinates, and then performing the actual correction. One can apply the above method to every destination coordinate; the transformation will find the corresponding coordinate in the source image. With the right interpolation (filtering) it will give satisfactory results.
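As an illustration, the whole procedure – building the system (2.5), solving it through the normal form (2.6) and evaluating (2.4) – fits in a few lines of Python (a sketch with illustrative names, not the thesis implementation):

import numpy as np

def correction_coeffs(src_pts, dst_pts):
    # Builds the system of equation (2.5) from four (x, y) -> (X, Y) pairs
    # and solves it in the normal form (2.6): x = (A^T A)^{-1} (A^T b).
    rows, rhs = [], []
    for (x, y), (X, Y) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -X * x, -X * y])
        rows.append([0, 0, 0, x, y, 1, -Y * x, -Y * y])
        rhs += [X, Y]
    A, b = np.array(rows, float), np.array(rhs, float)
    return np.linalg.solve(A.T @ A, A.T @ b)   # a, b, c, d, e, f, g, h

def transform(coeffs, x, y):
    # Equation (2.4).
    a, b, c, d, e, f, g, h = coeffs
    w = g * x + h * y + 1.0
    return (a * x + b * y + c) / w, (d * x + e * y + f) / w

For the actual correction, the coefficients would be fitted with the roles of the point sets swapped, so that transform() maps each destination pixel back to its source pixel, in accordance with the inverse-transformation rule above.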
2.5.3.3. Summary
All modern applications for raster graphics processing (like Adobe Photoshop, Corel PhotoPaint or GIMP) have a perspective correction tool. Unfortunately, these tools are not always useful due to their simplicity. The author implemented a correction system that allows performing the transformation on a few images simultaneously, with the possibility of using an alpha channel and of entering the reference point coordinates manually. Thanks to that, the creation of orthophotographs is simple and efficient.
2.6. Object texture creation
Objects used in virtual reality environments should have textures in order to increase the scene realism. Texture creation using only perspective photos is possible when the proper tools are available. In this case, the toolset is defined by methods for perspective correction and image stitching/superimposing. In order to create a texture, one should first perform the lens correction of the photos. Next, the photos, now free from lens distortion, should be subjected to perspective correction (to determine the right transformation matrix and perform the proper correction).
When one has only a blueprint and information about the object texture, applications for graphics processing should be used (in a process similar to painting). Of course, it is also possible to use specialized applications for texture generation (e.g. Corel Texture, Texture Creator, etc.).
2.7. Object modelling based on the processed informations
After the right input data processing, one might begin the proper geometric reconstruction process. A few methods might be used; the focus of this Thesis concerns the two following ones:
reconstruction from parallel projections (e.g. blueprints and orthophotographs),
reconstruction from perspective photos.
In both cases one has to deal with the projection of every point of the object onto the image plane.
An important point is that when there is only one projection (one photograph or a blueprint), it is impossible to reconstruct the geometry without further data. A minimum of two projections is needed. In the case of orthogonal photos, two reference (projected) points are enough – provided, of course, that the orthogonal photos are mutually perpendicular. In the case of perspective photos, the additional problems are the calibration of the free-positioned cameras as well as the perspective effect.
2.7.1. Reconstruction from blueprints and orthophotographs
In the case of modelling from parallel projections, like blueprints, it is necessary to solve the following problems:
orthogonal camera representation (the projection method),
orthogonal camera calibration relative to the created model (when one has blueprints and proper reference points, a way to calibrate the camera position relative to the model position is necessary),
the way to determine particular model points.
One of the most significant things is to adopt the right camera model, since the camera is a fundamental element of the further reconstruction process (e.g. the calibration method strictly depends on the established camera model). In the present Thesis the author uses the vector camera model, due to its simplicity and the speed of its basic operations.
2.7.1.1. Parallel (orthogonal) camera representation
In three-dimensional graphics it is necessary to use a given projection type. The main goal of every projection is to transform the global coordinates of the object into its screen coordinates, which allows displaying the object.
One of the most useful is the parallel projection, shown below:
Figure 2-4. Parallel projection with use of the vector camera model
The camera in the above image has a given orientation, defined by the vectors $R$ (local X axis), $U$ (local Y axis) and $D$ (local Z axis). The position of the camera is marked by the vector $O(X_O, Y_O, Z_O)$. The size of the projection plane is defined by the $S_X$ and $S_Y$ values.
A specific point in space $P(X_P, Y_P, Z_P)$ is projected in a parallel way onto the projection plane (the camera image plane), yielding the point $p(x_p, y_p)$.
The vectors used in the camera orientation ($R, U, D$) depend on the type of parallel projection. The vectors are normalized (unit vectors). As a frame of reference, the author established a right-handed Cartesian coordinate system.
The vector camera model might be written as follows:

$$M \cdot (P - O) = p \quad (2.7)$$

where $M$ denotes the projection matrix; the above equation might be expanded as:

$$\begin{bmatrix} R_X & R_Y & R_Z \\ U_X & U_Y & U_Z \end{bmatrix} \cdot \left( \begin{bmatrix} X_P \\ Y_P \\ Z_P \end{bmatrix} - \begin{bmatrix} X_O \\ Y_O \\ Z_O \end{bmatrix} \right) = \begin{bmatrix} x_p \\ y_p \end{bmatrix} \quad (2.8)$$

The 2D coordinates obtained in this way are the parallel projection of the 3D points onto the given plane.
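A minimal sketch of equation (2.8) in Python (illustrative names):

import numpy as np

def project_parallel(P, O, R, U):
    # Equation (2.8): p = M (P - O), where the rows of M are the camera's
    # unit orientation vectors R (local X axis) and U (local Y axis).
    d = np.asarray(P, float) - np.asarray(O, float)
    return np.array([np.dot(R, d), np.dot(U, d)])

# Example: a front-facing parallel camera placed at the origin.
# project_parallel([1, 2, 5], [0, 0, 0], [1, 0, 0], [0, 1, 0]) -> [1., 2.]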
2.7.1.2. Orthogonal camera calibration
Given the above camera model, one should consider a method to find the right camera position in order to make the reconstruction possible. Since the camera orientation is known (it results from the orientation of the blueprint or the given parallel projection – the $R, U, D$ vectors are known), the unknowns are the size of the camera image plane, $S_X, S_Y$, and its position $O(X_O, Y_O, Z_O)$.
To solve this problem, the author proposes the following method: the model is translated relative to the camera by a specific distance. Having this distance and some control points, the desired parameters can be calculated. The image below illustrates the method:
Figure 2-5. Orthogonal camera calibration
Here $d_1 \div d_3$ denote the distances from the particular reference points $P_1 \div P_3$ to the camera image plane. The points $p_1 \div p_3$ are the projections of the reference points onto the camera plane.
In order to perform a correct calibration, one should avoid situations where the reference points $P_i$ lie on a common line (in space). To execute the calibration, 3 points are needed (together with their coordinates on the camera plane).
Once the camera (plane/blueprint) orientation vectors are obtained, one might calculate the distances from the centre of the reference system to the particular points, using the following relationship:

$$d_{P_i} = \frac{D_X P_{i.X} + D_Y P_{i.Y} + D_Z P_{i.Z}}{\sqrt{D_X^2 + D_Y^2 + D_Z^2}}, \qquad i = 1..3 \quad (2.9)$$

Next, the minimal distance in this distance set is determined:

$$d_{MIN} = \min\{d_{P_1}, d_{P_2}, d_{P_3}\} \quad (2.10)$$

This distance is necessary to calculate the distances of the points relative to the camera plane:

$$d_i = d_{P_i} - (d_{MIN} - D), \qquad i = 1..3 \quad (2.11)$$

Every point $P_i$ might then be calculated with the relationship below (which is taken from Figure 2-5):

$$P_i = O + \frac{1}{2} S_X \, p_{i.x} \, R + \frac{1}{2} S_Y \, p_{i.y} \, U + d_i \, D \quad (2.12)$$
After rearranging the left- and right-hand sides:

$$O + \frac{1}{2} S_X \, p_{i.x} \, R + \frac{1}{2} S_Y \, p_{i.y} \, U = P_i - d_i \, D \quad (2.13)$$

Writing the above vector equation per component:

$$X_O + \frac{1}{2} S_X \, p_{i.x} R_X + \frac{1}{2} S_Y \, p_{i.y} U_X = P_{i.X} - d_i D_X$$
$$Y_O + \frac{1}{2} S_X \, p_{i.x} R_Y + \frac{1}{2} S_Y \, p_{i.y} U_Y = P_{i.Y} - d_i D_Y \quad (2.14)$$
$$Z_O + \frac{1}{2} S_X \, p_{i.x} R_Z + \frac{1}{2} S_Y \, p_{i.y} U_Z = P_{i.Z} - d_i D_Z$$
Equations (2.14) might also be written in matrix form:

$$\begin{bmatrix}
1 & 0 & 0 & \frac{1}{2} p_{i.x} R_X & \frac{1}{2} p_{i.y} U_X \\
0 & 1 & 0 & \frac{1}{2} p_{i.x} R_Y & \frac{1}{2} p_{i.y} U_Y \\
0 & 0 & 1 & \frac{1}{2} p_{i.x} R_Z & \frac{1}{2} p_{i.y} U_Z
\end{bmatrix}
\cdot
\begin{bmatrix} X_O \\ Y_O \\ Z_O \\ S_X \\ S_Y \end{bmatrix}
=
\begin{bmatrix} P_{i.X} - d_i D_X \\ P_{i.Y} - d_i D_Y \\ P_{i.Z} - d_i D_Z \end{bmatrix} \quad (2.15)$$

From the above form it follows that one has three equations and five unknowns. This means that if one wants to determine all of these unknowns, a minimum of three control points has to be used.
The matrix for the above-mentioned set of points is:

$$\begin{bmatrix}
1 & 0 & 0 & \frac{1}{2} p_{1.x} R_X & \frac{1}{2} p_{1.y} U_X \\
0 & 1 & 0 & \frac{1}{2} p_{1.x} R_Y & \frac{1}{2} p_{1.y} U_Y \\
0 & 0 & 1 & \frac{1}{2} p_{1.x} R_Z & \frac{1}{2} p_{1.y} U_Z \\
1 & 0 & 0 & \frac{1}{2} p_{2.x} R_X & \frac{1}{2} p_{2.y} U_X \\
0 & 1 & 0 & \frac{1}{2} p_{2.x} R_Y & \frac{1}{2} p_{2.y} U_Y \\
0 & 0 & 1 & \frac{1}{2} p_{2.x} R_Z & \frac{1}{2} p_{2.y} U_Z \\
1 & 0 & 0 & \frac{1}{2} p_{3.x} R_X & \frac{1}{2} p_{3.y} U_X \\
0 & 1 & 0 & \frac{1}{2} p_{3.x} R_Y & \frac{1}{2} p_{3.y} U_Y \\
0 & 0 & 1 & \frac{1}{2} p_{3.x} R_Z & \frac{1}{2} p_{3.y} U_Z
\end{bmatrix}
\cdot
\begin{bmatrix} X_O \\ Y_O \\ Z_O \\ S_X \\ S_Y \end{bmatrix}
=
\begin{bmatrix}
P_{1.X} - d_1 D_X \\ P_{1.Y} - d_1 D_Y \\ P_{1.Z} - d_1 D_Z \\
P_{2.X} - d_2 D_X \\ P_{2.Y} - d_2 D_Y \\ P_{2.Z} - d_2 D_Z \\
P_{3.X} - d_3 D_X \\ P_{3.Y} - d_3 D_Y \\ P_{3.Z} - d_3 D_Z
\end{bmatrix} \quad (2.16)$$

When equation (2.16) is treated as $A \cdot x = b$, it might be solved. Since there are more equations than unknowns, the equation might be solved after reduction to the normal form:

$$(A^T \cdot A) \cdot x = A^T \cdot b \quad (2.17)$$

That is, the system of equations in normal form is:

$$x = (A^T \cdot A)^{-1} \cdot (A^T \cdot b) \quad (2.18)$$
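A sketch of this calibration step in Python is given below: it assembles the nine equations of (2.16) for three reference points and solves them through the normal form (2.18). The distances d_i are assumed to have been computed from (2.9)-(2.11); the names are illustrative, not taken from the thesis implementation:

import numpy as np

def calibrate_orthocamera(P, p, d, R, U, D):
    # P: three world reference points; p: their plane coordinates;
    # d: their distances to the camera plane from (2.11);
    # R, U, D: the known unit orientation vectors of the camera.
    R, U, D = (np.asarray(v, float) for v in (R, U, D))
    A, b = [], []
    for Pi, (px, py), di in zip(P, p, d):
        for axis in range(3):           # the three component equations (2.14)
            row = [0.0, 0.0, 0.0, 0.5 * px * R[axis], 0.5 * py * U[axis]]
            row[axis] = 1.0             # selects X_O, Y_O or Z_O
            A.append(row)
            b.append(Pi[axis] - di * D[axis])
    A, b = np.array(A), np.array(b)
    x = np.linalg.solve(A.T @ A, A.T @ b)   # normal form (2.17)-(2.18)
    return x[:3], x[3], x[4]                # O, S_X, S_Y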
2.7.1.3. Determining geometry of the model
Calculating the individual points of the model is based on the idea of finding the coordinates of the intersection of two rays in space. Because an intersection is not always attainable in 3D space, the point nearest to both rays is sought instead.
A proper algorithm for this problem is also needed for collision detection and for object reconstruction issues (projection from 2D space to 3D space).
Below is a sample solution to this problem, proposed in [3], which gives good results.
The concept of this solution relies on finding points lying on both lines (the first line is defined by the points $A, B$ and the second line by the points $C, D$). These points have to be as close to each other as possible. Calculating the average of these points gives the position of the desired point.
Of course, if the lines intersect in one plane, a 2D line intersection algorithm might be used. But since we work in 3D space, another method has to be used.
Let the wanted point, referred to as $G$, be the one located nearest to both segments. If this point exists, it has projections on both lines (marked as $E, F$). Therefore, finding these points allows finding the wanted coordinate.
When one has the coordinates of the start and end of the rays, he/she might calculate the directional vectors $v_{AB}$ and $v_{CD}$. These vectors, together with the scalars $d_{AB}$ and $d_{CD}$, are necessary to estimate the $E$ and $F$ positions. The scalars have values ranging from 0 to 1.
If one has the $E, F$ points, it is possible to determine the point $G$ from the equation:

$$G = \frac{E + F}{2} \quad (2.19)$$

This point will be located nearest to both rays.
This algorithm doesn't handle the case when the two lines cover each other, nor the case when the lines are parallel to each other (then the reconstruction is impossible).
For the main cases the idea might be presented as follows:
Figure 2-6. Determining the point nearest to the two lines in space
Data:
$A(X,Y,Z), B(X,Y,Z)$ – coordinates of the first line
$C(X,Y,Z), D(X,Y,Z)$ – coordinates of the second line
Wanted:
$d_{AB}, d_{CD}$ – scalars for both lines
$E, F$ – points referred to as the projections of the point $G$ onto the two lines
$G$ – coordinates of the wanted point
First of all, it is necessary to determine how to calculate the given points:

$$E = A + d_{AB} \cdot v_{AB}, \qquad F = C + d_{CD} \cdot v_{CD}, \qquad G = \frac{E + F}{2} \quad (2.20)$$

and to determine the directional vectors:

$$v_{AB} = B - A = \begin{bmatrix} B_X - A_X \\ B_Y - A_Y \\ B_Z - A_Z \end{bmatrix}, \qquad v_{CD} = D - C = \begin{bmatrix} D_X - C_X \\ D_Y - C_Y \\ D_Z - C_Z \end{bmatrix} \quad (2.21)$$

As we can see, the solution reduces to calculating the scalars $d_{AB}$ and $d_{CD}$.
The points $E$ and $F$ form the segment that realizes the smallest distance between both lines, which leads to the following condition:

$$\|E - F\| \to \min, \text{ with the aim } E - F = 0$$
$$A + d_{AB} \cdot v_{AB} - C - d_{CD} \cdot v_{CD} = 0 \quad (2.22)$$

Rearranging the equation, one has:

$$d_{AB} \cdot v_{AB} - d_{CD} \cdot v_{CD} = C - A \quad (2.23)$$

where:

$$C - A = v_{AC} = \begin{bmatrix} C_X - A_X \\ C_Y - A_Y \\ C_Z - A_Z \end{bmatrix} \quad (2.24)$$
Putting equation (2.23) into matrix form:

$$d_{AB} \cdot \begin{bmatrix} v_{AB.X} \\ v_{AB.Y} \\ v_{AB.Z} \end{bmatrix} - d_{CD} \cdot \begin{bmatrix} v_{CD.X} \\ v_{CD.Y} \\ v_{CD.Z} \end{bmatrix} = \begin{bmatrix} v_{AC.X} \\ v_{AC.Y} \\ v_{AC.Z} \end{bmatrix} \quad (2.25)$$

$$\begin{bmatrix} v_{AB.X} & -v_{CD.X} \\ v_{AB.Y} & -v_{CD.Y} \\ v_{AB.Z} & -v_{CD.Z} \end{bmatrix} \cdot \begin{bmatrix} d_{AB} \\ d_{CD} \end{bmatrix} = \begin{bmatrix} v_{AC.X} \\ v_{AC.Y} \\ v_{AC.Z} \end{bmatrix} \quad (2.26)$$

Treating the above equation as $A \cdot d = b$, it is easy to solve for its unknowns.
Because there are more equations than unknowns, it is hard to solve this equation in the classical way. The problem can be easily solved by using the normal form, where the equation becomes $(A^T \cdot A) \cdot d = A^T \cdot b$.
Writing equation (2.26) in normal form:

$$\begin{bmatrix} v_{AB.X} & v_{AB.Y} & v_{AB.Z} \\ -v_{CD.X} & -v_{CD.Y} & -v_{CD.Z} \end{bmatrix}
\cdot
\begin{bmatrix} v_{AB.X} & -v_{CD.X} \\ v_{AB.Y} & -v_{CD.Y} \\ v_{AB.Z} & -v_{CD.Z} \end{bmatrix}
\cdot
\begin{bmatrix} d_{AB} \\ d_{CD} \end{bmatrix}
=
\begin{bmatrix} v_{AB.X} & v_{AB.Y} & v_{AB.Z} \\ -v_{CD.X} & -v_{CD.Y} & -v_{CD.Z} \end{bmatrix}
\cdot
\begin{bmatrix} v_{AC.X} \\ v_{AC.Y} \\ v_{AC.Z} \end{bmatrix} \quad (2.27)$$
It can be seen that equation (2.27) might be written in scalar form, because:

$$v_{AB.X} v_{AB.X} + v_{AB.Y} v_{AB.Y} + v_{AB.Z} v_{AB.Z} = v_{AB} \bullet v_{AB}$$
$$-v_{AB.X} v_{CD.X} - v_{AB.Y} v_{CD.Y} - v_{AB.Z} v_{CD.Z} = -v_{AB} \bullet v_{CD} \quad (2.28)$$
$$v_{CD.X} v_{CD.X} + v_{CD.Y} v_{CD.Y} + v_{CD.Z} v_{CD.Z} = v_{CD} \bullet v_{CD}$$

So the system in normal form might be written as:

$$\begin{bmatrix} v_{AB} \bullet v_{AB} & -v_{AB} \bullet v_{CD} \\ -v_{CD} \bullet v_{AB} & v_{CD} \bullet v_{CD} \end{bmatrix}
\cdot
\begin{bmatrix} d_{AB} \\ d_{CD} \end{bmatrix}
=
\begin{bmatrix} v_{AB} \bullet v_{AC} \\ -v_{CD} \bullet v_{AC} \end{bmatrix} \quad (2.29)$$

As one can see, equation (2.29) has the form:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \cdot \begin{bmatrix} d_{AB} \\ d_{CD} \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \quad (2.30)$$
After using Cramer's rule, the solution is:

$$d_{AB} = \frac{b_1 a_{22} - b_2 a_{12}}{a_{11} a_{22} - a_{21} a_{12}}, \qquad d_{CD} = \frac{b_2 a_{11} - b_1 a_{21}}{a_{11} a_{22} - a_{21} a_{12}} \quad (2.31)$$

So that:

$$W = (v_{AB} \bullet v_{AB})(v_{CD} \bullet v_{CD}) - (-v_{AB} \bullet v_{CD})(-v_{AB} \bullet v_{CD})$$
$$d_{AB} = \frac{(v_{AB} \bullet v_{AC})(v_{CD} \bullet v_{CD}) - (-v_{CD} \bullet v_{AC})(-v_{AB} \bullet v_{CD})}{W}$$
$$d_{CD} = \frac{(-v_{CD} \bullet v_{AC})(v_{AB} \bullet v_{AB}) - (v_{AB} \bullet v_{AC})(-v_{AB} \bullet v_{CD})}{W} \quad (2.32)$$
After calculation of the distances $d_{AB}$ and $d_{CD}$, the $E, F, G$ points might be determined, using equations (2.20):

$$E = A + d_{AB} \cdot v_{AB}, \qquad F = C + d_{CD} \cdot v_{CD}, \qquad G = \frac{E + F}{2}$$
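The complete construction is short enough to be shown as a Python sketch (an illustration of the derivation above, with illustrative names):

import numpy as np

def nearest_point_to_rays(A, B, C, D):
    # Solve the 2x2 normal system (2.29) for d_AB and d_CD (Cramer's
    # rule, (2.31)), then average E and F according to (2.20).
    A, B, C, D = (np.asarray(v, float) for v in (A, B, C, D))
    vAB, vCD, vAC = B - A, D - C, C - A
    a11, a12 = vAB @ vAB, -(vAB @ vCD)
    a21, a22 = -(vCD @ vAB), vCD @ vCD
    b1, b2 = vAB @ vAC, -(vCD @ vAC)
    W = a11 * a22 - a21 * a12
    if abs(W) < 1e-12:
        raise ValueError("parallel lines - reconstruction impossible")
    dAB = (b1 * a22 - b2 * a12) / W
    dCD = (b2 * a11 - b1 * a21) / W
    E = A + dAB * vAB
    F = C + dCD * vCD
    return (E + F) / 2.0    # the point G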
2.7.2. Reconstruction from perspective photographs
In the case of geometry reconstruction from perspective photographs, the following questions should be answered:
the method of representation of the perspective camera (the projection method),
the method for calibrating the camera positions relative to the created model,
the method for determining the geometry of the given model.
2.7.2.1. Perspective camera representation
A common method for projecting three-dimensional graphics (used in interactive presentations, games, etc.) is the perspective projection. Thanks to the use of the projection centre $F$, which is simultaneously the camera position, it is possible to get the depth effect. That type of projection increases the realism of the visualized scene. The vector camera model is considered in the Thesis.
Figure 2-7. Perspective projection with use of the vector camera model
Similarly to the parallel camera model, the vectors $R, U, D$ are used to determine the camera orientation, and $S_X, S_Y$ define the size of the camera image plane. But contrary to the orthogonal projection, the projection centre is moved by the focal length $f$ from the camera plane (for the parallel projection, the distance between the hypothetical projection centre and the camera plane is infinite). Thanks to that, the resulting image has a depth effect.
The camera position is represented by $F(X_F, Y_F, Z_F)$, while the camera plane position is given by the vector $O(X_O, Y_O, Z_O)$. The projected point $p(x_p, y_p)$ results from the intersection of the camera plane with the ray created by the point $P(X_P, Y_P, Z_P)$ on the model and the projection centre $F$.
The perspective camera model can be described as follows:

$$f \cdot \frac{R \cdot (P - F)}{D \cdot (P - F)} = x_p, \qquad f \cdot \frac{U \cdot (P - F)}{D \cdot (P - F)} = y_p \quad (2.33)$$

Expanding the above equation from the vector form results in:

$$f \cdot \frac{R_X (X_P - X_F) + R_Y (Y_P - Y_F) + R_Z (Z_P - Z_F)}{D_X (X_P - X_F) + D_Y (Y_P - Y_F) + D_Z (Z_P - Z_F)} = x_p$$
$$f \cdot \frac{U_X (X_P - X_F) + U_Y (Y_P - Y_F) + U_Z (Z_P - Z_F)}{D_X (X_P - X_F) + D_Y (Y_P - Y_F) + D_Z (Z_P - Z_F)} = y_p \quad (2.34)$$

The 2D coordinates obtained in this way are the perspective projection of the 3D points onto the given plane.
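For comparison with the parallel case, the perspective projection (2.33) can be sketched as follows (illustrative names):

import numpy as np

def project_perspective(P, F, R, U, D, f):
    # Equation (2.33): the point must lie in front of the camera,
    # i.e. D . (P - F) must be non-zero.
    d = np.asarray(P, float) - np.asarray(F, float)
    depth = np.dot(D, d)
    return np.array([f * np.dot(R, d) / depth, f * np.dot(U, d) / depth])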
2.7.2.2. Perspective camera calibration
Because of the complexity of calibrating the position and orientation of the perspective cameras, that problem is only mentioned in this Thesis. More details about calibration methods can be found in [7], [8] and [12].
2.7.2.3. Determining geometry of the model
Determining the points of the model for the perspective projection is identical to the parallel projection case. Instead of using parallel rays, perspective rays are used (a single ray is created by a point on the model, the reference point on the image plane and the projection centre point).
In the case of orientation of the free-positioned cameras, even small differences in orientation may cause errors when determining points on the model (insufficient precision). In that case the method for camera calibration described in [12] should be used. It allows simultaneously orienting the cameras and determining the geometry of the model.
2.8. Export of the model
The finished model should be written in a specific format appropriate to the target virtual reality environment. When further modelling is required, the model should be exported to a format compatible with the modelling application.
As a universal format, VRML 2.0 is recommended, owing to the ability to present the model quickly and the simplicity of import (most applications for 3D processing and modelling support VRML files).
The chosen format should make it possible to present the properties of the model which have been set during the reconstruction process (if the model has information about animation, the format should allow writing this data).
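For orientation, the skeleton of such an export is very small; the sketch below writes a minimal VRML 2.0 file with a single IndexedFaceSet (it is not BluePrint Modeler's actual exporter):

def export_vrml(path, vertices, faces):
    # vertices: iterable of (x, y, z); faces: iterable of vertex-index tuples.
    # Each face's index list is terminated with -1, as VRML 2.0 requires.
    with open(path, "w") as out:
        out.write("#VRML V2.0 utf8\n")
        out.write("Shape {\n  geometry IndexedFaceSet {\n")
        out.write("    coord Coordinate {\n      point [\n")
        for x, y, z in vertices:
            out.write("        %g %g %g,\n" % (x, y, z))
        out.write("      ]\n    }\n    coordIndex [\n")
        for face in faces:
            out.write("      " + " ".join(str(i) for i in face) + " -1,\n")
        out.write("    ]\n  }\n}\n")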
3. BluePrint Modeler program as a sample reconstruction system
In this Chapter the author describes the BluePrint Modeler application – from its history and requirements, through the interface (GUI), to the practical applications of the program. Tutorials describing how to apply BluePrint Modeler to specific problems are given in Chapter 5.
3.1. History
BluePrint Modeler has been developed since September 2002. The main idea of the program is the fast creation of an outline (wireframe) of a low-polygon model from blueprints. The first version of the application (written in Delphi) allowed only for the simple creation of vertices, lines and polygons, without any correction modules. Because of the need for a more specialized environment, the author decided to convert the application code to C++ Builder. This was done at the beginning of 2003. Because the simple conversion from Delphi to C++ was insufficient (low flexibility of that approach), the author rewrote the code from scratch (from the beginning of 2004), and the present, more flexible version offers many modelling and correction operations.
3.2. Requirements
The fundamental requirements of the BluePrint Modeler application are:
installed OpenGL drivers (OpenGL 1.2),
min. 32 MB RAM and 10 MB of hard disk space, plus enough space to store images and animations in the workspace folder,
a screen resolution of 800x600 (however, it is hard to work at that resolution); 1024x768 or higher is recommended,
colour depth – 32 bits recommended,
because of the significant memory usage, it is recommended not to use other complex applications in the meantime (to avoid low speed of the whole system and increasing memory demands).
3.3. System architecture
The system is divided into sub-systems.
Figure 3-1. The BluePrint Modeler architecture
The main structure of the BluePrint Modeler application is depicted in Figure 3-1. The Model Editor controls the main reconstruction process, while the Photogrammetric Unit is used to prepare the data for reconstruction. The intuitive approach simplifies the reconstruction process. The object-based structure allows for fast extension of the application's abilities. The main classes are shown in Figure 3-2. The system is object-oriented.
Figure 3-2. The BluePrint Modeler architecture (main classes)
The main engine of the application consists of function-specific managers. The managers supply the classes and methods necessary in the reconstruction process. Besides this, the application uses DLL libraries for displaying the viewports and for support of multi-language versions (currently being developed).
One of the most significant managers is the Messages Manager. The application uses its own messages, hence there is a need for support methods to handle their management.
The Scene Manager, also very significant, consists of elements that hold the data and support methods for manipulation of geometry, materials and others (cameras, images, etc.).
The Input Manager provides I/O procedures supporting the mouse and keyboard. The Output Manager supports procedures for data storage in the application format and supports exporting the model to the VRML language.
The Viewport Manager is used to hold data about the viewports and their layouts. The Script Manager is used to process scripts.
The PhotoModule (visible in Figure 3-2) is a part of the Photogrammetric Unit.
Figure 3-3. The Photogrammetric Module class hierarchy
The created PhotoFlow system (described in Section 3.4) allows for easy manipulation of the data for reconstruction. It consists of several classes (see Figure 3-3). The classes are based on the Base Module, and each class provides elements and methods for a given target:
for the Lens Distortion Correction Module, the module class has information about the photos, their distortion and the coefficients that are used in the rectification process,
for the Perspective Correction Module, there is information about the perspective correction coefficients, the distorted images and the result image,
for the OrthoCamera Calibration Module, there is information about the reference points, etc.
The author implemented an original method for creating and manipulating objects (that are associated with the particular classes) – this is the essence of the PhotoFlow system (it makes the data preparation process simple and efficient).
3.4. Interface and application features
As was described, the main application is divided into two modules:
the Model Editor, which supports methods for creation and editing of the models,
the Photogrammetric Unit, for preprocessing (e.g. correcting) the acquired materials and for the camera calibration.
Figure 3-4. The Model Editor in the BluePrint Modeler application
The active tab of the Model Editor is shown in Figure 3-4. The Editor consists of the following functions, grouped in the proper tabs (Table 3-1).
Table 3-1. List of the main Model Editor functions
Vertices – support methods for editing vertices:
1. Create vertex – allows creating a vertex manually (by entering its XYZ coordinates). The given vertex might be created selected, frozen and/or hidden.
2. Modify selected vertices – allows modifying the selected vertices by setting coordinates (with the possibility of changing only selected coordinates, e.g. X and Z). Modified vertices might be selected, frozen and/or hidden.
3. Select / find vertices – enables selecting vertices by name or by radius. The selection includes the option of finding selected, frozen or hidden vertices. The application supports four modes of selection: add to the existing selection, create a new selection, invert the selection for the currently selected, unselect only the currently selected.
4. Get properties of selected vertices – acquires the properties of the selected vertices.
5. Create vertices on line – allows creating a given number of vertices on a segment (by entering the start of the segment and defining the end of the segment or the segment vector). The created points might be connected with lines. The start and end points of the segment might be created or not (the user may use existing points).
6. Duplicate selected vertices – allows creating a copy or multiple copies of the selection by a vector. Subsequent copies are translated by the defined vector. The user can also define the copy count and the creation flags.
7. Weld vertices – allows welding the selected vertices into one vertex. The vertices to weld might be chosen by radius or by labels (names). The position of the newly created vertex might be averaged, or its coordinates might be entered manually.
8. Create vertices by mirror – allows duplicating the selected vertices (and the elements that use them, like lines or polygons) by mirroring. This function also supports welding vertices after mirroring – e.g. welding vertices that are placed near the mirror plane within a given tolerance.
9. Remove vertices – makes it possible to remove the selected vertices or remove all vertices.
Lines – support methods for editing lines:
1. Create line / lines – allows creating a line that may consist of two, three or four vertices. Of course, creation flags like selected, frozen or hidden might also be set.
2. Modify selected lines – allows modifying the state of the selected lines.
3. Select / find lines – allows selecting lines by name, finding selected and/or frozen lines, etc. Four selection modes are supported.
4. Get properties of selected lines – acquires the properties of the selected lines.
5. Weld (insert) lines – allows inserting the selected lines into a defined segment (the function finds and creates the projection of the inserted line onto the main line, which eases sticking segments together).
6. Remove lines – makes it possible to remove the selected lines or remove all lines.
Polygons – support methods for editing polygons:
1. Create polygon – allows creating a polygon based on three or four vertices (triangle or quad) with a given material and mapping coordinates. The creation state (selected, frozen, hidden) may also be adjusted.
2. Modify selected polygons – allows modifying the selected polygons (by the current selection or by name selection). The method supports a flip-normals option. The modification may concern only the mapping coordinates. It is possible to create planar mapping for the selected polygons.
3. Select / find polygons – similarly to select / find lines, this option enables selecting polygons by name, finding selected and/or frozen polygons, etc. Four selection modes are supported.
4. Get properties of selected polygons – acquires the properties of the selected polygons.
5. Remove polygons – allows removing the selected polygons or removing all polygons.
Objects – support methods for editing objects:
1. Add – allows inserting a child object (into the object hierarchy) based on the selected object. It provides a dialog where one might set the object state (visibility, freeze), object icon, bounding box, orientation and object centre. One might also set the object name and type identifier.
2. Remove – allows removing the current object.
3. Clear All – clears all objects.
4. Properties – allows modifying the object properties (initially set during the „Add” operation).
5. AutoReference – makes possible the automatic duplication of objects with a given type identifier. After a click, all objects with the given type identifier (equal to that of the selected object) are created as instances of the selected object (with respect to their orientation). This option is useful when creating windows: one creates only one window, sets the positions of the rest and performs the „AutoReference” process.
6. Link to object – adds (links) the selected elements (vertices, lines, polygons) to the currently selected object. This option makes possible one of two linking types: link only the selected elements, or link all within the selected vertices (when only the vertices have to be selected, and the lines and polygons based on those vertices are included automatically).
7. Unlink All – removes all elements linked to the selected object.
Cameras and Viewports – make it possible to edit cameras and change viewport layouts:
1. Add – allows creating a new camera. The camera might be: parallel, orthophotogrammetric (parallel with photo), perspective, or photogrammetric (perspective with photo). The user might adjust the following properties of the newly created camera: orientation by orientation names (front, rear, left, etc.) or by adjusting the proper angles; position and target (XYZ); width and height of the image plane; name; visibility of the camera's cone.
2. Remove – allows removing the selected camera.
3. Clear All – allows removing the previously created cameras. Please note that the standard cameras can't be removed.
4. Properties – makes it possible to edit the camera parameters (with options like those for adding a new camera).
5. Viewport layouts – allows switching the viewport layout between the available styles.
Scripts – provide automation of some processes by script execution:
1. Command line – some operations (like vertex adding) might be executed by entering particular text commands. One might enter commands with a given syntax in order to perform create/edit/remove operations.
Images / Image Sequences / Movies – support methods for managing images, image sequences and movies:
1. Add Images – allows adding BMP/TGA/JPEG images to the application.
2. Add Movies – allows adding AVI movies to the application. Please note that the AVI files must not be written in the DivX format (because of its complex compression).
3. Add Sequence – allows adding a sequence of BMP/TGA/JPEG images, in order to help with batch processing of these images.
4. Remove – allows removing the selected image/image sequence or movie.
5. Clear All – clears all images within the Images/Movies list.
6. Properties – makes it possible to adjust options like the image name and the resolution of the image texture (used in texture mapping). Of course, one might view the image or a movie (frame by frame).
Snappers – support methods for managing snappers (editing helpers):
1. Add – allows adding a snapper (an attractor used for precise positioning of vertices). Every snapper has a given name and makes it possible to attract in three axes (XYZ). The attracting axes might be freely adjusted: in case of choosing two axes (e.g. XY) the snapper is a point on a plane (2D), for three axes (XYZ) the snapper is a point in 3D space, etc. Of course, one might adjust the attracting (snapping) tolerance.
2. Remove – allows removing the selected snapper.
3. Clear All – allows removing all snappers.
4. Properties – makes it possible to edit the snapper parameters.
Materials – support managing materials (for object texturing):
1. Add – allows adding a new material. For every material the following properties might be set: name, colour, texture, transparency.
2. Remove – allows removing the selected material.
3. Clear All – allows removing all materials.
4. Properties – makes it possible to edit the selected material (with properties like those for the „Add” material option).
Besides the methods described in Table 3-1, the Model Editor has the following editing modes (Table 3-2):
Table 3-2. Editing modes of the Model Editor
1. Camera translation mode – enables moving the camera position (with simultaneous translation of its target) in order to change the camera view.
2. Camera zooming mode – enables changing the camera zoom by manipulating the camera plane dimensions.
3. Camera rotation mode – enables rotating the given cameras (standard cameras won't be rotated, except the isometric and perspective cams).
4. Camera FOV mode – enables changing the FOV by adjusting the focal length of the given perspective camera.
5. Object / elements selection mode – makes it possible to select objects (the class of the selected objects might be checked).
6. Object / elements translation mode – makes it possible to translate (or select and translate) the selected objects.
7. Elements creation mode – makes it possible to enable the creation mode in order to create vertices, lines or polygons.
The Photogrammetric Unit, similarly to the Model Editor, has three main sections: viewports, icon/menu bar and toolbar tabs (at the right). The following modes are available (Table 3-3):
Table 3-3. Editing modes of the Photogrammetric Unit
1. Camera translate mode – enables moving the camera position in order to navigate within the photoflow objects.
2. Camera zooming mode – enables zooming the camera in order to navigate within the photoflow objects.
3. Photoflow edition mode – when enabled, it allows editing the photoflow elements: insert/remove/connect, etc.
The Photoflow name is significant here, because it defines the main operations that can be done with the Photogrammetric Unit. Instead of using casual image processing applications to perform specific corrections, the Photoflow system introduced by the author allows managing them in one place. The system is based on an object approach to image processing.
When the „Photoflow edition mode” is enabled, the user might insert processing modules (by drag-and-drop – simply drag a module icon from the icon bar to the destination place in the photoflow view). The following modules are available (Table 3-4):
Table 3-4. Processing modules used in the Photogrammetric Unit
1. Image Input (Input Module) – makes it possible to use earlier loaded images in the processing modules. Provides single image data on output.
2. Image Output (Output Module) – allows viewing the given image and its filename on disk within the workspace folder. Requires single image data on input.
3. Camera Output (Output Module) – allows setting the given camera parameters. Requires single camera data on input. Besides this, one has to assign a specific camera name (by double-clicking).
4. Lens Distortion Correction Module – allows performing the lens correction operation. The lens correction might be done by use of a proper profile base or by setting the parameters manually. Requires one or more image data on input. Provides one or more image data on output.
5. Perspective Correction Module – allows performing the perspective correction transformation. Requires one or more image data on input. Provides a single output for image data.
6. Orthocamera Calibration Module – allows performing calibration of the parallel camera (in order to perform the blueprint calibration). Requires one or more image data on input. Provides one or more camera data on output.
The main principle of module processing is the following: every module provides specific inputs and outputs in a given number. For example, the perspective correction module used for image superimposing can have only one output (for images). All modules can be linked by connecting the right connectors (inputs and outputs) of the mentioned modules. It is possible to link only connectors of the same group (it is impossible to connect an image input to a camera output). Editing of a given module (for example, performing the perspective correction in the selected module) is done by double-clicking on the module object.
When a given module is edited, its specific tab is shown in the right toolbox. To return to the photoflow edition, one has to click the first icon on the left (the photoflow symbol). To ease processing and navigation, the images and cameras tabs were added (this avoids excessive switching between the Model Editor and the Photogrammetric Unit).
The Photogrammetric Unit can be seen in Figure 3-5.
Figure 3-5. The Photogrammetric Unit with the Photoflow architecture
The Lens Distortion Correction Module (Figure 3-6) consists of the following elements:
the „Calibration image” section,
the „Distorted image” section,
the „Corrected image” section.
The supported methods allow for:
automatic recognition of a single calibration pattern,
automatic calibration of the recognized pattern,
manual adjustment of the correction coefficients,
removing lens distortion with use of the calibrated coefficients,
removing lens distortion with use of an external source with a calibration profile database (from a file).
Figure 3-6. The Lens Distortion Correction Module
The meaning of the most significant buttons is described in Table 3-5.
Table 3-5. The most significant buttons in the Lens Distortion Correction Module
Refreshes the image at the input (useful when the image has been changed after loading).
This tab holds the buttons that support pattern recognition.
This tab holds the buttons that support pattern calibration.
With this tab it is possible to set the visual parameters: the „Show calibration grid” option is used to show the recognized points; the „Show distortion grid” option is used to show the grid distorted by the adjusted coefficients.
Depending on the selected tab: for „Pattern Recognition” this method clears all recognized points; for „Pattern Calibration” it sets all coefficients to zero.
Performs automatic pattern recognition. When the recognition goes well, the „Pattern Calibration” and „Calibration Grids” tabs become visible.
Performs automatic pattern calibration. After the calibration, the application sets the coefficients in the „Pattern Calibration” tab.
These buttons allow clearing / loading the database that will be used to perform automatic lens distortion correction. If one wants to perform this type of correction, the „Correction parameters” have to be set to „get from profiles database”.
Performs the lens distortion correction. The current image is used when the right option is checked (the „Enabled in the correction process” checkbox). The selected filtering / interpolation method will be used. If the respective option is checked, the application uses the profile database to perform the correction based on the EXIF data obtained from the distorted image. When the input images are movies, the „Movie processing” checkbox has to be checked. The resolution of the output image has to be adjusted before the correction starts.
Enables switching the viewport layout between the given views. The available layouts are single view, dual view and triple view.
The Perspective Correction Module (Figure 3-7) consists of the following elements:
the „Distorted Image” section,
the „Corrected Image” section,
the Distorted/Corrected viewports with a given layout.
The supported methods allow to:
superimpose a few images,
work with the image transparency,
enter the transformation reference points with the keyboard or set them graphically (in the viewport),
simplify the image stitching process with help points – the source (distorted image view) help points are mapped by on-the-fly calculation into the destination help points (corrected image view); when one wants to set the destination help points (during stitching), the right „Calculate” method might be used,
manipulate coordinates via the introduced coordinates clipboard – this solution saves the user's time and facilitates copying coordinates from one tab to another.
Figure 3-7. The Perspective Correction Module
The meaning of the most significant buttons is described in Table 3-6.
Table 3-6. The most significant buttons in the Perspective Correction Module
Refreshes the image at the input (useful when the image has been changed after loading).
Makes it possible to edit the points that define the main transformation mapping regions. In the view, the main points are marked in blue/light blue.
Makes it possible to edit the points that help with defining the main transformation mapping regions. In the view, the help points are marked in red/light red.
Makes it possible to modify the transparency options: transparency, alpha channel (only if the input image has transparency data), alpha channel balance, inverting the alpha channel (checkbox).
Sets the manually entered coordinates.
Sets the defaults for the given coordinates.
Copies the given coordinates into the special coordinates clipboard.
Pastes the coordinates from the coordinates clipboard into the given fields.
Makes it possible to calculate the destination main points (in the „Corrected Image” section) based on the destination help points.
Performs the perspective correction. The current image is used when the right option is checked (the „Enabled in the correction process” checkbox). The given filtering/interpolation method will be used in the correction process. When the input images are movies, the „Movie processing” checkbox has to be checked. The resolution of the output image has to be adjusted before the correction.
Enables switching the viewport layout between a single view showing the distorted or the corrected image, and a dual view showing both.
The Orthocamera Calibration Module (Figure 3-8) consists of the following elements:
the „Input image” section,
the „Results – Orthocamera parameters” section.
The supported calibration method allows performing fast and automatic orthocamera calibration based on the method described in Chapter 2. After the calibration is executed, the results are displayed in the „Results...” section and, if a given camera is linked to the module (on output), the camera will be adjusted with the determined parameters.
The meaning of the most significant buttons is described in Table 3-7.
Table 3-7. The most significant buttons in the Orthocamera Calibration Module
Refreshes the image at the input (useful when the image has been changed after loading).
Performs autocalibration of the parallel camera.
Figure 3-8. The Orthocamera Calibration Module
Sample uses of the mentioned correction modules are described in Chapter 5.
3.5. BluePrint Modeler applications
The BluePrint Modeler application might be used for:
modelling/texturing issues, such as:
creating and editing low-polygon models (especially),
preliminary low-polygon model creation (for further modelling in more advanced applications),
model texturing,
calibration purposes, to determine the proper camera position; the calibration might be manual or automatic (at the present time automatic calibration is only available for the parallel camera),
image processing purposes, especially lens distortion removal and perspective correction issues. These tools can be used in the following situations:
removing radial distortion from images and movies,
applying radial distortion (e.g. fish-eye distortion) to images and movies,
texture manipulation,
image superimposing (with use of a transparency channel),
image stitching (for astronomy and photogrammetry purposes),
processing of multiple images (batch processing) or movies (e.g. flipping an image horizontally or changing the image resolution),
applying the right rotations and scalings to a given image,
removing the perspective effect in order to create orthophotographs,
architectural cataloguing,
object modelling for game-development purposes,
object modelling for virtual presentations,
fast modelling of situation outlooks.
3.6. Development and future directions
Because of the continuous development of BluePrint Modeler, the following directions might be considered:
perspective camera calibration,
a plug-in system (e.g. creating plug-ins for saving in various formats, and developing an SDK11),
new modelling functions12,
a multi-language version (the present version is English),
improvements to the texture mapping tool,
a module supporting graphic filtering (effects for image smoothing, sharpening, colouring...),
a simple image editor for plotting corrections on the blueprints,
support for animation modelling.
11 Software Development Kit – support procedures for plug-in writing in programming languages
12 for example – creating points on an arc (in the present version points might be created on a segment, but not based on a circle or arc)
4. Lens distortion correction – experiments and practical applications
As was described in Chapter 2 (in Section 2.5.1), the radial distortion of the lens system makes the output image slightly deformed. For everyday aims, like photographing for documentation purposes, these distortions might be ignored, but in geometry and texture reconstruction even small distortions can introduce significant errors.
An example of texture reconstruction is shown in Figures 4-1 – 4-4.
Figure 4-1. Extracted texture loaded with radial distortion
Figure 4-2. Extracted texture with removed radial distortion
As is visible above, the images differ slightly from each other. The difference is caused by the distortion that deforms the image radially.
The image on the left shows a selected region from Figure 4-1. The blue edges are straight lines that allow the deformation to be compared. As is visible, the edge of the top window does not cover the straight line. For the bottom window the edge is nearly straight. This is connected with the fact that the upper part of the image lies near the image border (the farther from the image center, the bigger the observed distortion).
Figure 4-3. Zoom of selected region from Figure 4-1.
The image on the right shows a selected region from Figure 4-2. Because the lens distortion removal was performed, the distortion is not visible to the degree shown in Figure 4-3. As a result, the edge of the top window covers the straight line. A similar fact is observed for the edge of the bottom window.
Figure 4-4. Zoom of selected region from Figure 4-2.
Please note that Figure 4-1 is barrel-like, while in Figure 4-2 a slight pincushion effect might be observed. This is connected with the use of the method with three parameters of distortion (introduced in Chapter 2), where the distortion center is simultaneously the image center.
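For illustration only, a minimal Python sketch of such a three-parameter radial model is given below; the function name and coefficient values are assumptions of this sketch, not the BluePrint Modeler code:

    def apply_radial(x, y, cx, cy, k1, k2, k3):
        # Three-parameter polynomial model with the distortion center at the
        # image center: r' = r * (1 + k1*r^2 + k2*r^4 + k3*r^6).
        dx, dy = x - cx, y - cy
        r2 = dx * dx + dy * dy
        s = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        return cx + dx * s, cy + dy * s

    # A point near the border is displaced much more than one near the center:
    print(apply_radial(50.0, 0.0, 0.0, 0.0, 1e-7, 0.0, 0.0))   # (50.0125, 0.0)
    print(apply_radial(500.0, 0.0, 0.0, 0.0, 1e-7, 0.0, 0.0))  # (512.5, 0.0)

Removing the distortion amounts to inverting this mapping, which explains why a residual pincushion effect may remain when the coefficients do not match the lens exactly.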
Above, the influence of the spherical distortion on the texture extraction process was described. The geometry extraction (modelling and reconstruction) also takes advantage of image interpretation. If the image is distorted, the modelling errors may cause model deformations. For this reason one has to know which cameras are good for the reconstruction process and how to take the photos in order to decrease the influence of the spherical distortion on the resulting images.
4.1. Experiments
The following facts are considered:
- the size of the distortion (called further the initial error) for selected cameras,
- the relationship between the initial error and the focal length (only when the profile covers most of the focal range),
- the visible effects of the correction with the obtained profile (made by calibration methods).
Several cameras have been examined, from typical compact devices to professional reflex cameras. The prices of the cameras have also been considered (for information purposes).
The relationship between distortion and focal length is described on the example of the Olympus C-765 UltraZoom camera profile.
4.1.1. Tests of cameras
For the research, 11 digital cameras have been examined to check their suitability for use in the reconstruction process (for data acquisition). For every camera, photographs of a calibration plane were taken at various focal lengths. To decrease the measurement errors, the photographs were taken using a tripod – in most cases a few photos were taken for every focal length (i.e. there was a series of photos per focal length).
Because the largest distortion is observed for small focal lengths, the research was focused on photographs taken with small focal lengths. Full calibration profiles (covering the whole focal length range) have been made for selected cameras. The values of the initial error and the final error (the error after calibration, when the distortion has been removed) were the basis of the tests.
For result processing, an application called BPM: External Lens Distortion Correction Module was used (written by the author for lens correction and calibration purposes, with the calibration algorithm written in cooperation with Maciej Gaweł).
The tested cameras are shown in Table 4-1.
Table 4-1. Index of tested cameras with division according to created profiles

Complete profiles (covering the whole focal range):
- Canon EOS 300D
- Olympus C-2 Zoom
- Olympus C-765 UltraZoom
- Konica Minolta Dynax 7D (almost complete)

Incomplete profiles:
- Canon Powershot A510
- Sony Cybershot DSC-P100
- Konica Minolta Dimage Z3
- Kodak EasyShare CX7525
- Kodak EasyShare Z700
- Nikon Coolpix 4600
- Nikon Coolpix 5200
For each measured camera the following graph was made:
[Graph: distortion level (0–3) vs. focal length 20–380 mm for the Olympus C-765 UltraZoom]
Figure 4-5. Sample lens profile calibration graph
For each focal length a few photographs were taken (frequently 5 photos for full profiles and on average 3 photos per focal length for the remaining profiles). Owing to this, the average and the standard deviation were determined for every focal length. The graphs show the average data (solid line) and the standard deviation (dotted line). Red lines concern the initial lens distortion (before calibration) and blue lines the final distortion (after calibration). The bold dotted line of each colour represents an approximation of the data by a five-degree polynomial.
The introduced focal lengths are the 35mm equivalents of the focal length for the given camera model.
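As a side note, the per-focal statistics and the five-degree polynomial approximation used in the graphs can be reproduced with a few lines of Python; the sample data below is purely illustrative:

    import numpy as np

    # Hypothetical raw measurements: focal length [mm] -> distortion level of
    # each photo in the series taken at that focal length.
    samples = {38.0: [2.70, 2.76, 2.73], 60.0: [1.90, 1.86], 100.0: [1.21, 1.25],
               160.0: [0.74, 0.70], 260.0: [0.41, 0.43], 380.0: [0.23, 0.24, 0.22]}

    focals = np.array(sorted(samples))
    means = np.array([np.mean(samples[f]) for f in focals])
    stds = np.array([np.std(samples[f], ddof=1) for f in focals])

    # Five-degree polynomial approximation of the averaged curve, as used for
    # the bold dotted lines of the graphs (needs at least 6 focal lengths).
    coeffs = np.polyfit(focals, means, deg=5)
    fitted = np.polyval(coeffs, focals)
    print(means, stds, fitted)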
4.1.2. Full calibration profiles
Canon EOS 300D (with 18-55 USM lens)
Type: Reflex
Rough price: ~3500 PLN

Table 4-2. Specifications for Canon EOS 300D
Effective image resolution / sensor: 6,3 Megapixel / CMOS
Focal range: 18,0 – 55,0 mm (3x optical zoom); 28,8 – 88,0 mm (35mm equivalent)
Digital zoom: n/a
Max. resolution: 3072 x 2048

[Graph: distortion level (0–5) vs. focal length 20–100 mm]
Figure 4-6. Relationship between focal length and lens distortion level for Canon EOS 300D

Table 4-3. Survey statistics of measuring lens distortion for Canon EOS 300D
Highest initial distortion level: 4,456 at 28,8 mm (2,863 after calibration)
Lowest initial distortion level: 1,276 at 88,0 mm (1,236 after calibration)
Max. std. deviation of initial distortion for single focal: 0,265
Highest final distortion level: 2,863 at 28,8 mm
Lowest final distortion level: 1,236 at 88,0 mm
Max. std. deviation of final distortion for single focal: 0,156
Image resolution: 1536 x 1024
Tested focals amount / count of taken photographs: 15 / 51
Olympus C-2 Zoom
Type: Compact
Rough price: ~600 PLN

Table 4-4. Specifications for Olympus C-2 Zoom
Effective image resolution / sensor: 2,0 Megapixel / CCD
Focal range: 5,0 – 15,0 mm (3x optical zoom); 38,0 – 114,0 mm (35mm equivalent)
Digital zoom: 2,5x
Max. resolution: 1600 x 1200

[Graph: distortion level (0–4) vs. focal length 30–120 mm]
Figure 4-7. Relationship between focal length and lens distortion level for Olympus C-2 Zoom

Table 4-5. Survey statistics of measuring lens distortion for Olympus C-2 Zoom
Highest initial distortion level: 3,457 at 38,0 mm (1,038 after calibration)
Lowest initial distortion level: 0,241 at 114,0 mm (0,224 after calibration)
Max. std. deviation of initial distortion for single focal: 0,036
Highest final distortion level: 1,038 at 38,0 mm
Lowest final distortion level: 0,224 at 114,0 mm
Max. std. deviation of final distortion for single focal: 0,087
Image resolution: 1600 x 1200
Tested focals amount / count of taken photographs: 9 / 47
Olympus C-765 UltraZoom
Type: Compact
Rough price: ~1400 PLN

Table 4-6. Specifications for Olympus C-765 UltraZoom
Effective image resolution / sensor: 4,2 Megapixel / CCD
Focal range: 6,3 – 63,0 mm (10x optical zoom); 38,0 – 380,0 mm (35mm equivalent)
Digital zoom: 4x
Max. resolution: 2288 x 1712 (3200 x 2400 interpolated)

[Graph: distortion level (0–3) vs. focal length 20–380 mm]
Figure 4-8. Relationship between focal length and lens distortion level for Olympus C-765 UltraZoom

Table 4-7. Survey statistics of measuring lens distortion for Olympus C-765 UltraZoom
Highest initial distortion level: 2,733 at 38,0 mm (1,064 after calibration)
Lowest initial distortion level: 0,231 at 380,0 mm (0,217 after calibration)
Max. std. deviation of initial distortion for single focal: 0,048
Highest final distortion level: 1,064 at 38,0 mm
Lowest final distortion level: 0,217 at 380,0 mm
Max. std. deviation of final distortion for single focal: 0,059
Image resolution: 1600 x 1200
Tested focals amount / count of taken photographs: 48 / 279
Konica Minolta Dynax 7D (with 17-35 mm lens)
Type: Reflex
Rough price: ~6000 PLN

Table 4-8. Specifications for Konica Minolta Dynax 7D
Effective image resolution / sensor: 6,1 Megapixel / CCD
Focal range: 17,0 – 35,0 mm (2x optical zoom); 25,5 – 52,5 mm (35mm equivalent)
Digital zoom: n/a
Max. resolution: 3008 x 2000

[Graph: distortion level (0–3,5) vs. focal length 20–50 mm]
Figure 4-9. Relationship between focal length and lens distortion level for Konica Minolta Dynax 7D

Table 4-9. Survey statistics of measuring lens distortion for Konica Minolta Dynax 7D
Highest initial distortion level: 3,012 at 25,5 mm (0,509 after calibration)
Lowest initial distortion level: 0,726 at 39,0 mm (0,550 after calibration)
Max. std. deviation of initial distortion for single focal: 0,097
Highest final distortion level: 0,795 at 42,0 mm
Lowest final distortion level: 0,361 at 30,0 mm
Max. std. deviation of final distortion for single focal: 0,066
Image resolution: 2256 x 1496
Tested focals amount / count of taken photographs: 7 / 27
4.1.3. Incomplete calibration profiles
Sony Cybershot DSC-P100
Type: Compact
Rough price: ~1600 PLN

Table 4-10. Specifications for Sony Cybershot DSC-P100
Effective image resolution / sensor: 5,1 Megapixel / CCD
Focal range: 7,9 – 23,7 mm (3x optical zoom); 38,0 – 114,0 mm (35mm equivalent)
Digital zoom: 2x
Max. resolution: 2592 x 1944

[Graph: distortion level (0–3,5) vs. focal length 20–100 mm]
Figure 4-10. Relationship between focal length and lens distortion level for Sony Cybershot DSC-P100

Table 4-11. Survey statistics of measuring lens distortion for Sony Cybershot DSC-P100
Highest initial distortion level: 3,193 at 38,0 mm (1,731 after calibration)
Lowest initial distortion level: 0,448 at 80,3 mm (0,401 after calibration)
Max. std. deviation of initial distortion for single focal: 0,040
Highest final distortion level: 1,731 at 38,0 mm
Lowest final distortion level: 0,401 at 80,3 mm
Max. std. deviation of final distortion for single focal: 0,042
Image resolution: 2048 x 1536
Tested focals amount / count of taken photographs: 7 / 22
Nikon Coolpix 4600
Type: Compact
Rough price: ~800 PLN

Table 4-12. Specifications for Nikon Coolpix 4600
Effective image resolution / sensor: 4,2 Megapixel / CCD
Focal range: 5,7 – 17,1 mm (3x optical zoom); 34,0 – 102,0 mm (35mm equivalent)
Digital zoom: 4x
Max. resolution: 2280 x 1716

[Graph: distortion level (0–4,5) vs. focal length 30–100 mm]
Figure 4-11. Relationship between focal length and lens distortion level for Nikon Coolpix 4600

Table 4-13. Survey statistics of measuring lens distortion for Nikon Coolpix 4600
Highest initial distortion level: 4,066 at 35,8 mm (0,821 after calibration)
Lowest initial distortion level: 1,336 at 57,9 mm (0,567 after calibration)
Max. std. deviation of initial distortion for single focal: 0,045
Highest final distortion level: 0,961 at 50,1 mm
Lowest final distortion level: 0,567 at 57,9 mm
Max. std. deviation of final distortion for single focal: 0,091
Image resolution: 1600 x 1200
Tested focals amount / count of taken photographs: 9 / 27
Nikon Coolpix 5200
Type: Compact
Rough price: ~1200 PLN

Table 4-14. Specifications for Nikon Coolpix 5200
Effective image resolution / sensor: 5,1 Megapixel / CCD
Focal range: 7,8 – 23,4 mm (3x optical zoom); 38,0 – 114,0 mm (35mm equivalent)
Digital zoom: 4x
Max. resolution: 2592 x 1944

[Graph: distortion level (0–5) vs. focal length 30–120 mm]
Figure 4-12. Relationship between focal length and lens distortion level for Nikon Coolpix 5200

Table 4-15. Survey statistics of measuring lens distortion for Nikon Coolpix 5200
Highest initial distortion level: 4,404 at 38,0 mm (0,598 after calibration)
Lowest initial distortion level: 0,814 at 72,6 mm (0,404 after calibration)
Max. std. deviation of initial distortion for single focal: 0,026
Highest final distortion level: 1,030 at 41,4 mm
Lowest final distortion level: 0,404 at 72,6 mm
Max. std. deviation of final distortion for single focal: 0,066
Image resolution: 1600 x 1200
Tested focals amount / count of taken photographs: 5 / 21
Kodak EasyShare CX7525
Type: Compact
Rough price: ~890 PLN

Table 4-16. Specifications for Kodak EasyShare CX7525
Effective image resolution / sensor: 5,0 Megapixel / CCD
Focal range: 5,6 – 16,8 mm (3x optical zoom); 34,0 – 102,0 mm (35mm equivalent)
Digital zoom: 5x
Max. resolution: 2560 x 1920

[Graph: distortion level (0–5) vs. focal length 20–110 mm]
Figure 4-13. Relationship between focal length and lens distortion level for Kodak EasyShare CX7525

Table 4-17. Survey statistics of measuring lens distortion for Kodak EasyShare CX7525
Highest initial distortion level: 4,475 at 42,5 mm (3,980 after calibration)
Lowest initial distortion level: 1,834 at 102,0 mm (1,813 after calibration)
Max. std. deviation of initial distortion for single focal: 0,000
Highest final distortion level: 3,980 at 42,5 mm
Lowest final distortion level: 1,813 at 102,0 mm
Max. std. deviation of final distortion for single focal: 0,000
Image resolution: 1600 x 1200
Tested focals amount / count of taken photographs: 6 / 6
Kodak EasyShare Z700
Type: Compact
Rough price: ~1200 PLN

Table 4-18. Specifications for Kodak EasyShare Z700
Effective image resolution / sensor: 4,0 Megapixel / CCD
Focal range: 6,0 – 30,0 mm (5x optical zoom); 35,0 – 175,0 mm (35mm equivalent)
Digital zoom: 3x
Max. resolution: 2408 x 1758

[Graph: distortion level (0–4) vs. focal length 20–180 mm]
Figure 4-14. Relationship between focal length and lens distortion level for Kodak EasyShare Z700

Table 4-19. Survey statistics of measuring lens distortion for Kodak EasyShare Z700
Highest initial distortion level: 3,434 at 35,0 mm (1,017 after calibration)
Lowest initial distortion level: 0,370 at 175,0 mm (0,341 after calibration)
Max. std. deviation of initial distortion for single focal: 0,045
Highest final distortion level: 1,017 at 35,0 mm
Lowest final distortion level: 0,276 at 42,0 mm
Max. std. deviation of final distortion for single focal: 0,047
Image resolution: 1656 x 1242
Tested focals amount / count of taken photographs: 6 / 18
Konica Minolta Dimage Z3
Type: Compact
Rough price: ~1600 PLN

Table 4-20. Specifications for Konica Minolta Dimage Z3
Effective image resolution / sensor: 4,2 Megapixel / CCD
Focal range: 5,8 – 69,9 mm (12x optical zoom); 35,0 – 420,0 mm (35mm equivalent)
Digital zoom: 4x
Max. resolution: 2272 x 1704

[Graph: distortion level (0–4) vs. focal length 20–420 mm]
Figure 4-15. Relationship between focal length and lens distortion level for Konica Minolta Dimage Z3

Table 4-21. Survey statistics of measuring lens distortion for Konica Minolta Dimage Z3
Highest initial distortion level: 3,704 at 35,4 mm (0,578 after calibration)
Lowest initial distortion level: 0,228 at 337,4 mm (0,228 after calibration)
Max. std. deviation of initial distortion for single focal: 0,068
Highest final distortion level: 0,578 at 35,4 mm
Lowest final distortion level: 0,202 at 63,6 mm
Max. std. deviation of final distortion for single focal: 0,054
Image resolution: 1600 x 1200
Tested focals amount / count of taken photographs: 9 / 27
Canon PowerShot A510
Type: Compact
Rough price: ~890 PLN

Table 4-22. Specifications for Canon PowerShot A510
Effective image resolution / sensor: 3,2 Megapixel / CCD
Focal range: 5,8 – 23,2 mm (4x optical zoom); 35,0 – 140,0 mm (35mm equivalent)
Digital zoom: 3,2x
Max. resolution: 2048 x 1536

[Graph: distortion level (0–3,5) vs. focal length 20–100 mm]
Figure 4-16. Relationship between focal length and lens distortion level for Canon PowerShot A510

Table 4-23. Survey statistics of measuring lens distortion for Canon PowerShot A510
Highest initial distortion level: 3,079 at 35,0 mm (0,815 after calibration)
Lowest initial distortion level: 0,734 at 96,6 mm (0,260 after calibration)
Max. std. deviation of initial distortion for single focal: 0,045
Highest final distortion level: 1,068 at 47,1 mm
Lowest final distortion level: 0,260 at 96,6 mm
Max. std. deviation of final distortion for single focal: 0,050
Image resolution: 2048 x 1536
Tested focals amount / count of taken photographs: 6 / 19
4.1.4. Comparison of selected cameras
The comparison was made for the cameras mentioned in the previous section (visible in Figure 4-17). The comparison also covers additionally selected cameras.
[Graph: lens distortion level (0–5) vs. focal length 20–120 mm; initial and final distortion curves for Canon EOS 300D, Olympus C-2Z, Olympus C-765UZ and Konica Minolta Dynax 7D]
Figure 4-17. Comparison of selected cameras (relationship between focal length and lens distortion level)
The comparison has been made for focal lengths ranging from 20 mm to 120 mm. As in the previous graphs, the focal values are 35mm equivalents. As shown in Figure 4-17, the final distortion (lens distortion after calibration) is usually considerably less than the initial distortion. Only for the tested reflex cameras do the initial and final distortion cover each other (Konica Minolta Dynax 7D and Canon EOS 300D – the covering starts from some focal value).
Note that for the Dynax 7D the profile is incomplete, so one might observe the point where the covering starts, but the level of covering is unknown. For the Canon EOS 300D the level of covering is visible and shows that for bigger focal lengths removal of the lens distortion is not necessary.
The most significant fact concerns the distortion at the shortest focal length: when the shortest focal length is reached, the highest distortion is observed. This is particularly visible for the two selected reflex cameras, whose lenses support the shortest focals (17/18 mm – please note that compact devices usually do not support focal lengths as small as these).
4.2. Calibration
The calibration process has been performed with BPM: External Lens Distortion Correction Module. The calibration process, based on a genetic algorithm, determines the necessary coefficients. A sample screen from the application is shown in Figure 4-18.
Figure 4-18. BPM: Lens Distortion Correction Module with active calibration tab
Every image was verified (its quality determined) before the calibration process proper. The quality is based on the dimensions of the grid visible in the particular photograph: the more space the grid takes up in the image, the better the quality (of course the calibration plane must not be cropped). The tested images had a quality above 65% (often oscillating around 95%). The calculation of three coefficients was implemented as the calibration profile (DX[0] and DY[0] are omitted in the calibration process – only three coefficients are tested, not five).
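The genetic algorithm itself is not reproduced in this thesis. Purely as an illustration of the idea, a minimal sketch might optimize the three coefficients so that the detected grid lines become straight after undistortion; the fitness function, population size and mutation scale below are assumptions, not the actual BPM implementation:

    import numpy as np

    def straightness_error(coeffs, rows, cx, cy):
        # Fitness: after undistorting, the points detected along every grid
        # line should lie on a straight line again. 'rows' is a list of
        # (N x 2) arrays of point coordinates, one array per grid line.
        k1, k2, k3 = coeffs
        err = 0.0
        for pts in rows:
            d = pts - (cx, cy)
            r2 = (d ** 2).sum(axis=1, keepdims=True)
            und = (cx, cy) + d * (1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3)
            A = np.c_[und[:, 0], np.ones(len(und))]          # fit y = m*x + b
            res = np.linalg.lstsq(A, und[:, 1], rcond=None)[1]
            err += res[0] if len(res) else 0.0
        return err

    def calibrate(rows, cx, cy, pop=40, gens=200, sigma=1e-9):
        rng = np.random.default_rng(0)
        population = rng.normal(0.0, sigma, size=(pop, 3))
        for _ in range(gens):
            scores = [straightness_error(c, rows, cx, cy) for c in population]
            parents = population[np.argsort(scores)[: pop // 4]]        # selection
            children = parents[rng.integers(0, len(parents), pop)]      # reproduction
            population = children + rng.normal(0.0, sigma, children.shape)  # mutation
        return min(population, key=lambda c: straightness_error(c, rows, cx, cy))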
The whole process is fully automatic; on an Athlon 1,6 GHz processor with 512 MB RAM, the calibration of 5 images (with resolution 1600x1200) took approx. 37 seconds (~7,5 s/image). Supposing a common calibration consists of 30 images (with 1 image per focal length), the calibration process should take approx. 4 minutes. This is really fast, considering the fact that the user might do the calibration at home (without a special research laboratory, having only the given camera and possibly a tripod).
4.3. Correction
When the profile is ready, the user can execute the correction process. In the used application the „Correction” tab serves this purpose (visible in Figure 4-19).
Figure 4-19. BPM: Lens Distortion Correction Module with active correction tab
All images that need the correction might be loaded into the proper list. Use of the option „(Correction parameters) takes automatically from the autocalibration profile file” allows the correction to be performed while avoiding manual adjustment of the coefficients.
The profiles database made by the author during the tests might also be used to make the correction process fully automated. Tests show that movies are processed as well as images. Because the focal length information is not stored in AVI files, this information must be entered manually.
The correction of 7 images (with resolution 1600x1200) with the mentioned profile on the Athlon 1,6 GHz processor with 512 MB RAM took approx. 27 seconds (~4 s/image). Supposing that during the collection of materials for a medium-detailed object (like the one depicted in Figure 4-1) the number of images might be, for example, 100, the correction process should take approx. 7 minutes. Like the calibration process described in 4.2, this is therefore an efficient method of correcting the obtained materials (without the manual and mundane adjusting of lens distortion parameters required in other graphic processing applications).
5. Practical use of the BluePrint Modeler application
This Chapter describes usage of the BluePrint Modeler in various situations that concern the modelling process, like:
- removal of radial distortion from taken photographs,
- texture extraction,
- blueprints calibration,
- orthophotographs creation,
- modelling from parallel projections.
The common tool used within the application was the Photogrammetric Unit. With its Photoflow System, introduced by the author and described in more detail in Chapter 3, it is possible to pass through the given processes easily and efficiently.
5.1. Lens distortion removal from perspective photographs
Taken photographs are usually loaded with radial distortion. The BluePrint Modeler might be used to remove this undesirable effect. In this case, the Lens Distortion Correction Module should be used.
Using the Photoflow in the BluePrint Modeler, a module arrangement is created that consists of:
- two image inputs, including sample photographs taken with two Olympus cameras (models: C-2 and C-765),
- one Lens Distortion Correction Module,
- four outputs, used to compare results.
The photoflow is shown in Figure 5-1.
The first image was processed with use of the calibration plane. The second image was processed using automatic correction based on the lens profiles database made earlier by the author.
For the first image the following steps were performed:
- in the „Calibration image” section the image with the calibration pattern was loaded. Then the pattern was recognized and all recognition points were shown,
- with use of the „Pattern Calibration” tab and the proper button (for performing automatic calibration) the calibration was made (Figure 5-1),
Figure 5-1. Recognized calibration pattern
- the option „Correction parameters” (in the „Corrected image” section) was set to „get from calibration section”, bilinear filtering was chosen and the correction of the first image was performed. In this case, the coefficients calibrated earlier (determined during autocalibration) were used to perform the correction. The result image is visible in Figure 5-2.
For the second image the following steps were performed:
- the option „Correction parameters” (in the „Corrected image” section) was set to „get from profiles database”. The proper database was loaded and the correction was performed. The correction was possible thanks to the EXIF information placed in the distorted image (in the source JPG file). The application found the right camera and used the associated profile.
Going back to the Photoflow diagram, the situation after both corrections is shown in Fig. 5-3.
Figure 5-2. First image corrected with usage of calibration pattern
Figure 5-3. Finished lens distortion removal process for sample photographs
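The automatic profile lookup can be pictured as in the following sketch (Python with the Pillow library); the profile database structure and the sample coefficients are illustrative assumptions, not the BluePrint Modeler's actual file format:

    from PIL import Image

    # Hypothetical profile database: camera model -> {focal [mm] -> (k1, k2, k3)}
    PROFILES = {"C765UZ": {6.3: (1.2e-9, -3.1e-16, 2.0e-23),
                           9.0: (8.0e-10, -2.0e-16, 1.5e-23)}}

    def lookup_profile(path):
        exif = Image.open(path).getexif()
        model = str(exif.get(0x0110, "")).strip()                 # 0x0110 = Model
        focal = float(exif.get_ifd(0x8769).get(0x920A, 0) or 0)   # 0x920A = FocalLength
        if model in PROFILES and focal:
            # pick the profile entry calibrated at the closest focal length
            best = min(PROFILES[model], key=lambda f: abs(f - focal))
            return PROFILES[model][best]
        return None  # e.g. AVI movies carry no EXIF - enter the focal manually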
5.2. Texture creation (extraction)
For texture creation purposes the Perspective Correction Module, a part of the Photogrammetric Unit, can be used. For example, one has the following photos (Fig. 5-4) that are to serve as material for building textures.
Figure 5-4. Source photographs for texture creation example
With the data flow in the BluePrint Modeler one can create an appropriate module arrangement (Figure 5-5) that consists of:
- two image inputs, including the above mentioned photographs (the photographs have to be processed earlier for radial distortion removal),
- two perspective correction modules,
- four image outputs for viewing the correction results.
Figure 5-5. Flow diagram for texture creation example
Image inputs are marked in blue, correction modules in red and image outputs in green. After creation of the diagram, the next step is to set the correction parameters for both correction modules. For every module the reference points on the distorted photo need to be set (Figure 5-6).
Figure 5-6. Setting the reference points for the distorted photo in the first perspective correction module (reference points are indicated by the red circles)
Then, the reference points are set on the corrected photos (this was done by default, because the result was a simple rectangular area – for this purpose the Reset button in the Main Points tab was used on the corrected points). This is shown in Figure 5-7.
Figure 5-7. Setting the reference points for the corrected photo in the second perspective correction module (reference points are indicated by the red circles)
The next step is to choose the filtering method (bilinear filtering has been chosen for a smooth result image). After that, the Correction Execute button needs to be clicked and the computer generates a transformed image like the one below (Figure 5-8):
Figure 5-8. Perspective correction result for texture creation example
The texture shown above has a resolution of 1024x512 (the width and height of a texture commonly have to be powers of 2). The parameters for the second module need to be adjusted similarly. The textures obtained in this way can then be used in any environment to create a sample rendering of the object (see Figure 5-9). The above mentioned process of texture extraction usually takes less than 10 minutes.
Figure 5-9. Visualisation of the model with textures extracted by the BluePrint Modeler application
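Under the hood, such a perspective correction amounts to a plane-to-plane homography determined by the reference point pairs (four pairs suffice for a plane). A minimal sketch of the idea is given below (direct linear transform plus inverse warping; nearest-neighbour sampling is used here instead of the bilinear filtering mentioned above):

    import numpy as np

    def homography(src, dst):
        # Direct linear transform: solve the 3x3 matrix H that maps the four
        # source points onto the four destination points (up to scale).
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        _, _, vt = np.linalg.svd(np.array(A, float))
        return vt[-1].reshape(3, 3)

    def warp(image, H, out_w, out_h):
        # Inverse warping: every destination pixel is mapped back to the
        # source image (nearest-neighbour sampling).
        Hinv = np.linalg.inv(H)
        out = np.zeros((out_h, out_w) + image.shape[2:], image.dtype)
        for v in range(out_h):
            for u in range(out_w):
                x, y, w = Hinv @ (u, v, 1)
                xi, yi = int(round(x / w)), int(round(y / w))
                if 0 <= xi < image.shape[1] and 0 <= yi < image.shape[0]:
                    out[v, u] = image[yi, xi]
        return out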
5.3. Blueprints calibration
Every modelling task needs a good source for the reconstruction. One type of such sources is blueprints. Blueprints, as parallel projections, require proper calibration in order to set orthogonal cameras (with the blueprint as the image plane) relative to the given model. The calibration process was described in Chapter 2, Section 2.6.1.
5.3.1. Necessity for blueprints correction
First of all, the blueprints must be in a digital form (a bitmap image). Often this relies on scanning the given blueprints. Naturally, during scanning the image with the blueprint might be rotated a bit. Because of the methods described in 2.6.1, it is necessary to remove the above mentioned rotation before the calibration process. The necessity of such operations is visible below (Figure 5-10):
Figure 5-10. Reconstruction from non-corrected blueprints (only the blueprints calibration was done)
Figure 5-10 presents two superimposed screens from the BluePrint Modeler application. Points on the first plane are marked in red and points on the second plane are marked in green. Two points of equal height are chosen. The process of reconstruction is as follows: one indicates a point on the first blueprint; thanks to that, a line is shown on the second blueprint (representing the projection of the point from the first blueprint onto the second blueprint). The best result is when the lines cover each other. Unfortunately this is not always the case.
As shown in Figure 5-10, some difference between the above mentioned lines (projections) is observed. This is a result of non-corrected blueprints. Therefore the correction method should be used in order to get a result like Figure 5-11.
Figure 5-11. Reconstruction from corrected blueprints (the blueprints correction has been done before the calibration proper)
As shown above, the lines are close to each other (please note that the image in Fig. 5-11 is more zoomed in than that in Fig. 5-10, and still the lines in Fig. 5-11 are closer to each other than in Fig. 5-10).
5.3.2. Correction of the blueprints
When the blueprints are in the bitmap format, they should be corrected. In the BluePrint Modeler the correction might be done with the Perspective Correction Module (part of the Photogrammetric Unit). The main data flow is as follows (Figure 5-12):
Figure 5-12. Photogrammetric data flow chart for the blueprints correction purpose
Blueprints are introduced by image input modules (blue coloured). The sample blueprints before correction (Figure 5-13) are printed below:
Figure 5-13. The sample blueprints before their correction.
Please note that in the projection of the right side of the building (Right) the blueprint is a little bit rotated. During the correction process the above rotation must be removed.
For this purpose the Perspective Correction Module is used (four modules, red-coloured in Figure 5-12). First of all, the mutually parallel planes have to be „synchronized”, which means that the blueprints have to be really parallel. Let us analyze this for the left-right planes. It is done by correcting the first plane (left) in order to remove the rotation – for this purpose the first perspective correction module is used. Next, the result image (OutImg1 – the corrected „left” blueprint) is used together with the „right” blueprint in another correction module. OutImg2 allows the user to properly position the reference points and simplifies the correction process of the „right” blueprint.
The correction process for the above-mentioned pair of blueprints is as follows:
In the „Distorted Image” section:
With use of the help points („Help points” tab) the user marks the reference points that help with the correction process. In this case, a rectangular part of the elevation is indicated (of course the acquired shape might not always appear as a rectangle). This is shown in Figure 5-14.
Figure 5-14. Distorted image of the „left” blueprint (help points are marked in red)
In the „Corrected Image” section:
With use of the main points („Main points” tab) the user marks the strictly rectangular shape into which the earlier marked (distorted) points must be fitted. The main points are used temporarily, because the right main points will be calculated from the destination help points. In the BluePrint Modeler the destination help points are only calculated when the user wants to set the destination main points. Since the process relies upon the destination help points, the „Calculate” method should be used. With use of the coordinates clipboard, the coordinates acquired by translating the main points should be copied and pasted into the calculate dialog („Help Points Entry”) in order to execute the „Calculate” method (Figure 5-15).
Figure 5-15. Entering the help points in order to set the corrected main points (usage of the „Calculate” method)
This idea is shown in the image above. The region is indicated by the blue lines (representing the main points). The destination help points are marked in light red. Please note that the destination help points are based strictly on the main points region. It follows that one might manipulate the main points in order to fit the destination help points into the wanted region.
That solution is possible but laborious and tedious. Instead, the main points might be calculated instantly when one inputs the given destination help points. This is the essence of the „Calculate” method.
For the examined case, the situation after the calculation of the main points is as follows (Figure 5-16):
Figure 5-16. Main points after their calculation with the described method
As shown above, the destination main points fall outside the destination image area. This means that the upper part of the corrected image will be cut off. This is acceptable as long as the blueprint itself is not cut. If this situation occurs (Fig. 5-17), one has to calculate the main points once again (with other values of the destination help points).
Figure 5-17. Wrong set of the destination main points.
When the „left” blueprint is corrected, the result („OutImg1.bmp”) is sent to the input of another perspective correction module. The correction steps are as follows:
- first correction, in principle – it relies on copying the pattern image („OutImg1.bmp”) in order to help the user set the reference points on the destination image. In this case the user marks the option „Enabled in the correction process” for the pattern image, leaves the reference points in the default position and performs the correction,
- second (right) correction – the user sets the source points (similarly to what was written about the „left” blueprint correction) and the destination points (based on the previously obtained pattern image). For the given example, the corrected view is as follows (Figure 5-18):
Figure 5-18. Aligned main points just before performing correction
One should remember that the result image of the above correction (in this case „OutImg4.bmp”) has the same orientation as the pattern image. For the above example, this means that the „right” blueprint has a „left” orientation. The „right” blueprint must be mirrored in order to get the rightly oriented image. One might do this with most graphic processing applications.
Besides, it is a good idea to convert the bitmaps into the JPEG format (to limit storage requirements).
After the correction process the blueprints appear as in Figure 5-19.
Figure 5-19. The blueprints from Fig. 5-13 after the perspective correction process
Please note that the earlier rotation of the „right” blueprint is removed and the „right” and „rear” blueprints are flipped (horizontally mirrored). The blueprints obtained this way are ready for the next stage – the calibration process proper.
5.3.3. Blueprints calibration
With the corrected blueprints the user should create the following flow chart:
Figure 5-20. Photogrammetric flow chart for the calibration purposes
The flow chart (visible in Fig. 5-20) consists of four input image modules, the calibration module and four output modules (with assigned cameras). The four cameras on the outputs must be created and assigned earlier. Every camera must be parallel (the „Orthophotogrammetric” option) and should have an assigned reference image (e.g. for the „left” camera the reference image is the „Left_corrected” image). After the flow arrangements one should set the parameters within the calibration module. The process is as follows:
- the orientation of the camera and the distance of the image plane relative to the model have to be adjusted,
- the calibration points (reference points on the image) have to be arranged and their values must be set in the proper controls,
- and then the calibration might be fired by the „Calibration execute” button.
After a properly performed calibration, the parameters of the assigned cameras are set for future modelling.
The screenshot from the module during the calibration for the „left” blueprint is as follows (Figure 5-21):
Figure 5-21. Calibration of the left-oriented blueprint
It is important to be precise when the points are arranged. The future modelling strictly depends on the precision of this stage.
After processing all of the blueprints, one might start the modelling process proper. The arrangement of the blueprints for the given example is visible in Figure 5-22.
Figure 5-22. The blueprints after the calibration process (view from the Model Editor)
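Conceptually, the calibration of a parallel (orthographic) camera reduces, along each image axis, to recovering a meters-per-pixel scale and an offset from the reference points. A minimal one-axis sketch under that assumption:

    def calibrate_axis(p1_pix, p1_m, p2_pix, p2_m):
        # From two reference points with known pixel and model coordinates
        # along one axis, recover scale [m/pixel] and offset [m] so that
        # model = scale * pixel + offset.
        scale = (p2_m - p1_m) / (p2_pix - p1_pix)
        offset = p1_m - scale * p1_pix
        return scale, offset

    # Example: two marks 120 px apart that are known to be 3,0 m apart.
    print(calibrate_axis(100, 0.0, 220, 3.0))   # (0.025 m/pixel, -2.5 m)

This also shows why precision matters here: an error of a few pixels in the reference points changes the scale, and thereby every coordinate reconstructed later.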
5.4. Orthophotographs
Similarly to the texture creation process, making orthophotographs also relies on the perspective correction module. In the texture creation process the significant thing is to obtain only the texture region of interest, without any background. While textures are usually of low resolution, orthophotographs might be of high resolution and include some background.
A way to create simple orthophotographs from one image taken with the Olympus C-765 UltraZoom camera is described below.
At first the lens distortion was removed from the photo. This was done with BPM: External Lens Distortion Correction Module (and took approx. 2 minutes).
Then, the result photo was used with the Photoflow pipeline (Figure 5-23).
Figure 5-23. Photoflow diagram used in the orthophotograph creation process
With the given pipeline, the parameters of the perspective correction module are set. At first the reference points on the distorted (input) image are positioned in the right place. For this, the „Help Points” from the proper tab are used (the „Main Points” have not been positioned). As a reference region, the author chose the windows, as very accessible objects that cover most of the image.
Next, the reference points of the „Corrected” image are adjusted. On the distorted image, the help points are used to determine the source region, so on the result image the help points of the corrected image have to map the equivalent destination region. Because of the earlier mentioned fact that the „Help Points” of the destination image cannot be changed directly, the „Calculate” method should be used instead. Use of this method was described in Section 5.3.2. After use of the „Calculate” method, the correction of the main destination regions is performed. The next step is to choose the proper interpolation method and perform the correction.
The result of the above described process is shown in Figure 5-24.
Figure 5-24. Corrected image - result of the orthophotograph creation process
5.5. Modelling from parallel projections
In point 5.3.1 the importance of proper correction of the blueprints was described. Knowledge of this fact makes the right correction possible, which is useful during the modelling process. As was described in Chapter 2 (point 2.6.1, especially 2.6.1.3), a point on the model might be obtained if at least 2 rays exist; they determine the given point (by ray intersection).
The BluePrint Modeler application supports modelling with use of 2 rays. The rays should be perpendicular to each other and one has to remember that modelling with two parallel or covering rays is unacceptable (this is described in 2.6.1.3); a minimal sketch of this idea follows below.
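For two perpendicular parallel projections the ray intersection reduces to combining complementary coordinates. A minimal sketch for a front (XY) and left (ZY) blueprint pair, where the two independent Y readings are averaged to absorb small calibration errors (an assumption of this sketch, not necessarily the application's exact rule):

    def intersect_front_left(front_xy, left_zy):
        # The front view fixes X and Y, the left view fixes Z and Y; the two
        # independent Y readings should agree up to calibration error, so
        # they are averaged here. Parallel or covering rays would leave one
        # coordinate undetermined, which is why they are unacceptable.
        x, y_front = front_xy
        z, y_left = left_zy
        return (x, (y_front + y_left) / 2.0, z)

    print(intersect_front_left((1.20, 2.45), (0.80, 2.47)))  # (1.2, 2.46, 0.8)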
The blueprints used for modelling are displayed on the image planes of the given cameras. Modelling is possible both from blueprint cameras and normal cameras (front, left... etc.). But for detailed modelling (when a zoom of part of the blueprint is needed) one should use the normal cameras with a given zoom.
For point reconstruction one has to choose the „Vertices” tab and enable the „Creation” mode.
An example of point reconstruction is shown in Figures 5-25 and 5-26.
Figure 5-25. First stage of point reconstruction (indicating a point on the first plane)
Figure 5-26. Second stage of point reconstruction (indicating a point on the second plane)
To create vertices one might use the procedures described in Chapter 3, which among other things allow the user to:
- manually enter and modify the vertex data,
- select various kinds of vertices and get information about them,
- create an array of points on a straight line,
- duplicate points (and objects consisting of vertices, like lines or polygons) and create their translated copies,
- weld several vertices into one (average) vertex,
- mirror vertices (and objects consisting of vertices, like lines or polygons) to create the mirror side of the given object,
- remove any vertices.
Using the „Lines” tab, one might create the edges of the model. With use of the „Polygons” tab it is possible to cover the object with a given texture. In this case polygon creation relies on the following procedures:
- choosing the right material and, if needed, setting the mapping coordinates,
- indicating the subsequent vertices that create the given polygon (note that every polygon might be created from three or four vertices – if the surface consists of more than four vertices, one must create the polygons separately) – using the left mouse button,
- finishing the creation procedure by clicking the right mouse button.
The polygons might be created not only by graphic acquisition but by manually entering the polygon data as well (this is done within the „Create vertex” section in the „Vertices” tab).
It is possible to create a 3D model with the above described methods. The ready-to-use object can be exported to the given format.
5.6. Export model to other applications
Because the BluePrint Modeler in its present form supports only VRML 2.0 and its own format, one should save the reconstruction results in the BluePrint Modeler format in order to make future corrections possible. Then the model should be exported to the VRML format, which allows the finished model to be viewed under any virtual reality viewer or manipulated further.
6. 3D objects reconstruction examples
This Chapter contains sample reconstructions made with the BluePrint Modeler tool. For reconstruction purposes a reduction scale model and a normal scale object are examined.
6.1. First reconstruction example – Project „BDHPUMN”
The BDHPUMN train car is selected as the target of the reconstruction. The BDHPUMN car is a reduction model made at a given scale (H0 = 1:87). The car is shown in Figure 6-1.
Figure 6-1. The BDHPUMN reduction model (scale 1:87)
Because the above reduction model is detailed enough, it can be smoothly used to reconstruct both geometry and texturing (acquiring the texture of the real object is not necessary).
The reconstruction target is to create a simplified model with simultaneously high precision of the created geometry. The created textures should be of good quality, but that is not crucial – the main aspect of the reconstruction is to create a model which the user recognizes as a train car.
The model should belong to the low-polygon models group.
6.1.1. Guidelines and measurement
As is visible in Figure 6-1, a uniform background colour (so called blue-box, in this case a pink-box) is used in order to simplify the texture creation process. The author decided to take photographs from all sides of the object – in order to create orthophotographs and simplify the reconstruction process.
The photographs were taken using the Olympus C-765 UltraZoom and Olympus C-2 Zoom cameras placed on a tripod (Figure 6-2). Additional light was used.
Figure 6-2. The BDHPUMN measurement with the Olympus C-765 UltraZoom camera
Usage of the tripod during the experiments makes taking pictures simpler, but of course the tripod is not a necessary device for the reconstruction, which will be shown with the next reconstruction.
Most of the photographs of the object were taken in „macro” mode – for that reason the cameras were very close to the object and the lens distortions are especially visible.
The train car was measured manually with a scale and selected dimensions were taken. The local axes of the object were also set (see Figure 6-3).
Figure 6-3. Object’s drafts with plotted dimensions and local axes
Because the object is symmetric in the Z axis and almost symmetric in the X axis, the following assumptions are made:
- only one fourth of the object will be modelled (in the quarters: -X, +Y, +Z),
- the rest of the object will be generated by symmetry (mirror tool); especially for the X axis, the generated part will be additionally corrected (in order to avoid the almost-symmetry problem),
- only the train car body will be modelled in the BluePrint Modeler,
- the carriage will be simple flat surfaces with selected transparent images,
- the wheels will be modelled with another tool (in Cinema4D).
6.1.2. Selection of taken materials and data processing
During the survey 9 photos were taken with the Olympus C-2 and 35 photos with the C-765UZ camera. The photos present the model from all sides. From all photographs, 24 photos were chosen for further processing.
In the processing stage all selected photos were put through the lens distortion removal process. Result files were saved in the TGA format, in contrast to the source JPG format. As the correction application, BPM: External Lens Distortion Correction Module was used. The profiles made earlier allowed undistorted images to be obtained. However, the following fact was observed: for all photos taken with the „macro” option in the C-765UZ camera the lens distortion was different (bigger) than for the same focal length without the „macro” option. This forces the user to make profiles also for situations when the „macro” option is used.
In this case an additional profile was not created and the author manually set the focal length for these photographs (set to the minimal focal length to get the maximum lens reduction effect). After this procedure, the result photographs were of much better quality than at the beginning.
After that, the orthophotograph and texture creation was performed. The author assumed that the orthophotographs must be created at the beginning, and the textures should be made after the modelling.
It was assumed that one orthophotograph could be made from a single photo. However, especially for photos taken from a close distance, the perspective distortion is too big, and the orthophotograph could not be created with sufficient quality. So an additional survey was necessary in that case.
6.1.3. The second measurement
A new measurement had to be performed. This time the photos do not cover the whole of the object but are focused on specific parts (shown in Figure 6-4) – it is assumed that the particular parts can be stitched into the result image in the processing stage.
Figure 6-4. Two extreme photographs for left projection
In Figure 6-4 the two extreme photographs are shown. Please note the fact mentioned earlier: with small focal lengths, besides the radial distortion, the perspective effect deforms the greater part of the image. For that reason the user should take as many photos as necessary to create an undistorted image of the whole object (which was done this time).
Thanks to the lens distortion removal process, the relevant deformations were corrected and the whole images were written in the TGA format.
6.1.4. Orthophotograph creation
With the obtained corrected photographs, the orthophotograph (the source for modelling) can be created. With use of the Photoflow system, the given perspective correction is made. For the left side of the BDHPUMN there are two source images that allow an undistorted orthophotograph of the mentioned left side to be created. The front side of the object is created by stitching three „front” images together. During the „stitching” process the significant thing is the proper use of transparency (for that reason the TGA file format was used).
The mentioned three images were earlier painted with proper transparency masks (with use of an external graphic processing application) – for every image a rectangular selection was painted that includes only the center part of the image (30% of the image).
Thanks to that, the BluePrint Modeler (during the perspective correction process) puts together those image parts that are painted with the transparency masks. The result image is shown in Figure 6-5.
Figure 6-5. Orthophotograph of the model’s front side
It might be an interesting question why the image does not preserve the original aspect of the object. It would appear that for reconstruction purposes the source image must have the proper dimensions (otherwise the reconstruction would be hard or impossible). But the answer to that question is the calibration process, in which the image dimensions are calculated automatically, so at this stage the user does not have to care about proper image scaling. This significantly helps with the reconstruction process.
During the perspective correction process and the creation of the orthophotograph, the stitch artefact problem may appear (Figure 6-6). The artefact arises from inaccuracy of the correction process (when the reference points are not well positioned). In this case the perspective correction process needs to be repeated (after translation of some of the points).
Figure 6-6. Stitch artefact of the result image
Of course not all stitches are removed – this results from differences in brightness between the stitched images. In that case the author suggests smoothing the stitched region, but it is possible to leave the images with slightly visible stitch artefacts.
6.1.5. Calibration of the images
For the calibration purposes the Photoflow is used with the proper calibration module. The right diagram is visible in Figure 6-7 and the reference points arrangement is shown in Figure 6-8.
As is visible in Figure 6-7, the parallel cameras are connected (for automation of the adjusting of the calibration results).
Figure 6-7. The Photoflow diagram created for purposes of the calibration
Figure 6-8. Calibration reference points arrangement (left side)
The calibration (reference) points are selected because of their characteristics (accessibility). Thanks to the measurement that was made earlier, the reference points have proper values that are entered in the correct edit boxes.
6.1.6. Main reconstruction of the object
When both images are prepared, the modelling process can begin (Figure 6-9).
At the beginning, only the main points of the object are reconstructed, i.e. the points that outline the model. Because the model has to be low-polygon, the proper generalization is made. It should be mentioned that the vertices are created first and, when the shape of the BDHPUMN train car appears from those points, they are connected by lines (edges). The author suggests avoiding creating polygons and lines at once.
Figure 6-9. Beginning of the BDHPUMN scale model reconstruction
Just after the main vertices and lines creation, the polygons are created. For the proper sides the proper materials are used (e.g. for the front side the „front” material was used... etc.). Next, the proper materials are adjusted with the polygon modification tool – planar mapping is used in the mapping coordinates process.
After creation of the quarter part of the model, the mirror reflection in the YZ plane is performed. At the end of the model (on the left side) the vertices have to be translated by some value along the X axis. The proper correction is made and half of the object is done. The other part is also created by the mirror tool and the model is finished (Figure 6-10).
At this stage the object textures are created. Unfortunately during texture creation some errors with matching textures to the created model can occur. It follows that textures should be created simultaneously with the orthophotographs in order to avoid similar problems.
In that way the textures are simply created from the orthophotographs (by cropping the part of the image that includes only the modelled object).
The result image is shown in Figure 6-10.
Figure 6-10. The BDHPUMN’s modelling results
6.1.7. Export to the other VR modelling environment
After the finished reconstruction experiment, the whole model can be exported in the VRML 2.0 format to the Cinema4D application. Because not all parts of the object were reconstructed (e.g. the wheels – the BluePrint Modeler does not have procedures to reconstruct lathe objects yet), it was decided that the rest of the model should be finished in the mentioned application. The finished model is shown in Figure 6-11.
Figure 6-11. Finished low-polygon train car scale model
6.1.8. Model statistics
The finished model consists of the following elements (counting only the parts modelled in the BluePrint Modeler application – that is, only the body and carriages of the train car):
- 317 vertices,
- 522 polygons,
- 5 materials assigned, of which three were really used (front, left and carriages).
The total reconstruction time was approx. 20 hours. In this time the following things were done:
- object measuring (taking photographs... a very small part of the whole time),
- orthophotographs and texture creation (most of the time),
- two reconstructions (about 2-3 hours). The first reconstruction was made from the first obtained materials (which turned out not to be very good for reconstruction, owing to the significant perspective effect) and the results were bad, so the data acquisition process had to be repeated (see point 6.1.3).
6.2. Second reconstruction example – Project „GDDKiA”
The earlier reconstructed model might be used for various virtual reality purposes, but the reconstruction time was a little long. Often the user needs to reconstruct the given object quickly, without special care about its textures. This means that only the main shape and texture of the object are wanted, made in a short time. For that reason the second project was performed.
The reconstructed object is a building of the GDDKiA management seated in the Zielona Góra city (photograph in Figure 6-12).
Figure 6-12. The GDDKiA building in the Zielona Góra
6.2.1. Reconstruction of the object
In contrast to the previous experiment (and despite the building’s symmetry), the mirror tool is not used. A sample screenshot from the application is shown below (Figure 6-13).
Figure 6-13. Modelling the GDDKiA building from the parallel projections
After wireframe modelling the model is covered with a surface (polygons). The proper texture mapping is assigned to the surface. The result model is shown in Figure 6-14.
Figure 6-14. Finished outlook model of the GDDKiA building
6.2.2. Model statistics
The finished model consists of the following elements:
- 28 vertices,
- 52 polygons,
- 4 materials assigned, of which three were really used (front, rear and right).
The total reconstruction time was approx. 1 hour. In this time the following things were done:
- texture creation (approx. 0,5 hr),
- main reconstruction (approx. 0,5 hr).
As is visible above, the main assumptions have been satisfied. What is more, the reconstruction process was really very fast, despite the low quality of the textures (low quality not in the sense of resolution, but rather of the colour-matching effect). The reconstructed geometry is a very low-polygon model, but this can be an advantage, especially when the particular virtual reality environment (where the model might be used) has to work with many objects like this (e.g. a presentation of virtual cities... etc.).
6.3. Summary
The introduced reconstructions show the practical reconstruction process, which was described in Chapter 2. As shown, this process is mundane, but most of its elements are automated. This makes the reconstruction easier and opens the reconstruction process to users who do not have much experience in this field.
The used BluePrint Modeler application, which supports modelling and reconstruction methods, shows that reconstruction might be much easier than the same reconstruction made only with programs like Cinema4D or 3D Studio MAX. Of course these applications might still be used for making details of the given object.
To recap, the modelling made with the BluePrint Modeler showed the efficiency of this solution. The full summary and conclusions of the reconstruction process are described in Chapter 8.
7. Accuracy of the reconstruction process
The reconstruction process relies on graphic analysis (so called photointerpretation). Unfortunately not all images are ideal projections (perspective or parallel) and this causes some errors. Although it is impossible to perform the reconstruction with ideal precision, it is significant to know the scale of the errors and their influence on the reconstruction result.
For this reason, this Chapter describes the accuracy of reconstruction from both perspective and parallel projections.
Because most of the methods described and used earlier are based on manual image matching, the proper accuracy cannot always be found. Owing to that fact, the equations used in this Chapter serve as an estimation of the mentioned errors.
7.1. Accuracy for the perspective projection
It is common knowledge that in perspective projection the size of the projected object is inversely proportional to its distance from the lens. This situation is shown in Figure 7-1, and the given relation is shown below (equation 7.1).
Figure 7-1. Perspective projection

$$\frac{h}{H} = \frac{f}{D} \qquad (7.1)$$
Because the photogrammetry methods are focused on measuring the dimensions of the object rather than its distance from the lens, equation (7.1) might be developed (with respect to the height calculation) into the following expression:

$$H = \frac{h_{PIX} \cdot S_Y \cdot D}{f \cdot S_{Y\,PIX}} \qquad (7.2)$$

where:
H – object's height [m]
D – object's distance (from the lens) [m]
f – focal length [m]
S_{Y PIX} – height of the image [pixels]
S_Y – height of the camera plane [m]
h_{PIX} – height of the object on the image [pixels]
To estimate the error of relationship (7.2), error theory might be used. The following equation might be written:

$$m_H = \pm\sqrt{\left(\frac{\partial H}{\partial h_{PIX}} \cdot m_{h_{PIX}}\right)^2 + \left(\frac{\partial H}{\partial S_Y} \cdot m_{S_Y}\right)^2 + \left(\frac{\partial H}{\partial D} \cdot m_D\right)^2 + \left(\frac{\partial H}{\partial f} \cdot m_f\right)^2} \qquad (7.3)$$
where:
m_H – mean-square error of the object's height [m]
m_D – mean-square error of the object's distance [m]
m_f – mean-square error of the focal distance [m]
m_{SY} – mean-square error of the height of the camera plane [m]
m_{hPIX} – mean-square error of the object's height on the image [pixels]
The mean-square errors written above concern the quality of determining the particular components. One might notice that in equation (7.3) there is no mean-square error for S_{Y PIX}. This results from the lack of knowledge of the precision of the image height. Besides, both the S_Y and S_{Y PIX} components concern one quantity, so it is enough to use only the m_{SY} component.
The m_D error might be easily determined (one might even assume some value during the measurement), similarly to the m_{hPIX} quantity (by measuring the level of the image blur or by other simple calculations when the particular points are obtained). Unfortunately the m_f error might be obtained only from a more complex and precise measurement, so it is better to assume for that component a value of ±1 mm (a common unit for focal distance measurement).
The
SY
and
m SY
quantities might be determined from other
measurements.
Finally, after the necessary calculations, equation (7.3) takes the following form:
$$m_H = \pm\frac{\sqrt{m_{h_{PIX}}^2 \cdot D^2 \cdot f^2 \cdot S_Y^2 + m_f^2 \cdot h_{PIX}^2 \cdot D^2 \cdot S_Y^2 + m_{S_Y}^2 \cdot f^2 \cdot D^2 \cdot h_{PIX}^2 + m_D^2 \cdot h_{PIX}^2 \cdot S_Y^2 \cdot f^2}}{f^2 \cdot S_{YPIX}} \qquad (7.4)$$
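For illustration, a minimal sketch of the error propagation in equation (7.4); all error values below are illustrative assumptions in the spirit of point 7.4:

```python
from math import sqrt

def height_error(h_pix, s_y, d, f, s_y_pix, m_hpix, m_sy, m_d, m_f):
    """Equation (7.4): mean-square error of the object height H."""
    num = (m_hpix**2 * d**2 * f**2 * s_y**2
           + m_f**2 * h_pix**2 * d**2 * s_y**2
           + m_sy**2 * f**2 * d**2 * h_pix**2
           + m_d**2 * h_pix**2 * s_y**2 * f**2)
    return sqrt(num) / (f**2 * s_y_pix)

# Illustrative error assumptions for the same inputs as above:
print(height_error(900, 0.00646, 20.0, 0.005, 1200,
                   m_hpix=2, m_sy=0.00075, m_d=0.20, m_f=0.0005))
```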
The sample height calculations are described in point 7.4.
7.2. Accuracy for the parallel projection
In parallel projections the objects are not distorted by the perspective effect, so their scale is preserved. A sample projection is shown in Figure 7-2. The corresponding equation (7.5) is written below the image.
Figure 7-2. Parallel projection
$$H = h \qquad (7.5)$$
As is clearly visible, the height of the object is equal to its equivalent on the camera image plane. Therefore, the error of determining the $h$ quantity is equal to the error of the real height of the object.
Of course, the user has at their disposal only $h$ expressed in pixels and $S_Y$ expressed both in pixels and metres. The correct $S_Y$ value is calculated as follows:
- the given plane must be calibrated (e.g. with the calibration method described in Chapter 2) – this gives the $S_Y$ value (in [m]) designed for the parallel projection,
- having the $S_Y$ value, the height can be calculated as follows:
$$H = h = \frac{S_Y \cdot h_{PIX}}{S_{YPIX}} \qquad (7.6)$$
The accuracy of this solution is as follows (using error theory):
$$m_H = \pm\sqrt{\left(\frac{\partial H}{\partial h_{PIX}} \cdot m_{h_{PIX}}\right)^2 + \left(\frac{\partial H}{\partial S_Y} \cdot m_{S_Y}\right)^2} \qquad (7.7)$$
Expanding the above equation, we have:
$$m_H = \pm\frac{\sqrt{m_{h_{PIX}}^2 \cdot S_Y^2 + m_{S_Y}^2 \cdot h_{PIX}^2}}{S_{YPIX}} \qquad (7.8)$$
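A minimal sketch of equations (7.6) and (7.8); the sample values follow the SY = 2,00 m, hPIX = 500 row of Table 7-8:

```python
from math import sqrt

def parallel_height(h_pix, s_y, s_y_pix):
    """Equation (7.6): height from a calibrated parallel projection."""
    return s_y * h_pix / s_y_pix

def parallel_height_error(h_pix, s_y, s_y_pix, m_hpix, m_sy):
    """Equation (7.8): mean-square error of that height."""
    return sqrt(m_hpix**2 * s_y**2 + m_sy**2 * h_pix**2) / s_y_pix

print(parallel_height(500, 2.0, 1200))                 # ~0,833 m
print(parallel_height_error(500, 2.0, 1200, 2, 0.20))  # ~0,083 m
```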
A sample use of these equations in height calculations is described in point 7.4.
7.3. Determining the interior camera parameters
The following cameras were used for the measurements:
- Olympus C-2 Zoom,
- Olympus C-765 UltraZoom.
The survey method is as follows:
- the calibration plane (used in the lens calibration process and visible in Figure 7-3) has to be photographed so that its projection covers most of the image,
- during the measurement the quantity $D$ has to be measured (the components $H$, $f$, $S_{YPIX}$, $h_{PIX}$ are determined from the image, using both EXIF data and the image dimensions),
- if the image of the calibration plane is slightly rotated, the rotation must be removed with a graphics processing application (while preserving the original dimensions of the picture),
- the image prepared in this way is ready for measuring the $h_{PIX}$ components.
During the survey, 6 photos were taken with the Olympus C-2 Zoom and 9 photos with the Olympus C-765 UltraZoom. The distances were measured with a tape measure with a precision of 0,01 m. The precision of the focal length was assumed to be 0,0005 m (±0,5 mm).
The $W$ and $H$ dimensions (from Figure 7-3) are $W$ = 0,24 m and $H$ = 0,18 m, measured on the plane with an accuracy of ±1 mm (0,001 m).
Figure 7-3. Photographed calibration plane with marked measurement quantities
Because of the perspective projection (point 7.1), the following expressions can be developed:
$$h = \frac{H \cdot f}{D} \qquad \text{and} \qquad \frac{2h}{h_{PIX}} = \frac{S_Y}{S_{YPIX}} \qquad (7.9)$$
which leads to the following expression:
$$S_Y = \frac{2 H \cdot f \cdot S_{YPIX}}{D \cdot h_{PIX}} \qquad (7.10)$$
Following error theory, the equation below can be created:
$$m_{S_Y} = \pm\sqrt{\left(\frac{\partial S_Y}{\partial H} \cdot m_H\right)^2 + \left(\frac{\partial S_Y}{\partial f} \cdot m_f\right)^2 + \left(\frac{\partial S_Y}{\partial h_{PIX}} \cdot m_{h_{PIX}}\right)^2 + \left(\frac{\partial S_Y}{\partial D} \cdot m_D\right)^2} \qquad (7.11)$$
Performing the necessary calculations, we have:
$$m_{S_Y} = \pm 2\sqrt{\frac{S_{YPIX}^2 \cdot \left(m_H^2 \cdot D^2 \cdot f^2 \cdot h_{PIX}^2 + m_f^2 \cdot h_{PIX}^2 \cdot D^2 \cdot H^2 + m_{h_{PIX}}^2 \cdot f^2 \cdot D^2 \cdot H^2 + m_D^2 \cdot h_{PIX}^2 \cdot H^2 \cdot f^2\right)}{D^4 \cdot h_{PIX}^4}} \qquad (7.12)$$
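A minimal sketch of equations (7.10) and (7.12) (the latter rewritten in its equivalent relative form); the sample values reproduce Plane 1 of the X axis in Table 7-1:

```python
from math import sqrt

def sensor_dim(plane_dim, f, d, dim_pix, img_pix):
    """Equation (7.10): camera image plane dimension SY (or SX)."""
    return 2 * plane_dim * f * img_pix / (d * dim_pix)

def sensor_dim_error(plane_dim, f, d, dim_pix, img_pix,
                     m_plane, m_f, m_d, m_pix):
    """Equation (7.12), rewritten as a sum of relative errors."""
    s = sensor_dim(plane_dim, f, d, dim_pix, img_pix)
    return s * sqrt((m_plane / plane_dim)**2 + (m_f / f)**2
                    + (m_d / d)**2 + (m_pix / dim_pix)**2)

# Plane 1, X axis of Table 7-1: W = 0,24 m, f = 5 mm, D = 0,38 m,
# w_pix = 1157 out of 1600 pixels, errors as in the table header:
print(sensor_dim(0.24, 0.005, 0.38, 1157, 1600))       # ~0,00873 m
print(sensor_dim_error(0.24, 0.005, 0.38, 1157, 1600,
                       0.001, 0.0005, 0.01, 2))        # ~0,00090 m
```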
For a more precise calculation, one might perform a series of measurements and calculate the resulting value and its total mean-square error from equations (7.13) and (7.14). The equations were developed with the help of error theory.
$$\overline{S_Y} = \frac{\dfrac{S_{Y1}}{m_{S_{Y1}}^2} + \dfrac{S_{Y2}}{m_{S_{Y2}}^2} + \cdots + \dfrac{S_{Yn}}{m_{S_{Yn}}^2}}{\dfrac{1}{m_{S_{Y1}}^2} + \dfrac{1}{m_{S_{Y2}}^2} + \cdots + \dfrac{1}{m_{S_{Yn}}^2}} \qquad (7.13)$$
$$M_{S_Y} = \pm\sqrt{\frac{m_{S_{Y1}}^2 + m_{S_{Y2}}^2 + \cdots + m_{S_{Yn}}^2}{n-1}} \qquad (7.14)$$
Thanks to that, the correct $S_Y$ dimensions can be calculated with simultaneous error determination. Of course, these calculations can be performed not only for the vertical but also for the horizontal dimension.
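A minimal sketch of equations (7.13) and (7.14) as reconstructed above (the weights 1/m² and the divisor n−1 are an interpretation, chosen because they reproduce the table values below):

```python
from math import sqrt

def combine_series(values, errors):
    """Weighted mean (7.13) and total mean-square error (7.14)
    of a measurement series (divisor n - 1 is an interpretation)."""
    weights = [1.0 / m**2 for m in errors]
    mean = sum(v * w for v, w in zip(values, weights)) / sum(weights)
    total = sqrt(sum(m**2 for m in errors) / (len(values) - 1))
    return mean, total

sx = [0.00873, 0.00870, 0.00869, 0.00868, 0.00869]
m_sx = [0.00090, 0.00091, 0.00090, 0.00090, 0.00091]
print(combine_series(sx, m_sx))  # ~ (0,00870, 0,00101), as in Table 7-1
```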
The next point describes the practical use of the equations developed above.
7.3.1. Results of research
The research was carried out for both the X and Y axes. For the X axis, the following symbols were introduced, corresponding to those used for the Y axis:
- $S_X$, $S_{XPIX}$ instead of $S_Y$, $S_{YPIX}$,
- $m_{S_X}$, $M_{S_X}$ instead of $m_{S_Y}$, $M_{S_Y}$,
- $W$, $w_{PIX}$ instead of $H$, $h_{PIX}$.
The results are collected in Tables 7-1 to 7-6.
Table 7-1. Results of camera image plane dimensions measurement for Olympus C-2 Zoom camera without lens distortion correction

Olympus C-2 Zoom, without lens distortion correction
Axis X: mD = ±0,01 m, mf = ±0,5 mm, mW = ±0,001 m, mwpix = ±2 pix, SXpix = 1600 pix

Plane | D [m] | f [mm] | W [m] | wpix [pix] | SX [m]  | mSX [m]
1     | 0,38  | 5,0    | 0,240 | 1157       | 0,00873 | 0,00090
2     | 0,34  | 5,0    | 0,240 | 1317       | 0,00870 | 0,00091
3     | 0,35  | 5,0    | 0,240 | 1262       | 0,00869 | 0,00090
4     | 0,36  | 5,0    | 0,240 | 1212       | 0,00868 | 0,00090
5     | 0,33  | 5,0    | 0,240 | 1359       | 0,00869 | 0,00091

SX ± MSX = 0,00870 ± 0,00101

Axis Y: mD = ±0,01 m, mf = ±0,5 mm, mH = ±0,001 m, mhpix = ±2 pix, SYpix = 1200 pix

Plane | D [m] | f [mm] | H [m] | hpix [pix] | SY [m]  | mSY [m]
1     | 0,38  | 5,0    | 0,180 | 875        | 0,00650 | 0,00067
2     | 0,34  | 5,0    | 0,180 | 998        | 0,00646 | 0,00068
3     | 0,35  | 5,0    | 0,180 | 956        | 0,00645 | 0,00067
4     | 0,36  | 5,0    | 0,180 | 917        | 0,00645 | 0,00067
5     | 0,33  | 5,0    | 0,180 | 1030       | 0,00645 | 0,00068

SY ± MSY = 0,00646 ± 0,00075
Table 7-2. Results of camera image plane dimensions measurement for Olympus C-2 Zoom camera with lens distortion correction

Olympus C-2 Zoom, with lens distortion correction
Axis X: mD = ±0,01 m, mf = ±0,5 mm, mW = ±0,001 m, mwpix = ±2 pix, SXpix = 1600 pix

Plane | D [m] | f [mm] | W [m] | wpix [pix] | SX [m]  | mSX [m]
1     | 0,38  | 5,0    | 0,240 | 1135       | 0,00890 | 0,00092
2     | 0,34  | 5,0    | 0,240 | 1300       | 0,00882 | 0,00092
3     | 0,35  | 5,0    | 0,240 | 1241       | 0,00884 | 0,00092
4     | 0,36  | 5,0    | 0,240 | 1191       | 0,00883 | 0,00092
5     | 0,33  | 5,0    | 0,240 | 1341       | 0,00881 | 0,00092

SX ± MSX = 0,00884 ± 0,00103

Axis Y: mD = ±0,01 m, mf = ±0,5 mm, mH = ±0,001 m, mhpix = ±2 pix, SYpix = 1200 pix

Plane | D [m] | f [mm] | H [m] | hpix [pix] | SY [m]  | mSY [m]
1     | 0,38  | 5,0    | 0,180 | 851        | 0,00668 | 0,00069
2     | 0,34  | 5,0    | 0,180 | 975        | 0,00661 | 0,00069
3     | 0,35  | 5,0    | 0,180 | 932        | 0,00662 | 0,00069
4     | 0,36  | 5,0    | 0,180 | 894        | 0,00662 | 0,00069
5     | 0,33  | 5,0    | 0,180 | 1006       | 0,00661 | 0,00069

SY ± MSY = 0,00663 ± 0,00077
Table 7-3. Comparison between measurement with and without lens distortion correction for Olympus C-2 Zoom

        | Without correction | With correction | Difference
SX [m]  | 0,00870            | 0,00884         | ±0,00014
SY [m]  | 0,00646            | 0,00663         | ±0,00017
mSX [m] | 0,00101            | 0,00103         | ±0,00002
mSY [m] | 0,00075            | 0,00077         | ±0,00002
Table 7-4. Results of camera image plane dimensions measurement for Olympus C-765 UltraZoom camera without lens distortion correction

Olympus C-765 UltraZoom, without lens distortion correction
Axis X: mD = ±0,01 m, mf = ±0,5 mm, mW = ±0,001 m, mwpix = ±2 pix, SXpix = 2288 pix

Plane | D [m] | f [mm] | W [m] | wpix [pix] | SX [m]  | mSX [m]
1     | 0,37  | 6,3    | 0,24  | 1785       | 0,01048 | 0,00088
2     | 0,34  | 6,3    | 0,24  | 1900       | 0,01056 | 0,00089
3     | 0,34  | 6,3    | 0,24  | 2003       | 0,01016 | 0,00086
4     | 0,33  | 6,3    | 0,24  | 2033       | 0,01031 | 0,00088
5     | 0,33  | 6,3    | 0,24  | 2080       | 0,01024 | 0,00087

SX ± MSX = 0,01035 ± 0,00098

Axis Y: mD = ±0,01 m, mf = ±0,5 mm, mH = ±0,001 m, mhpix = ±2 pix, SYpix = 1712 pix

Plane | D [m] | f [mm] | H [m] | hpix [pix] | SY [m]  | mSY [m]
1     | 0,37  | 6,3    | 0,18  | 1351       | 0,00777 | 0,00065
2     | 0,34  | 6,3    | 0,18  | 1439       | 0,00782 | 0,00066
3     | 0,34  | 6,3    | 0,18  | 1515       | 0,00754 | 0,00064
4     | 0,33  | 6,3    | 0,18  | 1541       | 0,00763 | 0,00065
5     | 0,33  | 6,3    | 0,18  | 1576       | 0,00758 | 0,00065

SY ± MSY = 0,00767 ± 0,00073
Table 7-5. Results of camera image plane dimensions measurement for Olympus C-765 UltraZoom camera with lens distortion correction

Olympus C-765 UltraZoom, with lens distortion correction
Axis X: mD = ±0,01 m, mf = ±0,5 mm, mW = ±0,001 m, mwpix = ±2 pix, SXpix = 2288 pix

Plane | D [m] | f [mm] | W [m] | wpix [pix] | SX [m]  | mSX [m]
1     | 0,37  | 6,3    | 0,24  | 1765       | 0,01059 | 0,00089
2     | 0,34  | 6,3    | 0,24  | 1988       | 0,01024 | 0,00087
3     | 0,34  | 6,3    | 0,24  | 1882       | 0,01066 | 0,00090
4     | 0,33  | 6,3    | 0,24  | 2019       | 0,01038 | 0,00088
5     | 0,33  | 6,3    | 0,24  | 2068       | 0,01029 | 0,00088

SX ± MSX = 0,01043 ± 0,00099

Axis Y: mD = ±0,01 m, mf = ±0,5 mm, mH = ±0,001 m, mhpix = ±2 pix, SYpix = 1712 pix

Plane | D [m] | f [mm] | H [m] | hpix [pix] | SY [m]  | mSY [m]
1     | 0,37  | 6,3    | 0,18  | 1324       | 0,00793 | 0,00067
2     | 0,34  | 6,3    | 0,18  | 1491       | 0,00766 | 0,00065
3     | 0,34  | 6,3    | 0,18  | 1413       | 0,00796 | 0,00067
4     | 0,33  | 6,3    | 0,18  | 1516       | 0,00776 | 0,00066
5     | 0,33  | 6,3    | 0,18  | 1550       | 0,00771 | 0,00066

SY ± MSY = 0,00780 ± 0,00074
Table 7-6. Comparison between measurement with and without lens distortion correction for Olympus C-765 UltraZoom

        | Without correction | With correction | Difference
SX [m]  | 0,01035            | 0,01043         | ±0,00008
SY [m]  | 0,00767            | 0,00780         | ±0,00013
mSX [m] | 0,00098            | 0,00099         | ±0,00001
mSY [m] | 0,00073            | 0,00074         | ±0,00001
7.3.2. Summary
As visible in the results, the influence of radial distortion can change the measured camera image plane dimensions by as much as ±0,00017 m (±0,17 mm) (see Tables 7-3 and 7-6). Thanks to multiple measurements, the dimension values can be calculated with increased precision.
Another observation is that the greater the distance and the focal length are, the smaller the total mean-square error becomes.
7.4. Sample height calculations
This point includes the results of sample height calculations for both projections. For the perspective projection, the influence of radial distortion on the measurement was examined. For the parallel projection, determining how the radial distortion affects the accuracy is harder, because it depends on the earlier plane calibration process (which might have been executed with varying accuracy – because of the characteristic affine transformation, its accuracy is hard to calculate – see [1]). For the parallel projection height calculations, some error values proportional to the measured plane are therefore assumed.
7.4.1. Height calculations for perspective projection
Table 7-7 presents a comparison between height calculations with and without lens distortion correction.
The mean-square error of the distance is as follows:
- D < 1,00 m: mD = ±0,01 m,
- D = 1,00 ÷ 9,99 m: mD = ±0,10 m,
- D > 10,00 m: mD = ±0,20 m.
Table 7-7. Sample height calculations for perspective projection

f = 5,0 mm for all rows; mf = ±0,0005 m; mhpix = ±2 pix (of the 1200-pix image height); mD varies with D as listed above.
Without lens distortion correction: SY ± MSY = 0,00646 ± 0,00075 m (columns H, mH, mH/H).
With lens distortion correction: SY ± MSY = 0,00663 ± 0,00077 m (columns H', mH', mH'/H').

D [m]  | hPIX [pix] | H [m]   | mH [m] | mH/H  | H' [m]  | mH' [m] | mH'/H' | |H-H'| [m]
0,25   | 100        | 0,027   | 0,003  | 11,0% | 0,028   | 0,003   | 11,0%  | 0,001
0,25   | 300        | 0,081   | 0,009  | 10,9% | 0,083   | 0,009   | 10,9%  | 0,002
0,25   | 500        | 0,135   | 0,015  | 10,8% | 0,138   | 0,015   | 10,8%  | 0,003
0,25   | 700        | 0,188   | 0,020  | 10,8% | 0,193   | 0,021   | 10,8%  | 0,005
0,25   | 900        | 0,242   | 0,026  | 10,8% | 0,249   | 0,027   | 10,8%  | 0,007
0,25   | 1100       | 0,296   | 0,032  | 10,8% | 0,304   | 0,033   | 10,8%  | 0,008
0,50   | 100        | 0,054   | 0,006  | 10,5% | 0,055   | 0,006   | 10,5%  | 0,001
0,50   | 300        | 0,161   | 0,017  | 10,3% | 0,166   | 0,017   | 10,3%  | 0,005
0,50   | 500        | 0,269   | 0,028  | 10,3% | 0,276   | 0,028   | 10,3%  | 0,007
0,50   | 700        | 0,377   | 0,039  | 10,3% | 0,387   | 0,040   | 10,3%  | 0,010
0,50   | 900        | 0,484   | 0,050  | 10,3% | 0,497   | 0,051   | 10,3%  | 0,013
0,50   | 1100       | 0,592   | 0,061  | 10,3% | 0,608   | 0,062   | 10,3%  | 0,016
1,00   | 100        | 0,108   | 0,015  | 14,3% | 0,110   | 0,016   | 14,3%  | 0,002
1,00   | 300        | 0,323   | 0,046  | 14,2% | 0,331   | 0,047   | 14,2%  | 0,008
1,00   | 500        | 0,538   | 0,076  | 14,2% | 0,552   | 0,078   | 14,2%  | 0,014
1,00   | 700        | 0,754   | 0,107  | 14,2% | 0,773   | 0,110   | 14,2%  | 0,019
1,00   | 900        | 0,969   | 0,138  | 14,2% | 0,994   | 0,141   | 14,2%  | 0,025
1,00   | 1100       | 1,184   | 0,168  | 14,2% | 1,215   | 0,172   | 14,2%  | 0,031
2,00   | 100        | 0,215   | 0,025  | 11,4% | 0,221   | 0,025   | 11,4%  | 0,006
2,00   | 300        | 0,646   | 0,073  | 11,3% | 0,663   | 0,075   | 11,3%  | 0,017
2,00   | 500        | 1,077   | 0,121  | 11,2% | 1,105   | 0,124   | 11,2%  | 0,028
2,00   | 700        | 1,507   | 0,169  | 11,2% | 1,547   | 0,174   | 11,2%  | 0,040
2,00   | 900        | 1,938   | 0,218  | 11,2% | 1,989   | 0,224   | 11,2%  | 0,051
2,00   | 1100       | 2,369   | 0,266  | 11,2% | 2,431   | 0,273   | 11,2%  | 0,062
3,00   | 100        | 0,323   | 0,035  | 10,8% | 0,331   | 0,036   | 10,8%  | 0,008
3,00   | 300        | 0,969   | 0,103  | 10,6% | 0,994   | 0,106   | 10,6%  | 0,025
3,00   | 500        | 1,615   | 0,171  | 10,6% | 1,657   | 0,176   | 10,6%  | 0,042
3,00   | 700        | 2,261   | 0,240  | 10,6% | 2,321   | 0,246   | 10,6%  | 0,060
3,00   | 900        | 2,907   | 0,308  | 10,6% | 2,983   | 0,316   | 10,6%  | 0,076
3,00   | 1100       | 3,553   | 0,377  | 10,6% | 3,646   | 0,387   | 10,6%  | 0,093
4,00   | 100        | 0,431   | 0,045  | 10,6% | 0,442   | 0,047   | 10,6%  | 0,011
4,00   | 300        | 1,292   | 0,134  | 10,4% | 1,326   | 0,138   | 10,4%  | 0,034
4,00   | 500        | 2,153   | 0,224  | 10,4% | 2,210   | 0,229   | 10,4%  | 0,057
4,00   | 700        | 3,015   | 0,313  | 10,4% | 3,094   | 0,321   | 10,4%  | 0,079
4,00   | 900        | 3,876   | 0,402  | 10,4% | 3,978   | 0,413   | 10,4%  | 0,102
4,00   | 1100       | 4,737   | 0,491  | 10,4% | 4,862   | 0,504   | 10,4%  | 0,125
5,00   | 100        | 0,538   | 0,056  | 10,5% | 0,552   | 0,058   | 10,5%  | 0,014
5,00   | 300        | 1,615   | 0,166  | 10,3% | 1,657   | 0,170   | 10,3%  | 0,042
5,00   | 500        | 2,692   | 0,276  | 10,3% | 2,762   | 0,284   | 10,3%  | 0,070
5,00   | 700        | 3,768   | 0,387  | 10,3% | 3,867   | 0,397   | 10,3%  | 0,099
5,00   | 900        | 4,845   | 0,497  | 10,3% | 4,972   | 0,510   | 10,3%  | 0,127
5,00   | 1100       | 5,922   | 0,608  | 10,3% | 6,077   | 0,624   | 10,3%  | 0,155
10,00  | 100        | 1,077   | 0,113  | 10,5% | 1,105   | 0,116   | 10,5%  | 0,028
10,00  | 300        | 3,230   | 0,332  | 10,3% | 3,315   | 0,341   | 10,3%  | 0,085
10,00  | 500        | 5,383   | 0,553  | 10,3% | 5,525   | 0,568   | 10,3%  | 0,142
10,00  | 700        | 7,537   | 0,774  | 10,3% | 7,735   | 0,794   | 10,3%  | 0,198
10,00  | 900        | 9,690   | 0,995  | 10,3% | 9,945   | 1,021   | 10,3%  | 0,255
10,00  | 1100       | 11,843  | 1,216  | 10,3% | 12,155  | 1,248   | 10,3%  | 0,312
20,00  | 100        | 2,153   | 0,222  | 10,3% | 2,210   | 0,228   | 10,3%  | 0,057
20,00  | 300        | 6,460   | 0,655  | 10,1% | 6,630   | 0,672   | 10,1%  | 0,170
20,00  | 500        | 10,767  | 1,090  | 10,1% | 11,050  | 1,119   | 10,1%  | 0,283
20,00  | 700        | 15,073  | 1,526  | 10,1% | 15,470  | 1,566   | 10,1%  | 0,397
20,00  | 900        | 19,380  | 1,961  | 10,1% | 19,890  | 2,013   | 10,1%  | 0,510
20,00  | 1100       | 23,687  | 2,397  | 10,1% | 24,310  | 2,460   | 10,1%  | 0,623
50,00  | 100        | 5,383   | 0,553  | 10,3% | 5,525   | 0,568   | 10,3%  | 0,142
50,00  | 300        | 16,150  | 1,631  | 10,1% | 16,575  | 1,674   | 10,1%  | 0,425
50,00  | 500        | 26,917  | 2,714  | 10,1% | 27,625  | 2,785   | 10,1%  | 0,708
50,00  | 700        | 37,683  | 3,798  | 10,1% | 38,675  | 3,898   | 10,1%  | 0,992
50,00  | 900        | 48,450  | 4,883  | 10,1% | 49,725  | 5,011   | 10,1%  | 1,275
50,00  | 1100       | 59,217  | 5,967  | 10,1% | 60,775  | 6,124   | 10,1%  | 1,558
100,00 | 100        | 10,767  | 1,105  | 10,3% | 11,050  | 1,134   | 10,3%  | 0,283
100,00 | 300        | 32,300  | 3,259  | 10,1% | 33,150  | 3,345   | 10,1%  | 0,850
100,00 | 500        | 53,833  | 5,425  | 10,1% | 55,250  | 5,568   | 10,1%  | 1,417
100,00 | 700        | 75,367  | 7,592  | 10,1% | 77,350  | 7,792   | 10,1%  | 1,983
100,00 | 900        | 96,900  | 9,759  | 10,1% | 99,450  | 10,016  | 10,1%  | 2,550
100,00 | 1100       | 118,433 | 11,927 | 10,1% | 121,550 | 12,241  | 10,1%  | 3,117
From the above results one can see that the differences are significant. For example, when measuring a typical building from a distance of 20 m (with the smallest focal length), when the image of the building spans 900 pixels (out of the 1200 pixels of image height), the difference between the heights is about 0,50 m. Such an error causes serious problems during reconstruction, especially when using perspective reconstruction or creating textures from the distorted image.
These calculations show unquestionably that lens distortion correction must be applied in order to make the reconstruction possible (or to make it more precise).
7.4.2. Height calculations for parallel projection
The same calculations have also been performed for the parallel projection (see Table 7-8).
The mean-square errors are as follows:
- SY < 1,00 m: mSY = ±0,05 m,
- SY = 1,00 ÷ 9,99 m: mSY = ±0,20 m,
- SY > 10,00 m: mSY = ±0,50 m,
- mhPIX = ±2/1200 pix.
Table 7-8. Sample height calculations for parallel projection

SY [m] | hPIX [pix] | H [m]  | mH [m] | mH/H
0,25   | 100        | 0,021  | 0,004  | 20,1%
0,25   | 300        | 0,062  | 0,013  | 20,0%
0,25   | 500        | 0,104  | 0,021  | 20,0%
0,25   | 700        | 0,146  | 0,029  | 20,0%
0,25   | 900        | 0,188  | 0,038  | 20,0%
0,25   | 1100       | 0,229  | 0,046  | 20,0%
0,50   | 100        | 0,042  | 0,004  | 10,2%
0,50   | 300        | 0,125  | 0,013  | 10,0%
0,50   | 500        | 0,208  | 0,021  | 10,0%
0,50   | 700        | 0,292  | 0,029  | 10,0%
0,50   | 900        | 0,375  | 0,038  | 10,0%
0,50   | 1100       | 0,458  | 0,046  | 10,0%
1,00   | 100        | 0,083  | 0,017  | 20,1%
1,00   | 300        | 0,250  | 0,050  | 20,0%
1,00   | 500        | 0,417  | 0,083  | 20,0%
1,00   | 700        | 0,583  | 0,117  | 20,0%
1,00   | 900        | 0,750  | 0,150  | 20,0%
1,00   | 1100       | 0,917  | 0,183  | 20,0%
2,00   | 100        | 0,167  | 0,017  | 10,2%
2,00   | 300        | 0,500  | 0,050  | 10,0%
2,00   | 500        | 0,833  | 0,083  | 10,0%
2,00   | 700        | 1,167  | 0,117  | 10,0%
2,00   | 900        | 1,500  | 0,150  | 10,0%
2,00   | 1100       | 1,833  | 0,183  | 10,0%
5,00   | 100        | 0,417  | 0,019  | 4,5%
5,00   | 300        | 1,250  | 0,051  | 4,1%
5,00   | 500        | 2,083  | 0,084  | 4,0%
5,00   | 700        | 2,917  | 0,117  | 4,0%
5,00   | 900        | 3,750  | 0,150  | 4,0%
5,00   | 1100       | 4,583  | 0,184  | 4,0%
10,00  | 100        | 0,833  | 0,024  | 2,8%
10,00  | 300        | 2,500  | 0,053  | 2,1%
10,00  | 500        | 4,167  | 0,085  | 2,0%
10,00  | 700        | 5,833  | 0,118  | 2,0%
10,00  | 900        | 7,500  | 0,151  | 2,0%
10,00  | 1100       | 9,167  | 0,184  | 2,0%
20,00  | 100        | 1,667  | 0,037  | 2,2%
20,00  | 300        | 5,000  | 0,060  | 1,2%
20,00  | 500        | 8,333  | 0,090  | 1,1%
20,00  | 700        | 11,667 | 0,121  | 1,0%
20,00  | 900        | 15,000 | 0,154  | 1,0%
20,00  | 1100       | 18,333 | 0,186  | 1,0%
50,00  | 100        | 4,167  | 0,085  | 2,0%
50,00  | 300        | 12,500 | 0,097  | 0,8%
50,00  | 500        | 20,833 | 0,118  | 0,6%
50,00  | 700        | 29,167 | 0,143  | 0,5%
50,00  | 900        | 37,500 | 0,172  | 0,5%
50,00  | 1100       | 45,833 | 0,201  | 0,4%
100,00 | 100        | 8,333  | 0,167  | 2,0%
100,00 | 300        | 25,000 | 0,174  | 0,7%
100,00 | 500        | 41,667 | 0,186  | 0,4%
100,00 | 700        | 58,333 | 0,203  | 0,3%
100,00 | 900        | 75,000 | 0,224  | 0,3%
100,00 | 1100       | 91,667 | 0,248  | 0,3%
7.5. Relationship between modelling errors and camera orientation for the parallel projection
For the parallel projection, it was examined how the relative orientation of the parallel cameras affects the modelling (reconstruction) errors.
The first thing that must exist in parallel reconstruction is a proper relative camera orientation. This orientation is represented by the angle found at the intersection of the lines created by the directions of the given cameras. The angle lies in the range α ∈ (0° ÷ 180°). The recommended value is 90° (which is commonly used in architectural reconstruction, e.g. from blueprints).
Because 3D point reconstruction is based on intersecting rays from (at least) two cameras using graphical methods (the user indicates particular points on the camera image planes), the reconstruction accuracy depends strictly on the precision of indicating the points. The resulting point lies within a characteristic error area. Since the errors usually follow a Gaussian distribution, the error area has a radial shape similar to an ellipse.
To show how the error area changes depending on the relative orientation angle, the following pictures were made (using a dedicated application that simulates a sample reconstruction) – see Figures 7-4 and 7-5. Both examples are depicted in 2D to clearly present the idea of the relationship between modelling errors and camera orientation; the 2D space does not obscure this idea.
Figure 7-4. Parallel reconstruction made with right angle
In the situation shown in Figure 7-4, two parallel projection planes are visible with possible target lines. The intersections of the given target lines create candidate points, coloured according to the probability of the given intersection (the greater the error, the smaller the probability). The assumed border value (marked with red colour) indicates the area where the future intersection is most probable.
Next, Figure 7-5 shows a reconstruction with an acute angle (45°).
Figure 7-5. Parallel reconstruction made with an acute angle
Comparing both pictures shows that the more acute the angle, the more possible points might be created (the error ellipse is much bigger for acute angles than for right angles).
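A minimal 2D sketch of this effect (an illustrative model, not the dedicated simulation application): when one target line is shifted sideways by a small error ε, the intersection point moves by roughly ε/sin α, which grows as the angle α departs from 90°:

```python
from math import radians, sin

def intersection_shift(eps, angle_deg):
    """Displacement of the intersection of two target lines meeting
    at angle_deg when one of them is shifted sideways by eps."""
    return eps / sin(radians(angle_deg))

print(intersection_shift(0.01, 90))  # 0,0100 -- right angle
print(intersection_shift(0.01, 45))  # 0,0141 -- acute angle, bigger error
```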
7.6. Summary
The equations presented in this Chapter for determining the mean-square errors were successfully checked in the sample calculations. The influence of radial distortion was especially visible here, which is another reason to use radial distortion correction during data processing.
The procedure for determining the real camera sensor dimensions was used with success. The tests showed that radial distortion has a substantial influence on the dimension results. Nevertheless, the dimensions of the sensor can be determined with good precision (±0,7 ÷ 1,0 mm), especially if one makes several measurements of the given value.
The tests showed that for perspective projections, the error of height determination (mH) depends linearly on the distance to the given object (D): the farther the object is from the lens, the bigger the error.
A very important point is the significant influence of radial distortion on the resulting image. The influence is too big to ignore, so proper correction methods must be used.
The significant question of the relative orientation of the parallel cameras and its relationship to the obtained reconstruction error was examined practically during the tests. The tests showed that the more the relative orientation angle differs from the right angle, the bigger the reconstruction errors become. The best solution is to reconstruct from right angles.
Of course, the same holds not only for parallel reconstruction but also for perspective restitution. Especially when taking photos, one has to remember the principle that photos should be taken at right angles to each other. Naturally, it is impossible to obtain an exactly right angle during measurement (photographing), but the angle should stay close to the right-angle value (90°).
8. Conclusions, observations and future development
Reconstruction is a complex process that has to be performed with proper precision in order to obtain a good quality model (for the given purposes).
The conclusions that result from the author's observations are described below. The conclusions (and observations) are divided into points that correspond to the given reconstruction stages.
The first and most significant aspect is that most reconstruction methods rely on the graphical interpretation of the obtained data (photointerpretation), so the accuracy of the resulting model depends first of all on the precision of extracting data from the source images.
8.1. Conclusions on the data collecting
To use particular photographs for reconstruction purposes (e.g. orthophotograph creation from perspective photos), the photographs must not be taken at overly acute angles. The source of this principle is described in Section 7.5 of Chapter 7.
Since digital cameras are used to acquire the photographs, proper devices should be chosen. It is known that digital cameras ensure better image quality (better precision) than internet cameras (which are burdened with significant radial distortion). It is recommended to use the highest resolution, but this always depends on the given circumstances (when the camera has a small memory card, fewer high-resolution images fit on it, which forces frequent transfers of the images to the computer during photographing – which is tiresome).
One should ensure good lighting of the photographed objects, especially when the object is small (like scale trains or other reduction models). Sometimes such models are weakly lit and a flash is used as an additional lighting source. This often causes the centre of the resulting image to be brighter than the rest of the image. Sometimes a significant reflection from the object occurs during flashing, especially when photographing the object at a right angle. This happens because these surfaces have their normal vectors turned towards the observer. Because the flash is commonly placed a little above the lens, most of the light is reflected straight back into the lens, which overexposes part of the image. In the case of small objects it is recommended to use additional, properly placed diffused lighting sources. A comparison between correct and incorrect lighting is shown in Figure 8-1.
Figure 8-1. Difference between proper and improper lighting (left: photo with flash; right: without flash but with additional lighting)
Even when the phototexture has poor quality (if the source photos were taken in poor lighting or are improperly lit owing to the flash), it is still possible to create a better texture – for example, in a graphics application like Adobe Photoshop or Corel PhotoPaint the textures can be corrected or created from scratch.
8.2. Conclusions on the data processing
Before the data is used in the modelling process, it has to pass through the preparation process. Frequently, the images have to be freed from radial distortion, and parallel projections have to be properly calibrated (in order to use them in the modelling process).
If the photographs are taken with a small focal length (when the radial distortion is biggest), one has to take care that the earlier calibration is made especially carefully.
Similarly, during the correction and calibration of orthophotographs or blueprints (where graphical fitting methods are used), the process should be carried out with special attention. Any imperfection of the fitting will appear during the modelling process and force the user to repeat the processing or the data acquisition.
8.2.1. Conclusion on lens distortion correction
The lens distortion correction described by the author was successfully used in the present Thesis. It should be mentioned that the solution works well with three coefficients; with more (five) coefficients the results are not as good (two coefficients are improperly recognized), and this element is still in the development phase. Correction with three coefficients is enough in most cases; the main attention should be focused on the lens calibration process. Thanks to the lens distortion correction, we can use cheap cameras in the reconstruction process. Naturally, the optics and sensors in more expensive cameras serve well, but cheap devices combined with correction offer similar performance.
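For illustration, a minimal sketch of a three-coefficient radial correction, assuming the commonly used polynomial model r_d = r_u(1 + k1·r_u² + k2·r_u⁴ + k3·r_u⁶); the thesis's exact model and coefficients are not restated here, so the code and its values are generic:

```python
import numpy as np

def undistort_points(pts, center, k1, k2, k3, iters=10):
    """Invert the polynomial radial model r_d = r_u * (1 + k1*r^2 +
    k2*r^4 + k3*r^6) by fixed-point iteration."""
    pts = np.asarray(pts, dtype=float) - center
    und = pts.copy()
    for _ in range(iters):
        r2 = np.sum(und**2, axis=1, keepdims=True)
        und = pts / (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3)
    return und + center

# Hypothetical coefficients for normalized image coordinates:
pts = np.array([[0.30, 0.40], [-0.20, 0.10]])
print(undistort_points(pts, center=0.0, k1=-0.10, k2=0.02, k3=0.0))
```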
8.2.2. Conclusion on perspective correction
When taking photos at angles different from the right angle, especially for creating textures, the perspective distortion should be removed. This process, described in the previous Chapters, is not hard. The significant factor, however, is the angle at which the photo was taken. When this angle is too acute, the part of the texture that has to be extracted is smaller. The perspective correction process then produces an image of the proper size, but part of the image will be blurred. This is shown in Figure 8-2. As can be seen, the left part of the image is blurred too much.
Figure 8-2. Texture created from one perspective photograph taken at an acute angle
Figure 8-3. Texture created from two perspective photographs taken at acute angles
To avoid this situation, one has to take two photos of the two sides of the object's facade. These photos then have to be superimposed in order to create a good quality texture. Of course, one should also consider some method of fitting the colours of both images (to make the texture seamless). A sketch of the underlying rectification step follows below.
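Rectifying a facade photo amounts to warping it by the homography that maps the facade's corners onto a rectangle. A minimal sketch of estimating such a homography with the Direct Linear Transform (the corner coordinates are hypothetical):

```python
import numpy as np

def homography(src, dst):
    """3x3 homography mapping four source points to four destination
    points (Direct Linear Transform, solved via SVD)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the right singular vector belonging to the
    # smallest singular value of the coefficient matrix.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

# Hypothetical facade corners in the photo -> an 800x600 texture:
h = homography([(120, 80), (900, 150), (880, 700), (100, 640)],
               [(0, 0), (800, 0), (800, 600), (0, 600)])
```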
An important issue is the stitches in the resulting image. The brightness of the individual images is not always equal. If the images differ even slightly from each other (in terms of brightness), the resulting (superimposed) image may contain artefacts, as shown in Figure 8-4.
Figure 8-4. Visible stitch of the result image
Another reason for a visible stitch is a poorly performed perspective correction process. In most cases it is possible to fit the images properly in order to avoid the stitches.
For stitches that result from brightness differences, the following solutions are possible:
- ensure better lighting when taking the photographs,
- manually smooth (and average) the stitched region,
- use a proper application in order to get the proper colour and brightness balance.
8.3. Conclusions on modelling process
In modelling, the main question is a good source for the modelling. Blueprints, orthophotographs etc. can be that source, and they have to be positioned in a proper way. Precise positioning (calibrating) is significant, as shown by the sample reconstructions (from Chapter 6), because graphical methods are used in the calibration process (e.g. determining reference points on the images).
The question described in Chapter 7 – how modelling errors influence the reconstruction process – says a lot about the strength of that influence. Because of it, one has to consider the following things:
- the way of taking photos (orientation angles),
- resolution and object coverage on the photograph,
- calibration of the used camera,
- accuracy of the whole reconstruction process.
If the processing is done with enough precision, the modelling process will not be too hard, and it will ensure the assumed precision of the geometry reconstruction.
Another important issue is the proper method of creating curved shapes, like arcs, circles etc. Because a low-polygon model is based on points and polygons, the arcs also have to be built from particular points. The point count (the points which create the given curved shape) depends on the size of the given detail – the user has to consider how many points are needed (a small sketch below illustrates one way to choose this count). If the chosen generalization is too coarse, texturing could be harder (some pixels that must be mapped onto the model will be copied onto the background). There must be a sufficient number of border points to clearly define the relevant texture map. In the case of inappropriate texture mapping, further position adjustments or another shape of the texture image will be necessary.
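A minimal sketch of one generic way to choose that point count (not a method prescribed in this Thesis): bound the sagitta – the maximum deviation of a chord from the true arc – using s = r(1 − cos(θ/2)):

```python
from math import acos, ceil, pi

def segments_for_circle(radius, max_deviation):
    """Smallest number of equal segments approximating a full circle
    so that each chord deviates from the arc by at most max_deviation
    (sagitta: s = r * (1 - cos(theta / 2)))."""
    theta = 2 * acos(1 - max_deviation / radius)
    return ceil(2 * pi / theta)

# A 1 m radius arch detail kept within 5 mm of the true curve:
print(segments_for_circle(1.0, 0.005))  # 32 segments
```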
Another significant question is whether to create only textures instead of full orthophotographs. This could save the user time when matching textures to the modelled object.
8.3.1. Conclusion on modelling from parallel projections
The described algorithm for finding the point nearest to two segments in space (based on [3]) is effective, and for reconstruction (modelling) purposes it is sufficient. Please note that before each point reconstruction from the segments, it should be tested that the segments are neither parallel to each other nor overlapping. A sketch of the underlying computation is given below.
The equation solving might be subject to optimization – instead of Cramer's rule, Gaussian elimination might be used.
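A minimal sketch of the core computation (a generic closest-point formulation standing in for the exact routine of [3]); it returns the midpoint of the shortest segment between two 3D lines, and None when the lines are (nearly) parallel:

```python
import numpy as np

def closest_point_between_lines(p1, d1, p2, d2, eps=1e-9):
    """Midpoint of the shortest segment between lines p1 + s*d1 and
    p2 + t*d2; returns None when the lines are (nearly) parallel."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    e, f = d1 @ r, d2 @ r
    denom = a * c - b * b            # zero for parallel directions
    if abs(denom) < eps:
        return None
    s = (b * f - c * e) / denom
    t = (a * f - b * e) / denom
    return ((p1 + s * d1) + (p2 + t * d2)) / 2

# Two skew lines: the x-axis and a line parallel to y at z = 1:
print(closest_point_between_lines((0, 0, 0), (1, 0, 0),
                                  (0, 1, 1), (0, 1, 0)))  # [0.  0.  0.5]
```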
8.3.2. Conclusion on modelling from perspective projections
Although the perspective reconstruction methods are not directly described in this Thesis, an estimate for sample perspective modelling was described in Chapter 7. Of course, the estimation methods concern only simple height calculations where most quantities are available, but including radial distortion in this process shows how much the results may differ.
The conclusion leads to the recommendation that the camera must be calibrated carefully. Lens correction and estimation of the real size of the camera's projection plane must be done. After that, one can determine the accuracy of the given project and use suitable methods to acquire the data.
The main geometry reconstruction and perspective camera calibration process is described in detail in [12].
8.4. Future developments
Because geometry reconstruction is a very wide subject, where many methods can be used to reach the assumed goal, the author decided to study the following things:
- further tests of the lens distortion correction – to search for better methods of lens distortion calibration (with more parameters that could better approximate the given distortion model),
- further tests concerning parallel modelling,
- perspective camera calibration methods,
- further tests concerning stitching photographs (into textures/orthophotographs) – methods for artefact removal, lighting adjustment etc.,
- development of the BluePrint Modeler application.
9. Acknowledgements
I want to thank my Supervisor, Dr inż. Sławomir Nikiel, for his aid and help with the research concerning this Thesis and related topics (e.g. for help with my first published article – see [11]). As can be seen, three years of cooperation gave very interesting results in the field of photogrammetry and geometry reconstruction. Thanks to the hints of Dr inż. Sławomir Nikiel, BluePrint Modeler is an application not only for making simple reconstructions but, first of all, for architectural modelling. I want to thank my Mentor for encouraging me to work in the amazing field of three-dimensional graphics.
Special thanks go to Maciek Gaweł; with his help we successfully created the lens distortion calibration method. Thanks to that, creating the lens distortion profile of any camera is faster and simpler than when I started to study the subject. His hints were also very helpful during the research and work.
The author also wants to thank the employees of the FotoJoker store, Mr. Adrian Kawalec and Mr. Grzegorz Pasiński, who made it possible to test most of the digital cameras (lens distortion correction – see Chapter 4). Their help during data acquisition for lens calibration was also very valuable.
Besides this, I want to thank the General Directorate of National Roads and Motorways (GDDKiA) for permission to take photographs of their building.
10. Literature
[1] Criminisi A., Reid I., Zisserman A., "A Plane Measuring Device", Department of Engineering Science, University of Oxford, UK.
[2] Cucchiara R., Grana C., Prati A., Vezzani R., "A Hough Transform-based method for Radial Lens Distortion Correction", 12th International Conference on Image Analysis and Processing (ICIAP'03), September 17-19, 2003, Mantova, Italy.
[3] DeLoura M., "Game Programming Gems 2", Charles River Media, Inc.
[4] Fangi G., Gagliardini G., Malinverni E.S., "Photointerpretation and small scale stereoplotting with digitally rectified photographs with geometrical constraints", C.I.P.A. International Symposium, Potsdam, September 18-21, 2001.
[5] Karras G.E., Mavrommati D., "Simple Calibration Techniques for Non-metric Cameras", C.I.P.A. International Symposium, Potsdam, September 18-21, 2001.
[6] Karras G.E., Mountrakis G., Patias P., Petsa E., "Modelling Distortion of Super-Wide-Angle Lenses for Architectural and Archeological Applications", Department of Surveying, National Technical University, Athens, Greece.
[7] Krauss K., "Photogrammetry vol. 1: Fundamentals and Standard Processes", Institute for Photogrammetry and Remote Sensing, Vienna University of Technology.
[8] Krauss K., "Photogrammetry vol. 2: Advanced Methods and Applications", Institute for Photogrammetry and Remote Sensing, Vienna University of Technology.
[9] Microsoft Corporation, "A flexible new technique for camera calibration", MSR-TR-98-71, 1998, p. 12.
[10] Mohr R., Triggs B., "Real Cameras Projection", http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/MOHR_TRIGGS/node10.html, 1998.
[11] Nikiel S., Kupaj M., "Fotogrametria w komputerowym modelowaniu obiektów architektonicznych" (Photogrammetry in computer modelling of architectural objects), V Konferencja Naukowa, Systemy pomiarowe w badaniach naukowych i w przemyśle SP'04, Łagów, 06/2004.
[12] Tsai R.Y., "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses", IEEE Journal of Robotics and Automation, vol. 3, no. 4, Aug. 1987, pp. 323-324.
[13] Vass G., Perlaki T., "Applying and removing lens distortion in post production", Colorfront Ltd., Budapest.