aspect 3D ImageScan Pro User Guide
USER GUIDE FOR IMAGESCAN PRO
VERSION BETA
© 2015 ArcTron GmbH. All rights reserved.
Table of Contents

1.1 INTRODUCTION
1.2 COMPUTER CONFIGURATION
2.1 BEFORE STARTING A PROJECT
2.1.1 INDEPENDENT CONVERGENT MODELS
2.1.2 STRIP MODELS
2.1.3 IMAGE FANS
2.2 INTRODUCTION TO THE CAMERA CALIBRATION
2.2.1 SELF-CALIBRATION METHOD
Exercise 1: Self-calibration
2.2.2 CALIBRATION USING CODED TARGET MARKERS
Exercise 2: Calibration using markers
2.2.3 CALIBRATION USING CALIBRATION PLATE (RING)
Exercise 3: Calibration using calibration plate
3. BUNDLE ADJUSTMENT AND GEO-REFERENCING
3.1.1 MONO-FOCUS BUNDLE ADJUSTMENT
3.1.2 MULTI-FOCUS BUNDLE ADJUSTMENT
3.1.3 AUTOMATIC GCP PICKING MODE
Exercise 4: Sparse/Dense Point Cloud, Geo-referencing and Mesh
4. GENERATE SPARSE / DENSE POINT CLOUD, DSM, ORTHOPHOTO AND MESH
Exercise 5: Create DSM and Orthophotos
REFERENCES
List of Figures

Figure 1: General workflow for 3D object reconstruction based on SfM
Figure 2: A screenshot from aspect 3D
Figure 3: Convergent model for image capturing
Figure 4: Strip model for image capturing
Figure 5: Image fan model for image capturing
Figure 6: A sample tool for camera configuration
Figure 7: A screenshot from the GSD calculator and flight planning tool of aspect 3D
Figure 8: Schematic representation of the barrel distortion and associated undistorted image
Figure 9: Demonstration of barrel distortion: (first row) chessboard with corresponding grid, (second row) two original images, (third row) rectified, radial-distortion-free images (images obtained from ref. 3)
Figure 10: ImageScan module and its components (ImageScan button highlighted by a red rectangle)
Figure 11: Starting a new project with ImageScan
Figure 12: Predefined settings of ImageScan for the self-calibration
Figure 13: Custom settings of ImageScan for the self-calibration
Figure 14: How to create the quadrate analysis of epipolar geometry
Figure 15: How to open the quadrate analysis of epipolar geometry
Figure 16: (upper row) A good match between a stereo pair, (lower row) low matching due to the camera pose between a stereo pair
Figure 17: Graphical user interface for calibration using coded target markers
Figure 18: Ring calibration plate
Figure 19: Graphical user interface (GUI) for calibration using the ring calibration plate
Figure 20: A screenshot from ImageScan showing 57 images with mono-focus
Figure 21: A screenshot from ImageScan showing 16 images acquired with two different cameras at different focal lengths (multi-focus)
Figure 22: How to set the principal distance of the MarkerPad in aspect 3D
Figure 23: Graphical user interface (GUI) for starting the ImageScan (SfM) process
Figure 24: 3D reconstruction of a small statue using the MarkerPad (with 12 images obtained by a Sony DSC-RX100)
Figure 25: Mesh created from (left) SfM and (right) structured light scanning (SLS)
Figure 26: Comparison of the meshes created from SfM and structured light scanning (SLS)
Figure 27: How to create and open the DSM and orthophoto in aspect 3D
Figure 28: Mesh, DSM and true orthophoto generated from the amphitheater at the University of Salzburg (57 images obtained with a Sony NEX-7 camera)
List of Tables

Table 1. CoC values of DSLR cameras
Table 2. Parameters for GSD calculation and flight planning
Table 3. List of camera intrinsic parameters
Table 4. Custom values for the camera calibration
Table 5. Quantitative evaluation of the camera calibration
Table 6. Comparison of different methods for camera calibration
ImageScan Pro: A Know-How Guide
User Guide for 3D Object Reconstruction Using
ImageScan Pro (aspect 3D)
Compatible with aspect 3D Version beta – Sept. 2015
Written by Dr.-Ing. Gholam Reza Dini - ArcTron 3D GmbH
1.1 Introduction
This user guide outlines the theoretical background of 3D object reconstruction using the ImageScan Pro module within aspect 3D and provides step-by-step instructions. The 3D reconstruction is based on multiple views in close-range photogrammetry, a technique termed "Structure-from-Motion" (SfM). The ImageScan Pro module contains the following features:
• Various tools for camera calibration
• Camera pose estimation and epipolar geometry
• Sparse and dense point cloud generation
• Digital surface model (DSM) and orthophoto generation
• Filtering and re-sampling of the point cloud
• Automatic and manual registration of the point cloud in a reference coordinate system using ground control points (GCPs)
• Meshing and photo-texturing
• Protocol generation
This tutorial helps you understand the basic concepts of 3D object reconstruction based on SfM, as well as how to apply these concepts within aspect 3D in practice. The following figure shows the main steps for 3D reconstruction based on SfM (see the workflow diagram in Figure 1).
1. Camera calibration
2. Camera pose estimation using bundle adjustment (creating the sparse point cloud)
3. Creation of the dense point cloud, point cloud filtering, meshing, and generation of the digital surface model (DSM) and orthophoto

Figure 1: General workflow for 3D object reconstruction based on SfM

1.2 Computer configuration
aspect 3D runs on any computer with Windows 7, but this is only the minimum configuration. For optimal performance, aspect 3D requires the following hardware/software configuration or better:

• Windows® 7 operating system, 64-bit
• CPU: Intel® Quad-Core™ i7 processor
• RAM: 16 GB DDR3
• Graphics card: NVIDIA GeForce series (with CUDA platform)

The use of the GPU in combination with a multi-core CPU accelerates the complex processing, so it is recommended to equip your PC with an appropriate GPU in order to shorten the run-time considerably.

Note: If your computer has a lower configuration, aspect 3D may still complete the processing; however, it will probably take a long time, or it may crash during the computation.

Figure 2: A screenshot from aspect 3D

Note that this tutorial assumes that users are familiar with the principal concepts of photogrammetry, computer vision, stereo vision and digital image processing (at an intermediate level). For up-to-date information, documentation and the online community for aspect 3D, see the aspect 3D website: http://aspect3d.arctron.de/

2.1 Before Starting a Project
For each photogrammetric project, it is essential to assess the requirements in accordance with the project purpose before starting. This preparation phase includes:

• Determining the image capturing technique
• Calculating the Ground Sampling Distance (GSD)
• Choosing the overlap between images (end laps and side laps)
• Planning the flight height
• Camera configuration

For 3D object reconstruction based on a photogrammetric approach, there are three different image capturing methods. In the design of the image capturing procedure, the maximum robustness and accuracy of the data is of course the most important factor.

2.1.1 Independent Convergent Models
In a convergent-pair project, the camera has a great deal of flexibility to move around the object. This produces a large overlap between multiple images; however, a significant disadvantage of this model is the varying distance between camera and object, which results in different scales throughout the images. Figure 3 shows a schematic view of the convergent model. This method is most suitable if only a single model of the object is required. Another drawback of this model is that each individual model must be fully controlled, even if there is sufficient overlap.

Figure 3: Convergent model for image capturing

2.1.2 Strip Models
This method of image capturing is the most common procedure in aerial photography. It consists of a series of parallel images with a minimum of 60% along-track overlap and 30% cross-track overlap. A key characteristic of this method is the good stability of the model, stemming from the high overlap between the multiple images. This allows the orientation information to be passed accurately between models, resulting in a dense point cloud with a minimal rate of noise. The main drawback of this method is that a camera with a longer focal length may not deliver promising results, because a longer focal length increases the height/base ratio and consequently reduces the depth accuracy.

Figure 4: Strip model for image capturing

2.1.3 Image Fans
This method was developed to overcome the height/base problem that arises when the camera is set to a long focal length (as discussed for the strip model). As shown in Figure 5, image capturing in this method is relatively similar to the convergent model; the difference is that more than one image is acquired at each camera position. This key characteristic makes it possible to reconstruct the object three-dimensionally even with a small overlap between models (at least 10%). In fact, by rotating the viewing angle at each camera position, the orientation information is shared between models, establishing a robust bundle adjustment model. Therefore, compared to the convergent model, there are fewer unknown parameters to be determined by the bundle adjustment.

Figure 5: Image fan model for image capturing

These three main image capturing methods should be considered before starting an SfM project; a suitable method can then be chosen based on the project requirements. Camera configuration is another issue to be considered. It is an important task because a good 3D reconstruction depends on the image quality (sharpness, contrast, etc.). For the camera configuration, the subject distance is a relatively important factor. There are various online and offline tools which can be used for this purpose (e.g. http://www.dofmaster.com). As shown in Figure 6, the CCD size, focal length, f-stop and object distance are the inputs for the Depth of Field (DoF) calculation. In order to calculate the DoF, it is first required to determine the circle of confusion (CoC). This is the maximum size of a circle that is still perceived as a point by the human eye. The sensor format is an important factor for determining the maximum size of the circle of confusion. This value is fixed for a given camera and can be found in the camera documentation.

Figure 6: A sample tool for camera configuration

Camera type | CoC value
35mm (full frame) | 0.3 mm
1.3x DSLR (EOS 1D) | 0.23 mm
1.5x DSLR (D100) | 0.2 mm
1.6x DSLR (EOS 10D) | 0.1875 mm
2x DSLR (E-1) | 0.15 mm

Table 1. CoC values of DSLR cameras

In order to calculate the DoF, first the hyperfocal distance should be calculated. The hyperfocal distance is the shortest focus distance at which everything from approximately half that distance to infinity appears sharp. It is calculated using the following formula:

H = f² / (N × c) + f

Where:
H: hyperfocal distance
f: focal length of the lens
N: aperture (f-stop)
c: circle of confusion constant

Near distance of acceptable sharpness, for a subject focused at distance s:

Dn = s × (H − f) / (H + s − 2f)
Far distance of acceptable sharpness:

Df = s × (H − f) / (H − s)

(For s ≥ H, the far distance extends to infinity.) Hence, the depth of field is obtained by subtracting the near distance from the far distance:

DoF = Df − Dn
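As a quick cross-check of these formulas, the short Python sketch below computes the hyperfocal distance and DoF. It is a minimal illustration; the function name and the example numbers are ours, not from aspect 3D.

```python
def depth_of_field(f_mm, n_stop, coc_mm, subject_m):
    """Hyperfocal distance and DoF from the formulas above.

    f_mm: focal length (mm); n_stop: aperture (f-stop);
    coc_mm: circle of confusion (mm); subject_m: subject distance (m).
    """
    f = f_mm / 1000.0                              # convert mm to metres
    c = coc_mm / 1000.0
    h = f * f / (n_stop * c) + f                   # H = f^2 / (N * c) + f
    s = subject_m
    near = s * (h - f) / (h + s - 2 * f)           # Dn
    far = float("inf") if s >= h else s * (h - f) / (h - s)  # Df
    return h, near, far, far - near

# Example: 19 mm lens at f/8 with CoC 0.02 mm, focused at 1.5 m
h, dn, df, dof = depth_of_field(19, 8, 0.02, 1.5)
print(f"H = {h:.2f} m, DoF from {dn:.2f} m to {df:.2f} m ({dof:.2f} m)")
```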
The last point to consider before starting a project is flight planning and the calculation of the associated GSD. The following table shows the parameters required to calculate the GSD and plan a flight accordingly.

Parameter | Symbol | How to calculate
Sensor Width (µm) | CCD | Input parameter
Focal Length (mm) | Foc | Input parameter
Flight Height (m) | FlgH | Input parameter
Field Length (m) | FldL | Input parameter
Field Width (m) | FldW | Input parameter
Image Length (pixel) | ImgL | Input parameter
Image Width (pixel) | ImgW | Input parameter
GSD (cm) | GSD | (CCD × FlgH) / (Foc × ImgL) × 100
Footprint Length (m) | FtpL | GSD × ImgL
Footprint Width (m) | FtpW | GSD × ImgW
Footprint Area (m²) | FtpA | FtpL × FtpW
Along Track Overlap (%) | AT | Input parameter
Cross Track Overlap (%) | CT | Input parameter
Vehicle Speed (km/h) | VS | Input parameter
Number of Images per Row | NrR | 100 × FldL / (FtpL × (100 − AT))
Number of Images per Column | NrC | 100 × FldW / (FtpW × (100 − CT))
Total Number of Images | NrT | NrR × NrC
Time Interval between Images (s) | Intr | (FtpL × (100 − AT)) / (27.778 × VS)
Scale Factor | SF | (FlgH × 1000) / Foc
Distance between Image Centers (Along Track) | DisAT | (FtpL × (100 − AT)) / 100
Distance between Image Centers (Cross Track) | DisCT | (FtpW × (100 − CT)) / 100
Proper Size of Marker | MS | GSD × 50

Table 2. Parameters for GSD calculation and flight planning

To calculate the GSD and plan a flight using aspect 3D, first go to the ImageScan menu, then click on GSD Calc, enter the input parameters and press Enter.

Figure 7: A screenshot from the GSD calculator and flight planning tool of aspect 3D
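The Python sketch below implements the core formulas from Table 2; it is an illustration, not aspect 3D's GSD Calc. Variable names mirror the table symbols. Note that the sketch takes the sensor width in mm and converts the footprints to metres so that the units come out consistently.

```python
def flight_plan(ccd_mm, foc, flg_h, fld_l, fld_w, img_l, img_w, at, ct, vs):
    """Flight planning from the Table 2 formulas.

    ccd_mm: sensor width (mm); foc: focal length (mm); flg_h: flight height (m);
    fld_l, fld_w: field size (m); img_l, img_w: image size (pixels);
    at, ct: along/cross-track overlap (%); vs: vehicle speed (km/h).
    """
    gsd = (ccd_mm * flg_h) / (foc * img_l) * 100       # GSD in cm/pixel
    ftp_l = gsd * img_l / 100.0                        # footprint length (m)
    ftp_w = gsd * img_w / 100.0                        # footprint width (m)
    nr_r = 100 * fld_l / (ftp_l * (100 - at))          # images per row
    nr_c = 100 * fld_w / (ftp_w * (100 - ct))          # images per column
    intr = (ftp_l * (100 - at)) / (27.778 * vs)        # trigger interval (s)
    marker = gsd * 50                                  # proper marker size (cm)
    return gsd, ftp_l, ftp_w, nr_r * nr_c, intr, marker

# Illustrative example: 23.5 mm sensor, 19 mm lens, 50 m flight height
gsd, fl, fw, n_img, intr, ms = flight_plan(23.5, 19, 50, 200, 150,
                                           6000, 4000, 60, 30, 20)
print(f"GSD = {gsd:.2f} cm, footprint = {fl:.1f} x {fw:.1f} m, "
      f"~{n_img:.0f} images, shoot every {intr:.1f} s")
```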
2.2 Introduction to the Camera calibration

A camera is essentially a tool which transforms a 3D coordinate in object space into a 2D coordinate in image space. However, in order to reconstruct objects three-dimensionally, or to measure distances in image space, some specialized processing is required. The main function of the camera lens is to scale down the object through a perspective transformation. Due to lens distortion, however, the projection of object space into image space is not as straightforward as it is in theory. The lens distortion can be modeled through a linear or non-linear least squares adjustment. In order to extract accurate metric information from 2D images, the elimination of the lens distortion is necessary. In recent years, a large number of high-quality yet inexpensive cameras have been developed. Following this advancement, these cameras can now be used even for demanding measurement tasks. This section introduces the principal concepts of camera calibration, followed by instructions for the practical implementation of these concepts within aspect 3D. The parameters for camera calibration include the intrinsic parameters (focal length and image center) as well as the radial and tangential distortions.

To describe camera calibration mathematically, let x be the representation of a 3D point in homogeneous coordinates (a 4-dimensional vector), and y the representation of the image of this point in the pinhole camera (a 3-dimensional vector). Then the following relation holds:

y ∼ C x

Where C is the camera matrix. In computer vision, it describes the mapping of a pinhole camera from 3D points in object space to 2D points in image space. The sign ∼ implies that the left- and right-hand sides are equal up to a non-zero scalar multiplication. Substituting the intrinsic parameters, the camera mapping can be expressed as below:

        | fX  0  cX |   | X |
w · y = |  0 fY  cY | · | Y |
        |  0  0   1 |   | Z |

The presence of w is explained by the use of homogeneous coordinates (w = Z). The unknown parameters are fX and fY (the focal lengths) and (cX, cY), the optical center expressed in pixel coordinates. In addition to the focal length and principal point of the camera, there is another important part of the camera model, the lens distortion, expressed by the distortion matrix. This is a row matrix with 5 elements representing the radial and tangential distortion, as shown below:

distortion coefficients = (k1, k2, p1, p2, k3)

k1, k2 and k3 represent the radial distortion, which is corrected through the following equations:

Radial distortion:

x_corrected = x × (1 + k1·r² + k2·r⁴ + k3·r⁶)
y_corrected = y × (1 + k1·r² + k2·r⁴ + k3·r⁶)

Figure 8: Schematic representation of the barrel distortion and associated undistorted image

(x, y) denotes the coordinates of a pixel in the input image, and its position in the rectified image is represented by (x_corrected, y_corrected); r is the radial distance of the pixel from the distortion center. The presence of radial distortion usually manifests in the form of the "barrel" or "fish-eye" effect, which is the most typical form of distortion (see Figure 9). A negative value of k1 causes barrel distortion (convergence), while a positive value indicates the divergent form of distortion (i.e. pincushion distortion). Figure 9 shows the fisheye effect on three sample images and their associated corrected images.

Figure 9: Demonstration of barrel distortion: (first row) chessboard with corresponding grid, (second row) two original images, (third row) rectified, radial-distortion-free images (images obtained from ref. 3)

In addition to the radial distortion, there is the tangential distortion, which usually occurs because of a misalignment of the lens. It is observed when the lens plane is not perfectly parallel to the imaging plane. This type of distortion is therefore also known as decentering distortion, because it causes a shift of the image center. The tangential distortion is calculated using the following equations:

Tangential distortion:

x_corrected = x + [2·p1·x·y + p2·(r² + 2x²)]
y_corrected = y + [p1·(r² + 2y²) + 2·p2·x·y]

The next sections explain the various methods of camera calibration, along with a few exercises that show you how to:
• Calibrate your camera using coded target markers, self-calibration or pattern calibration (using a calibration plate, which requires some laboratory equipment).
• Assess the accuracy of the calibration using a quantitative evaluation.

Symbol | Parameter description
f | Focal length
cX, cY | Principal point of the image
k1, k2, k3 | Radial distortion coefficients
p1, p2 | Tangential distortion coefficients

Table 3. List of camera intrinsic parameters
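To make the distortion model concrete, the small numpy sketch below applies the radial and tangential corrections from the equations above to normalized image coordinates. It is our illustration (not aspect 3D code), and the coefficient values in the example are made up.

```python
import numpy as np

def apply_distortion_model(x, y, k1, k2, k3, p1, p2):
    """Apply the radial (k1, k2, k3) and tangential (p1, p2) polynomial
    model from the equations above to normalized coordinates (x, y)."""
    r2 = x * x + y * y                                   # r^2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3       # radial factor
    x_c = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_c = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_c, y_c

# Illustrative coefficients: mild barrel distortion (negative k1)
print(apply_distortion_model(0.5, 0.25, k1=-0.2, k2=0.02, k3=0.0,
                             p1=0.001, p2=-0.0005))
```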
2.2.1 Self-calibration method

This method enables camera calibration using image features, without any calibration equipment; it is therefore also called "auto-calibration". Self-calibration is based on the identification and matching of image features obtained by the same camera from multiple views. First, image features and corresponding points are detected using the SIFT (Scale-Invariant Feature Transform) and RANSAC (Random Sample Consensus) algorithms, respectively. Projective geometry is then used to create two quadratic equations for each pair of views, containing five unknown parameters (also known as photogrammetric resection). The Levenberg-Marquardt (LM) bundle adjustment algorithm, based on a least squares adjustment approach, is then utilized to minimize the residual error. In self-calibration, the scene is reconstructed based on the properties of the essential matrix, iteratively varying the focal lengths and principal point to find the most stable solution. The essential matrix is calculated from the fundamental matrix by applying the intrinsic parameters of the stereo pair associated with the two views. Therefore, constraints on the essential matrix translate into constraints on the intrinsic parameters of the pair of cameras. This allows searching the space of intrinsic parameters in order to minimize a cost function related to these constraints. The following projective collinearity equations show the mathematical model for the self-calibration:

x = xP − f × [m11(X − X0) + m12(Y − Y0) + m13(Z − Z0)] / [m31(X − X0) + m32(Y − Y0) + m33(Z − Z0)]
y = yP − f × [m21(X − X0) + m22(Y − Y0) + m23(Z − Z0)] / [m31(X − X0) + m32(Y − Y0) + m33(Z − Z0)]

Where (x, y) are the coordinates of a feature in image space, (xP, yP) are the coordinates of the principal point of the lens in image space, f indicates the focal length, and the matrix m = (mij) refers to the camera orientation matrix; (X, Y, Z) are the object-space coordinates of the feature and (X0, Y0, Z0) is the position of the projection center.

The self-calibration algorithm has shown robust performance in the presence of noise, provided that the obtained images contain a large number of features. However, a significant drawback of this method is its computational complexity, which usually results in a longer run-time than the other calibration methods. To carry out the self-calibration procedure, the camera lens is usually focused at infinity if the distance between object and camera is more than 2-3 meters. The exception is when images are obtained from very close range. In such a case, the camera focus should first be adjusted to the object distance until the sharpness of the image is sufficient for the feature extraction. The sharper the images, the more precise the self-calibration becomes. To solve the quadratic equations, at least three images from different views are required, although in order to obtain a stable solution it is strongly recommended to take 8-16 images. Fewer than 8 images may not deliver an accurate estimation of the camera model; on the other hand, more than 16 may not improve the camera model, but only increases the run-time. The determination of the offsets of the principal point is also highly correlated with the tangential distortion. A misalignment of the lens optical centers (stemming from angular deviations of the lens components or the CCD sensor array) usually results in such an offset. To correct radial and tangential distortion, the influence of the pitch and yaw angles of the lens relative to the sensor should be considered. The adjustment is based on polynomial curve fitting.
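The pipeline described above (SIFT features, ratio-test matching, RANSAC-filtered essential matrix) can be sketched with OpenCV. This is a minimal illustration of one stereo-pair step, not aspect 3D's internal implementation, and it assumes a roughly known intrinsic matrix K; the feature count and similarity threshold loosely mirror the "Normal" level in Table 4.

```python
import cv2
import numpy as np

def pairwise_geometry(img1_path, img2_path, K):
    """SIFT matching and RANSAC essential-matrix estimation for one pair."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create(nfeatures=5000)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Keep matches whose best distance is clearly below the second best
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC discards outlier correspondences while estimating E
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    return E, int(mask.sum()), len(good)
```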
Exercise 1: Self-calibration

Note: Before you start the self-calibration experiment, configure your camera settings and then take a few images. The recommended number of images for the self-calibration is between 8 and 16. Try to photograph a well-textured object, with plenty of features in general and as many features as possible in the image corners in particular, because theoretically the distortion is maximal at the corners. For the modeling of the radial distortion, and consequently a good self-calibration, the features in the corners deliver an accurate estimation of the radial distortion (k1, k2, k3). The following sequence shows the self-calibration procedure within aspect 3D.

To open ImageScan, first run aspect 3D and click on the "ImageScan" button.

Figure 10: ImageScan module and its components (ImageScan button highlighted by a red rectangle)

To create a new project, add the images by clicking on "BROWSE IMAGE", or drag and drop your images.

Figure 11: Starting a new project with ImageScan

When the images are added, you will see their thumbnails at the bottom of the workbench. You can select or deselect each image to include it in or exclude it from the self-calibration process. Once the images are added to the project, the camera, the number of images and their focal lengths are detected automatically (see the left side of the workbench in Figure 12). For example, in this test, [12] 10.4mm indicates that 12 images have been added to the project and all of them have the same focal length of 10.4 mm. If the images are multi-focus, they are sorted by focal length. If a camera is not known in the aspect 3D camera database, click on "SET CAMERA CCD" and enter the CCD size. Finally, you should select the quality level at which you want to perform the self-calibration. There are three predefined modes:

Normal (Number of features = 5000, Feature similarity = 0.7)
Fine (Number of features = 10000, Feature similarity = 0.8)
Ultra (Number of features = 20000, Feature similarity = 0.9)

Figure 12: Predefined settings of ImageScan for the self-calibration

After choosing the quality level, press "START CALIBRATION" and the self-calibration will be performed. In the Log window you will see the progress of the calibration as well as the resulting output; in addition, aspect 3D automatically generates a protocol at the end, containing the results of each processing stage separately. Beyond the predefined settings, aspect 3D lets you customize the calibration process by choosing different values for "number of features" and "feature similarity", as shown in Table 4. To customize the self-calibration, click on the highlighted rectangle as shown below. The following window lets you customize the self-calibration; once you have selected your preferred input, press the "CONTINUE" button, which returns you to the previous window, where you can start the calibration.

Level of quality | Number of features | Feature similarity
Very Low | 1000 | 0.5
Low | 2500 | 0.6
Normal | 5000 | 0.7
High | 10000 | 0.8
Very High | 20000 | 0.9
Ultra | 40000 | 0.95

Table 4. Custom values for the camera calibration

During the self-calibration, aspect 3D compares each pair of images against each other, followed by the calculation of the reprojection error and the RMSE of the generated epipolar geometry. aspect 3D also automatically creates an 8×8 grid for each image, assessing the similarity and matching between a given image and the other images. Note that the higher the similarity between two images, the better the matching.

Figure 13: Custom settings of ImageScan for the self-calibration

Once the processing is finished, the reprojection error and the RMSE of the created epipolar geometry indicate the level of accuracy, as shown in Table 5.
Level of quality | Reprojection error | RMSE of epipolar geometry
Very Low | > 4.00 | > 4.00
Low | 1.00 - 3.99 | 1.00 - 3.99
Normal | 0.50 - 0.99 | 0.50 - 0.99
High | 0.25 - 0.49 | 0.25 - 0.49
Very High | 0.11 - 0.24 | 0.10 - 0.24
Ultra | < 0.10 | < 0.10

Table 5. Quantitative evaluation of the camera calibration

To analyze the accuracy of the epipolar geometry for each individual image, go to the thumbnail area on the left side of aspect 3D, right-click on a thumbnail image and select "Quadrate analysis of epipolar geometry".

Figure 14: How to create the quadrate analysis of epipolar geometry

Once the processing is finished, right-click on the project folder (e.g. B002) and select "Explore folder" (see Fig. 15).

Figure 15: How to open the quadrate analysis of epipolar geometry

Fig. 16 shows two examples of a good match and a bad match, respectively. The color of each rectangle in the 8×8 grid represents the number of corresponding features in that cell (white: few features, dark green: many features). The black vector shows the arithmetic mean of the reprojection errors in each cell, and the red circle indicates the RMS error of the epipolar geometry in each cell.

Figure 16: (upper row) A good match between a stereo pair, (lower row) low matching due to the camera pose between the stereo pair
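For readers who want to reproduce such figures outside aspect 3D, the numpy sketch below computes an RMS point-to-epipolar-line distance from matched points and a fundamental matrix. It is our illustration of the metric, not the tool's exact evaluation code.

```python
import numpy as np

def epipolar_rmse(pts1, pts2, F):
    """RMS symmetric epipolar distance (pixels) for matched points
    pts1, pts2 (Nx2 arrays) under the fundamental matrix F (3x3)."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])                 # homogeneous points
    x2 = np.hstack([pts2, ones])
    l2 = x1 @ F.T                                # epipolar lines in image 2
    l1 = x2 @ F                                  # epipolar lines in image 1
    d2 = np.abs(np.sum(x2 * l2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
    d1 = np.abs(np.sum(x1 * l1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
    return float(np.sqrt(np.mean(np.concatenate([d1, d2]) ** 2)))
```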
2.2.2 Calibration using coded target markers

The second way to calibrate a camera with aspect 3D is to utilize coded target markers. The same algorithm as in the self-calibration is used, but instead of feature extraction and feature matching, the centers of the markers are detected. The processing is therefore considerably faster than the self-calibration. As shown on the marker sheet, 96 coded target markers are available for the calibration. Note that each marker must be unique and must not be observed twice; otherwise it confuses the algorithm.

Exercise 2: Calibration using markers
Tip: The coded target markers can be used in two different forms: (i) as a regular grid in the form of the MarkerPad, which can be utilized for calibration and registration (as ground control points) simultaneously; (ii) as individual markers that can be placed anywhere for the registration of the point cloud into local coordinates.

1. Print out the markers (as shown on the next page)
2. Stick the markers onto your scene
3. Configure your camera settings
4. Adjust the focus on the object (the markers should be seen clearly)
5. Take a few photos of the markers, making sure markers also appear in the image corners
6. Simply drag & drop the images into aspect 3D
7. A window opens; click on Camera Calibration > Marker Calibration
8. While the calibration is performed, you can follow the progress and results in the Log window
9. Once the calibration is finished, the calibration file is saved as a text file; meanwhile, the intrinsic parameters and the reprojection error are shown by aspect 3D

Figure 17: Graphical user interface for calibration using coded target markers
2.2.3 Calibration using calibration plate (ring)
The last option which aspect 3D offers for camera calibration is calibration using a ring calibration plate.

Figure 18: Ring calibration plate

Table 6 compares the different methods of camera calibration with respect to their computational complexity, availability and calibration accuracy.

Factor | Self-calibration | Marker calibration | Ring calibration
Complexity of processing | Complicated | Simple | Simple
Availability | Everywhere | Moderate | Laboratory equipment
Accuracy | Depends on imaging | Moderate | Precise

Table 6. Comparison of different methods for camera calibration

Exercise 3: Calibration using calibration plate
Tip: The ring calibration plate is made of a stable material with a minimal amount of thermal expansion and contraction. The calibration plate also has a very smooth surface, so if you capture images under artificial lighting, take care to avoid reflections.

1. Configure your camera settings
2. Adjust the focus on the plate (the rings should be seen clearly)
3. Take a few photos of the plate, making sure a corner ring also appears in the image corners
4. Drag & drop the images into aspect 3D
5. A window opens; click on Camera Calibration > Pattern Calibration
6. Set the number of rows and columns of rings
7. While the calibration is performed, you can follow the progress and results in the Log window
8. Once the calibration is finished, the calibration file is saved as a text file; meanwhile, the intrinsic parameters and the reprojection error are shown by aspect 3D.

Figure 19: Graphical user interface (GUI) for calibration using the ring calibration plate
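aspect 3D's ring detection is its own implementation; as an illustration of the same pattern-calibration idea, the OpenCV sketch below calibrates from photos of a regular grid of circular targets. The function name and parameters are ours, not aspect 3D's.

```python
import cv2
import numpy as np

def calibrate_from_circle_grid(image_paths, rows, cols, spacing_m):
    """Calibrate a camera from images of a rows x cols grid of circular
    targets with known spacing (the circle-grid analogue of a ring plate)."""
    # Known plate geometry: targets on the z = 0 plane
    objp = np.zeros((rows * cols, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * spacing_m

    obj_points, img_points, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, centers = cv2.findCirclesGrid(gray, (cols, rows))
        if found:                  # use only images where the full grid is seen
            obj_points.append(objp)
            img_points.append(centers)

    # Returns the RMS reprojection error, intrinsics K and distortion coefficients
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                             size, None, None)
    return rms, K, dist
```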
3. Bundle adjustment and geo-referencing

As shown in Fig. 1, once the camera is calibrated, the next steps are the creation of the sparse point cloud, the dense point cloud, the DSM/orthophoto and the mesh. To generate the sparse point cloud, aspect 3D uses Levenberg-Marquardt (LM) bundle adjustment. aspect 3D also contains the SURE package, which is used for the dense point cloud; this is a newly developed package mainly based on the Semi-Global Matching (SGM) algorithm. Once the images are added to ImageScan, aspect 3D automatically detects, based on the EXIF data, whether the added set of images was obtained with the same focus (known as mono-focus) or acquired with various focus settings (multi-focus).
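To give a feel for what the LM bundle adjustment minimizes, here is a deliberately simplified single-camera pose refinement with SciPy; it uses a small-angle rotation approximation and made-up parameter names, so treat it as a sketch of the reprojection-error objective, not as aspect 3D's adjustment.

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, pts3d, pts2d, K):
    """Residuals between observed 2D points and projected 3D points.
    params = 3 small-angle rotation values + 3 translation values."""
    w, t = params[:3], params[3:]
    # Small-angle approximation: R = I + [w]_x (sketch only)
    R = np.eye(3) + np.array([[0, -w[2], w[1]],
                              [w[2], 0, -w[0]],
                              [-w[1], w[0], 0]])
    cam = pts3d @ R.T + t                    # points in the camera frame
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]        # perspective division
    return (proj - pts2d).ravel()

# Levenberg-Marquardt refinement of a rough initial pose:
# fit = least_squares(reprojection_residuals, np.zeros(6),
#                     args=(pts3d, pts2d, K), method="lm")
```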
3.1.1 Mono-focus bundle adjustment

In mono-focus mode, all the acquired images have the same focal length, so for image undistortion, camera calibration and, consequently, bundle block adjustment, aspect 3D computes all images within a single group.

Figure 20: A screenshot from ImageScan showing 57 images with mono-focus

Fig. 20 shows a mono-focus example comprising 57 images taken with a Sony NEX-7 camera (focal length 19 mm). As all images have the same focus, a single calibration file (including intrinsic and distortion parameters) is calculated, and the bundle adjustment is then performed based on this calibration information.
3.1.2 Multi-focus bundle adjustment

A multi-focus set of images can consist of images from different cameras, or of images obtained with the same camera but at various focal lengths. In this case, the camera calibration is performed for each focal length individually, and undistorted images are consequently generated for each group based on its calibration parameters. Fig. 21 shows a multi-focus example containing two different cameras (Nikon V1 and Sony RX-100) and three different focal lengths (8 images at 10 mm, 5 images at 14.75 mm and 3 images at 24.4 mm).

Figure 21: A screenshot from ImageScan showing 16 images acquired with two different cameras at different focal lengths (multi-focus)

As mentioned, although different calibration sets are used, a single bundle adjustment is performed in order to align the images and select the best possible epipolar image among all potential epipolar geometries. Finally, the sparse point cloud along with the camera orientations is delivered, which is then used for the dense point cloud generation.

3.1.3 Automatic GCP picking mode
As mentioned in the previous section, the coded target markers can be used either as a MarkerPad or as individual markers in order to register the point cloud in reference coordinates. If you want to register the output in a global or local coordinate system, ground control points (GCPs) must be used when the bundle adjustment is processed. Within aspect 3D, GCPs can be utilized in three different modes:

• Manual picking mode
• Automatic picking mode
• MarkerPad (regular network of markers)

The manual picking mode will be implemented in aspect 3D soon; currently, the automatic picking mode and the MarkerPad mode are active. In this section we explain how aspect 3D can automatically pick individual GCPs as well as GCPs on the MarkerPad. The MarkerPad is available in four different sizes (DIN A2, DIN A3, DIN A4 and DIN A5). For the automatic picking mode, the coordinates of the reference points should be saved as a text file with the format (ID X Y Z), as parsed in the sketch below. For scaling based on the MarkerPad, the user only needs to enter the serial number of the MarkerPad.
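A minimal Python reader for the GCP text format described above might look as follows; the coordinate values in the comment are hypothetical.

```python
def read_gcp_file(path):
    """Parse a GCP text file with one 'ID X Y Z' record per line.
    Returns a dict mapping marker ID to an (X, Y, Z) tuple."""
    gcps = {}
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) == 4:                  # skip blank/malformed lines
                gcp_id, x, y, z = parts
                gcps[gcp_id] = (float(x), float(y), float(z))
    return gcps

# Example file content (hypothetical coordinates):
# 101 4571221.32 5433410.75 412.07
# 102 4571224.81 5433412.10 412.11
```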
Exercise 4: Sparse/Dense Point Cloud, Geo-referencing and Mesh

Tip: If you print the MarkerPad yourself, you have to measure the principal distance before running the SfM and enter its value in meters in the corresponding setting of aspect 3D. The principal distance is the distance between the center of the first marker and the center of the last marker in the first row. If you use a MarkerPad of standard size printed by ArcTron 3D, you only need to enter its serial number.

1. Run aspect 3D, then ImageScan > ImageScan
2. In the opened window, load your project
3. Print out the MarkerPad or individual markers as GCPs
4. Measure the principal distance of the MarkerPad
5. Click on the ImageScan icon on the left side of aspect 3D as shown below
6. Click on GCP > Marker Paper Size
7. Set the principal distance in meters (see Fig. 22) and confirm with OK

Figure 22: How to set the principal distance of the MarkerPad in aspect 3D

8. Again go to the left side of aspect 3D and click on the ImageScan button
9. In the opened window, click on the Advance Mode button
10. Load your calibration file and the GCPs file (both as text files)
11. Choose one of the predefined settings in the quality button: "Normal", "Fine" or "Ultra". If you want to customize the SfM process, simply click on the associated button as shown below
12. Finally, check "CREAT MESH" if you want a mesh. To start processing, click on "START IMAGESCAN"

Figure 23: Graphical user interface (GUI) for starting the ImageScan (SfM) process

As an exercise, at the following link you will find 12 images of a small statue (shown below), acquired with a Sony DSC-RX100; the calibration file and the GCPs as a txt file are also available: http://aspect3d.arctron.de/

Figure 24: 3D reconstruction of a small statue using the MarkerPad (with 12 images obtained by a Sony DSC-RX100)

To generate the point cloud and mesh from the 12 Sony DSC-RX100 images, use the calibration and GCP files and follow the instructions described in Exercise 4. In order to assess the quality of the mesh created by SfM, the statue was also scanned by structured light scanning (SLS), which has a nominal accuracy of 0.1 mm. The two meshes were then compared against each other, as shown in Fig. 25 and Fig. 26.

Figure 25: Mesh created from (left) SfM and (right) structured light scanning (SLS)

The quantitative evaluation showed an RMSE of 0.3 mm. This range of error is absolutely acceptable for 3D object reconstruction. Furthermore, the created point cloud shows a low amount of noise. Thanks to the MarkerPad, the mesh and point cloud have real-world scale, so any distance can be measured as in reality.

Figure 26: Comparison of the meshes created from SfM and structured light scanning (SLS)
4. Generate sparse / dense point cloud, DSM, orthophoto and mesh

aspect 3D contains the SURE package, which is used for the creation of the dense point cloud, mesh, photo-texture, DSM and orthophotos. This is a newly developed package, mainly based on the Semi-Global Matching (SGM) algorithm. Beside the original images, SURE requires the image orientation as input, including the interior orientation (camera parameters) and the exterior orientation of the camera (rotation and translation). The computation for the dense matching starts with the determination of corresponding points along the epipolar line for each model. As a multi-view stereo solution, aspect 3D creates multiple models per image. The resulting point cloud, after filtering, can be used to create further geo-data such as the Digital Surface Model (DSM), the orthophoto and the mesh model. The following section shows how aspect 3D generates the point cloud and the associated mesh in practice; see Exercise 4 in the previous section, followed by Exercise 5, which shows how to create a DSM and orthophotos with aspect 3D.
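As a toy illustration of the gridding step that turns a filtered point cloud into a DSM (this is not SURE's algorithm), the numpy sketch below keeps the highest point per raster cell and marks empty cells for the later hole-filling interpolation.

```python
import numpy as np

def rasterize_dsm(points, cell_size):
    """Grid an Nx3 point cloud (X, Y, Z) into a simple DSM raster.
    Keeps the highest Z per cell; NaN marks holes to be interpolated."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    col = ((x - x.min()) / cell_size).astype(int)
    row = ((y - y.min()) / cell_size).astype(int)
    dsm = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, height in zip(row, col, z):
        if np.isnan(dsm[r, c]) or height > dsm[r, c]:
            dsm[r, c] = height               # a surface model keeps the top
    return dsm
```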
Exercise 5: Create DSM and Orthophotos

Note: aspect 3D utilizes a newly developed engine to create a DSM with minimal interpolation effect, as well as a corresponding true orthophoto. Once the dense point cloud and the mesh have been created, you can simply create the associated DSM and orthophoto. aspect 3D uses the filtered point cloud, followed by an interpolation in order to fill the holes. To create the DSM and orthophoto, right-click on the generated project as shown in Fig. 27, then click on "Create DSM".

Figure 27: How to create and open the DSM and orthophoto in aspect 3D

To generate the true orthophoto, aspect 3D automatically detects 3D edges (e.g. building roofs) and sharpens these areas as well as occluded areas. This is a key step in generating a true orthophoto (known as a refined orthophoto). To open the created DSM and orthophoto, right-click again on the generated project and click on "Explore folder"; you will be led to the associated folder containing the final DSM, the orthophoto and all intermediate results. As shown in Fig. 27, there are different folders within the DSM folder: "Cloud", "DSM" and "Ortho" contain the filtered point cloud, the initial DSM and the orthophotos, respectively. The "Interpolated" folders indicate that the holes in the point cloud, DSM or orthophotos have been filled by interpolation, and the "Refined" folders refer to the refinement of sharp edges (e.g. building outlines).

Figure 28: Mesh, DSM and true orthophoto generated from the amphitheater at the University of Salzburg (57 images obtained with a Sony NEX-7 camera)

References
1. Moe, D., Sampath, A., Christopherson, J. & Benson, M., 2010. Self-calibration of small and medium format digital cameras. International Archives of Photogrammetry and Remote Sensing (IntArchPhRS), Vol. XXXVIII, Part 7B, pp. 395-400.
2. Clarke, T.A. & Fryer, J.F., 1998. The development of camera calibration methods and models. Photogrammetric Record, 16(91), pp. 51-66.
3. Hartley, R.I. & Kang, S.B., 2005. Parameter-free radial distortion correction with centre of distortion estimation. Computer Vision, ICCV 2005, Vol. 2, pp. 1834-1841.
4. Zhang, Z., 2004. Camera Calibration. In: G. Medioni & S.B. Kang, eds., Emerging Topics in Computer Vision, Chapter 2, pp. 4-43. Prentice Hall Professional Technical Reference.
5. Camera calibration with OpenCV, last accessed 04.09.2015. http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
6. Frank, O., Katz, R., Tisse, C. & Durrant-Whyte, H.F., 2007. Camera calibration for miniature, low-cost, wide-angle imaging systems. British Machine Vision Conference 2007, p. 10.
7. Hartley, R.I., 1994. An algorithm for self calibration from several views. In: Proceedings of the Conference on Computer Vision and Pattern Recognition, Seattle, Washington, USA, pp. 908-912.
8. Wenzel, K., Rothermel, M., Haala, N. & Fritsch, D., 2013. SURE - The ifp software for dense image matching. Photogrammetric Week '13, Ed. D. Fritsch, Wichmann, Berlin/Offenbach, pp. 59-70.
9. Mendonca, P.R.S. & Cipolla, R., 1999. A simple technique for self-calibration. In: Proc. CVPR, Vol. I, Fort Collins, Colorado, pp. 500-505.
10. Birch, J., 2006. Using 3DM Analyst Mine Mapping Suite for Rock Face Characterisation. In: Laser and Photogrammetric Methods for Rock Face Characterization; Tonon, F. & Kottensette, J.T., Eds.; Colorado School of Mines: Golden, CO, USA, pp. 13-32.
11. www.dofmaster.com - online reference (last accessed 04.09.2015)