Instrument Design – Detailed Requirements
BeagleScan Report
3-D Laser Range Finder with the Beagleboard
Brandon Rosas
Table of Contents
Abstract
Introduction
Objectives
Methods and Procedures
    Equipment
    Procedure
Results
Conclusion
References
Abstract
Robots and humans alike can benefit from physical knowledge of their surroundings. For humans it
is useful to analyze an environment for hazardous situations before exploring it on foot; this is where
the robot comes in. For robots it is important to know what obstacles may lie in their projected path.
This project will attempt to accurately recreate the environment and objects near a laser range finder
implemented using the Beagleboard, and will also determine whether the Beagleboard is a suitable
platform for computer vision use.
Introduction
The concept of laser triangulation and the setup used in this project can be seen below:
Figure 1 - Triangulation Concept [6]
From Figure 1 we have the following variables that must be defined: x', y', f, b, and θ. The variables x' and y'
refer to pixel locations on the image plane. The variable f is the focal length of the camera and is calculated
through calibration. The variable b is the baseline length from the center of rotation of the laser to the
center of the image plane and is also calculated through calibration. The laser rotates about an axis parallel
to the Y axis, and the variable θ is the orientation of the plane made by the laser relative to the Z axis:
θ = 0° if the plane of light is parallel to the Y-Z plane, and θ = 90° if the plane of light is parallel to the
X-Y plane, assuming a counter-clockwise rotation.
We will know all of the variables defined above and need only to calculate the distance to the object.
We need a way to transform an image-plane point (x', y') into the point (x, y, z) positioned under the laser.
To begin we determine the normal to the plane of light,

    n = (cos θ, 0, sin θ)

and the baseline vector,

    b = (b, 0, 0)

The point p will lie in the plane of light if

    n · (p − b) = 0        (Equation 1)

Suppose that a point on the image plane of the laser's line is (x', y'), represented in 3D space as
p' = (x', y', f); then this will correspond to a ray in 3D space

    p = t p'        (Equation 2)

So we only need to solve for the scale factor t. Substituting Equation 2 into Equation 1 gives the following:

    t = (n · b) / (n · p') = b cos θ / (x' cos θ + f sin θ)

This will give us the following for finding the 3D coordinates:

    (x, y, z) = t (x', y', f) = b / (x' + f tan θ) · (x', y', f)        (Equation 3)
We will use Equation 3 to calculate the 3D coordinates of the object being scanned. Three documents
were referenced for these calculations and the design: "A 3D Scanning System Based on Laser Triangulation
and Variable Field of View" [6], Machine Vision [7], and the Springer Handbook of Robotics [1].
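
As a concrete check of the math, here is a minimal C++ sketch of Equation 3; the struct and function
names are illustrative and not taken from the project code:

    #include <cmath>

    struct Point3D { double x, y, z; };

    // Sketch of Equation 3: map an image-plane point (xp, yp) on the laser
    // line to 3D coordinates, given the focal length f (pixels), baseline b,
    // and laser-plane angle theta (radians, measured from the Z axis).
    Point3D triangulate(double xp, double yp, double f, double b, double theta)
    {
        // t = (n . b) / (n . p') with n = (cos theta, 0, sin theta),
        // b = (b, 0, 0), p' = (xp, yp, f), which simplifies to:
        double t = b / (xp + f * std::tan(theta));
        Point3D p = { t * xp, t * yp, t * f };
        return p;
    }

For a pixel on the camera's optical axis (xp = 0) this reduces to z = b / tan θ, which matches the
baseline geometry used in the calibration procedure later in this report.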
Objectives
This project performs image analysis to extract information about the surrounding area from the
laser's projection onto an image sensor (camera). The Beagleboard development board is used as the
main controller board, and we will determine whether the Beagleboard is suitable for such a vision
project. A block diagram of the design is given in Figure 2.
Figure 2 - System Block Diagram (controller board containing the microprocessor and DSP; image sensor providing the input view of the laser; laser mounted on a servo motor driven by the servo controller board)
The Beagleboard was chosen for its low cost and hardware capabilities. The Beagleboard houses a
600 MHz ARM Cortex-A8 processor and a TMS320C64x DSP that runs at 430 MHz. The ARM processor
will run the Angstrom Linux distribution and handle the image acquisition and servo repositioning.
The desired outcome of the project will be a working laser range finder that is capable of modeling its
surrounding area. The information gathered by the device will be passed to a client PC that will produce
a point cloud image based on the data. This will allow the user to determine what the environment is
like around the device and allow for a partial reconstruction of the objects surrounding the device.
Methods and Procedures
Equipment
The equipment used for this project is listed below in Table 1 and datasheets for specific hardware can
be located in the appendix.
Equipment                                        Cost
Beagleboard                                      Free
2 GB SD Card                                     Free
Playstation 3 USB Camera                         $30
LC532-5-3F Focusable Green Laser                 $40
Micro Maestro 6 Channel USB Servo Controller     $20
Futaba S3305 Servo                               $15
DDT500 Direct Drive Tilt                         $15
USB to Serial converter                          $9
USB A to USB A mini cables                       Free
DB9 NULL Modem cable                             Free

Table 1 - Equipment
A note about the LC532-5-3F laser: it is a class 3a laser. The laser classes are detailed below in Table 2,
obtained from OSHA [11].
Class   Implications
1       Safe.
2       The blink reflex of the eye will prevent damage. Do not stare deliberately into the laser.
2a      In the low-power region of class 2. Requires in excess of 1000 seconds of exposure for a retinal burn. Supermarket scanners are of this class.
3a      Output does not exceed 5 mW. Optical magnification can cause damage. Firearm laser sights and laser pointers are of this class. Only hazardous for intrabeam viewing. Some limited controls are usually recommended.
3b      5 to 500 mW power output. Can cause damage when viewed directly.
4       Output exceeds 500 mW of power. Can cause permanent damage to the eyes or skin without the need of optical magnification.

Table 2 - Laser Classes and Implications
Short-term viewing of the laser should not cause damage to the eye.
Procedure
The Beagleboard has been set up with a boot sector on the SD card to run a Linux distribution based on
Angstrom, built with OpenEmbedded (OE). This operating system was chosen for its wide use and large
user base for support. The setup procedure for the Beagleboard is not part of this project but can be
located in the appendix [8].
To begin, some drivers and libraries are needed on the Beagleboard to communicate with the camera
and the USB servo controller. The camera uses the gspca_ov534 driver, which is included in the OE
distribution. To communicate with the servo controller, the libusb 1.0 library is used, which is also
included with OE. The servo controller is installed as a serial port terminal on the Beagleboard.
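
Because the controller enumerates as a serial device, it can be opened like any other tty. A minimal
sketch in C++ follows; the device path is an assumption for illustration, and the actual node on the
Beagleboard may differ:

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <cstdio>

    // Open the Maestro's virtual serial port in raw mode so that command
    // bytes pass through unmodified. The path is illustrative.
    int open_maestro(const char *path /* e.g. "/dev/ttyACM0" */)
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return -1; }

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);               // no line discipline, raw bytes
        tcsetattr(fd, TCSANOW, &tio);
        return fd;
    }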
Now that the drivers and libraries needed for the project are installed, we begin by calculating the
baseline distance between the camera and the laser scanner. To do this, a small test application called
"baseline" was written using C++ and the OpenCV library [10]; it moves the servo to a desired position
and then has the camera capture an image of the laser's current position. The code is included in a zip
file for this project. The goal is to find the servo position that corresponds to the laser line being
centered in the middle of the image plane, using a crosshairs marker placed on the image for alignment.
Figure 3 - Baseline Calculation
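
A sketch of the idea behind the baseline application is shown below; this is not the project's actual
code. The command frame (0x84, channel, target low 7 bits, target high 7 bits, with the target in
quarter-microseconds) follows Pololu's documented Maestro compact protocol, and channel 0 for the
servo is an assumption:

    #include <opencv2/opencv.hpp>
    #include <unistd.h>

    // Move a servo channel to the given Maestro target position.
    void set_servo_target(int fd, unsigned char channel, unsigned short target)
    {
        unsigned char cmd[4] = { 0x84, channel,
                                 (unsigned char)(target & 0x7F),
                                 (unsigned char)((target >> 7) & 0x7F) };
        write(fd, cmd, sizeof(cmd));
    }

    // Position the laser, grab a frame, and overlay a crosshair so the
    // laser line can be visually centered on the image plane.
    void show_alignment_frame(cv::VideoCapture &cap, int fd, unsigned short target)
    {
        set_servo_target(fd, 0, target);   // channel 0 assumed
        usleep(200000);                    // let the servo settle
        cv::Mat frame;
        cap >> frame;
        cv::Point c(frame.cols / 2, frame.rows / 2);
        cv::line(frame, cv::Point(c.x, 0), cv::Point(c.x, frame.rows),
                 cv::Scalar(0, 0, 255));
        cv::line(frame, cv::Point(0, c.y), cv::Point(frame.cols, c.y),
                 cv::Scalar(0, 0, 255));
        cv::imshow("baseline", frame);
        cv::waitKey(1);
    }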
This resulted in a servo position of 5550 for the center alignment. To determine what angle this
corresponds to, we use a linear scale between the servo's end marks, where a servo value of 4400
represents one end of the travel and a value of 8000 represents the other; the angle follows by linear
interpolation between the angles at those marks, using the fraction (5550 − 4400) / (8000 − 4400) of the
full travel. So we know the angle θ that the laser is positioned at, and we also know the distance of the
object from the camera is 10 inches, or 25.4 cm. The baseline then follows from the triangulation
geometry:

    b = 25.4 cm × tan θ

So we now know our baseline length. I used a ruler to estimate the distance and both agree. The
constant was added to the constants code file.
At this point I will introduce another constant that was used, the increment between scans. I settled on
a fine increment of 25 servo units per step, which over the servo's full travel gives

    (8000 − 4400) / 25 = 144 scans        (Equation 4)

The value of 25 was placed as the angle increment in the constants code file. Equation 4 also gives us an
idea of how much data we will be processing during the scan, since we will be loading, saving, and
converting 144 images for the full scan.
The next step is to find our last missing value, the focal length of the camera. A small C++ application
was written to capture the images from the camera using OpenCV. The code for the application is in the
zip file with this report and is named "calibration_images". We will also need the calibration_grid.png
that I created; it is used to calibrate the camera and is included in the zip file in the same directory
as the code. To use the MATLAB camera calibration toolbox we first download it [4]. This toolbox allows
for the calculation of several of the camera constants. The first step is to navigate to the directory
where the toolbox was downloaded; among the many .m files there, we are interested in calib_gui.m in
particular. At the MATLAB command prompt type calib_gui and the following menu should appear:
Figure 4 - Calibration Toolbox Initial Screen
Choose either option. I chose the standard option since I have plenty of memory in my laptop and we
are only loading a small number of images (15). You will then see the following menu displayed:
Figure 5 - Calibration Toolbox Main Screen
Next, change the directory to where the images you wish to use for calibration are stored; for me it is
the directory where the calibration application mentioned earlier is located. Once this has been done,
click the "Image names" button in the calibration toolbox menu. You will be prompted to enter the
image suffix and the image extension as shown below:
Figure 6 - Calibration Toolbox Image Loading
Afterwards a window will be displayed showing all the images that have been loaded
Figure 7 - Calibration Toolbox Image Load Results
Now that the images have been loaded we can close this window. Next, hit the "Extract grid corners"
button of the camera calibration toolbox menu and accept all defaults at the MATLAB command
prompt. After accepting the defaults you will be shown an image on which you must click the corners of
the squares to place points, as shown below, starting in the upper left and working clockwise around to
the lower left.
Figure 8 - Calibration Toolbox Initial Corner Definition
At the MATLAB command prompt, enter 30 mm for the length and width of the squares. After doing this,
the squares' lengths and widths are placed on the image.
Figure 9 - Calibration Toolbox Initial Corner Definition 2
If the red crosshairs do not line up with the corners of your squares, make sure that the measured
calibration squares are 30 mm in length and width. Now return to the MATLAB command prompt and
accept the default for the message displayed. Once this is done you will be given another image on
which you must mark the four corners. Continue this process until all the images have been worked
through. Once complete, the extracted corners will be shown.
Figure 10 - Final Extracted Corners Image
Now, in the MATLAB calibration toolbox menu, hit the "Calibration" button. This will calculate the
camera's parameters from the data gathered during corner extraction. The following will be displayed
at the command prompt as a result:
Figure 11 - Camera Calibration Results
The focal length is the only parameter we are interested in for the camera used in this project.
Rounding to the nearest pixel gives a focal length of 568 pixels. This constant is placed in the
constants code file.
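
Gathering the values found so far, the constants file might look something like the sketch below; the
names are illustrative rather than taken from the project code, and the baseline value is a placeholder
since its measured value appears only in the constants code file:

    // Illustrative constants sketch; names are not from the project code.
    const int    SERVO_MIN       = 4400;   // Maestro target at one end of travel
    const int    SERVO_MAX       = 8000;   // Maestro target at the other end
    const int    SERVO_CENTER    = 5550;   // target where the laser is centered
    const int    SERVO_INCREMENT = 25;     // step between scans, giving 144 images
    const double FOCAL_LENGTH_PX = 568.0;  // from the MATLAB calibration toolbox
    const double BASELINE_CM     = 0.0;    // placeholder: see the constants code file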
Now that the calibration constants have been obtained we can look at the algorithm used for scanning
the objects. Figure 12 shows the flow of the scanning and co-ordinate generation process.
Figure 12 - Coordinate Generation Flowchart (initialize the system; position the servo and acquire an image until all scans are complete; then read in each image, threshold it, and calculate distances until image processing is complete, writing the results to a coordinate file; finalize the system)
The output files from the scanner will be a color image, grayscale image, and threshold image when in
DEBUG mode. An example of a scan for each image type using my hand as the object to scan is given
below in Figure 13, Figure 14, and Figure 15.
Figure 13 - Example Color Image Scan
Figure 14 - Example Grayscale Image Scan
Figure 15 - Example Threshold Image
The threshold image is the one of most importance; it is what the algorithm uses to determine which
pixel in the image plane to use. The point is obtained by going through each row of pixels until the first
occurrence of white (255) is located, continuing until there is no longer a white pixel, and taking the
middle pixel of that width. If another white area is found in the same row, the larger area of white
pixels is used. This addresses a problem called fanning, which occurs when the camera captures the
laser hitting the object and also hitting behind the object. Choosing the larger white area of the
threshold image chooses the object closer to the device and should solve the problem. A sketch of this
row scan is shown below.
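
A minimal sketch of that row scan, assuming an 8-bit single-channel threshold image (this is
illustrative, not the project's actual code):

    #include <opencv2/opencv.hpp>

    // For one row of the threshold image, find every run of white (255)
    // pixels, keep the widest run to reject fanning behind the object,
    // and return the center column of that run (-1 if the row is empty).
    int laser_column(const cv::Mat &thresh, int row)
    {
        int bestStart = -1, bestWidth = 0, start = -1;
        for (int col = 0; col <= thresh.cols; ++col) {
            bool white = (col < thresh.cols) &&
                         (thresh.at<unsigned char>(row, col) == 255);
            if (white && start < 0) {
                start = col;                     // a run begins
            } else if (!white && start >= 0) {
                int width = col - start;         // a run just ended
                if (width > bestWidth) { bestWidth = width; bestStart = start; }
                start = -1;
            }
        }
        return (bestWidth > 0) ? bestStart + bestWidth / 2 : -1;
    }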
To build the application we use a virtual machine (VM) running Ubuntu 9.10. The application code files
and makefile are copied to a directory on the Ubuntu VM; we then open a terminal and navigate to that
directory. Building with make produces an output file, "main". The Ubuntu display in Figure 16 shows
the results of the make operation.
Figure 16 - Ubuntu Terminal Window
A quick look at the makefile in the zip folder shows directories specified for OE libraries. These libraries
were built for the OE operating system on the Beagleboard. Referring to the website in the references,
one can set up a cross-compilation system for the Beagleboard [5].
Copy the “main” file and “images” directory from the build directory on Ubuntu and place them on the
SD card in the root user’s directory. Now we need to connect everything as shown in Figure 17. A USB
to serial adapter, a DB9 null modem cable, and a DB9 to IDC10 cable are needed to connect to the serial
port of the Beagleboard. A USB A to USB A mini cable is used to power the Beagleboard. Another USB A
to USB A mini cable attaches the Beagleboard to a USB hub, which in turn connects the Playstation 3
Eye camera and the Maestro USB servo controller to the Beagleboard. The Maestro USB servo
controller and the LC532-5-3F laser both need a 5 V power supply for operation. I scavenged an old
power supply rated at 5.2 V and 800 mA for the project. The laser draws close to 300 mA and the servo
draws close to 250 mA under the load of the laser and the laser’s mount, so the combined draw of
roughly 550 mA is well within the 800 mA rating. The setup can be seen in Figure 17 and the scanning
device to connect to in Figure 18.
Figure 17 - Setup
Figure 18 - Scanning Device
Once the “main” file has been copied to the card and the SD card has been placed in the Beagleboard,
we start up a serial terminal emulator (I use Tera Term [9]) and connect to the Beagleboard with the
settings shown in Figure 19.
Figure 19 - Tera Term Serial Settings
Once this is complete, plug the Beagleboard’s USB power into the PC and the Beagleboard will begin to
boot. Once the login screen appears, log in as the root user. Navigate to the project directory and run
the application by typing “./main”. The servo will position itself back to the starting position if not
already there and begin scanning towards the end of its travel, capturing a color image at each step,
converting it to grayscale and threshold images, and saving them to the SD card. After scanning, the
servo will position itself back to the starting position and begin processing the threshold images for
co-ordinate calculation. A sketch of this scan loop is given below.
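
A rough sketch of that scan loop follows; the threshold value of 200 is illustrative (the report tunes
the threshold per environment), and set_servo_target() is the helper sketched earlier:

    #include <opencv2/opencv.hpp>
    #include <unistd.h>
    #include <cstdio>

    // Sweep the servo across its travel, capturing and thresholding an
    // image at each of the 144 positions.
    void scan(cv::VideoCapture &cap, int fd)
    {
        int index = 0;
        for (int pos = 4400; pos < 8000; pos += 25, ++index) {
            set_servo_target(fd, 0, pos);
            usleep(200000);                 // 0.2 s for the servo to settle

            cv::Mat color, gray, thresh;
            cap >> color;
            cv::cvtColor(color, gray, cv::COLOR_BGR2GRAY);
            cv::threshold(gray, thresh, 200, 255, cv::THRESH_BINARY);

            char name[64];
            snprintf(name, sizeof(name), "images/thresh_%03d.png", index);
            cv::imwrite(name, thresh);      // DEBUG mode also saves color/gray
        }
    }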
Results
I performed an initial scan of a box placed approximately 7 inches from the camera and laser
scanner, which can be seen in Figure 20.
Figure 20 - Initial Box Scan
After filtering out some of the laser scan data that landed on the ground, I obtained the graph in Figure
21 using MATLAB.
Figure 21 - Box Scan
The graph shows that the data is scattered close to the 17.78 cm (7 inches) that the box is approximated
to be from the scanning device. This is not a precise measurement, however: the box is not perfectly
parallel with the height-width plane, and it was not placed on a perfectly flat surface, both of which
imply that the distance changes throughout the surface scan. Another tool that can assist in measuring
the accuracy of the device is a histogram of the points displayed in Figure 21. The histogram can be
seen in Figure 22.
Figure 22 - Histogram
The histogram helps show that the distribution is centered close to 17.78 cm.
To test how well the device calculates the co-ordinates of an object, I laid my hand down in front of the
scanner and ran the application. After filtering some of the scanning points from the ground, I obtained
the point clouds shown in Figure 23 and Figure 24 from MATLAB.
Figure 23 - Hand Scan
Figure 24 - Hand Scan Different Perspective
The total scanning time for the hand capture was 113 seconds and the total time for performing the
co-ordinate calculations was 45 seconds, as can be seen in the output in Figure 25. The co-ordinate
calculation time did not fluctuate much, and the scanning time varied around a center of approximately
130 seconds when taking other scans into consideration. I was surprised that the co-ordinate
calculation time did not fluctuate much: that piece of code checks anywhere from 1 to 640 pixels per
row depending on where the brightest pixel is and the width of the brightest section, yet the timing did
not vary as much as expected. The co-ordinate calculation code is included in the zip folder.
Figure 25 - Timing Output
Another piece of information that the application captures when in debug mode is the CPU and memory
utilization. When the application was capturing the images and saving them to disk, CPU utilization
was close to 60% and memory consumption was close to 10% of the 256 MB on the Beagleboard.
When the application was converting the images to grayscale and thresholding, the CPU was close to
95% utilized and memory was about 40% used. Once the images had been analyzed, CPU utilization
dropped back down to about 53% while writing the data points to disk, with memory staying constant
at about 40% utilization. The filtered data can be seen in Table 3.
PID   USER  PR  NI  VIRT   RES  SHR   S  %CPU  %MEM  TIME+     COMMAND
2091  root  20  0   44172  21m  5388  R  59.3   9.4  0:08.92   main
2091  root  20  0   114m   92m  3136  R  95.3  39.8  1:11.77   main
2091  root  20  0   114m   93m  3164  R  53.5  40.1  1:53.42   main

Table 3 - Top Output Filtered
The CPU utilization is rather high for the image conversion portion of the application, but this is to be
expected given the amount of data being processed. The memory consumption during the co-ordinate
writing portion of the application is also worth noting: it steadily increases as the application saves the
co-ordinate points in memory and loads the images. It seems as though the version of OpenCV that I am
currently running may have a memory leak issue with the images.
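
One simple way to log the application's own memory use in debug mode is to read the resident set size
from /proc; this is an assumption about the mechanism, not necessarily how the project gathered its
numbers (which match top output):

    #include <cstdio>
    #include <unistd.h>

    // Return this process's resident set size in kilobytes by reading
    // /proc/self/statm and scaling page counts by the system page size.
    long resident_kb()
    {
        long size = 0, resident = 0;
        FILE *f = fopen("/proc/self/statm", "r");
        if (!f) return -1;
        fscanf(f, "%ld %ld", &size, &resident);
        fclose(f);
        return resident * sysconf(_SC_PAGESIZE) / 1024;
    }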
Conclusion
The purpose of this project was to determine how well the Beagleboard would perform as a computer
vision solution for a possible robotics or handheld manufacturing scanner. The results above
demonstrate that the Beagleboard’s onboard memory and CPU qualify the board as suitable to handle
the load of such a project. The overall time to complete a scan was approximately 3 minutes, and total
disk space usage was 30 MB for picture storage in DEBUG mode (color, grayscale, and threshold images
stored) and 15 MB in non-DEBUG mode (only color images stored). There is quite a bit of room for
improvement, however, in three areas: scanning time, CPU utilization, and imaging problems.
To reduce scanning time, the servo could be set up to run in a continuous motion from one end mark to
the other. This would require timing calculations coordinating the servo with the camera’s image
acquisition. The current project simply moves the servo to the next position and waits 0.2 seconds for it
to reach that position, a delay chosen based on the servo datasheet [3].
To reduce the CPU utilization, or the time that the CPU is occupied by the application, we could
optimize OpenCV. OpenCV does not currently have a version optimized for the Beagleboard’s DSP. If
this were implemented, the time spent on the ARM processor could be reduced, making for a far more
efficient application than one that leaves the DSP idle.
There are quite a few issues with the laser and camera interface that can cause problems in the
resulting point cloud. Reflection of the laser off a partially reflective or shiny medium can cause
erroneous readings. I compensated for this in the system by calibrating the thresholding value for my
environment; the thresholding could be further tuned in another project, possibly even as an
auto-tuning threshold. Another reflection problem occurs when the reflection shoots directly back into
the camera, flooding the sensor with the laser’s light and causing the system to report maximum
readings; I do not yet know a corrective measure for this. Another possible improvement is to use the
OpenCV library’s line detection algorithms, which could help with some of the thresholding problems.
One last problem that I noticed with the camera and laser was fanning, which occurred when the laser
returned two readings for the same row; in other words, the laser was captured behind the object. This
is due to the separation of the laser and the camera. This project captures multiple widths of the
threshold values and keeps the larger width; the closer laser scan is the one to keep, since the problem
is a reading of the laser behind the object, and the closer object will most likely show a larger laser
intensity in the image. There are surely more problems with the laser, camera, and setup, but these
were the ones noticed in my project.
References
1. Fisher, Robert B.; Konolige, Kurt; Daniilidis, Kostas; Eklundh, Jan-Olof. Springer Handbook of
Robotics. Heidelberg: Springer-Verlag, 2008. Pgs. 521-562.
2. “Micro Maestro 6-Channel USB Servo Controller”. http://www.pololu.com/catalog/product/1350
3. “Futaba S3305 High Torque Standard Servo with Metal Gears”.
http://www.gpdealera.com/cgi-bin/wgainf100p.pgm?I=FUTM0045
4. University of Oulu. “MATLAB Camera Calibration Toolbox”.
http://www.mathworks.com/matlabcentral/linkexchange/links/745-camera-calibration-toolbox
5. Kooi, Koen. “Building Angstrom”. http://www.angstrom-distribution.org/building-angstrom.
March 30, 2010.
6. Joao Guilherme, Mario Gazziro, Alessandro Ide, Jose Saito. “A 3D Scanning System Based on
Laser Triangulation and Variable Field of View”.
http://libproxy.uhcl.edu:2086/stamp/stamp.jsp?tp=&arnumber=1529778. 2005 International
Conference on Image Processing.
7. Ramesh Jain, Rangachar Kasturi, Brian G. Schunck. Machine Vision. McGraw-Hill, 1995.
8. “Beagleboard Hardware Setup”.
http://code.google.com/p/beagleboard/wiki/BeagleBootHwSetup. February 04, 2010.
9. “Tera Term Home Page”. http://ttssh2.sourceforge.jp/. March 22, 2010.
10. Vadim Pisarevsky. “OpenCV”. http://opencv.willowgarage.com/wiki/. March 06, 2010.
11. OSHA Technical Manual: Laser Hazard Classifications.
http://www.osha.gov/dts/osta/otm/otm_iii/otm_iii_6.html