THE UNIVERSITY OF BRITISH COLUMBIA
Department of Electrical and Computer Engineering
FINAL REPORT
DRIVALERT: DRIVER ALERTNESS MONITOR
Putting Fatigue-Related Accidents to Rest
prepared by
Group 2
Wallace Hung
Raymond Lo
Ziguang Qu
Harsh Rathod
Tony Seddon
in Partial Fulfillment of the Requirements for
EECE 474 – Instrumentation Design Laboratory
Date submitted: 25 July 2006
Abstract
We have developed a system that can monitor the alertness of drivers in order to prevent
people from falling asleep at the wheel. Motor vehicle accidents cause injury and death,
and our system will help to decrease the number of crashes due to fatigued drivers.
There is similar technology available from Attention Technologies, Inc., based on a
measure of drowsiness called PERCLOS, but we have developed a more cost-effective
solution while improving the overall reliability. We utilized developments in digital
image processing and facial recognition algorithms to detect a driver closing his or her
eyes for extended periods of time. The main parameters used for detecting drowsiness
are PERCLOS, microsleeps and EyeIntegral – a value we have developed that improves
on the PERCLOS system. In the initial phase of development, a web cam is used to
capture video and stream data to a laptop, which is used to run the image-processing
program. When the program detects symptoms of driver fatigue, it will trigger a series of
alerts based on the level of drowsiness. The first alert sounds an audible alarm; the
second sounds the alarm and triggers a vibrating rumble pack. The rumble pack is
placed in the lower lumbar support area of the driver’s seat and will vibrate enough to
startle a person who is in the initial stages of becoming drowsy. The next step of the
project will be to implement the system as a stand-alone device based on DSP technology.
We have produced an initial design using a Texas Instruments TMS320C6416 DSP and a
USB CMOS-based video camera. This modular system will be simple to install into any
vehicle with minor calibration and will be an ideal device for commercial vehicle drivers,
such as chartered bus drivers, who are forced to drive long distances throughout the
night. The cost is low enough that everyday drivers of personal vehicles would be able to
utilize it as well. This product also has the potential to be used to remind drivers to keep
their eyes on the road when other distractions are present.
Table of Contents

Abstract
List of Figures
List of Tables
Glossary
List of Abbreviations
INTRODUCTION
1 PROJECT BACKGROUND
  1.1 Introduction
  1.2 Indicators of Drowsiness
  1.3 Driver Fatigue Monitor System by Attention Technologies, Inc.
  1.4 Driver Alert by Volvo
2 INITIAL PRODUCTION AND INVESTIGATION
  2.1 Introduction
  2.2 Feature Recognition Algorithms
    2.2.1 Kanade-Lucas-Tomasi Feature Tracker Algorithm
    2.2.2 Hough Transformation
  2.3 MATLAB Image Processing & Device Interface
  2.4 RGB Colour Space
  2.5 Infrared Web Camera
3 SYSTEM AND SOFTWARE DEVELOPMENT
  3.1 Introduction
  3.2 System Design
  3.3 Equipment Set-up
  3.4 Viola and Jones Object Detection Algorithm
  3.5 Software Optimization
4 RUMBLE PACK
  4.1 Introduction
  4.2 Parallel Port
  4.3 Rumble Pack Circuit
  4.4 Structure of the Rumble Pack
  4.5 Force Feedback for the Rumble Pack
5 DSP SYSTEM DESIGN
  5.1 Introduction
  5.2 DSP System Requirements
  5.3 DSP Filtering
    5.3.1 FIR Filter
    5.3.2 IIR Filter
  5.4 DSP Input and Output
  5.5 Software Conversion
  5.6 Other Possible Solutions
    5.6.1 TMS320C6416 DSP with CMOS Image Sensor SOC
    5.6.2 TI DM644x System-on-Chip Family
CONCLUSION
  Assessment
  Recommendations
REFERENCES
List of Figures

Figure 1.1 - Occurrence of behavioural indicators of fatigue and subjective alertness rating. Source: [5]
Figure 1.2 - Blink interval (median) over subjective alertness. Source: [5]
Figure 1.3 - Blink duration (median) over subjective alertness. [5]
Figure 1.4 - Saccadic speed (solid line, right y-axis) and subjective alertness (dotted line, left y-axis) of a subject over driving time. Bars indicate the occurrence of fatigue-related incidents. [5]
Figure 2.1 - MATLAB Difference Imaging: Opened and Closed Eyes
Figure 2.2 - MATLAB Greyscale Image - Firescale
Figure 2.3 - RGB Colour Map (Source: [12])
Figure 3.1 - Rectangular Features of an Image. (Source: [15])
Figure 3.2 - Summation of Rectangles. (Source: [15])
Figure 3.3 - AdaBoost Training for Eye Recognition (Source: [15])
Figure 3.4 - bResults vs. Time (seconds) – Frank
Figure 3.5 - bResults vs. Time (seconds) – Harsh
Figure 3.6 - bResults vs. Time (seconds) – Ray
Figure 3.7 - bResults vs. Time (seconds) – Tony
Figure 3.8 - Eye Integral Curve Over 180 Seconds – Frank
Figure 3.9 - Eye Integral Curve Over 180 Seconds – Harsh
Figure 3.10 - Eye Integral Curve Over 180 Seconds – Raymond
Figure 3.11 - Eye Integral Curve Over 180 Seconds – Tony
Figure 4.1 - 25-Pin Parallel Connector (Source: [19])
Figure 4.2 - DC Motor Rumble Pack Circuit
Figure 4.3 - DC Motor Rumble Pack - Complete Unit
Figure 4.4 - Logitech® RumblePad™ 2 Wireless Controller
Figure 4.5 - Force Feedback GUI
Figure 5.1 - EMIFA to SDRAM Schematic (Source: [26])
Figure 5.2 - DSP Reset Circuit (Source: [27])
Figure 5.3 - DSP Clock Circuit (Source: [28])
Figure 5.4 - Power Supply Block Diagram - C6000 DSPs (Source: [29])
Figure 5.5 - C++ Code Example: Simple for( ) Loop
Figure 5.6 - C++ Code Example: Excessive Memory Reads
Figure 5.7 - C++ Code Example: Optimized for DSP
Figure 5.8 - Micron MT9D111 Block Diagram (Source: [33])
List of Tables

Table 3.1 - Web Camera Specifications and Settings
Table 3.2 - Pseudo Code for the Multithreaded Timer
Table 3.3 - Pseudo Code for the System Counter Algorithm
Table 3.4 - Drowsiness Variable Analysis
Table 5.1 - TMS320C6416 Specifications
Table 5.2 - Micron MT48LC16M16A2BG-75 Specifications
Glossary

blink interval - the amount of time between blinks (inverse of blink rate)

double data rate (DDR) - a type of memory chip that transfers data on both the rising and falling edges of a clock signal, which almost doubles the transfer rate compared to a memory chip without DDR

saccade - a small, rapid, jerky movement of the eye, especially as it jumps from fixation on one point to another
List of Abbreviations

CMOS - Complementary Metal-Oxide Semiconductor
DDR - Double Data Rate
DSP - Digital Signal Processor
EMIF - External Memory Interface
FBGA - Fine-Pitch Ball Grid Array
FPS - Frames Per Second
GUI - Graphical User Interface
Kbps - Kilobits per second
KLT - Kanade-Lucas-Tomasi Feature Tracker algorithm
Mbps - Megabits per second
MSDN - Microsoft Developer Network
PERCLOS - Percentage of time eyelids are closed
SDRAM - Synchronous Dynamic Random Access Memory
SOC - System-on-Chip
USB - Universal Serial Bus
INTRODUCTION
This report summarizes our development of the Drivalert, a drowsiness detection system for
drivers. This device is a means of decreasing the number of fatigue-related accidents on the road.
Each year, approximately 6.2 million motor vehicle accidents are reported to the police in the
United States, with over thirty percent resulting in injury and under one percent resulting in death
[1]. Many people rely on their vehicles for everyday use and some are required to drive
throughout the day and night in order to make a living. These drivers are usually in control of
large commercial vehicles that can have a devastating effect when colliding with a personal
vehicle. According to [2], tired drivers in the United States cause over 100,000 motor vehicle
accidents each year.
According to a poll conducted by the Traffic Injury Research Foundation, one in five drivers
admitted to falling asleep at the wheel [14]. Many times there are no consequences, but in the
cases where accidents occur, the outcome can be devastating. We have developed a system that
will help to decrease the number of fatigue-related accidents in large commercial vehicles as well
as personal vehicles.
The report is divided into the following sections:
In Section 1, we present the results of our research into indicators of drowsiness. Specifically,
we concentrate on the eyes of the driver and the symptoms that our device will detect – blink
rate, blink duration and blink interval. We also describe in detail the flaws of currently
available technologies and how our device compares to them.
In Section 2, we summarize the initial stages of our development of a feature recognition
algorithm and our use of image processing in MATLAB. The concept of RGB colour space and
the digital format of images are also examined.
In Section 3, we outline the requirements of the project and describe the different components of
our device. These include the driver alertness recognition software and the components used to
monitor and alert the driver.
In Section 4, we review our development of the vibrating rumble pack. This section includes a
detailed description of the interface using a USB connection to the laptop as well as a parallel
port connection that would be compatible with our DSP design.
In Section 5, we introduce the hardware component of the project and the steps that we would
take to implement the system as a single unit. Although we have developed the system using a
laptop, we have designed the final system with a digital signal processor.
Finally, we assess our completed work and provide recommendations for further development
and modifications to the system.
1 PROJECT BACKGROUND
1.1 Introduction
Vehicle manufacturers have been improving safety measures for years, but have yet to discover a
sure way to prevent drivers from falling asleep at the wheel. Currently, there are a few
technologies that have been tested by automobile manufacturers. We have investigated different
physical indicators of drowsiness and determined that a person’s eyes show the most obvious
signs of drowsiness. After summarizing the different visible symptoms of fatigue, we provide an
overview of other products available that are related to our project.
1.2 Indicators of Drowsiness
According to [4], the PERCLOS (PERcent of the Time Eyelids are CLOSed) measure of driver
fatigue is the most dependable method of determining whether a driver is susceptible to falling
asleep at the wheel. It specifically looks at the amount of time the pupil is covered by the eyelid,
not necessarily complete blinks by the subject: PERCLOS is the percentage of time, within a
given time frame, that the eyelids are closed. The National Highway Traffic Safety Administration also endorses the
system [4].
Apart from the PERCLOS system described above, there are other important indicators of
drowsiness that we will use. A study by two researchers at the University of Cologne, Germany,
shows that increasing sleepiness correlates with these criteria, in order of reliability: (1) an
increase in blink rate; (2) an increase in blink duration; (3) a decrease in lid and saccadic
velocities [5]. The study required 65 participants to complete a PC-simulated driving course late
at night, and used an alertness rating to monitor how alert the participants were. The lower the
alertness rating, the more likely the driver was to be in a state of “microsleep”, make driving
errors or stare (see Figure 1.1).
Figure 1.1 - Occurrence of behavioural indicators of fatigue and subjective alertness rating. Source: [5]
Correlating the alertness rating with the participants’ blink interval and duration, it can be
observed that the participants’ blink intervals decrease (Figure 1.2) and their blink durations
increase (Figure 1.3) as they become less alert. It is also observed that the saccadic speed
decreases when the driver becomes drowsy (Figure 1.4). For our system, we have decided to use the
blink interval as our main indicator for drowsiness, since it is quite long (approximately 2
seconds), and we can count the number of blinks over a fixed period of time. We are also able to
detect longer blink durations (over 0.5 seconds), which are very strong indicators of driver
fatigue.
Figure 1.2 - Blink interval (median) over subjective alertness. Source: [5]
Figure 1.3 - Blink duration (median) over subjective alertness. [5]
Figure 1.4 - Saccadic speed (solid line, right y-axis) and subjective alertness (dotted line, left y-axis) of a
subject over driving time. Bars indicate the occurrence of fatigue-related incidents. [5]
We were unable to develop a component for our program to measure the saccadic speed of the
eye, so we have decided not to focus on it as a parameter for our system. Information on the
saccadic speed of the eye has been included for completeness and possible future development.
1.3 Driver Fatigue Monitor System by Attention Technologies, Inc.
The technology that most closely relates to our system is the Driver Fatigue Monitor by
Attention Technologies Inc. It is based on the PERCLOS measure of driver fatigue. In [6], a
thorough study of the system by Attention Technologies was completed. The authors discovered
that the PERCLOS-based system does work but has some flaws that we wish to improve upon.
For instance, we have found that the PERCLOS measure may differ between test subjects (see
Section 3.5), so we have modified our code to compensate for the limitations of PERCLOS.
Also, the current system uses infrared LEDs that emit light into the eye, which increases power
consumption and sends unwanted infrared light directly into the driver’s eye. Although infrared
has minimal effect on the eye, it is still considered damaging and direct exposure of the eye
should be avoided [7]. The system costs approximately $1000 CDN.
Our system uses a common web cam that will use minimal power and emit no form of light into
the driver’s eye. These are some of the major benefits of our system compared to that of the
Driver Fatigue Monitor system. We were able to design a system for a much lower cost than that
of Attention Technologies, Inc. We have used some of the PERCLOS measures as part of our
fatigue detection as well.
1.4 Driver Alert by Volvo
Volvo is developing a system called Driver Alert that will be used to detect drowsy and
distracted drivers, but it does not use visible cues of driver fatigue. Instead, it will
detect unusual behaviour of the vehicle. It will also be used to determine if someone is driving
without due care and attention, such as a person talking on a cellular phone or
spending an inordinate amount of time tuning the stereo system [8, 9].
The unusual movements of the vehicle will be tracked by GPS, and with advances in wireless
vehicle technology, Volvo hopes to track the vehicle’s motion with respect to the lines on the road.
The Volvo system is still in development, and will not be available on the new fleet of vehicles
until late 2007 [8].
The Driver Alert system seems to be promising for the future of driver safety, but it is still in its
infancy. Because it is still in development, thorough research has not been completed on the
viability of the system. Unbiased studies must be completed before the system will have the confidence of
the automotive industry. Also, it will only be available on Volvo vehicles in its early stages
of deployment whereas our device can be installed in any vehicle.
2 INITIAL PRODUCTION AND INVESTIGATION
2.1 Introduction
Using digital image processing, more specifically, facial recognition techniques, we focused on
developing an algorithm that would detect eyes and blinking in a video stream. The initial
development began with testing different forms of feature recognition software along with some
image processing techniques available through MATLAB. It was also important to understand
the format of video that we were working with and the representation of images in digital form.
We outline our research in feature recognition, MATLAB, and the RGB colour space in the
following sections.
2.2 Feature Recognition Algorithms
We set out to develop a feature tracking software algorithm to track the eyes of the driver and
determine the ideal timeout period that should occur before the alerts are triggered in the system.
We are able to detect drivers closing their eyes for abnormal periods of time and an increase in
blink rate over time – the two common indicators of becoming drowsy at the wheel.
2.2.1 Kanade-Lucas-Tomasi Feature Tracker Algorithm
In addition to our preliminary research, we have discovered different algorithms for facial
recognition. In particular, we have briefly tested the Kanade-Lucas-Tomasi (KLT) Feature
Tracker algorithm, which has been summarized by [10]. This is now in the public domain and
open source code was found on the Stanford Vision Laboratory website
(http://vision.stanford.edu/public/software/body.html).
The KLT algorithm was actually a very promising possibility for our eye detection software until
we found the MPT (Machine Perception Toolbox) software. Beyond some initial tests with images of a mouth, we did not
explore the KLT algorithm in any more detail.
2.2.2 Hough Transformation
The Hough Transform is a feature extraction technique used to detect regular curves like lines,
circles, and ellipses hidden in large amounts of data [11]. In image space, an infinite
number of lines passes through any given point, and each of these lines is represented by r and θ,
the distance and the angle from the origin to the line. The main idea behind the
Hough Transform is to find out which of these potential lines pass through the circle or the
ellipse in an image. In our experiment, the pupils of the eyes should form circles, and if we can
find the average size of the pupil, we can locate similar shapes in the image. Since these
shapes can be found in many different areas of the image, determining the maximum distance
the two pupils can be apart allows one to eliminate potential errors [12].
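As an illustration of the voting idea (a sketch under our own names, not code we deployed), circles of a known radius can be detected as follows:

    // Illustrative sketch of Hough voting for circles of a known radius r.
    // Every edge pixel votes for all centres (a, b) that would place it on a
    // circle of radius r; peaks in the accumulator are candidate pupil centres.
    #include <cmath>
    #include <vector>

    struct Point { int x, y; };

    std::vector<std::vector<int>> houghCircleVotes(
        const std::vector<Point>& edgePixels, int width, int height, int r)
    {
        std::vector<std::vector<int>> acc(height, std::vector<int>(width, 0));
        for (const Point& p : edgePixels) {
            // A point on a circle satisfies (x - a)^2 + (y - b)^2 = r^2, so
            // sweep the angle and vote for each candidate centre (a, b).
            for (int deg = 0; deg < 360; ++deg) {
                double t = deg * 3.14159265358979 / 180.0;
                int a = static_cast<int>(p.x - r * std::cos(t));
                int b = static_cast<int>(p.y - r * std::sin(t));
                if (a >= 0 && a < width && b >= 0 && b < height)
                    ++acc[b][a];
            }
        }
        return acc; // the caller thresholds the accumulator to find pupil centres
    }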
Due to the poor resolution of the web camera, this technique was found to be too difficult to
implement for our system. The eyes only take up a small number of pixels in the streaming
video and detecting the pupils alone would be impossible with our hardware.
2.3 MATLAB Image Processing & Device Interface
We began testing with MATLAB, where a few different techniques would
potentially work for our problem. One of those techniques is difference imaging, in which
we monitor for changes in the eye area over a number of frames. The difference images are
enlarged and amplified to ensure that we are tracking the eyes properly. Figure 2.1 is the
result of the difference between two captured images.
Figure 2.1 – MATLAB Difference Imaging: Opened and Closed Eyes
The eyes are a very distinct feature of the image, and we expected to use this technique to show
differences in the eye region of the video, which would result from the subject blinking.
We also investigated different colour spaces for the images in order to detect the eye region of
the face. As depicted in Figure 2.2, the greyscale image shows the eyes very clearly and could
allow for easy detection of the eyes and blinks.
Figure 2.2 – MATLAB Greyscale Image - Firescale
Setting the camera to greyscale format in MATLAB did not generate the expected result, as the
image shows up as a colour map with intense blues, reds and greens. This is due to
the MATLAB colour scale settings – the picture is shown in Firescale format and
not in true greyscale. We decided not to investigate this further, given the possibility of using
an infrared camera and thereby avoiding changing the camera settings at the software level.
2.4 RGB Colour Space
RGB24 is a common form of colour representation, based on a combination of three colours:
Red, Green and Blue. The “24” denotes the number of bits per pixel or bpp. In numerical
representation, three integers show the intensity of each of the colours, with each integer being
between 0 and 255. A higher integer value represents a higher intensity of the respective colour.
The extremes define black (0,0,0) and white (255,255,255), and shades of grey are
defined as any combination in which the intensity of each colour is the same. Figure 2.3
depicts the RGB colour map with the primary colours defined by their integer values [12].
Figure 2.3 - RGB Colour Map (Source: [12])
The CMOS-based web camera used in this project automatically converts the image from the
CMOS pixels into RGB digital form.
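As a concrete illustration (a sketch under our own naming, not code from the toolbox), a single RGB24 pixel reduces to a grey level by averaging its three channel intensities:

    // Illustrative sketch: convert an RGB24 pixel to an 8-bit grey level.
    // In RGB24 each channel is one byte, so a pixel is 3 consecutive bytes.
    unsigned char rgbToGrey(unsigned char r, unsigned char g, unsigned char b)
    {
        // A shade of grey has equal channel intensities, so the average of
        // the three channels is a natural grey value for any colour.
        return static_cast<unsigned char>((r + g + b) / 3);
    }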
2.5 Infrared Web Camera
The Mercury iCAM VGA web camera could be modified to obtain a clearer image and decrease
the colour space the processor was dealing with. According to [13], by removing the IR filter,
we can create an infrared camera, which will provide us with sharper images in true greyscale
format and should work better in the dark. Since we do not need to work with
colours, this infrared camera could simplify our eye tracking system. However, due to the
low quality of the web camera we modified, we had better results with the Creative NX Ultra
web camera in 16-bit RGB colour.
3 SYSTEM AND SOFTWARE DEVELOPMENT
3.1 Introduction
We began development with a laptop and web camera with the intention of implementing the
system later as a stand-alone device. The initial stages of the project proved to be software-intensive, and a large amount of time was spent developing a reliable program that would detect
signs of driver fatigue. We have developed software based on open source C++ code that
successfully tracks the eyes of a driver and will trigger alerts based on the detection of
drowsiness [14]. The design approach is summarized in the following sections.
3.2 System Design
Essentially, the concept of our design is quite simple: the software detects drowsiness and
triggers a series of alerts to warn the driver that he or she is displaying signs of fatigue. The
alerts will be triggered gradually, with each subsequent alert being more noticeable. The device
will initially warn only the driver of the vehicle, so as not to cause any unnecessary disturbance
for the passengers or embarrassment for the driver. This warning will be sent using a vibrating
pack that will be placed in the lower lumbar support of the driver’s seat. The second and third
alerts will consist of high-pitched sounds of increasing volume – the third being much louder
than the second.
3.3 Equipment Set-up
Image processing requires substantial processing power, so a laptop was used
instead of a DSP. We connected the rumble pack (removed from a video game controller) to the
USB port of the laptop and added code to our blink detection software to enable and disable the
rumble pack. The VGA web camera was placed above the rear-view mirror as it provides the
best sightline to the driver’s eyes. As the work was done in a laboratory environment, we always
placed the camera in a location that would correspond to the rear-view mirror of a vehicle. We
considered placing the camera near the instrument panel in order to keep the system compact and
avoid long wires running throughout the vehicle. This set-up was not tested, however, and
remains an area for further investigation.
The specifications of the web cameras used in this project are outlined in Table 3.1. After
extensive testing of both cameras, it was determined that the Creative NX Ultra was better for
our system and provided superior results compared to the Mercury web camera.
Table 3.1 - Web Camera Specifications and Settings

                                    Creative NX Ultra WebCam    Mercury iCAM Web Camera
Image Sensor                        CCD                         CMOS
Maximum Video Resolution            640 x 480                   640 x 480
Resolution Setting for Project      320 x 240                   320 x 240
Frame Rate                          15 fps                      15 fps
Connection                          USB                         USB
Video Output Format                 RGB24                       RGB24 – IR *
Colour Depth                        16-bit RGB                  16-bit

* Mercury camera was modified to detect infrared light – see Section 2.5
3.4 Viola and Jones Object Detection Algorithm
We have improved open source C++ code that is used to detect the face and eyes of a person
in streaming video [14]. The Machine Perception Toolbox is a C++-based program that detects
objects based on the algorithm developed by Viola and Jones [15]. We have adopted this
method because of its ease of use and the very encouraging results we achieved early in our
development stages.
The Viola & Jones algorithm is based on the key features detected in a rectangle of pixels.
Features are grouped into 3 different categories: 2-rectangle features, 3-rectangle features, and
4-rectangle features. 2-rectangle features consist of taking the difference of the sums of pixel
values from adjacent rectangles, which can be adjacent horizontally or
vertically. 3-rectangle features consist of taking the sum of the outer rectangles and subtracting
the middle rectangle, while 4-rectangle features consist of taking the sum of diagonal rectangles
and subtracting the opposite diagonal rectangles. An illustration showing the different types of
features is shown below.
Figure 3.1 - Rectangular Features of an Image. (Source: [15])
To perform the sum calculations at fast enough speeds, the algorithm involves performing a
cumulative summation of the pixel values from the upper left corner of the image, and
completing an image integral of all the data points in the matrix. From this image integral we
can use a basic math formula involving the 4 boundary points from the image integral to
calculate the sum of pixel values inside the rectangle. This is illustrated in the diagram below.
The algorithm uses a 24 x 24 pixel map as its base foundation, but is
scalable to any image size above this. This scalability allows the camera to be placed at
a range of distances from the face. Additionally, the open source
algorithm we used for this project allows us to take advantage of web camera features such as
panning and hardware zooming, both of which our Creative NX Ultra web camera supports.
Figure 3.2 - Summation of Rectangles. (Source: [15])
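For reference, the four-corner formula can be written as the following sketch (our own illustration, not the MPT implementation):

    // Illustrative sketch of the integral-image rectangle sum. ii[y][x]
    // holds the cumulative sum of all pixels above and to the left of
    // (x, y), inclusive. The sum inside a rectangle then needs only four
    // look-ups, regardless of the rectangle's size.
    #include <vector>

    int rectSum(const std::vector<std::vector<int>>& ii,
                int top, int left, int bottom, int right)
    {
        int A = (top > 0 && left > 0) ? ii[top - 1][left - 1] : 0;
        int B = (top > 0)             ? ii[top - 1][right]    : 0;
        int C = (left > 0)            ? ii[bottom][left - 1]  : 0;
        int D = ii[bottom][right];
        return D - B - C + A; // inclusion-exclusion over the four corners
    }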
Once the image integral is found, 2-rectangle, 3-rectangle, and 4-rectangle features can be
determined to make up our feature set. Since there are many more features than there are pixels
for a given area of an image, analyzing features is a more efficient method than the
slower computations involving individual pixels. Features are then analyzed for predefined
facial patterns, and are then classified as a set of possible facial features. The algorithm
then looks for feature sets whose weights are similar to those saved as part
of the repository of facial recognition patterns. Classifiers are layers used to reject false facial
images from patterns that have been recognized as part of the feature set. The repository is
composed of over 4,916 different facial models. If multiple facial feature patterns are recognized,
which is expected in most cases, patterns are removed via a pattern classification method that
continually refines the matching patterns until a proper facial pattern is clearly defined. The first
level of classification removes 50% of false positive facial recognition patterns, while the second
level removes 80% of those false positive patterns. These numbers increase in subsequent
levels. Classification is refined using the AdaBoost algorithm.
AdaBoost is a learning algorithm that uses the weaker features to help narrow down the stronger
patterns. It does this by assigning each example in a given set of training data a weight on its first
instance. It then has each pattern make its guess on which regions are part of the face, and then
compares this data against its facial pattern model to determine which pattern hypotheses were
correct. It then continually assigns the incorrectly classified examples greater weights, allowing it
to narrow itself to proper facial recognition. The learning aspect of this algorithm is the fact that
non-facial patterns will be ignored in future frames once they have been determined not to contain
facial patterns. Hence, this speeds up the algorithm, as fewer parts of the image need to be
processed in subsequent frames. Note that when significant facial patterns are not detected, the
algorithm will begin broadening its scope of the image to locate the facial pattern [17].
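A toy sketch of the reweighting step, following the AdaBoost variant described in [15] (our own illustration, not the MPT implementation):

    // Illustrative AdaBoost reweighting: after a weak classifier is chosen,
    // correctly classified examples have their weights shrunk by
    // beta = eps / (1 - eps), where eps is the classifier's weighted error,
    // so the next round concentrates on the examples it got wrong.
    #include <vector>

    void adaboostReweight(std::vector<double>& weights,
                          const std::vector<bool>& misclassified, double eps)
    {
        double beta = eps / (1.0 - eps); // < 1 when the classifier beats chance
        double total = 0.0;
        for (std::size_t i = 0; i < weights.size(); ++i) {
            if (!misclassified[i])
                weights[i] *= beta;      // de-emphasize easy examples
            total += weights[i];
        }
        for (std::size_t i = 0; i < weights.size(); ++i)
            weights[i] /= total;         // renormalize to a distribution
    }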
In the event that a face is detected, the algorithm proceeds to find the eyes in the recognized face
box. To do this it first locates approximate areas of the eyes, and then re-performs the Viola-
Jones algorithm using rectangular features to determine the features for the eye regions. The
software uses the positioning as well as the usual darkness of the eye regions to conclude its
determination of the eye regions.
Figure 3.3 – AdaBoost Training for Eye Recognition (Source: [15])
The performance specifications of the algorithm are:
• A detection rate of 0.9 can be achieved using a 10-stage AdaBoost classifier
• The algorithm was found to process a 384 x 288 pixel image in 0.067 seconds on a 700 MHz Pentium III
• This translates to 15 frames per second
• Each rectangle portion requires approximately 1200 microprocessor instructions
• Each rectangle portion consists of a 24 x 24 pixel region
• It is one of the faster eye detection algorithms available
3.5 Software Optimization
After some initial tests with MATLAB, we found that the MPT, which contains five facial
expression detection tools, was a great starting point for our project. These tools can detect faces
and eyes from real-time video capture. Testing with MPT shows that MPT is a better design
option than MATLAB in many ways. First, the code captures from real-time video, rather than
recording it and doing the analysis afterwards. Second, it can find faces and eyes for people with
different eye sizes, skin colours and face shapes. It can also find faces when the person is
wearing a cap or glasses. Third, the Blink Detector tool in the toolbox can detect the extent of
eye openness, which is a good indicator of drowsiness. The code is also well documented and
thus easy to modify.
The real-time video is represented as data type IMediaStream in the file
MPBlinkDetectorFilter.cpp. Each frame of the streamed video will be copied and transformed
by adding face/eye boxes and a coloured bar indicating the openness of the eye, then re-displayed on the screen. While processing, we use a byte-stream-type pointer to the image buffer,
which can be accessed from the IMediaStream data type.
In FilterInterface.h, whether the eye is closed or open, and to what extent, is reflected in the
variable bResult. A positive bResult value would indicate that the eye is relatively closed, while
a negative one would indicate that the eye is relatively open. The absolute value of bResult
indicates the extent of eye openness/closure. A blink is detected when bResult switches from
negative to positive, which indicates the eye is closed after it is open.
In order to approach the problem of detecting drowsiness, we tried a relatively simple task first,
which is to make the system beep when eyes are closed (bResults>0) for 5 seconds.
At first, we tried to make a multithreaded timer that communicates with the main program with a
message queue. The timer will start when eyes are detected to be transitioning from being open
to being closed. When the timer is running it will check for messages every 50 ms. The main
thread will send a message to the timer to reset the timer if eyes are detected to be open again.
When the timer expires, which indicates that eyes are closed for 5 seconds, the timer thread will
beep and destroy itself. However, the code could not compile due to some library file conflict,
which we did not manage to resolve. The pseudo code for the multithreaded timer is outlined
in Table 3.2.
Table 3.2 - Pseudo Code for the Multithreaded Timer

Eye Open Loop (bResults < 0):
    if (closed == 1) {
        signal the timer to destroy;
        closed = 0;
    }

Eye Closed Loop (bResults > 0):
    if (closed == 0) {
        start timer;
        closed = 1;
    }

Timer Thread:
    timer1 = 100;
    do {
        every 50 ms: timer1--;
        if (timer1 == 0) {
            beep;
            destroy timer;
        }
        if (check message == true)
            msg = get message;
    } while (msg != destroy);
    destroy timer;
After struggling with the multithreaded timer, we found the function QueryPerformanceCounter
on MSDN. This function returns the value of a high-resolution counter that counts the cycles the
CPU has executed since the last start. The function QueryPerformanceFrequency returns the
frequency of this counter. Combining these two values yields an accurate timer value in
seconds, which is suitable for real-time applications.
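A minimal sketch of combining the two Win32 calls into an elapsed-seconds timer (illustrative, not our exact code):

    // Illustrative sketch: a high-resolution elapsed-seconds timer built
    // from QueryPerformanceCounter / QueryPerformanceFrequency.
    #include <windows.h>

    static LARGE_INTEGER g_freq;  // counter ticks per second
    static LARGE_INTEGER g_start; // tick count when timing began

    void timerStart()
    {
        QueryPerformanceFrequency(&g_freq);
        QueryPerformanceCounter(&g_start);
    }

    double timerSeconds()
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        return double(now.QuadPart - g_start.QuadPart) / double(g_freq.QuadPart);
    }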
Table 3.3 - Pseudo Code for the System Counter Algorithm

Eye Open Loop (bResults < 0):
    if (closed == 1) {
        counter value = 0;
        closed = 0;
    }

Eye Closed Loop (bResults > 0):
    if (closed == 0) {
        start counter;
        closed = 1;
    }
    check value of counter;
    if (counter value > 5 seconds) {
        beep;
        counter value = 0;
    }
After successfully modifying the code to beep after 5 seconds of eye closure, we ran the test
program, and discovered that the program would sometimes recognize people or even lab
equipment in the background as secondary faces. This causes a problem since we only want to
measure eye openness for the primary face. We set up a variable called showblink_flag in
FilterInterface.h. By setting showblink_flag to false after the primary face is processed, we can
disable the bResult calculation for secondary faces.
The next step we took was to figure out which variables are related to drowsiness. Three criteria
came to mind: extent of eye openness, blink count and PERCLOS. The extent of eye openness
can be measured using bResult. Blink count can be measured using an incremental counter
which increments when the eyes transition from open to closed. By calling the aforementioned
QueryPerformanceCounter when the eyes transition from open to closed and from closed to open,
we can determine the time the eyes are closed. The time the eyes are closed divided by the total
time yields the percentage of eye closure for a given time interval. We set up a static variable,
RunningTimeElapsed, which indicates how much time has elapsed since the start of the program.
We modified the code so that every time bResults is generated, RunningTimeElapsed, bResult and
the blink count are written into a log file. The PERCLOS value is calculated every minute
and written to the log file as well. This is the basis for logging driver information on the DSP
system, which will allow for review of driver data and indicate whether the driver has habitually
driven while fatigued. A sketch of this bookkeeping follows.
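The sketch below is illustrative only; apart from RunningTimeElapsed and bResult, the names are our shorthand rather than identifiers from the actual program:

    // Illustrative sketch of the blink-count and PERCLOS bookkeeping.
    static double RunningTimeElapsed = 0.0; // seconds since program start
    static double eyesClosedTime     = 0.0; // accumulated eyes-closed time
    static int    blinkCount         = 0;
    static bool   wasClosed          = false;

    // Called every time a new bResult is generated; dt is the time since
    // the previous sample, taken from the high-resolution timer above.
    void onSample(double bResult, double dt)
    {
        bool isClosed = (bResult > 0);            // positive bResult => eye closed
        RunningTimeElapsed += dt;
        if (isClosed) eyesClosedTime += dt;
        if (isClosed && !wasClosed) ++blinkCount; // open -> closed transition
        wasClosed = isClosed;
    }

    double perclos() // percentage of elapsed time with the eyelids closed
    {
        return (RunningTimeElapsed > 0.0)
             ? 100.0 * eyesClosedTime / RunningTimeElapsed : 0.0;
    }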
To test the effectiveness of the three criteria, we designed a 3-minute test: during the 1st minute,
the tester should stay sober and alert. During the 2nd minute, the tester should exhibit drowsiness.
During the 3rd minute, the tester should close his/her eyes completely. We set up the web camera
to be directly in front of the subject, 40 cm away and 10 cm above the eyes. Frank, Harsh, Ray
and Tony were subjected to the test and we obtained the following results:
Figure 3.4 - bResults vs. Time (seconds) – Frank
Figure 3.5 - bResults vs. Time (seconds) – Harsh
Figure 3.6 - bResults vs. Time (seconds) – Ray
Figure 3.7 - bResults vs. Time (seconds) – Tony
It is difficult to use the bResults values from Figures 3.4 to 3.7, as the differences between sober,
drowsy and closed eyes are not easily extracted from the data. For this reason, we were forced to
develop a different measure to determine whether a subject was drowsy or not.
Judging from the results in Table 3.4, we think that extent of eye openness and PERCLOS are
useful, but not deterministic indicators of drowsiness. They would vary too much between
different people. Meanwhile, blink count is not a useful indicator of drowsiness. We came up
with a new idea, which is to take the integral of bResults over time. This idea is realized by
multiplying bResults by the difference in QueryPerformanceCounter values between two
consecutive measurements of bResults, as sketched below.
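A sketch of this accumulation (illustrative names; readSeconds() stands in for a QueryPerformanceCounter-based timer):

    // Illustrative sketch of the EyeIntegral accumulation: integrate
    // bResults over time with the rectangle rule, so sustained eye closure
    // (bResults > 0) drives the value up and open eyes drive it down.
    double readSeconds(); // stands in for a QueryPerformanceCounter wrapper

    static double eyeIntegral = 0.0;
    static double lastTime    = 0.0;

    void accumulateEyeIntegral(double bResult)
    {
        double now = readSeconds();
        if (lastTime > 0.0)
            eyeIntegral += bResult * (now - lastTime); // rectangle-rule step
        lastTime = now;
    }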
Table 3.4 - Drowsiness Variable Analysis

                            Sober       Drowsy      Closed Eyes
Extent of eye openness
  Frank                     -20 ~ +5    -10 ~ +15   -10 ~ +15
  Harsh                     -20 ~ +10   -15 ~ +10     0 ~ +10
  Ray                       -20 ~ +5    -15 ~ +5    -10 ~ +15
  Tony                      -30 ~ 0     -25 ~ +10   -10 ~ +10
Blink Count
  Frank                     22          42          34
  Harsh                     37          30          29
  Ray                       15          18          45
  Tony                      8           23          28
% of time eyes closed
  Frank                     15.9 %      40.0 %      43.7 %
  Harsh                     25.9 %      30.6 %      36.7 %
  Ray                       10.1 %      14.0 %      26.8 %
  Tony                      2.8 %       14.0 %      21.8 %
The EyeIntegral curves for each test subject are shown in Figures 3.8 to 3.11 and illustrate the
effectiveness of using this value to detect drowsiness. There is a significant difference in the
curve in each of the three states, which allows our program to distinguish them effectively.
Figure 3.8 - Eye Integral Curve Over 180 Seconds – Frank
Figure 3.9 - Eye Integral Curve Over 180 Seconds – Harsh
Figure 3.10 - Eye Integral Curve Over 180 Seconds – Raymond
Figure 3.11 - Eye Integral Curve Over 180 Seconds – Tony
We observed that when the testers are sober, the EyeIntegral value keeps decreasing at a
consistent rate. When the testers are drowsy, the value alternately decreases and increases.
When the testers close their eyes, the value increases continuously.
Based on the observations above, we decided to implement the drowsiness detection algorithm as
follows, in order to take advantage of both the EyeIntegral and PERCLOS values (a sketch of this
logic follows the list):
1. Monitor how much EyeIntegral changes every 5 seconds. Clear EyeIntegral to 0 if it is
decreasing (to prevent overflow).
2. If EyeIntegral is rising during a 5-second interval, beep and close-monitor the value of
EyeIntegral by checking it every time bResult changes instead of every 5 seconds. If it
goes above 20, start the rumble pack and beep, then clear EyeIntegral to 0. Continue to
close-monitor the value until it is detected to be falling.
3. Regardless of whether EyeIntegral goes above 20 in each round of monitoring eye
closure, if it enters the closed state 4 times during a one-minute interval, start the
rumble pack and beep.
4. Take the average value of PERCLOS during the first three minutes. If PERCLOS rises 5 %
over this value, start the rumble pack and beep.
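The sketch below illustrates the decision rules (thresholds from the list above; the helper functions are stand-ins for our actual alert and monitoring code):

    // Illustrative sketch of the four-rule alert logic described above.
    void beep();            // first-level audible alert (stand-in)
    void startRumblePack(); // second-level alert (stand-in)

    // Called once per 5-second monitoring window.
    void checkWindow(double eyeIntegralDelta, bool closeMonitorExceeded20,
                     int closuresThisMinute, double perclosNow,
                     double perclosBaseline)
    {
        if (eyeIntegralDelta > 0.0) {     // rule 2: EyeIntegral rising
            beep();
            if (closeMonitorExceeded20) { // close-monitoring saw it pass 20
                startRumblePack();
                beep();
            }
        }
        if (closuresThisMinute >= 4) {    // rule 3: 4 closures in one minute
            startRumblePack();
            beep();
        }
        if (perclosNow > perclosBaseline + 5.0) { // rule 4: PERCLOS up 5 %
            startRumblePack();
            beep();
        }
    }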
If the change in EyeIntegral hasn’t gone above 20 yet, the system will monitor the value as above
and stop monitoring when the change of EyeIntegral falls below 0. We use PERCLOS as a
secondary measure, and set the alerts to trigger if the subject’s readings change significantly over
a short time interval. Specifically, the program will enable the first alert if the PERCLOS rating
changes by 5% or more over a one-minute interval. This is sufficient time to detect drowsiness
in this case, since drivers tend to exhibit signs of drowsiness over time as opposed to suddenly
falling asleep at the wheel. When the system is started, it will determine the “alert” PERCLOS
rating for the driver automatically and use that value as an initial reference. According to [5], it
is more important to detect the change in the blink rates and PERCLOS than the exact values.
Our test results support this theory, as each subject from our group exhibited a significant
increase (over 5% of the “alert” value) in PERCLOS and fluctuations in the EyeIntegral value
our program computes.
4 RUMBLE PACK
4.1 Introduction
We used two methods to build the feedback portion of the rumble pack – one for the
development stage and one for the DSP hardware implementation (to be discussed in detail in
Section 5). For our initial tests with the laptop, we used vibrating components from a video
game controller. The alternative was a DC motor, with the intention of interfacing the
motor directly to the DSP-based system.
computer or a DSP. Serial port, parallel port, and USB connections are the most common ways
to transmit data between a computer and an electronic device. In fact, interfacing a parallel port
with a DSP is quite simple. Therefore, building a parallel port connection becomes the main task
of the DSP design of the system.
Deciding between the parallel-port-based circuit vibration system and the USB-based DirectX
vibration game controller involves several trade-offs. Since the development stage of our project
uses DirectX and the Windows API already, there are no additional performance issues caused by
using this method. Advantageously, the Force Feedback™ controller method allows us to
implement the alert system wirelessly, as opposed to the wired connection required by the
vibration motor method. Additionally, the parallel port is being phased out on modern
computers, while the USB method is widely available on today’s systems. In contrast, for
the future implementation on the DSP board, we are better off using the parallel-port-based
method as it would be simpler and would not require any drivers to function, while still
accomplishing similar functionality. This led us to conclude that we would use the Force
Feedback™ method for the computer-based demo and the parallel
port-based system for the DSP design.
4.2 Parallel Port
There are two main types of parallel port connectors: the male connector and the female
parallel port connector. The female parallel port connector is built together with the DSP to
become the output port of the DSP. The male parallel port connector is used for the terminal of a
data cable, which can plug into the female parallel port connector.
The most common parallel port connectors have 25 pins. Figure 4.1 shows a 25-pin parallel port
connector. There are four types of pins in a 25-pin parallel port connector. Pin 2 to pin 9 are the
eight output pins which are used to send binary data signals. Pin 10 to pin 13, and pin 15, are the
five input pins, which are used to receive status signals from the device. Pin 1, pin 14, pin 16,
and pin 17 are the output pins which are used to send control signals. Pin 18 to pin 25 are the
ground pins [19].
A DSP or computer can send a signal “1” to one of the data pins (pin 2 to pin 9). This signal
becomes a small current flowing through the data cable if there is a connection between the
parallel port and a device (or a circuit). This small current can be a trigger to turn on the rumble
pack. Since only one data signal is required to turn on or off the rumble pack, only one data pin
and one ground pin are needed for the signal transmission [19].
Figure 4.1 - 25-Pin Parallel Connector (Source [19])
Since we are using C++ code for the system, the rumble pack is turned on or off by the algorithm
of the C++ code we added to our main program. In the C++ code, some variables need to be
declared to mark whether the pins are on or off. The line “Out32(0x378, 0x01)” is the most
important line of the C++ code for the rumble pack. It sends a “1” to the output pin 2 [20].
Therefore, the triggering part of the rumble pack should be connected to pin 2 and one of the
ground pins (pin 25). After executing the code, a small current should flow from pin 2 to the
triggering part. Part of the C++ code for the rumble pack-to-parallel port interface
(ParallelPort.cpp, ParallelPortDlg.cpp) is listed in Appendix A.
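As an illustration, assuming the commonly used inpout32 library (whose exported Out32 matches the call quoted above), the trigger reduces to a pair of one-line wrappers:

    // Illustrative sketch: toggling the rumble pack through the LPT1 data
    // register using the inpout32 library's Out32 (link against inpout32).
    extern "C" void __stdcall Out32(short portAddress, short data);

    const short LPT1_DATA = 0x378; // base address of the LPT1 data register

    void rumbleOn()  { Out32(LPT1_DATA, 0x01); } // drive data pin 2 high
    void rumbleOff() { Out32(LPT1_DATA, 0x00); } // all data pins low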
4.3 Rumble Pack Circuit
A small DC motor is used as the rotational part of the rumble pack. At least 0.3 A of current
is needed to make this DC motor run (about what two AA batteries in series can supply). The
main power supply inside a car is 12 V. Therefore, to draw a 0.3 A current from a 12 V supply,
a 40 Ω resistor (by Ohm’s law, R = V / I = 12 V / 0.3 A = 40 Ω, neglecting the motor’s own
resistance) needs to be connected in series with the motor.
The current from the parallel port is too small, so it cannot supply the operating current for the
motor. The best approach is to use this signal as a trigger to turn on the motor. A 2N2222
transistor can be the trigger of the motor. The completed circuit of the motor rumble pack is
shown in Figure 4.2.
Figure 4.2 - DC Motor Rumble Pack Circuit
The 2N2222 is an NPN transistor. It acts as a switch that uses a small current to turn on a larger
current [21]. In the motor rumble pack circuit, the base of the transistor is connected to pin 2 of
the parallel port connector. First, the switch of the circuit must be closed to complete the circuit
for the whole system. When a small current (signal ‘1’) flows from pin 2 to the base, it
turns on the transistor. The larger current can then flow through the transistor
from the collector to the emitter, making the motor rotate. The motor stops rotating only when
no signal flows from the parallel port or the switch is opened.
4.4 Structure of the Rumble Pack
The structure of the motor rumble pack is shown in Figure 4.3. The outer cover of the circuit is a
leather case. A belt is connected to it, so that the driver can wear the rumble pack at the waist.
There are two holes on the side of the case. One is for the wire of the power supply plug. The
other one is for the cable, which is connected to the male parallel port connector. The switch of
the rumble pack is located on the top of the case. At the back of the case, there is a big hole that
allows the rotating part of the motor to rotate.
After plugging in the power supply and connecting to the parallel port of the DSP or computer,
the driver can turn on the rumble pack with the switch. If the driver starts to drowse, the DSP or
computer will send a current from the parallel port to turn on the transistor and the motor. When
the motor rotates, its rotating part can slightly hit the driver’s body to wake him or her up.
We will be inserting the vibrating controller into a lumbar support cushion for the purposes of
the demonstration. Figure 4.3 depicts the DC Motor Rumble pack before it is inserted into the
lumbar support.
Figure 4.3 - DC Motor Rumble Pack - Complete Unit
In the final deployment of the device, the DC motor rumble pack would be inserted into a lumbar
support that would fit into any vehicle, as it is designed specifically for a driver’s seat.
4.5 Force Feedback for the Rumble Pack
While getting the rumble pack working for our project, we discovered that a simple alternative
for producing the vibration would be to use force feedback via DirectX. Since our
open source code for eye detection already relies on DirectX to provide hardware acceleration for
video, we found that enabling the DirectInput features of DirectX would be relatively easy to
include in our project.
The DirectX API works with the Windows API to obtain a window handle and determine the state of
the machine. DirectX is typically used for (and was designed for) game programming, as it uses
hardware acceleration to speed up input and output processing. This was a good fit for our
project, which involves hardware-accelerated video streaming along with an attached device.
DirectX also has built-in functionality, via DirectInput, that let us quickly use input devices
such as mice, keyboards, or game controllers. We used an off-the-shelf, force-feedback-compatible
game controller, the Logitech® Cordless RumblePad™ 2, to provide the rumble for our driver's
lumbar support alert system.
Figure 4.4 - Logitech® RumblePad™ 2 Wireless Controller
The RumblePad™ 2 consists of:
• 2 DC motors, each producing a different frequency of vibration
• the 2 motors together simulate a vibration in 3D space, since the magnitude, force, and type
  of vibration waveform can be changed
Starting from the sample code provided in the <DXSDK>\Samples\C++\Direct Input\FFConst
directory, we modified the sample into a simple application that uses Start and Stop buttons to
trigger and disable a RumblePad force feedback effect. The interface involves three main
functions: one to initialize the device and set up the effect, one to find the attached game
controllers, and one to free the effect when the application is done using it. For this
application we created the simple graphical interface seen below, so we could get an accurate
account of the response time:
Figure 4.5 - Force Feedback GUI
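As a sketch of how those three functions might look, the following follows the generic
DirectInput 8 constant-force pattern rather than reproducing our exact FFConst-derived code; the
single-axis set-up, the full-magnitude force, and the global variable names are illustrative
assumptions.

    // Minimal DirectInput 8 force feedback sketch (illustrative, not our exact code).
    #define DIRECTINPUT_VERSION 0x0800
    #include <dinput.h>
    #pragma comment(lib, "dinput8.lib")
    #pragma comment(lib, "dxguid.lib")

    static LPDIRECTINPUT8       g_pDI     = NULL;
    static LPDIRECTINPUTDEVICE8 g_pDevice = NULL;
    static LPDIRECTINPUTEFFECT  g_pEffect = NULL;

    // Find the attached game controllers: take the first force feedback device.
    BOOL CALLBACK EnumFFDevicesCallback(const DIDEVICEINSTANCE* pInst, VOID* pContext)
    {
        HRESULT hr = g_pDI->CreateDevice(pInst->guidInstance, &g_pDevice, NULL);
        return FAILED(hr) ? DIENUM_CONTINUE : DIENUM_STOP;
    }

    // Initialize the device and set up the effect.
    HRESULT InitForceFeedback(HWND hWnd)
    {
        HRESULT hr = DirectInput8Create(GetModuleHandle(NULL), DIRECTINPUT_VERSION,
                                        IID_IDirectInput8, (VOID**)&g_pDI, NULL);
        if (FAILED(hr)) return hr;
        hr = g_pDI->EnumDevices(DI8DEVCLASS_GAMECTRL, EnumFFDevicesCallback,
                                NULL, DIEDFL_ATTACHEDONLY | DIEDFL_FORCEFEEDBACK);
        if (FAILED(hr) || g_pDevice == NULL) return E_FAIL;

        g_pDevice->SetDataFormat(&c_dfDIJoystick);
        g_pDevice->SetCooperativeLevel(hWnd, DISCL_EXCLUSIVE | DISCL_BACKGROUND);
        g_pDevice->Acquire();

        // Constant-force effect at full magnitude on one axis.
        DWORD rgdwAxes[1]     = { DIJOFS_X };
        LONG  rglDirection[1] = { 0 };
        DICONSTANTFORCE cf    = { DI_FFNOMINALMAX };

        DIEFFECT eff;
        ZeroMemory(&eff, sizeof(eff));
        eff.dwSize                = sizeof(DIEFFECT);
        eff.dwFlags               = DIEFF_CARTESIAN | DIEFF_OBJECTOFFSETS;
        eff.dwDuration            = INFINITE;          // runs until Stop()
        eff.dwGain                = DI_FFNOMINALMAX;
        eff.dwTriggerButton       = DIEB_NOTRIGGER;
        eff.cAxes                 = 1;
        eff.rgdwAxes              = rgdwAxes;
        eff.rglDirection          = rglDirection;
        eff.cbTypeSpecificParams  = sizeof(DICONSTANTFORCE);
        eff.lpvTypeSpecificParams = &cf;
        return g_pDevice->CreateEffect(GUID_ConstantForce, &eff, &g_pEffect, NULL);
    }

    // Wired to the Start / Stop buttons of the test GUI.
    void StartRumble() { if (g_pEffect) g_pEffect->Start(1, 0); }
    void StopRumble()  { if (g_pEffect) g_pEffect->Stop(); }

    // Free the effect when the application is done using it.
    void FreeForceFeedback()
    {
        if (g_pEffect) { g_pEffect->Release(); g_pEffect = NULL; }
        if (g_pDevice) { g_pDevice->Unacquire(); g_pDevice->Release(); g_pDevice = NULL; }
        if (g_pDI)     { g_pDI->Release(); g_pDI = NULL; }
    }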
Testing the DirectX force feedback application we created gave us acceptable results:
• less than 100 ms from application launch to the application being operational
• less than 100 ms from a Start button click to the force feedback operating
• less than 100 ms from a Stop button click to the force feedback no longer operating
Looking more closely at the latter two points above, the overhead of issuing and disabling the
rumble command is minimal, so this meets our project expectations and we will use this approach
in the laptop-oriented configuration of the project. In addition, an attribute of the DirectX
effect allows us to set a time limit on a given effect. This further reduces delays, since
effects then only need to be triggered on and do not require the additional overhead of being
triggered off.
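As a hedged sketch of that duration limit (the 0.5 s value is an illustrative assumption, not a
tuned parameter, and g_pEffect refers to the effect from the sketch above):

    // Limit the effect's duration so it self-terminates without a Stop() call.
    DIEFFECT eff;
    ZeroMemory(&eff, sizeof(eff));
    eff.dwSize     = sizeof(DIEFFECT);
    eff.dwDuration = (DWORD)(0.5 * DI_SECONDS);    // play for 0.5 s, then stop itself
    g_pEffect->SetParameters(&eff, DIEP_DURATION); // update only the duration field
    g_pEffect->Start(1, 0);                        // no matching Stop() required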
Integrating the force feedback alert code with our open-source eye detection algorithm proved to
be quite a challenge, as the code is organized into several projects that each encapsulate a
given task, all grouped in the same workspace.
First, we tried creating a class for the force feedback effect that was common to multiple
projects. This proved too slow, as it required objects to reference the effect in real time; it
caused a 5-10 second delay in processing the image information, on top of the roughly 1 second
delay the image data already had relative to real time.
Second, we tried using Windows API commands, which the program already uses to process
information. This turned out to be a simple and quick implementation method for raising an alert
without additional overhead, since the existing Windows API code was already designed to respond
to window handle messages.
To send messages between two separate projects, a handle is required to identify which WinAPI
window is to be referenced. Referring to [22], we found the appropriate call:

HWND hWnd = FindWindow("MPBLINKDETECTORAPP", "MPLab Blink Detector Application");

where MPBLINKDETECTORAPP is the class name and MPLab Blink Detector Application is the window
name.
Referring once again to [23], we found that a range of message identifiers is reserved for
user-defined messages; the range is 0x0400 (WM_USER) to 0x7FFF. In our project we chose the
value 0x0400 to signal the start of an alert vibration.
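A minimal sketch of this message passing is shown below. It assumes a hypothetical
StartRumble() helper like the one sketched in Section 4.5; the function names and the exact
window procedure are illustrative, not our production code.

    #include <windows.h>

    #define WM_ALERT_VIBRATE (WM_USER)   /* WM_USER == 0x0400 */

    /* Sender (fatigue detection side): locate the window and post the alert. */
    void SignalVibrationAlert()
    {
        HWND hWnd = FindWindowA("MPBLINKDETECTORAPP",
                                "MPLab Blink Detector Application");
        if (hWnd != NULL)
            PostMessage(hWnd, WM_ALERT_VIBRATE, 0, 0);
    }

    /* Receiver: handled inside the target window's message procedure. */
    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_ALERT_VIBRATE)
        {
            StartRumble();               /* trigger the force feedback effect */
            return 0;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }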
Testing showed that the delay between the beep command, which is signalled before the vibration,
and the vibration itself remained minimal. This led us to keep this method of passing the alert
between the two projects for the vibration alert.
5 DSP SYSTEM DESIGN
5.1 Introduction
The majority of work completed in this project was in the software design for eye recognition
and fatigue detection. Because digital image processing requires significant processing power,
we decided to begin developing the system with a laptop. The next phase
for the driver fatigue monitoring system is to produce a modular system that could be easily
deployed in any vehicle.
In order to create a stand-alone system, we require a much more
compact processor and hardware set-up. Considering the processing demands of our system, a
digital signal processor must be used as opposed to a microcontroller. The following sections
outline the specifications of our system and the proposed DSP configurations.
5.2 DSP System Requirements
For our purposes, we are concerned only with the specifications and limitations of our equipment.
The NX Ultra web camera we are using has a colour depth of 16 bits (2 bytes) per pixel, so each
captured frame contains 2 bytes of data per pixel. We found that a resolution of 320 x 240 and a
frame rate of 15 frames per second produce sufficient results for the driver monitoring system.
Calculations:

(320 × 240) pixels/frame × 2 bytes/pixel = 153,600 bytes/frame
153,600 bytes/frame ÷ 1024 = 150 kB/frame
150 kB/frame × 15 frames/s = 2250 kB/s
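A quick sanity check of these figures in C++ (purely illustrative):

    #include <cstdio>

    int main()
    {
        const int w = 320, h = 240;     // resolution
        const int bytes_per_pixel = 2;  // 16-bit colour depth
        const int fps = 15;             // frame rate

        const double kb_per_frame = w * h * bytes_per_pixel / 1024.0;  // 150 kB
        std::printf("%.0f kB/frame, %.0f kB/s\n",
                    kb_per_frame, kb_per_frame * fps);                 // 2250 kB/s
        return 0;
    }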
In order to compensate for any lag in the processing and to avoid data losses, we will need
memory well in excess of the 2250kB (2.25MB) of data that is input each second by the system.
Since our program is approximately 100MB in size after compiling and converting to DSP
assembly language and the Windows system uses 20MB of RAM to run the process, we expect
that 1GB of memory for data and program files will be sufficient for our system using the USB
web camera. This will also allow us to add driver files for the USB to DSP interface as well as
save data to memory for analysis of driver behaviour. It would also allow us to upgrade the
camera hardware or increase the camera's resolution and still safely run the system.
There are many different types of digital signal processors, with a number of trade-offs that were
considered for this project. Due to the amount of processing power necessary for this project, we
have decided to work with the Texas Instruments TMS320C6000 family of DSPs. Our first
significant decision was whether we should work with a floating-point or fixed-point processor.
Fixed-point DSPs handle only integers in their arithmetic, while floating-point DSPs can perform
integer or real arithmetic (using scientific notation) [24]. While the floating-point DSP will
provide more accurate calculations, speed of processing is sacrificed. Audio systems using
speech recognition algorithms depend on accuracy, and small delays are acceptable. However,
in real-time video processing, the speed of the system cannot be compromised. The data set of
audio applications is also very limited, and subsequently the sampling rate is much smaller than
that of video. According to [24], video applications have sampling rates in the order of tens or
hundreds of megabits per second (Mbps), whereas audio applications sample at tens or hundreds
of kilobits per second (kbps). Consequently, we have decided to use a DSP from the TMS320C64x
family of DSPs - the most powerful fixed-point processors available from Texas Instruments. If
we find that our system would benefit from floating-point arithmetic, software is available from
Texas Instruments that allows a fixed-point processor to emulate a floating-point processor
[25].
We have been developing on a laptop with a 1.7 GHz Centrino processor and 512MB of RAM. This
system runs a cumbersome operating system, Windows XP, but still has the processing power to run
our program with relative ease. We have found that we are limited not by our camera frame rate,
but by the speed of processing the image data in real time. The DSP-based system will have a
processor dedicated to our image processing application, so we have
decided to use a TMS320C6416-7E3 DSP with 1GB of memory. The processing speed is only
720 MHz, but since it is a dedicated system, we can process the video at a suitable speed for our
application while still keeping costs to a minimum. The DSP chip is available for $149 US. The
C6416 is compatible with SDRAM, which is convenient for a number of reasons. First, the cost of
256MB of SDRAM is approximately $9.30 per chip, which keeps our costs down while providing more
than sufficient memory for program data and real-time processing. Also, the supply voltage of
the SDRAM is 3.3V, which can easily be provided by the VI/O of the DSP.
See Tables 5.1 and 5.2 for specifications of the TMS320C6416 DSP and MT48LC16M16A2BG-75 SDRAM
chips.
Table 5.1 - TMS320C6416 Specifications

Texas Instruments TMS320C6416-7E3
  Processor Type      Fixed-point
  Processor Speed     720 MHz
  On-Chip L1 SRAM     32KB
  On-Chip L2 SRAM     1024KB
  EMIF                1 16-bit, 1 64-bit interface
  DMA                 64-channel EDMA
  Core Supply         1.4V
  I/O Supply          3.3V
Table 5.2 - Micron MT48LC16M16A2BG-75 Specifications

Micron MT48LC16M16A2BG-75
  Memory Type             SDRAM
  Memory Size             256MB
  Memory Speed/Data Rate  133MHz
  Supply Voltage          3.3V
  Package                 FBGA
  Pin Count               54
The memory is connected to the DSP according to the Texas Instruments 64x Fast Reference
Schematic shown in Figure 5.1.
Figure 5.1- EMIFA to SDRAM Schematic (Source: [26])
The reset and clock circuit schematics are shown in Figures 5.2 and 5.3, respectively.
Figure 5.2 - DSP Reset Circuit (Source: [28])
Figure 5.3 - DSP Clock Circuit (Source: [27])
The following power supply block diagram depicts the set-up for a C6000 DSP with Vcore = 1.4V
and VI/O = 3.3 V, which are the values needed for our TMS320C6416 DSP.
Figure 5.4 - Power Supply Block Diagram - C6000 DSPs (Source: [29])
5.3 DSP Filtering
5.3.1 FIR Filter
The equation of a common digital filter is:
y[n] = ∑ c[k] * x[n-k] + ∑ d[i] * y[n-i]
In this equation, there are two convolutions: one with the previous inputs, x[n-k], and one with
the previous outputs, y[n-i]. A convolution is a weighted moving average with one signal flipped
back to front, and the filter coefficients form the convolving function. Calculating the filter
coefficients is much easier if we exclude any possible feedback. In other words, if we limit
ourselves to the previous inputs only, the equation becomes y[n] = ∑ c[k] * x[n-k]. When this
filter is subjected to an impulse, the output becomes zero after the impulse has run through the
summation; the impulse response of this filter is therefore finite. Such a filter is called a
Finite Impulse Response (FIR) filter. In practice, the coefficients for an FIR filter can be
calculated very easily: we simply take the inverse Fourier transform of the desired frequency
response, and we have the coefficients [30].
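As a minimal sketch (the names, the float type, and the filter length N are illustrative
placeholders, not parameters from our design), the FIR sum maps directly to a short C++ loop:

    // One FIR output sample: y[n] = sum over k of c[k] * x[n-k].
    // x_history[k] holds the input sample x[n-k]; N is the filter length.
    float fir_step(const float* c, const float* x_history, int N)
    {
        float y = 0.0f;
        for (int k = 0; k < N; ++k)
            y += c[k] * x_history[k];
        return y;
    }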
5.3.2 IIR Filter
With the equation y[n] = ∑ c[k] * x[n-k] + ∑ d[i] * y[n-i] in mind, we know that if we subject
this filter to an impulse, the output may never settle to zero. The impulse response of such a
filter can be infinite in duration, so we call it an Infinite Impulse Response (IIR) filter. In
practice, the impulse response is not truly infinite; if it were, the filter would be unstable.
In most cases the impulse response dies away to a negligibly small level, and once the level
drops below one bit we normally treat it as zero. The name IIR serves more as a warning that the
filter is prone to feedback and instability [30].
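Extending the FIR sketch above with the feedback term gives a hedged sketch of one IIR step
(again with placeholder names and coefficients; setting all d[i] to zero recovers the FIR case):

    // One IIR output sample: y[n] = sum c[k]*x[n-k] + sum d[i]*y[n-i].
    // y_history[i] holds the previous output y[n-i] for i = 1..I,
    // so the array must have at least I + 1 elements.
    float iir_step(const float* c, const float* x_history, int K,
                   const float* d, const float* y_history, int I)
    {
        float y = 0.0f;
        for (int k = 0; k < K; ++k)
            y += c[k] * x_history[k];   // feedforward (FIR) part
        for (int i = 1; i <= I; ++i)
            y += d[i] * y_history[i];   // feedback part
        return y;
    }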
5.4 DSP Input and Output
Although transferring the C++ code from our laptop and web camera system will be almost seamless
(see Section 5.5), the difficulty lies in interfacing a DSP with a USB camera that was
built to work with a PC. Interfacing a USB controller with a DSP is fairly straightforward, but
the device drivers for a USB camera would have to be rewritten for the DSP implementation. There
is a USB controller available from Cypress (Model SL811HS) that enables the connection of the
USB camera to the DSP. The SL811HS requires an operating voltage of 3.3V, which will be provided
by the VI/O of the DSP. The output of the USB web camera is digital RGB video, which eliminates
the need for an analog-to-digital converter.
There are two methods to build the feedback portion of the rumble pack – one for each of our
project stages. For our initial tests with the laptop, we used vibrating components from a video
game controller. The alternative is to use a DC motor and interface the motor directly to the
DSP-based system. There are many ways to connect the rumble pack to a computer or a DSP.
Serial port, parallel port, and USB connections are the most common ways to transmit data
between a computer and an electronic device. In fact, interfacing a parallel port with a DSP is
quite simple. Therefore, building a parallel port connection becomes the main task of the DSP
design of the system.
There are two main types of parallel port connectors: male and female. The female connector is
built together with the DSP to become its output port; the male connector terminates the data
cable, which plugs into the female connector.

The most common parallel port connectors have 25 pins, divided into four groups. Pins 2 to 9 are
the eight output pins used to send binary data signals. Pins 10 to 13 and pin 15 are the
five input pins, used to receive status signals from the device. Pins 1, 14, 16, and 17 are
output pins used to send control signals. Pins 18 to 25 are the ground pins [20].
A DSP or computer can send a signal "1" to one of the data pins (pins 2 to 9). This signal
becomes a small current flowing through the data cable when the parallel port is connected to a
device (or circuit), and this small current can act as a trigger to turn on the rumble pack.
Since only one data signal is required to turn the rumble pack on or off, only one data pin and
one ground pin are needed for the signal transmission [20].
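On the PC side, a hedged sketch of this pin toggling is shown below. It assumes the freely
available inpout32.dll helper library, which the approach in [20] builds on, and the standard
LPT1 base address 0x378; our actual interface code is listed in Appendix A.

    #include <windows.h>

    // inpout32.dll exposes raw port I/O on Windows NT/2000/XP.
    typedef void (__stdcall *OutFnc)(short port, short data);

    int main()
    {
        HINSTANCE hLib = LoadLibraryA("inpout32.dll");
        if (hLib == NULL) return 1;
        OutFnc Out32 = (OutFnc)GetProcAddress(hLib, "Out32");
        if (Out32 == NULL) return 1;

        const short LPT1_DATA = 0x378;   // data register drives pins 2-9
        Out32(LPT1_DATA, 0x01);          // data bit 0 high -> pin 2 -> motor on
        Sleep(2000);                     // vibrate for two seconds
        Out32(LPT1_DATA, 0x00);          // pin 2 low -> motor off

        FreeLibrary(hLib);
        return 0;
    }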
The DSP can easily connect to a speaker through an amplifier. The sounds would be stored in the
DSP memory along with the program and played when the alerts are triggered. An audio codec is
necessary for interfacing with a speaker. We have decided that the TLV320AIC20 would be
sufficient for our needs: it is an audio codec with digital-to-analog and analog-to-digital
converters, and it includes a built-in 8Ω speaker driver, which eliminates the need to produce
our own. We are not concerned with the quality of the sound, so a basic speaker will be attached
to this device.
5.5 Software Conversion
Unlike in the past, when developers had to write assembly code by hand to fit a program onto a
DSP, software now exists that converts source code written in a high-level programming language
into assembly language. This has
increased developers' efficiency and reduced time to market dramatically. We have chosen code
generation tools from Texas Instruments because TI has developed compilers specifically for
their DSPs that maximize usage and performance. Their features include common sub-expression
elimination, software pipelining, strength reduction, auto-increment addressing, cost-based
register allocation, instruction predication and hardware looping [31].
In order to program a DSP efficiently, we first need to convert our C++ code into DSP assembly
code. There are many programs available to perform the conversion, but it is not that simple:
only if we provide efficient source code can we maximize the DSP's performance. Therefore, there
are a few guidelines we should keep in mind for our C++ source code. First of all, while
programming in C++, it is common to write something like the following:
x[n] = 0;
for (t = 0; t < N; t++)
{
    x[n] = x[n] + y[t]*z[n-t];
}
Figure 5.5 - C++ Code Example: Simple for( ) Loop
In C++ this code seems fine, but implemented on a DSP it would be very inefficient, because it
uses array indices instead of pointers. An array is simply a table of numbers in sequential
memory locations. Since only the starting
address of the array is available to the compiler, whenever an array index is accessed the
compiler must perform a calculation. For example, in order to locate z[n-t], the compiler must
first load the starting address of the table in memory, load the values n and t, calculate the
offset n-t, and add this offset to the starting address. This requires a great deal of address
arithmetic: the compiler must perform at least five operations, three reads and two arithmetic
operations. C++ has another mechanism that makes this more efficient: the pointer. Although each
pointer must be initialized, this is done only once, eliminating the repeated offset
calculations. Using pointers is especially efficient on DSP processors, which are built
specifically to perform address arithmetic at a high rate of speed. Below are some common
addressing modes that DSPs use, from [30, 32]:
• Register Indirect (*rP) - read the data pointed to by the address in register rP
• Postincrement (*rP++) - after reading the data, post-increment the address pointer to point to
  the next value in the array
• Register Postincrement (*rP++rI) - post-increment the address pointer by the amount held in
  register rI, to point rI values further down the array
• Bit Reversed (*rP__rIr) - post-increment the address pointer in bit-reversed order, as used in
  FFT algorithms
Another difficulty with a DSP is its limited memory access compared to that of a PC's processor.
Most DSPs allow only four memory accesses per instruction: two operand reads, one result write,
and one instruction read. This is clearly not
enough for even a simple piece of C++ code. For example, the code in Figure 5.6 already has one
memory write and three memory reads.
for (t = 0; t < N; t++)
{
    *x_ptr = *x_ptr + *y_ptr++ * *x_ptr--;
}
Figure 5.6 - C++ Code Example: Excessive Memory Reads
Including the load instruction, the DSP does not have enough memory accesses to execute this
code efficiently. To compensate, the DSP processor provides many registers that can be used
instead. We can optimize the code for a DSP as shown in Figure 5.7.
register float temp;
temp = 0;
for (t = 0; t < N; t++)
{
    temp = temp + *y_ptr++ * *x_ptr--;
}
Figure 5.7 - C++ Code Example: Optimized for DSP
By doing so, the inner loop requires only two operand reads. Even with the load instruction, we
are well within the limits of the DSP [30, 32].
The last problem with the original C++ code is that it assumes we can repeatedly access an
entire array of past inputs. This is not suitable for real-time applications, which constantly
deal with streams of input data, operating on one set of input data and generating one output
each cycle. Because we are developing a real-time system, it is best to maintain a history
array, which incorporates each new input sample by shifting the entire history one place toward
index 0. To implement a history array, we replace *x_ptr with the history array and initialize a
new pointer one location beyond it, to be used while shifting the data down the array [30].
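A minimal sketch of this history-array update, combined with the pointer-based sum from Figure
5.7 (the names and the float type are illustrative assumptions, not our production code):

    // Process one new sample per cycle: shift the history toward index 0,
    // store the new sample at the top, then form the FIR sum with pointers.
    float fir_realtime_step(float input, float* history,
                            const float* coeff, int N)
    {
        for (int k = 0; k < N - 1; ++k)
            history[k] = history[k + 1];   // shift history one place toward 0
        history[N - 1] = input;            // newest sample enters at the top

        float acc = 0.0f;
        const float* h_ptr = history;
        const float* c_ptr = coeff;
        for (int k = 0; k < N; ++k)
            acc += *c_ptr++ * *h_ptr++;    // pointer walk, no index arithmetic
        return acc;
    }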
5.6 Other Possible Solutions
With the advances in DSP technology, there are many different possible configurations that
would suffice for our project. We have proposed the following solutions for the driver
monitoring system and have described the reasons for each decision in the following sections.
5.6.1 TMS320C6416 DSP with CMOS Image Sensor SOC
In order to avoid creating device drivers for the USB web camera, we are proposing to build our
own CMOS web camera using components that are readily available. The Micron MT9D111 is
a CMOS image sensor system-on-chip (SOC) that requires very few components to create a
working camera. With the SOC, the only parts needed for full functionality are a power supply
(Digital Core = 1.7-1.95V), a lens, and a clock source. The block diagram of the Micron MT9D111
is shown in Figure 5.8.
Figure 5.8 - Micron MT9D111 Block Diagram (Source: [33])
The clock needed for the MT9D111 is 80 MHz, and the lens we would use is a 1/3.2" optical format
lens to match the 5.7mm sensor diagonal [34]. The resolution can be set as high as 1600 x 1200
at a frame rate of 15fps. This is one of the potential improvements over our original system, as
the program would detect facial features more easily at a higher resolution. We could increase
the DSP's memory to 1GB with the addition of two more 256MB SDRAM chips, providing more than
enough memory to handle the increased resolution. The SOC has many other capabilities,
considering it includes a microcontroller and on-board memory, which we would look to utilize to
increase the efficiency of our DSP-based video processing.
5.6.2 TI DM644x System-on-Chip Family
The Texas Instruments TMS320DM644x is an integrated system-on-chip that is designed for
digital media, specifically digital video and image processing. It uses a TMS320C64x+ DSP along
with an ARM926 processor and a number of hardware and software components that allow
for high-level digital processing. Both the DM6443 and DM6446 devices have a built-in USB
controller and USB port, which would enable development using a PC as well as allow one to
connect USB devices (such as web cameras and digital cameras) for capturing images and video.
The DM6443 and DM6446 are available for approximately $40.70US and $47.85US (per unit),
respectively, thus they are both economically viable for our purposes. Compatible memory is
available from Micron at $9.4380 per unit (when purchased in large quantities) for 512 MB of
DDR2 RAM (MT47H32M16CC-5E). This memory is more than sufficient for our needs, and
gives us some flexibility for upgrading the system in the future.
The main difference between the two DM644x devices is that the DM6446 includes a video
input and output, whereas the DM6443 only has a video output. Considering that we are
capturing video, we feel that the DM6446 is a much better choice for our system and will allow
us to use different types of video recording devices. We would not need to rely on a USB
connection and instead use common video connections like VGA and S-Video. This set-up
could also make use of the Micron MT9D111 CMOS Image Sensor instead of interfacing the
system with a USB web camera.
CONCLUSION
Assessment
We did not have a strong start for this project. In the early stages, we spent too much time
choosing our topic and fell behind in our intended schedule. We were able to catch up somewhat
with some extra effort from all of the group members. We spent a great deal of time developing
a reliable facial recognition algorithm with MATLAB only to find that a C++-based program
would be the key to our success. The hardware design portion of the project was started late and
would definitely be the next phase of implementing the system as a stand-alone unit. With a
better start, it would not have been unreasonable to build a working stand-alone system within
the time frame of the project.
As a modular system, developing this device is definitely worthwhile and has potential for
deployment in any vehicle. With the addition of a memory card, it could be used to track driver
behaviour over a period of time and allow someone to assess the risk the driver poses to others
on the road. We were unable to test the device thoroughly, so we have yet to compare our results
with those of the Driver Fatigue Monitor System by Attention Technologies. We have used a
combination of drowsiness measures, including PERCLOS, but we still feel that we are able to
provide more accurate and reliable results. We have focused on the changes in drowsiness
indicators instead of specific values and thresholds because our test data showed some
differences between subjects.
Recommendations
One of the constraints of developing a real-time application is the trade-off between accuracy
and performance. Our real-time system must be able to analyze a given frame before the next
frame arrives. Our camera operates at 30 frames per second, while the Viola-Jones image
detection algorithm restricts this to 15 frames per second. This leaves our application a
maximum of 0.073 seconds to analyze a given frame, and our algorithm can theoretically process a
frame in 0.067 seconds. Factoring in the performance constraints of our operating system, plus
the additional constraint of competing with other applications, leads us to believe that our
demo application is up to par for real-time processing. However, moving the system from the
desktop environment to our proposed DSP environment would dedicate all of the DSP chip's
processing power to the 15 frames per second that our application must process.
Our preliminary system based on the laptop and web camera works in an ideal situation, with
sufficient lighting and minimal background disturbances. Further development is definitely
needed for this device. A thorough test phase was not carried out due to time constraints, so we
have yet to perform an exhaustive evaluation of the system. The system should be useful in all
conditions, whether a driver’s head is moving around in the frame of the camera or whether a
vehicle is travelling on a bumpy road. The next step in improving the software would be to
determine the difference between the head dropping and the driver performing a regular
shoulder-check.
Obviously, the implementation of the system as a stand-alone unit is still in the early stages of
development. We know which components we would use and the set-up that would make it
possible to create a device that would easily fit into any vehicle.
REFERENCES

[1] National Highway Traffic Safety Administration, "Traffic Safety Facts 2004." U.S. Department
of Transportation, Washington, 2005.

[2] Beirness, D.; Simpson, H.M.; Desmond, K. "Drowsy Driving," The Road Safety Monitor 2004.
Traffic Injury Research Foundation, Feb. 2005.

[3] "Asleep at the wheel: One in five drivers nods off while driving, poll finds: TIRF."
Internet: http://www.insurance-canada.ca/market/canada/TIRF-sleep-driving-503.php [June 8, 2006].

[4] Federal Highway Administration, "PERCLOS: A Valid Psychophysiological Measure of Alertness
As Assessed by Psychomotor Vigilance." US Department of Transportation, Oct. 1998.

[5] Galley, N.; Schleicher, R. "Subjective and optomotoric indicators of driver drowsiness," 3rd
International Conference on Traffic and Transport Psychology, Nottingham, UK, 2004.

[6] Grace, R.; Byrne, V.E.; Bierman, D.M.; Legrand, J.-M.; Gricourt, D.; Davis, B.K.;
Staszewski, J.J.; Carnahan, B. "A drowsy driver detection system for heavy vehicles," Digital
Avionics Systems Conference, 1998. Proceedings, 17th DASC, The AIAA/IEEE/SAE, vol. 2,
pp. I36/1-I36/8, 31 Oct.-7 Nov. 1998.

[7] Patkus, B. L. "Protection from Light Damage," The Environment, Technical Leaflet 4, Section
2. Northeast Document Conservation Center, 1999.
Internet: http://www.nedcc.org/plam3/tleaf24.htm, July 18, 2005 [June 17, 2006].

[8] Volvo Cars of Canada Corporation, "New Volvo Driver Alert System to Assist Tired and
Inattentive Drivers."
Internet: http://www.volvocanada.com/NewsAndEvents/News.aspx?lng=2&NewsItemID=4#, November 30,
2005 [June 18, 2006].

[9] Collins, M. "New Volvo Innovation Aims to Keep Drivers Alert." Ford Motor Company.
Internet: http://media.ford.com/newsroom/feature_display.cfm?release=22138, Dec. 12, 2005
[June 11, 2006].

[10] Balasukumaran, Y.; Babu, R.D. "KLT (Kanade-Lucas-Tomasi) feature tracking algorithm in
embedded hardware." 2004.

[11] Fisher, R.; Perkins, S.; Walker, A.; Wolfart, E. "Hough Transformation."
Internet: http://www.homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm, October 13, 2003
[June 12, 2006].

[12] Storkey, A. "Hough Transform." Internet: http://www.anc.ed.ac.uk/~amos/hough.html, 2005
[June 12, 2006].

[13] "RGB Colour Model," Wikipedia Online Encyclopaedia, June 30, 2006.
Internet: http://en.wikipedia.org/wiki/RGB [July 16, 2006].

[14] Harrison, G. "Infra Red Webcam." Internet: http://www.hoagieshouse.com/IR/ [June 14, 2006].

[15] Fasel, I.; Fortenberry, B.; Movellan, J.R. "Generative Framework for Real-Time Object
Detection and Classification." Computer Vision and Image Understanding, 2004.

[16] Viola, P.; Jones, M. "Robust Real-Time Object Detection." Second International Workshop on
Statistical and Computational Theories of Vision - Modeling, Learning, Computing, and Sampling,
Vancouver, July 13, 2001.

[17] Viola, P.; Jones, M. "Fast Multi-view Face Detection." Mitsubishi Electric Research
Laboratories, July 2003.

[18] "Theoretical views of boosting and applications."
Internet: http://kiew.cs.uni-dortmund.de:8001/mlnet/instances/81d91e8d-dc15ed23e9 [1999].

[19] Axelson, J. "Parallel Port Tutorial - Part 1."
Internet: http://www.geocities.com/gear996/sub/parallel.html [July 4, 2006].

[20] Perla, H. "Parallel Port Programming (Part 2): with Visual C++."
Internet: http://electrosofts.com/parallel/parallelwin.html [July 4, 2006].

[21] Hewes, J. "Transistor Circuits." Internet: http://www.kpsec.freeuk.com/trancirc.htm, 2006
[July 8, 2006].

[22] "FindWindow Function," MSDN Library - Windows User Interface.
Internet: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winui/winui/windowsuserinterface/windowing/windows/windowreference/windowfunctions/findwindow.asp
[July 16, 2006].

[23] "WM_USER Notification," MSDN Library - Windows User Interface.
Internet: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winui/winui/windowsuserinterface/windowing/messagesandmessagequeues/messagesandmessagequeuesreference/messagesandmessagequeuesmessages/wm_user.asp
[July 16, 2006].

[24] Frantz, G.; Simar, R. "Comparing Fixed- and Floating-Point DSPs." Texas Instruments
Whitepapers, 2004. Internet: http://focus.ti.com/lit/ml/spry061/spry061.pdf [July 13, 2006].

[25] "Choosing a DSP Processor." Berkeley Design Technology, Inc., 2000.
Internet: http://www.bdti.com/articles/choose_2000.pdf [July 13, 2006].

[26] "C64x Fast Reference Schematic," Texas Instruments - SPRC137 TMS320C6414/C6415/C6416
Reference Design, March 2005.
Internet: http://focus.ti.com/docs/toolsw/folders/print/sprc137.html [July 14, 2006].

[27] "TMS320C6000 System Clock Circuit Example," Texas Instruments Application Report SPRA430A,
September 2001.

[28] Bell, D. "Reset Circuit for the TMS320C6000 DSP," Texas Instruments Application Report
SPRA431A, 1999.

[29] "DSP Power Management Reference Guide." Texas Instruments Application Notes, 2005.

[30] "Introduction to DSP." Bores Signal Processing, 2005.
Internet: http://www.bores.com/courses/intro/index.htm [July 15, 2006].

[31] "Code Generation Tools." Texas Instruments Incorporated, 2006.
Internet: http://focus.ti.com/dsp/docs/dspsupporto.tsp?sectionId=3&tabId=453 [July 15, 2006].

[32] "DSP Programming Guidelines." Numerix - Techniques for Optimizing C Code, 2000.
Internet: http://www.numerix-dsp.com/appsnotes/c_coding.pdf [July 15, 2006].

[33] "MT9D111 CMOS Image Sensor System-on-Chip," Micron Technologies Product Flyer, 2006.
Internet: http://download.micron.com/pdf/flyers/mt9d111_flyer.pdf [July 19, 2006].

[34] "Lens Selection and Suppliers." Micron Technology, Inc., 2006.
Internet: http://www.micron.com/innovations/imaging/lens [July 14, 2006].

[35] "CMOS Resolution and Formats." Micron Technology, Inc., 2006.
Internet: http://www.micron.com/innovations/imaging/formats [July 14, 2006].

[36] Tyson, J. "Controller."
Internet: http://entertainment.howstuffworks.com/playstation3.htm [June 22, 2006].

[37] Shi, J.; Tomasi, C. "Good Features to Track." IEEE Conference on Computer Vision and
Pattern Recognition, pp. 593-600, 1994.

[38] Starovoitov, V.V.; Samal, D.I.; Briliuk, D.V. "Three Approaches for Face Recognition." 6th
International Conference on Pattern Recognition and Image Analysis, October 21-26, 2002,
pp. 707-711.