AUVSI SUAS
Team Buzzed, Journal Paper
Georgia Institute of Technology
Faculty Advisor: David Moroniti
Date Submitted: May 28, 2014
This paper describes the undertaking of the Georgia Institute of Technology Team Buzzed
in the AUVSI SUAS competition. A systems engineering approach was used to understand
the competition requirements, derive capabilities, develop a total system solution, and
validate this system through testing. The vehicle is capable of autonomous flight and aerial
photography of a desired search area. The team developed a system for autonomous target
identification and recognition through custom ground control software. The team has
demonstrated through testing that this system can achieve all primary and desired
secondary tasks. By combining these individual systems and through intensive testing of
the collective system, Georgia Tech believes this entry to be a strong contender in the 2014
AUVSI SUAS competition.
TABLE OF CONTENTS
1. SYSTEMS ENGINEERING APPROACH
   1.1. Mission Requirements Analysis
   1.2. Design Rationale
   1.3. Programmatic Risks and Mitigation Methods
   1.4. Expected Performance
2. UAS DESIGN
   2.1. Aircraft
        2.1.1. Propulsion
        2.1.2. Planform Sizing
        2.1.3. Drag Analysis
        2.1.4. Stability and Control
        2.1.5. Manufacturing
        2.1.6. Modifications
   2.2. Method of Autonomy
        2.2.1. Data Link
        2.2.2. Ground System
   2.3. Imaging
        2.3.1. Camera
        2.3.2. Gimbal
        2.3.3. Video Transmission
   2.4. Target Recognition and Analysis
        2.4.1. Computer Vision
        2.4.2. Localization
3. TEST AND EVALUATION RESULTS
   3.1. Payload Systems
   3.2. Guidance
4. SAFETY CONSIDERATIONS AND APPROACH
TABLE OF FIGURES
Figure 1 – SUAS task requirement decomposition and system mapping
Figure 2 – Gantt chart showing planned execution of tasks. The integration milestone between imaging and the airborne system is marked by “Buzz”, Georgia Tech’s mascot.
Figure 3 – The Georgia Tech platform for SUAS, Buzzed
Figure 4 – Buzzed model in AVL
Figure 5 – Air-Drop release mechanism
Figure 6 – UAS ground and onboard systems data flow
Figure 7 – Piccolo Command Center user interface
Figure 8 – Air-Drop control GUI
Figure 9 – The team’s custom gimbal with the Sony FCB-EV7500
Figure 10 – Timeline of functions in the target recognition and analysis process
Figure 11 – Image filtering by principal colors
Figure 12 – Shape classification by binary edge detection and corner count
Figure 13 – Feature detection (top), matching (bottom left), and filtering (bottom right)
Figure 14 – Before and after of image color simplification
Figure 15 – Flight control station and Piccolo command station diagram
TABLE OF TABLES
Table 1 – Primary task failure-risk areas and mitigation methods
Table 2 – SUAS task completion scheduling risk assessment, adapted from MIL-STD-882C
Table 3 – Expected performance at SUAS
Table 4 – Parasitic drag breakdown
Table 5 – Stability and control properties of Buzzed
1. SYSTEMS ENGINEERING APPROACH
1.1 Mission Requirements Analysis
The rules specify two primary and eight secondary tasks for the UAS to perform.
Accomplishing all ten tasks scores the most points and maximizes success at the fly-off. However,
operating under the principle of resource scarcity, it is vital to prioritize tasks and allocate resources accordingly, including time, labor, and funding. The prioritization process was
particularly important to the Georgia Tech team as a first-time entrant in SUAS. To determine which
tasks to target, each one was decomposed into a set of requirements for the system to perform
based on the rules. In absence of competition experience, three criteria were used to evaluate the
results of the requirements analysis:
1. The amount of overlap in requirements between tasks
2. The number of requirements added per task
3. The complexity of any added requirements
Complexity was rated qualitatively based on the team’s research of past SUAS entries and existing
operational UAS. The tasks to perform were prioritized qualitatively based on a combination of the
three criteria above and an assessment of the team’s capability and background.
The requirements analysis method described above resulted in the framework seen in
Figure 1. The team first decomposed the Primary autonomy and search tasks into their requirements. The rules do not allow attempting Secondary tasks until the threshold Primary requirements are met, which committed the team to every Primary requirement except autonomous landing, regardless of difficulty. The team then derived a high-level architecture that can fulfill the system requirements, resulting in an autonomous UAS with imaging and computer vision (CV) capabilities and a ground station to provide a human-to-system interface.
With an architecture chosen to fulfill the Primary tasks, the Secondary tasks were
decomposed to determine additional derived requirements, their complexity, and how they map to
the high-level architecture, as seen in Figure 1. Many of the Secondary tasks overlap with the
Primary requirements, but only added requirements are graphically illustrated in Figure 1 for
simplicity. Dashed lines from tasks to requirements denote an optional addition. Tasks that only
required a single, relatively simple addition were prioritized for completion. More complex tasks
were targeted as optional, with their completion dependent on progress of simpler tasks. The
overall result of the mission requirements analysis was a tiered approach, with a progression in
task number and complexity as the system is developed. This approach enabled the team to focus
its efforts in a structured manner towards the completion of as many tasks as possible while taking
into account its limited experience, resources, and time.
Figure 1 – SUAS task requirement decomposition and system mapping
1.2. Design Rationale
The mission requirement analysis seen above showed that SUAS is substantially focused on
system integration. The selected architecture is a platform to complete as many functions as
possible; there is no preference for a specific shape or features as long as the architecture lends
itself well to integrating many sub-systems. This is exemplified by the variety of craft fielded at
SUAS, including fixed-wing airplanes, helicopters, and multi-rotors of all sizes. This functionally-oriented view led to the team’s goal of quickly fielding a working platform, spending more time on
task-related sub-system development and test than vehicle- or software-related design. This
rationale lends itself strongly to the integration of off-the-shelf systems. As a first-time entrant in
SUAS, the team also sought to simplify wherever possible. This led to choosing operational, “known-good”, and verified systems, including some commercial off-the-shelf (COTS) components for major elements of the high-level architecture:
1. Aircraft: an in-house built gas-powered medium-endurance aircraft, with many hard-points
and large payload capacity, which has already been flight-proven as a stable, reliable
platform for the airborne systems.
2. Autopilot: Cloud Cap Piccolo, a professional product with near plug-and-play capability, which serves as an integrated solution for nearly all autonomy tasks required by the competition.
3. Ground System: a combination of in-house built stations that have been flight-tested with
the chosen aircraft, and COTS stations to support the autopilot, camera, and gimbal, all
operated by trained personnel.
4. Computer Vision: a MATLAB-developed toolbox for shape and character recognition with
flexible format inputs and detailed online tutorials and support.
1.3. Programmatic Risks and Mitigation Methods
The programmatic risks in SUAS can be classified into schedule-related and failure-related
risks. Scheduling risks include overruns in the critical path and underestimating labor times,
manufacturing times, or shipping lead-times. Failure-related risks include hardware failures that
result in damage to the aircraft, camera, or instrumentation, and task-completion failures such as
being unable to classify a target. Any failures also tie back to the schedule and can cause significant
overruns, but their occurrence cannot be predicted and worked into the schedule a priori. The
approach is thus left to identify potential high-risk areas and have a detailed plan to mitigate them.
The major failure-risk identification and mitigation for Primary tasks is detailed in Table 1.
Although Secondary tasks also have risk, the Primary tasks pose the greatest programmatic risk
because failure in those areas precludes the team from attempting Secondary tasks. Failure-risk
mitigation via safety protocols is detailed in Section 4 – Table 1 is for programmatic risk mitigation.
Table 1 – Primary task failure-risk areas and mitigation methods
| Failure-Risk Area | Mitigation Method |
| Loss of aircraft function | Use a verified, stable, in-house built airframe with over 45 successful flights. |
| Loss of autopilot function | Use a small-UAS industry-leading autopilot system with built-in failsafe features. |
| Loss of gimbal/camera command and control | Pass the gimbal control signal through the autopilot system; use a reliable camera. |
| First-time aircraft-autopilot integration | Perform hardware-in-the-loop (HIL) testing, then use a small, inexpensive aircraft to test autopilot functions and reduce the consequence of failure. |
| Inability to classify >2 features per target | Use a commercially available, well-supported CV code with demonstrated similar use-cases. |
The failure-risk mitigation strategy echoes the design rationale detailed in Section 1.2: using
known-good operational systems or COTS components is a risk reduction method and reduces the
amount of uncertainty in the system performing reliably. The same rationale also deliberately
eliminated much of the design and development for the major systems, freeing time for sub-system
development and test and reducing scheduling risks substantially. As a consequence, the Gantt
chart shown in Figure 2 does not include any aircraft design and manufacture or major
programming efforts for the autopilot and CV systems. Instead, the chart only focuses on
completing the task-related requirements.
Figure 2 – Gantt chart showing planned execution of tasks. The integration milestone between
imaging and the airborne system is marked by “Buzz”, Georgia Tech’s mascot.
The Gantt chart follows the flow dictated by the mission requirement analysis in Section 1.1,
which prioritized Primary, then simple Secondary, and finally complex Secondary tasks. Figure 2
shows two scheduling critical paths, which map to the high-level architecture of the aircraft-autopilot and camera-CV systems. Underestimation in any phase of the schedule trickles delays
to tasks along the critical paths. This means that the tasks found later in the critical path become
more susceptible to schedule delays. However, the consequence of delays becomes smaller for
increasingly complex tasks because of how they were prioritized: the goal for the team as a first-year entrant is to complete as many tasks as possible, leaving more complex tasks for future years if
necessary. This risk management method is graphically displayed in the risk assessment matrix
seen in Table 2, where tasks of increasing complexity flow from top-left to bottom-right. This means
that the critical Primary tasks are the most likely to be completed, while less-critical Secondary
tasks are more prone to scheduling delay. In all, no tasks were at an imperative level, meaning the
risk management strategy is well-suited for the team’s objectives.
Table 2 – SUAS task completion scheduling risk assessment, adapted from MIL-STD-882C
(Columns: Susceptibility to Scheduling Delay; rows: Severity of Consequence)

| Severity of Consequence | Impossible | Improbable | Remote | Occasional | Probable | Frequent |
| Critical | 7.1 | 7.2 | | | | |
| Marginal | | 7.6, 7.9 | 7.8, 7.5 | 7.4 | | |
| Acceptable | | | | 7.10 | | |
| Negligible | | | | | 7.3 | 7.7 |

Risk Code / Action:
- Imperative to suppress to lower risk level
- Take action to mitigate, while balancing design goals
- Operation permissible
1.4. Expected Performance
The team’s tiered approach to targeting tasks has placed substantial effort on achieving the
Primary tasks. At the time of this writing, that effort has translated to progress in both the
autonomous flight and computer vision areas, giving the team a high degree of confidence in meeting at least the threshold requirements, if not the objectives, of the Primary tasks. The probability of attempting each of the ten tasks is listed in Table 3.
Table 3 – Expected performance at SUAS
| Task # | Task Name | Probability of Attempt |
| Primary | | |
| 7.1 | Autonomous Flight | High |
| 7.2 | Search Area | High |
| Secondary | | |
| 7.3 | Automatic Detection | Low |
| 7.4 | Actionable Intelligence | Medium |
| 7.5 | Off-Axis Target | High |
| 7.6 | Emergent Target | High |
| 7.7 | Remote Information Center | Low |
| 7.8 | Interoperability | High |
| 7.9 | Infrared Search | High |
| 7.10 | Air-Drop | Medium |
Many of the Secondary tasks overlap with the Primary ones and only have small, manageable
additions, increasing the team’s confidence in completing tasks 7.5, 7.6, 7.8, and 7.9. The Actionable
Intelligence (classifying all features) task depends on the progress of the Primary task, but enough
information is expected to be collected for a complete analysis if scheduling allows sufficient testing
time. The Air-Drop requires more hardware additions than most Secondary tasks, but the aircraft
used for the competition was originally designed to drop a payload on a target. With some
modifications, it is likely the team will attempt the task, even if logic for an autonomous drop is not
added. Finally, the Automatic Detection and Remote Information Center tasks call for more complex development, making them a lower priority for the team at SUAS.
2. UAS DESIGN
2.1. Aircraft
Buzzed is a blended wing aircraft using an H-tail configuration, tricycle landing gear, and a
gas-powered engine arranged in a tractor configuration. This design, pictured in Figure 3, has been
used by the Georgia Tech Design Build Fly team in two separate competitions, completing 45 total flights and proving to be a reliable design. Due to the nature of those competitions, Buzzed was designed for ample stability under high wing loading and an off-centerline center of gravity. Stability and reliability minimize risk factors caused by the aircraft and allow more focus to be put on the subsystems, making Buzzed an ideal candidate for the 2014
AUVSI-SUAS competition.
Figure 3 – The Georgia Tech platform for SUAS, Buzzed
2.1.1. Propulsion
To fully utilize the 40-minute flight time allotted in the demonstration period, a 0.46 cubic inch two-stroke gas engine was selected with a large enough fuel tank. Gas was chosen over electric components because of its high energy density, reducing weight at an equivalent flight endurance. To optimize the performance of this engine, multiple propellers were mounted on a static thrust stand to determine their thrust. The ideal propeller provides enough thrust for takeoff while operating at low RPM during cruise for better propulsion system efficiency. The results of testing indicated that an 11.5 x 6 propeller is the most efficient choice for the selected engine.
2.1.2. Planform Sizing
The blended-wing body design is more efficient than a conventional fuselage and provides flexibility in hard-point attachments. The large wing area of 11 ft² enables cruise at low speed, useful in reducing
motion blur for image capturing. Athena Vortex Lattice (AVL) was used to size the wing, empennage
and control surfaces. AVL is a code developed at MIT that calculates the aerodynamic
characteristics of an airplane by discretizing the wing into a vortex sheet along the span and
camber lines and applying boundary conditions at the wingtips and trailing edges. The Buzzed
configuration in AVL is seen in Figure 4. This virtual model was used to estimate stability and
control characteristics, which were also needed for initial gain tuning and flight simulation with the
autopilot.
Figure 4 – Buzzed model in AVL
2.1.3. Drag Analysis
A parasitic drag estimate was computed by summing each component’s drag contributions,
approximated using empirical estimation techniques in Hoerner’s Fluid Dynamic Drag, and then
normalizing each component according to the wing reference area. Table 4 shows the
contributions of the main aircraft components. The induced drag was estimated from AVL. The
blended wing design provides high efficiency and low drag to reduce fuel consumption during the
long-endurance mission.
Table 4 – Parasitic drag breakdown

| Part | Drag Coefficient | Percent of Total |
| Wing | 0.0147 | 64% |
| Landing Gear | 0.0050 | 22% |
| Horizontal Tail | 0.0025 | 11% |
| Vertical Tail | 0.0008 | 3% |
| Total | 0.023 | 100% |
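As a worked illustration of this buildup, the MATLAB sketch below normalizes assumed component drag areas by the wing reference area; the drag-area values are back-calculated from Table 4 for illustration and are not the team's actual Hoerner estimates.

% Parasitic drag buildup (illustrative sketch).
% f = assumed component drag areas (CD*S) from Hoerner-style estimates, ft^2.
S_ref = 11;                          % wing reference area, ft^2 (Section 2.1.2)
f = [0.162 0.055 0.028 0.009];       % wing, landing gear, horiz. tail, vert. tail
CD0_parts = f / S_ref;               % normalize each component by S_ref
CD0 = sum(CD0_parts);                % total parasitic drag coefficient (~0.023)
pct = 100 * CD0_parts / CD0;         % percent contributions (cf. Table 4)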
2.1.4. Stability and Control
To ensure that the aircraft can successfully complete the design mission, both static and
dynamic stability characteristics were computed in AVL. This information was combined with the
principal moments of inertia found in CAD to determine dynamic stability behavior using the full 6
DOF linearized, coupled differential equations found in Phillips’ Mechanics of Flight. The most
important static derivatives, deflections, and the static margin are seen on the left side of Table 5,
while the most important high-frequency dynamic mode behavior is seen on the right. The static
evaluation confirms that the aircraft is statically stable with 9.6% margin. The dynamic analysis
indicated that the aircraft is stable in all high-frequency modes, with damping ratios and
frequencies within the expected ranges for small unmanned vehicles. This stable platform lends
itself well to being stabilized and controlled by an autopilot.
Table 5 – Stability and control properties of Buzzed
Static Stability

| Category | Parameter | Value |
| Inputs | Wtotal (lbs) | 25 |
| | V (ft/s) | 50 |
| Aerodynamic Parameters | CL | 1.0 |
| | α (deg) | 7.7 |
| | β (deg) | 0.0 |
| Stiffness Coefficients | Cm,α (rad⁻¹) | -0.446 |
| | Cl,β (rad⁻¹) | -0.134 |
| | Cn,β (rad⁻¹) | 0.041 |
| Control | Cl,δa (deg⁻¹) | 0.035 |
| | δa (deg) | 0 |
| | Cm,δe (deg⁻¹) | -0.001 |
| | δe (deg) | -4.1 |
| Static Margin | % Chord | 9.6 |

Dynamic Stability

| Mode | Short-Period | Dutch Roll | Roll |
| Damping Rate (s⁻¹) | 2.297 | 0.324 | 2.777 |
| Time to Half (s) | 0.302 | 2.143 | 0.250 |
| Damping Ratio | 0.591 | 0.110 | – |
| Damped Freq. (s⁻¹) | 3.137 | 2.934 | – |
| Undamped Freq. (s⁻¹) | 3.888 | 2.957 | – |
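The dynamic-mode quantities in Table 5 are linked by standard second-order relations. As a check, the short-period column can be reproduced in MATLAB from the eigenvalue the table implies (the root itself is inferred from the table, not taken from the team's AVL output):

% Mode properties from an eigenvalue lambda = -sigma + i*omega_d (sketch).
lam    = -2.297 + 3.137i;     % short-period root implied by Table 5, s^-1
sigma  = -real(lam);          % damping rate, s^-1        -> 2.297
w_d    = imag(lam);           % damped frequency, s^-1    -> 3.137
w_n    = abs(lam);            % undamped frequency, s^-1  -> 3.888
zeta   = sigma / w_n;         % damping ratio             -> 0.591
t_half = log(2) / sigma;      % time to half amplitude, s -> 0.302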
2.1.5. Manufacturing
To decrease vibration from the engine and obtain a more stable image, the team used a Hyde Motor Mount, which manufacturer data indicate lowers vibrational amplitude by 70%. Experimental results indicated that a 1/8 inch poplar plywood nose
box was sufficient to withstand the torque and static thrust of the motor. This box was also used as
housing for the fuel tank and various electronics such as the servo motors controlling throttle and
the nose gear. The nose gear itself is also attached to the engine mount. The main landing gear was
fabricated from a solid piece of aluminum attached to the fuselage via screws. Landing gear
placement was dictated by CG position.
The empennage is attached using two carbon fiber tubes attached to the carbon fiber wing
spar. The H-tail has an elevator and a dual rudder system controlled by pushrods actuated by
servos. The control surfaces are attached using a socket style hinge to minimize drag.
A laser cutter was used to cut the aircraft structure from sheets of balsa wood and plywood,
ensuring accuracy between the CAD designs and the final aircraft. This is especially critical for ribs,
as twist introduced in construction can make the airplane difficult to control. The ribs were
specifically designed to fit together accurately like a jigsaw puzzle, allowing for repeatable and
accurate construction.
2.1.6. Modifications
As it stood, the Buzzed platform required several modifications to make it compatible with
the 2014 AUVSI-SUAS competition. A larger fuel tank was fitted to the aircraft to allow 40 minutes
of flight, eliminating the need to land for refueling. The camera gimbal detailed in Section 2.2.2 was
designed to attach to a pre-existing hard-point on the bottom of the aircraft, substantially reducing
the number of modifications needed. The wing was strengthened around the mount for the pitot
tube, which is required by the Piccolo autopilot system to calculate airspeed. The aircraft had
sufficient internal space for the remaining Piccolo components.
A drop mechanism built for a different competition was originally mounted at the gimbal hard-point. To attempt the Air-Drop mission, the drop mechanism was relocated to the wing. The laser-cut claw seen in Figure 5 was chosen to secure and release the payload. The claw is composed of two laser-cut arms whose bases act as gears driven by a small servo. The claw opens to 2.5 inches and can carry the required payload size and weight. Using simple components minimized weight and
moving parts in the wing.
Figure 5 – Air-Drop release mechanism
2.2. Method of Autonomy
To accomplish the autonomous flight primary task objectives, as well as several of the secondary task objectives, an autopilot flight system is required. The autopilot system must be capable of autonomous take-off, landing, flight, and waypoint tracking, and must respond to flight plan changes during flight while staying within a specified flying zone. The
autopilot system must also be reliable, as a failure in this system can cause a catastrophic crash. To
fulfill these requirements a Piccolo SL Autopilot System has been chosen. In addition to meeting
task requirements, members of Buzzed are familiar with the Piccolo software, reducing the time
required to incorporate the software into the aircraft.
The Piccolo SL was ideal for the competition as the autopilot system is small, lightweight
and has all the capabilities necessary to successfully complete the mission tasks. It uses a
combination of a GPS antenna, an inertial measurement unit (IMU), and pitot-tube pressure sensors to determine position, altitude, attitude, and airspeed. These inputs allow the autopilot system to fly
the aircraft autonomously.
2.2.1. Data Link
The many data links of the UAS are separated throughout the spectrum so that they do not
interfere with each other. In the unlikely event of one of these data links failing, the team
has established safety procedures to debug and re-establish connections.
2.2.1.1. Autopilot
The Main Data Link is a 900 MHz connection used for communication between the Piccolo Portable Ground Station (PGS) and the Piccolo SL Autopilot System.
This link is the main channel of communication between the Flight Control Station and the
Piccolo SL Autopilot System. The safety pilot transmitter and gimbal operations station also
connect with the Piccolo SL Autopilot System through this link for manual control of the
aircraft and gimbal respectively.
2.2.1.2. First-Person View (FPV) Camera
A forward-facing FPV camera is used as a safety measure in case of component failure.
If the autopilot system fails while the aircraft is out of sight, the forward facing camera can be used
to fly the plane manually. The video taken from the first person view camera is transferred over an
Immersion RC 5.8 GHz Video Transmitter. This transmitter was chosen for its ability to transmit
video over a wide range of frequencies with a power output of 600 mW and very little noise.
2.2.2. Ground System
The Ground Station will be composed of the Piccolo Portable Ground Station (PGS)
provided by Cloud Cap with the Piccolo SL system, along with several computer stations
running dedicated software based on the task given to the operator.
Figure 6 – UAS ground and onboard systems data flow
2.2.2.1. Flight Control System (FCS)
The operator of the FCS is tasked with monitoring and providing commands to the
Piccolo SL Autopilot System by interfacing with the Piccolo Command Center (PCC)
software installed on the Flight Control Computer (FCC). Through the PCC, the operator can
monitor altitude, airspeed, GPS position, heading and attitude while being able to command
the autopilot, set flight boundaries and limits, and provide or modify a flight plan during
flight. These are all important tasks as they will allow the team to fulfill primary and
secondary mission objectives. The FCC is linked to the Piccolo PGS through a serial
connection which in turn links to the Piccolo SL Autopilot through the Main Data Link.
The lost-communications waypoint safety feature is also set by the FCS operator through the PCC. If the FCC crashes, the operator has a backup FCC running in parallel and can simply plug the backup into the Piccolo PGS, download the most recent data from the Piccolo SL Autopilot, and resume operations while the main FCC is inspected and
rebooted. During each flight, telemetry data is automatically saved and can be accessed at a
later time.
Figure 7 – Piccolo Command Center user interface
2.2.2.2. Safety Pilot Station (SPS)
The safety pilot is tasked with manually controlling the aircraft in any situation or
circumstances where the autopilot cannot. The SPS is composed of the First Person View
(FPV) computer and the safety pilot transmitter. The FPV computer is directly linked to the
FPV system on board the aircraft through the FPV Video Link and is a safety feature added
in case the aircraft flies out of the line of sight of the safety pilot and they need to take
manual control of the aircraft. Manual control of the aircraft can be attained by the safety
pilot through a switch on his transmitter. The transmitter is directly linked to the Piccolo
PGS and communicates with the Piccolo SL through the Main Data Link.
2.2.2.3. Payload Operations System (POS)
The POS operator is in charge of the gimbal control and monitoring software. The POS is composed of the Gimbal Control Computer (GCC) and the Gimbal Remote Control (GRC). The GCC receives a live feed from the Sony block camera on board the
aircraft via the Gimbal Video Link. The GRC controls the gimbal movement through
software installed on the GCC. The GCC is connected to the payload pass-through port on
the Piccolo PGS and controls the gimbal through the Piccolo SL Autopilot.
2.2.2.4. Image Recognition Station (IRS)
The IRS operators are tasked with maintaining and monitoring the IRS. The IRS is
composed of several computers running, in parallel, image recognition software written by the team to find and identify all target characteristics autonomously.
2.2.2.5. Air-Drop Control (ADC)
During estimation of the payload drop location, the position and velocity of the aircraft are taken into consideration. External factors such as wind, the drag profile of the payload during the drop trajectory, and the rate at which the ground station receives aircraft state data are also taken into account.
The first objective of the mission model is to provide an estimate of the drop location. Assuming no external interference, basic physics gives the payload a free-fall trajectory, h = ½·g·t², so the time to fall from a release altitude h is t = √(2h/g).
From the estimated altitude, a computer model determines the time t required for the payload to reach the ground. Using this time, the model calculates the distance traveled along the ground; this distance is multiplied by a constant k to approximate the effects of drag on the payload during the drop trajectory. The ground station uses the data obtained from the on-board
sensors to create a GUI that indicates the current state of the aircraft and the desired drop location
of the payload. The desired drop location of the payload, recorded before the start of the mission, is
indicated by the red ‘X’. The blue ‘X’ indicates the estimated location of the aircraft. A black line
connects the last estimated location of the aircraft with the desired drop location. A black ‘X’
indicates the predicted drop location.
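The following MATLAB sketch illustrates the estimation described above; all input values, including the drag-correction constant k, are hypothetical examples rather than the team's tuned parameters.

% Drop-point estimation (illustrative sketch of the model above).
g = 32.2;                % gravitational acceleration, ft/s^2
h = 200;                 % release altitude AGL, ft (example value)
v = 50;                  % aircraft ground speed, ft/s (example value)
k = 0.9;                 % empirical drag correction constant (example value)
t = sqrt(2*h/g);         % free-fall time to the ground, s
d = k * v * t;           % ground distance traveled before impact, ft
psi = 45 * pi/180;       % aircraft heading, rad (example value)
dNorth = d * cos(psi);   % impact-point offset north of release point, ft
dEast  = d * sin(psi);   % impact-point offset east of release point, ft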
Figure 8 – Air-Drop control GUI
2.3. Imaging
2.3.1. Camera
For this competition, the camera must have enough pixel density or zoom to identify a letter on a sign, suppress the effects of aircraft vibration as much as possible, and transmit images quickly. The system must also be able to identify a heated sign in order to successfully complete the mission. High definition video cameras and Digital Single-Lens Reflex
(DSLR) cameras capable of both high pixel density and high magnification zoom are the two most
commonly used devices in unmanned aerial reconnaissance. DSLR cameras typically provide a very
high-resolution image, a large range of manual zoom capabilities, high shutter speeds, and a large
array of manual settings. Video cameras provide a real time, high definition stream with moderate
zoom capabilities. Both camera types are capable of image stabilization, which is critical for an
aerial camera. The main differences between the two types are that a DSLR is most efficient for still
images while an HD camera constantly streams video, sending constant data to the operator. This
difference enables the HD camera to stream nearly in real-time, allowing a faster reaction to the
aircraft’s environment. HD cameras also have a smaller form-factor and weigh less than DSLRs,
making them easier to accommodate on the aircraft. The real-time and form-factor advantages led
the team to select HD cameras over DSLRs.
A Sony FCB-EV7500 HD video camera was chosen due to its small size, low weight, high
quality imaging capabilities, and built in software features. The camera allows for near real-time
1920x1080p video streaming, 30x optical zoom, and advanced image stabilization, as well as built-in infrared capabilities for identifying heated objects.
quality imaging for the Search Area, Infrared, Emergent Target, and Actionable Intelligence tasks.
Ground tests have indicated very clear images even at a worst-case condition of full zoom with
external vibration present.
2.3.2. Gimbal
The gimbal is designed to provide 180 and 360 degrees of pitch and yaw freedom
respectively, creating a complete hemisphere of target tracking under the aircraft. Pitch and yaw
control is necessary because, after a target is identified, the Sony FCB-EV7500 will be zoomed in for better target recognition and will likely need to be reoriented toward the target as the aircraft moves. This control gives the team both a way to scan the search area without executing multiple flybys and a way to zoom in on the target to ease its identification.
The main constraints on the gimbal are that it must not interfere with takeoff and landing, and that it must be of minimal size and weight to reduce its negative effects on aircraft flight. Two
Futaba S3003 servos were selected to control the pitch and yaw of the gimbal based on their torque
and angular resolution, while a slip ring provides the gimbal with infinite rotation around the yaw
axis without tangling wires. To satisfy the gimbal constraints and securely fix the servos and
camera into place, the gimbal was custom designed and built utilizing precision laser cutting and
additive manufacturing capabilities. The gimbal was 3D printed in three separate sections, and
assembled to include steel ball bearings to create a mechanical slew ring. The gimbal achieves yaw
rotation via a servo that is geared with two laser-cut acrylic gears. The complete gimbal system can
be seen in Figure 9. The gimbal camera’s pointing direction can be interpolated from the
commanded position of the pitch servo, and from an encoder located on the upper surface of the
gimbal, providing yaw information.
Figure 9 – The team’s custom gimbal with the Sony FCB-EV7500
2.3.3. Video Transmission
Target acquisition video from the gimbaled camera is sent to the ground station using a
Microhard 2.4 GHz Wi-Fi radio. This model was chosen for its compact, lightweight design: it weighs 24 grams, provides 1 watt of RF output, and offers up to 12 Mbps of bandwidth. This model’s major feature is
its long-range capabilities, with a line of sight range up to 14 miles. Using Wi-Fi radio to transmit
video to the ground station is the fastest and most reliable way to send large packets of data over
large distances and will limit the interference caused by other teams. In order to stream the video
to the ground station with minimized lag, the system uses an Airborne Innovations H.264 video encoder board, which compresses 1080p video at up to 30 frames per second.
2.4. Target Recognition and Analysis
The recognition of the target and its analysis is a complex process that involves hardware,
software and human interaction. Figure 10 below displays the flow of the recognition and analysis
process with respect to time. The on-board hardware and its functions were described in Section
2.3, while the analysis functions are described in detail in the sections to follow.
Figure 10 – Timeline of functions in the target recognition and analysis process
Target recognition begins when the gimbal system operator captures an image of the target for classification analysis. When the image is captured, the onboard systems record the aircraft GPS position, altitude above ground level (AGL), heading angle, gimbal angles, and aircraft motion angles for localization and orientation analysis. These values are then wirelessly transmitted, along with the captured image, to the ground computational unit (GCU). The GCU consists of a directional receiver antenna to support long-range Wi-Fi
connections, and a portable computer with MATLAB code that performs all desired image
recognition and analyses. The results from the CV analysis are compared to a human’s visual
analysis of the image to confirm or deny the automated output, ensuring accurate recognition in
cases of false computer detections.
2.4.1. Computer Vision
The Computer Vision (CV) algorithm is a combination of functions that perform a detailed analysis of the input images to output the four visually-based features required for the Search Area, Actionable Intelligence, and Infrared tasks:
1. Alphanumeric character
2. Shape of platform background
3. Character color
4. Background color
Images acquired by the gimbal operator are stored in a folder on the GCU. The CV code then
automatically reads every new file and performs the analyses detailed below.
2.4.1.1. Image Filtering
Before any shape or letter recognition, the image is filtered to reduce background noise.
Good filtering substantially increases the success rate of the required recognition tasks. The
filtering function segments the input image into multiple images based on color strength. Color
distribution data are obtained from the image and used to split the main colors into separate
images. An example is shown in Figure 11 below. A baseline image is split into its 6 strongest
principal colors to give cleaner images, which are passed through to the next phases of analysis.
Figure 11 – Image filtering by principal colors
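The paper does not detail the filtering function's internals; one common way to split an image by principal colors is k-means clustering of pixel values, sketched here in MATLAB (Statistics Toolbox kmeans; the file names are hypothetical):

% Split an image into its k strongest principal colors (stand-in sketch).
img = imread('target.jpg');                      % hypothetical input file
px  = double(reshape(img, [], 3));               % pixels as N-by-3 RGB rows
k   = 6;                                         % number of principal colors
idx = kmeans(px, k);                             % cluster pixels by color
for c = 1:k
    mask  = reshape(idx == c, size(img,1), size(img,2));
    layer = img .* uint8(repmat(mask, [1 1 3])); % keep this color's pixels only
    imwrite(layer, sprintf('layer_%d.png', c));  % one cleaner image per color
end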
2.4.1.2. Shape Recognition
The filtered images are converted into binary (black and white) images, to which an edge
detecting function is applied. The edge detection is performed through neighboring pixel
comparison, where steep local changes in pixel values are classified as an edge. These edges can
create outlines of geometric shapes with a distinct number of corners. If the edges form an enclosed
space within angular and Cartesian limits, the algorithm counts the number of corners by finding
discontinuities in a spline fit of the edges. The corners which define the target shape are used to
crop the original image to a manageable size for color and letter recognition as well as better corner
recognition. The number of corners is used to classify shapes, since common polygons have a
distinct number of corners. An example of the shape detection process is shown in Figure 12 below.
Figure 12 – Shape classification by binary edge detection and corner count
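A compact MATLAB sketch of this pipeline follows; the function choices and corner-count mapping are illustrative, since the team's implementation is described only at this level:

% Shape classification by edge detection and corner count (sketch).
gray = rgb2gray(imread('layer_1.png'));        % hypothetical filtered input
bw   = im2bw(gray, graythresh(gray));          % binarize with Otsu threshold
E    = edge(double(bw), 'canny');              % binary edge map
pts  = corner(double(E), 'Harris');            % candidate corner locations
shapes = containers.Map({3, 4, 5, 6}, ...
         {'triangle', 'quadrilateral', 'pentagon', 'hexagon'});
n = size(pts, 1);                              % corners found on the outline
if isKey(shapes, n)
    shape = shapes(n);                         % common polygons have a
else                                           % distinct corner count
    shape = 'unclassified';
end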
2.4.1.3. Letter Recognition
A copy of the cropped image acquired from the shape recognition process is sent to the
letter recognition code. The target image will rarely be captured from directly overhead, so it is straightened for easier analysis. The angle between nearly horizontal straight-line vectors in the image and the image’s horizontal reference is computed and used to rotate the image until those vectors are horizontal. To recognize the letter, the filtered and rotated images are compared to the alphabet
templates one-by-one until there is a match as seen in Figure 13. The comparison is performed
using feature detection which extracts feature components of the template letter, calculates
their centers (seen as the green circles in Figure 13), then groups these feature components by
their coordinates.
Figure 13 – Feature detection (top), matching (bottom left), and filtering (bottom right)
The last step in the letter recognition process is to perform a feature match. The comparison produces an output file that contains the number of identical points. Some of the results may be inconsistent or erroneous, so a filter that erases points not following the overall pattern is applied. The template with the most feature matches to the image is chosen as the letter.
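The paper does not name the team's feature detector; the loop below sketches the template comparison with SURF features from MATLAB's Computer Vision System Toolbox (the detector choice and file paths are assumptions):

% Template-based letter matching (sketch).
img = imread('cropped_rotated.png');           % hypothetical input, assumed grayscale
p1  = detectSURFFeatures(img);
f1  = extractFeatures(img, p1);
best = ''; bestCount = 0;
for letter = 'A':'Z'
    tmpl  = imread(sprintf('templates/%c.png', letter)); % templates assumed grayscale
    p2    = detectSURFFeatures(tmpl);
    f2    = extractFeatures(tmpl, p2);
    pairs = matchFeatures(f1, f2);             % indices of identical points
    if size(pairs, 1) > bestCount              % keep the template with the
        bestCount = size(pairs, 1);            % most feature matches
        best = letter;
    end
end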
2.4.1.4. Color Recognition
A copy of the cropped image acquired from the shape recognition process is sent to the color recognition code. The cropped target image is initially read in Red-Green-Blue (RGB) space, simplified, and transformed into Hue-Saturation-Value (HSV) space, then compared to a color map and classified under a major color hue for both the shape and the letter.
To differentiate between the background color, the shape color, and the letter color, the algorithm uses the first pixel in the image as a background reference and compares it to every other pixel. The first pixel whose color differs from the background reference is saved as the shape color. When a later pixel differs from both the shape and background colors, it is saved as the letter color; the algorithm then ends and exports the shape and letter colors to an Excel file.
Figure 14 – Before and after of image color simplification
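A minimal sketch of hue-based classification in MATLAB follows; the hue bin edges and achromatic thresholds are illustrative and not the team's actual color map:

% Classify a pixel color by hue in HSV space (sketch).
rgb = [0.85 0.10 0.10];                      % example pixel, normalized RGB
hsv = rgb2hsv(rgb);                          % -> [hue, saturation, value]
if hsv(2) < 0.15                             % low saturation: achromatic
    if hsv(3) > 0.8,     color = 'white';
    elseif hsv(3) < 0.2, color = 'black';
    else                 color = 'gray';
    end
else
    edges = [1/12 3/12 5/12 7/12 9/12 11/12 1];      % upper hue bin boundaries
    names = {'red','yellow','green','cyan','blue','magenta','red'};
    color = names{find(hsv(1) <= edges, 1)};         % first bin containing the hue
end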
2.4.2. Localization
2.4.2.1. Position
In order to find the GPS coordinates of a ground target, the aircraft’s direction and position
must be obtained from the Piccolo software while the gimbal direction must be interpreted from
the servo commands. Using the orientation of the aircraft, as well as the direction the camera is
pointing, the relative direction of the image can be calculated. This relative direction, in
conjunction with the altitude of the aircraft, can be used to find a ground level distance to the target
from the aircraft. Combining the ground distance between the aircraft and the target with the GPS
coordinates of the aircraft yields the position of the target.
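A flat-earth MATLAB sketch of this geolocation is given below; all state values are examples and the unit conversion is approximate:

% Target geolocation from aircraft state and camera pointing (sketch).
lat0 = 34.000; lon0 = -116.000;  % aircraft GPS position, deg (example)
h      = 300;                    % altitude AGL, ft (example)
psi    = 120;                    % aircraft heading, deg (example)
yawG   = -25;                    % gimbal yaw relative to the nose, deg (example)
pitchG = 55;                     % gimbal depression below horizontal, deg (example)
d       = h / tand(pitchG);      % ground distance to the target, ft
bearing = psi + yawG;            % absolute bearing to the target, deg
dNorth  = d * cosd(bearing);     % target offset north of the aircraft, ft
dEast   = d * sind(bearing);     % target offset east of the aircraft, ft
ft2deg  = 1 / 364000;            % approx. degrees of latitude per foot
latT = lat0 + dNorth * ft2deg;                 % target latitude, deg
lonT = lon0 + dEast * ft2deg / cosd(lat0);     % target longitude, deg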
2.4.2.2. Orientation
To determine image orientation, two data sets are required: a magnetometer reading at the time the image was taken, and the angle of rotation for the image calculated in Section 2.4.1.3. The magnetometer reading gives the deflection of the aircraft’s heading vector from magnetic north. The image rotation angle gives the deflection from the aircraft’s heading vector to the target’s baseline. The sum of these deflections is the total angle from north to the heading of the target image. This result is rounded to the nearest 45 degrees; since the cardinal and intercardinal directions divide the compass into 45-degree segments, the segment containing the rounded angle identifies the direction, which is then output.
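In MATLAB, this rounding reduces to a small calculation (the heading and rotation values below are examples):

% Nearest cardinal/intercardinal direction from summed deflections (sketch).
magHeading  = 80;                            % deg from magnetic north (example)
imgRotation = 30;                            % deg, from Section 2.4.1.3 (example)
dirs  = {'N','NE','E','SE','S','SW','W','NW'};
total = mod(magHeading + imgRotation, 360);  % total deflection, deg
idx   = mod(round(total / 45), 8) + 1;       % nearest 45-degree segment
targetDir = dirs{idx};                       % -> 'E' for this example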
3. TEST AND EVALUATION RESULTS
3.1. Payload Systems
Based on the results of extensive testing of the image recognition software and gimbal system operations, as well as past experience with the Buzzed platform, all primary objectives and desired secondary objectives will likely be met.
Image recognition has provided promising results by successfully identifying letters,
shapes, and colors on different backgrounds. The threshold requirements for the search area tasks
have been met and the next phase of development will be to incorporate the gimbal angles,
computer vision, and aircraft orientation to derive the sign locations. The selected camera has also
successfully demonstrated its ability to locate a heated target using its infrared capabilities. Based
on the camera’s tested imaging capabilities and the gimbal’s controllability, the emergent target is
likely to be found.
3.2. Guidance
The process to achieve autonomous flight was an extensive one. First, an Athena Vortex Lattice (AVL) model of the aircraft was developed and a software simulation performed. The software simulation and AVL model served to establish a baseline for the autopilot control gains and coefficient values. The next step was a hardware-in-the-loop simulation, for which the Piccolo SL Autopilot System and avionics had to be integrated into an aircraft. The Skyhunter FPV UAV platform was used for preliminary testing and training, as it has a similar set of control surfaces to the competition aircraft and its frame would protect the avionics in case of an emergency. The hardware simulation served to validate the baseline and further tune the system before a test flight was performed. Finally, a test flight was performed in a safe area, closely following a pre-established flight plan to test the Piccolo SL Autopilot System.
Figure 15 – Flight control station and Piccolo command station diagram
The same process was followed to integrate the Piccolo SL Autopilot System into the
competition aircraft. Simple maneuvers were performed during the initial test flights while tuning the control gains; subsequent test flights involved more complex maneuvers, such as autonomous take-off and a closed search-ladder circuit within a set boundary that involved waypoints at different altitudes.
4. SAFETY CONSIDERATIONS AND APPROACH
A successful safety plan must be thorough, consistent, and practiced extensively without
deviation. Buzzed’s safety plan includes elaborate pre-flight and post-flight checklists, battery
monitoring, and crew familiarity with the aircraft. The flight crew is highly experienced with operating the aircraft, having flown over 45 flights following the same strict safety plan on each
flight. The batteries are brightly colored for quick identification in the unlikely event of a crash. A
system is also in place to keep track of battery life by monitoring charges and the general health of
the batteries. Buzzed has multiple redundant systems in place to minimize the likelihood of a single
failure to cause mission deviation. Although AUVSI-SAUS competition flight is autonomous, a safety
pilot has the ability to override the autopilot system at any time and the authority to do so if he or
another crewmember finds the need to do so.
| Risk | Probability of Occurrence | Consequence |
| Ground Station | | |
| Piccolo Software Crash | Moderate – Software occasionally crashes. | Low – Reboot Piccolo software; it automatically reconnects to the aircraft. |
| Computer Crash | Improbable – Computers show no issues with software. | Low – Backup computers running essential flight software are present in case of failure. |
| Primary Comm Loss > 10 sec | Improbable – Comms tested at competition ranges. | Moderate – Vehicle shall automatically return home. |
| Primary Comm Loss > 3 min | Improbable – Comms tested at competition ranges. | Moderate – Terminate flight via an autonomous landing. |
| Loss of Payload Comms | Improbable – Comms tested at competition ranges. | Moderate – Check ground station antenna angle; restart modems and receiver. |
| Power Failure at Tent | Improbable | Low – All computers and ground station components have two hours of battery life. |
| Safety Pilot Loses Sight of Aircraft | Moderate – Distance and orientation can reduce visibility. | Low – Pilot utilizes FPV system to return to field; if FPV fails, pilot commands autonomous return home. |
| Aircraft | | |
| Power Failure on Piccolo | Improbable – Battery condition checked at every charge. | Severe – Aircraft systems failure. |
| Motor Failure | Improbable – Motor tested and used often before flight. | Severe – Attempt manual unpowered landing. |
| Camera Failure | Improbable – Cameras were extensively tested before flight. | Moderate – Reboot camera software; if unable, use FPV to return home. |
| Gimbal Failure | Improbable – Structure and servos were extensively tested before flight. | Moderate – Use FPV to return home. |
| Loss of Control Surface | Improbable – Preflight and postflight checks of vital aircraft components. | Moderate – Redundant control surfaces. |
| Subsystem Battery Failure | Improbable – Battery life has been meticulously tracked during normal usage. | Severe – Terminate flight via autonomous landing. |