University of California, San Diego
Technical Paper
Faculty Advisor: Dr. Ryan Kastner
Project Manager: Tim Wheeler
Hardware Lead: Paul Vinh Phan
Software Lead: Lewis Anderson
Safety Pilot: Sarah Lohman
Team:
Justin Chang, Elioth Fraijo, Zack Grannan, Steven Kestler
Eric Lo, Bryan Ritoper, Jordan Sendar
May 2012
Abstract
This paper describes the design and testing of the UC San Diego Falco Team's Unmanned Aerial
System (UAS) for the 2012 AUVSI student UAS competition. The team's "Peppy" airframe is
a Sig Rascal 110 fixed wing aircraft modified with an electric motor and tricycle landing gear.
Team Falco stressed performance, safety, and reliability for the 2012 competition, modifying the
2011 flight vehicle for ease of access, transportability, payload arrangement, and reduced setup
time. The Peppy UAS provides slow, steady flight with a capable payload size, featuring a fully
functional autopilot, live video feed, and automated gimbal. The ground station provides automatic image processing and target identification, as well as gimbal targeting control and automatic
pathing. The aircraft is controlled autonomously with in-flight navigation adjustment and an R/C
safety hard cutover for maximum safety and reliability. The design and development of the UAS
are described in three components: design philosophy and rationale, individual component design,
and system testing. The Peppy UAS is a complete system capable of providing the Intelligence,
Surveillance, and Reconnaissance (ISR) required.
Contents
1 Introduction                                      2
  1.1 Mission Requirements and Goals                2
  1.2 Team Falco                                    2
  1.3 Brief System Overview                         2
  1.4 Competition Preview                           3

2 Flight Systems Design and Overview                4
  2.1 Airframe                                      4
  2.2 Autopilot                                     7

3 Payload Design and Overview                      10
  3.1 Overview                                     10
  3.2 Cameras                                      10
  3.3 On Board Processing                          11
  3.4 Gimbal                                       12
  3.5 Payload Communications                       13

4 Ground Systems Design and Overview               13
  4.1 Graphical User Interface Software            13
  4.2 Image processing                             14
  4.3 Manual Target Review                         17
  4.4 Imaging Station to Autopilot Interface       17

5 Testing, Performance and Safety                  17
  5.1 Individual Systems Tests                     17
  5.2 Full Systems Tests                           18

6 Acknowledgments                                  19
  6.1 Sponsors                                     19
  6.2 Faculty and Staff                            19
1 Introduction

1.1 Mission Requirements and Goals
AUVSI's 2012 SUAS competition, sponsored by the Seafarer chapter, simulates a real-world mission in which a UAS autonomously gathers intelligence for a military platoon. Following a 40-minute setup time, the vehicle must autonomously take off, navigate through a preset waypoint route, locate an unknown quantity of static targets, and land. The UAS must be capable of providing continuous imagery, dynamic retasking, and safe navigation within constrained airspace. Total mission time is not to exceed 40 minutes.
The mission is judged on successful target location and identification, degree of autonomy, actionable intelligence, and overall execution. Safety is of the highest priority. The optional radio beacon challenge will not be attempted.
The goal for 2012 was to improve upon the UAS used in the previous year. The 2012 UAS was
designed to have the capability to complete all of the mission objectives while maximizing autonomy,
reliability, and safety. To accomplish these goals the team maintained and furthered its extensive
software suite while retrofitting its airframe for improved transportability, payload access, and ease
of use.
1.2 Team Falco
The University of California San Diego’s AUVSI SUAS competition team is a multidisciplinary
project with members from the Aerospace, Electrical, Mechanical, and Computer Engineering fields.
The team is led by Tim Wheeler, also head of autopilot integration. Lewis Anderson leads the
computer science portion of the team which includes Zack Grannan, Eric Lo, and Jordan Sendar.
Hardware, comprising Bryan Ritoper, Steven Kestler, Justin Chang, and Elioth Fraijo, is led by Vinh Phan. Sarah Lohman is the acting safety pilot. Dr. Ryan Kastner of the UCSD Computer Science
and Engineering department acts as faculty advisor.
1.3 Brief System Overview
UCSD’s UAS is made up of three subsystems: flight systems, payloads, and ground systems.
The flight systems include the modified Sig Rascal 110 airframe and the Kestrel autopilot which
is responsible for the autonomous operation of the UAS. The autopilot is managed via a ground
station laptop, providing a direct means of viewing aircraft diagnostics and granting human-in-the-loop control. Payloads comprise a gimballed camera, a gimbal microcontroller, a power supply, a receiver multiplexer, and two transmitters.
Imagery is acquired using a single Sony FCB-EX11D block camera that shoots interlaced video at up to 530 TV lines of resolution with automatic focus, automatic white balance, and serial-controlled zoom. The gimbal for the camera was designed and fabricated by the team to fit the mission specifications. It is capable of ±90° pitch and ±45° roll with a resolution of 0.12°. All payload components, including the autopilot,
are contained in their own housings on the airplane with standard interconnects for power and
communications for ease of maintenance. All payloads are assembled together into a payload unit
for easy installation and retrieval from the airframe. Payloads were arranged with maintainability in mind: less reliable components were placed for easier access than more reliable ones, reducing maintenance time.
Figure 1: The UCSD “Peppy” Payload System Overview
The ground systems include the software and hardware used to process video, identify targets,
and control the gimballed camera on the airplane. The image station includes a video capture card
that collects the video and feeds it into a graphics processing unit (GPU) to automatically locate
and identify targets. The image station also receives all telemetry data necessary to geo-reference
the pixels in each video frame so that the GPS coordinates of the targets can be determined. While
targets are identified autonomously, a remote client application allows for efficient human verification.
1.4 Competition Preview
The UCSD UAS has been retrofitted both for performance and for ease of use. Hull-mounted power switches, transmission-free GPS lock verification, an easy-maintenance payload unit, and a compartmentalized payload configuration allow for quick troubleshooting if any payload components are not functioning properly. All systems are checked via a preflight procedure before takeoff. The RC safety
pilot acts as pilot-in-command and has the final say on aircraft airworthiness.
A powerful computer with dual 21.5" monitors for imagery and a laptop for the autopilot have separate
communications channels to the aircraft which are checked before each flight. The autopilot ground
station is used to program the takeoff procedure and the initial navigation route. The imaging station
is fully linked with the autopilot computer and a sophisticated pathing algorithm uploads the search
waypoints through the autopilot ground station to the UAS. Finally, the image station computer
begins video processing and handles control over the camera state so that the lens is protected
during takeoff.
The project manager will go through a final safety checklist with the team to verify the go/no-go criteria. Upon its successful completion, the UAS will be readied for its mission by arming the flight motor. The autopilot ground station gives the command to take off. After an autonomous takeoff, the aircraft transitions into a loiter while the team performs diagnostics, ensuring control
and communications are satisfactory for mission completion. The image station then instructs the
camera gimbal to operate in stabilization mode, where the gimbal’s position compensates for the
airplane’s attitude. Simultaneously, the autopilot is commanded to begin waypoint navigation. At
this point the mission is fully underway. As the UAV navigates through its waypoints, the video
camera wirelessly streams video in real time and receives synchronized telemetry through the autopilot
station. Video passes through an image processing algorithm running on the image station that
automatically returns the target parameters. When a target is located, the camera will be fixed on
the target’s GPS point and then zoomed in to provide a high resolution image for better identification.
Targets identified by the algorithm are forwarded to a remote client application allowing for human
verification before final target approval.
The UAS is fully capable of changing its search pattern in flight. A new pathfinding algorithm
keeps track of which areas have already been viewed by the aircraft and directs it to unsearched
areas. Once all targets are found, the operator commands the UAS to land. The motor is disarmed,
transmission is terminated, and all data is handed in, thus completing the mission.
2 Flight Systems Design and Overview

2.1 Airframe

2.1.1 Introduction
The UCSD Peppy is a modified SIG Rascal 110 tractor-propeller, fixed-wing airframe. It has a wingspan of 8.8 ft and a length of 6.1 ft. The SIG Rascal was modified to incorporate an electric
propulsion system, tricycle landing gear with steerable nose gear, and a removable tail. The aircraft
has a minimum takeoff weight of 15.5 lbs (MINTOW), a maximum takeoff weight of 30 lbs (MTOW),
and a nominal mission takeoff weight of 23 lbs (NMTOW). At NMTOW the airframe has a stall speed
of 23 mph and a cruise speed of 32 mph with a total flight endurance of 30 minutes.
2.1.2 Design Methodology
The 2012 UCSD team required an off-the-shelf airframe that would be able to compete successfully in the AUVSI competition. Portability, payload accessibility, and mild flight characteristics were identified as key requirements from past competition experience. The SIG Rascal airframe was modified to have a removable tail, reducing overall shipping volume as well as ensuring that the airframe would fit in a small station wagon for transportation to and from flight tests. Payload accessibility and reliability concerns were addressed by creating a removable "payload unit" which houses all payload components, thus minimizing the total payload volume needed while increasing ease of access. Peppy's flight characteristics derive from modifying a known airframe for predictable results.
Figure 2: From top right CW: UCSD Sig Rascal 110, Raytheon Cobra, ACR Manta, Arcturus T-15, Optimum
Solutions Condor (center)
Parameter         Units    UCSD Sig 110   Manta   Cobra   Condor       T-15E
Wing Span         Feet     8.8            8.8     10.2    10.6         10.8
Length            Feet     6.1            6.2     9.3     6.2          6
Propulsion        Type     Electric       Gas     Gas     2x Electric  Electric
MTOW              Pounds   16             61      >100    40           45
Payload weight    Pounds   11             15      45      13           10
Payload volume    cu-in    756            776     2600    <1000        800
Cruise velocity   Knots    28-40          39-90   50-60   49           50

Table 1: Key flight parameters of competitors listed for comparison
2.1.3 Competitive Assessment
The Sig Rascal 110 must be competitive for the AUVSI competition, but its design must also be
justified with a competitive assessment of the current UAV airframe market. Competition included
the ACR Manta, Raytheon Cobra, Arcturus T-15E, and the Optimum Solutions Condor 300. These
aircraft had capabilities similar to requirements set forth by the team, but were inadequate for one
reason or another. The commercial UAV airframes were far too costly with only the T-15E being
electric powered. All four airframes required large shipping crates and a station wagon or SUV for
short-haul transportation. The T-15E would need landing gear for AUVSI operations. The Sig Rascal 110 had previously been used by the UCSD team and modified with electric propulsion and tricycle gear, but it still lacked payload volume and easy payload accessibility. In order to be competitive, the airframe needed to be easier to transport, offer a reasonable payload volume and weight, and maximize structural and flight efficiency. Specifications for the Sig Rascal 110 and its competitors are listed in Table 1.
2.1.4 Design
The Sig Rascal 110 airframe "Peppy" was the basis of the custom "Falco" airframes used by UCSD's team for the competitions in 2009 and 2010. Both the Falco and its backup counterpart were lost in testing in 2011. Using the Sig Rascal as the competition airframe last year and as the primary airframe for the 2012 competition has highlighted the airframe's many strong points and has allowed the team to remedy its inherent shortcomings.
As listed previously, the strengths for which the airframe was chosen are its desirable cruise velocity, flight duration, handling characteristics, and - above all - its proven track record. The
handling qualities of the Sig Rascal are well-liked by the team’s safety pilot and the wooden airframe
has been easy for the Hardware team to modify and repair.
With its wing placement, the center of gravity is kept below the wing while the entire upper
surface of the fuselage can be opened up for easy access to payloads. The wing is a triple-tapered
design which mimics the efficiency of an elliptical planform while maintaining the manufacturability
and stall characteristics of a rectangular or single/double tapered wing.
Peppy’s tricycle landing gear layout provides good ground stability for the autonomous takeoff
and landing of the aircraft. This layout also protects the camera gimbal and propeller from impact
during landing. The nose and main gear can each be removed using a single Allen key in order to reduce shipping volume.
Figure 3: XFLR5 allowed the aerodynamic design to be refined with relative ease

Figure 4: A comparison of the old gear box (top) to the new gear box (bottom)
The electric propulsion system is a new, more powerful version of the previous system. New mounting hardware was made to accommodate a larger 6 mm shafted gearbox, as seen in Figure 4. Propulsion is provided by a Neutronics Corporation 1515-2y motor with a 6.7:1 gearbox controlled by a Castle Creations Phoenix HV-85 speed controller. The aircraft uses a standard APC propeller. A total of twelve 5500 mAh lithium-polymer battery cells in series provide the aircraft with a nominal 44.4 volts and a maximum installed current draw of 65 amps. The propulsion system has been shown in flight testing to allow for missions of over 30 minutes. A summary of all aircraft parameters is provided in Table 2.
Parameter                              Units            Specifications
Wing Span                              Inches           106.0
Length                                 Inches           73.0
Height                                 Inches           25.0
Wing Area                              Square Inches    1429.6
Aspect Ratio                           -                7.86
Usable Payload Volume                  Cubic Inches     756
Maximum Payload Weight                 Pounds           11.0
Structural Weight                      Pounds           12.0
Empty Takeoff Weight                   Pounds           15.5
Nominal Mission Takeoff Weight         Pounds           22-26
Maximum Takeoff Weight                 Pounds           30.0
Load Factor at MTOW                    Gs               ±4
Flight Duration at NMTW                Minutes          30.0
Stall, Cruise, Max Velocity at NMTW    Knots            22.6, 28-40, 55
Propulsion                             Neu 1515-1.5y, 6.7:1 Gearbox, 20x11 APC Prop
Power                                  2x Neu 5500 6-cell Lithium Polymer Packs

Table 2: Peppy Specifications
2.2 Autopilot

2.2.1 Procerus Kestrel Autopilot
One of the primary mission objectives given for this competition is aircraft autonomy. An
autopilot system must satisfy a diverse set of requirements to safely and successfully accomplish a
complex flight mission without human intervention, most importantly:
• Robust autonomous control for flight stability
• Autonomous waypoint navigation
• Autonomous takeoff and landing capabilities
• Reprogrammable mission
Hardware considerations include the following:
• Physical dimensions and ease of integration with a custom-built airframe
• Weight
• Cost effectiveness
• Robust integrated sensor suite
• Expansion capability to communicate with serial devices
Table 3 compares several current autopilot options.
Name                       Cost (US $)   Weight (g)   Footprint (mm)    Servo Outputs   Serial Ports
APM 2                      199           40           65x40x10          8               2
Attopilot 3                3075          35           34x34x21          4               1
Gluonpilot                 285           22           59x46x10          8               3
Kestrel                    5000          34           50.8x34.8x11.9    12              4
Micropilot MP2128g         6500          28           100x40x15         8               2
OpenPilot CopterControl    130           20.1         36x36x15.5        6               3
Paparazzi Lisa M           290           25.8         60x34x10          6               3
Piccolo SL                 -             110          130x59x19         14              -

Table 3: Comparison of candidate autopilot systems
Since none of the other autopilots appear to offer significant improvements in functionality, the Kestrel autopilot was again selected for this year's competition.
The Kestrel's sensor suite includes GPS, an IMU, pressure sensors, an altimeter, and a magnetometer, providing full telemetry for the system. The chip, which weighs only 34 grams, carries out communications at 900 MHz. These features, used in combination with the rich Virtual Cockpit user interface, make this full-featured micro autopilot one of the best commercially available solutions.
The autopilot hardware uses a suite of sensors to provide the necessary data for controlling
the aircraft, including a magnetometer, three gyroscopes, three accelerometers, dynamic and static
pressure ports and a GPS receiver. The Kestrel uses an onboard 8-bit 29MHz processor to handle
all sensor data and run the code necessary for controlling the aircraft via servo ports on the Kestrel
unit. The Kestrel uses a Maxstream 9XTend 900MHz 1W wireless modem to communicate with the
ground station; this modem connects to the bottom face of the autopilot's PCB through the modem header. The autopilot is housed in an open aluminum bed for easy access.
Figure 5: The Kestrel autopilot.

Figure 6: The autopilot station showing an overlay of a field where extensive testing was conducted.

2.2.2 Virtual Cockpit Ground Station
The ground station for the Kestrel autopilot consists of Virtual Cockpit (VC) software running
on a laptop and connected to a Commbox via RS232. The Commbox in turn communicates with the
autopilot via a 900MHz Maxstream wireless modem. The Commbox and laptop each have internal
batteries, allowing for their use in the field without external power. A standard R/C transmitter
interfaces with the Commbox through a trainer cable, allowing manual control of the aircraft through
the Kestrel.
The VC allows the user to configure the Kestrel and safely operate the UAS during autonomous
and semi-autonomous flight. A flight profile corresponding to the specific servo arrangement on the
aircraft was created and trimmed through the user interface. The VC supports sensor calibration,
both for the magnetometer and pitot tubes to compensate for hysteresis and variation due to the
particular sensor arrangement. The VC also features a PID gain window, used to tune all control
loops for safe and reliable UAS control.
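Each loop tuned in this window has the standard PID form; the one-step sketch below is illustrative only (the Kestrel's internal loop structure is not published in this paper, and the gain and state names are invented):

    def pid_step(err, dt, state, kp, ki, kd):
        """One update of a textbook PID loop; state = (integral, prev_err)."""
        integral, prev_err = state
        integral += err * dt                 # accumulate integral term
        deriv = (err - prev_err) / dt        # finite-difference derivative
        out = kp * err + ki * integral + kd * deriv
        return out, (integral, err)          # command and updated state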
The VC is primarily a front end for interacting with the autopilot. Waypoint navigation can
be achieved by editing a flight route over a georeferenced image and uploading it to the autopilot.
A heads-up display with relevant aircraft parameters and modes is present in the main window of the VC at all times for the benefit of ground station personnel. Telemetry data for each flight is automatically stored by the software for post-flight analysis and for replaying the flight in the VC.
A custom software application runs in the background on the Virtual Cockpit Ground Station
and automatically synchronizes with the VC. All telemetry information obtained from the autopilot
is automatically forwarded by the software to the Imaging Station through RS232. This gives the imaging station direct telemetry information and, by reusing pre-existing telemetry channels, eliminates the need for an extra radio link for the gimbal microprocessor. The software link is bidirectional, allowing the imaging ground station to send commands to the VC, and hence the autopilot. This enables the imaging station's pathing algorithm to automatically update the plane's flight path without tedious manual input from the user.
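As a rough illustration of such a bidirectional serial relay, a minimal pyserial-based sketch follows; the port names, baud rate, and raw byte-level framing are assumptions, not the team's actual protocol:

    import serial  # pyserial

    def forward_telemetry(vc_port="/dev/ttyUSB0", img_port="/dev/ttyUSB1",
                          baud=57600):
        """Relay telemetry bytes from the VC side to the imaging station,
        and relay imaging-station commands back, over two RS232 links."""
        with serial.Serial(vc_port, baud, timeout=0.05) as vc, \
             serial.Serial(img_port, baud, timeout=0.05) as img:
            while True:
                data = vc.read(1024)   # telemetry arriving from the autopilot
                if data:
                    img.write(data)
                cmd = img.read(1024)   # commands from the imaging station
                if cmd:
                    vc.write(cmd)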
2.2.3 Software in Loop Simulation
After the 2011 competition there were a few issues with the autopilot that needed to be addressed. As in previous years, university concerns about performing autonomous flights in the San Diego area have limited fully autonomous flight testing.
In order to deal with this challenge a software in loop simulation was developed to interface the
Virtual Cockpit software with the RealFlight 4.5 simulator. The team created an accurate virtual
airplane model, with the same mass properties and aerodynamic characteristics as the UAS, and
carried out testing in the RealFlight 4.5 simulation environment. From there the PID gains were
further tuned in the virtual environment. The simulation environment also allowed the team to
practice using the autopilot in order to be fully prepared to perform an autonomous flight.
This year the software in loop simulation environment was further developed to support hardware in loop testing. This allowed the team to closely observe the fully assembled UAS as all flight systems, minus throttle, performed virtual autonomous flight. By adding hardware into the loop,
simulation became as close as possible to autonomous flight testing without having to take the
associated risks.
2.2.4 Autopilot Safety Considerations
The autopilot is one of the most critical components on the airframe, and as such, several safety measures have been put into place. The first is an 8-channel receiver multiplexer (RxMux) which assigns control of the plane's servos between the autopilot and the RC safety pilot (Figure 7). With it, the RC safety pilot can take immediate control over the airframe at any point during the flight. The RxMux is a critical hub in the aircraft; its failure would mean complete loss of control of the plane. The RxMux was replaced and its wiring was redone to allow for safe and easy-to-troubleshoot connections while preventing popped wires, which had been a problem in the past.
Figure 7: New RxMux board with servo header pins
The second major safety consideration is the autopilot failsafe. This governs the autopilot's behavior should communication with the ground station be lost. The autopilot failsafes are set up in compliance with the rules set by AUVSI for the competition. After 30 seconds of communication loss the autopilot will return to the takeoff location and loiter; after three minutes without communications the autopilot will put the airframe into a spiral dive. Additional failsafes, including loss of GPS and a low-battery warning, are also enabled.
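A minimal sketch of the communication-loss timing described above follows; this is a simplified illustration, not Procerus firmware, though the thresholds are those quoted in the text:

    import time

    RTH_AFTER_S = 30         # return to takeoff location and loiter
    TERMINATE_AFTER_S = 180  # spiral dive after three minutes

    def failsafe_action(last_packet_time, now=None):
        """Map time since the last ground-station packet to an action."""
        now = time.monotonic() if now is None else now
        silent = now - last_packet_time
        if silent >= TERMINATE_AFTER_S:
            return "SPIRAL_DIVE"        # flight termination per the rules
        if silent >= RTH_AFTER_S:
            return "RETURN_AND_LOITER"
        return "CONTINUE_MISSION"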
3 Payload Design and Overview

3.1 Overview
The payloads for the UAV are a Sony FCB-EX11D block camera, a gimbal assembly that houses
and points the camera, a Digi Rabbit microprocessor that handles all the inputs and outputs for the
payloads, a power supply unit, and a digital video transmitter from Broadcast Microwave Systems
(BMS). The goal of the payloads system is to stabilize and control the camera gimbal assembly such
that the camera points at a commanded GPS position on the ground and can zoom in to collect high
resolution target data.
The payloads system has been redesigned to combine two past payload design approaches. In order to improve troubleshooting, a modular system approach was used to group
similar items into enclosures with common inputs and outputs. A complete payload unit was then
created from these modular components to maximize efficiency of the small payload space and reduce
the need for manipulation of the payload once it is inside the airframe. Panel switches have also been
added to allow complete assembly of the airframe prior to mission start.
Figure 8: The new payload configuration which can be slotted as one unit into the airframe
3.2 Cameras
One of the primary objectives of the competition is to locate targets and identify the following
parameters: background color, shape, orientation, alphanumeric, and alphanumeric color. The
team’s design considered a single still-image digital camera, multiple USB cameras, an analog video
camera, and a high definition block camera. The functional requirements for identifying targets are:
• Resolve the targets at altitudes up to 500 ft
• Lightweight and small enough to physically mount within the airframe
• Take color images/video
• Transmit images/video to the ground or onboard computer in real time
• 120 degree field of view to locate any off-path targets
Considering the team had committed to designing a pointing gimbal, the field-of-view restrictions could be met by physically moving the camera rather than relying solely on the camera's field of view. Therefore, video quality and features such as remote zoom capability, auto-focus, and auto-white
Criterion             Weight   FCB-EX11D   Canon S3-IS   TSC-TC33USB   Analog
Image Quality         x4       4           3             2             1
Features              x3       4           3             2             1
Weight                x2       3           1             4             2
Size                  x2       3           2             4             1
Lens Distortion       x4       2           3             1             2
Cost                  x1       1           4             4             4
Power Consumption     x1       3           2             4             1
Ease of Interfacing   x1       3           1             2             4
Total                          55          46            44            32

Table 4: Camera trade study between the Sony FCB-EX11D block camera, Canon S3-IS digital still camera, Sentek TSC-TC33 USB video cameras, and a Blackwidow AV analog video camera.
balance were the most important aspects in selecting a camera, which led to the Sony FCB-EX11D block camera being used in the system, as seen in Table 4.
The Sony camera streams video in NTSC. The camera has auto-focus and auto-white balance
features which allow for good video to be taken in any lighting conditions that could be expected at
competition. Another important feature of the camera is that it has up to 10X optical zoom which
can easily be controlled by the microprocessor over the serial link, allowing for sufficient resolution
for target identification from any altitude allowed at competition.
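Sony FCB block cameras are typically commanded over Sony's VISCA serial protocol; a hedged sketch of a direct zoom command follows (the port name, baud rate, and the assumed 0x4000 full-zoom position are illustrative, not taken from the team's code):

    import serial  # pyserial

    def visca_zoom_direct(port, position):
        """Send a VISCA CAM_Zoom Direct command; position 0x0000-0x4000."""
        p = position & 0xFFFF
        packet = bytes([
            0x81,              # header: controller 0 to camera 1
            0x01, 0x04, 0x47,  # CAM_Zoom Direct command bytes
            (p >> 12) & 0x0F,  # zoom position, one nibble per byte
            (p >> 8) & 0x0F,
            (p >> 4) & 0x0F,
            p & 0x0F,
            0xFF,              # message terminator
        ])
        port.write(packet)

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as cam:
        visca_zoom_direct(cam, 0x4000)  # assumed full optical zoom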
3.3 On Board Processing
Proper mission performance requires that the camera, autopilot, and gimbal function together.
Synchronization is provided by an onboard microprocessor which handles serial communication between the three payloads devices. The requirements for this processor were:
• 4 serial ports: 3 for payloads devices, 1 for wireless link to ground
• Lightweight and low power consumption
• Processing power to perform gimbal calculations
• Parallel processing capability
Figure 9: Rabbit BL1800 processor.
The Rabbit BL1800 processor stood out for its ability to handle serial communication. There are four serial ports available on the Rabbit, each with a dedicated pin connection. Unlike
many other microcontrollers, the Rabbit is programmable using a relatively high-level language,
Dynamic C. Dynamic C allowed the team to develop a lightweight operating system which can manage
multiple processes concurrently. There are no moving parts involved with the Rabbit processor, improving reliability and reducing its footprint.
3.4 Gimbal
The camera gimbal allows the system to point the camera for ground target location and identification. The camera was selected before the gimbal design was finalized, thereby driving the requirements for the gimbal design. The remaining requirements were driven by standard airframe flight conditions. Requirements for the design were:
• Maximized roll and pitch
• At least 1.0° resolution in both degrees of freedom
• Camera protection during takeoff and landing
• Low weight
Figure 10: The new gimbal design, designed to fit within the small allotted space within the aircraft.
With these minimum requirements in mind, a two degree of freedom gimbal using servos to drive its motion was designed and fabricated, as seen in Figure 10. The finished gimbal met or exceeded all of the requirements, with the gimbal's pointing resolution being 0.12° in both degrees of freedom.
Delrin was chosen as the primary material over aluminum for its low weight, stiffness, and impact
resistance. In addition, Delrin is easy to machine and can be cut using a LaserCAMM.
The gimbal has two modes of operation, manual and GPS lock. It switches between the two
modes by taking commands from the ground station computer. In manual mode, the gimbal is
controlled manually through a joystick operated from the ground station. The joystick provides an
intuitive and cost-effective way to manually control the motion of the gimbal. Manual mode does not automatically compensate for the motion of the plane, and therefore its primary purpose is initial testing, troubleshooting, and target searching.
While in GPS lock mode, the gimbal will compensate for the roll, pitch, and yaw of the plane so that the center of the camera is always pointed at the desired GPS coordinate. The gimbal will lock onto the chosen location by adjusting its pitch and roll. This mode provides stable images of the desired location as the camera zooms in. Toggling between modes is made easy through the user's gamepad.
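As an illustration of the GPS-lock geometry, a minimal flat-earth pointing sketch follows; the Euler conventions, function names, and two-axis decomposition are assumptions for illustration, not the team's implementation:

    import math

    def ned_to_body(v, roll, pitch, yaw):
        """Rotate a North-East-Down vector into the body frame
        (Z-Y-X Euler order; all angles in radians)."""
        n, e, d = v
        x1 = math.cos(yaw) * n + math.sin(yaw) * e       # yaw about down
        y1 = -math.sin(yaw) * n + math.cos(yaw) * e
        x2 = math.cos(pitch) * x1 - math.sin(pitch) * d  # pitch about y
        z2 = math.sin(pitch) * x1 + math.cos(pitch) * d
        y3 = math.cos(roll) * y1 + math.sin(roll) * z2   # roll about x
        z3 = -math.sin(roll) * y1 + math.cos(roll) * z2
        return x2, y3, z3

    def gimbal_command(ac_lat, ac_lon, alt_m, tgt_lat, tgt_lon,
                       roll, pitch, yaw):
        """Return (pitch, roll) gimbal angles in degrees to aim at a point."""
        R = 6371000.0  # mean Earth radius; flat-earth small-offset model
        north = math.radians(tgt_lat - ac_lat) * R
        east = math.radians(tgt_lon - ac_lon) * R * math.cos(math.radians(ac_lat))
        x, y, z = ned_to_body((north, east, alt_m), roll, pitch, yaw)
        g_pitch = math.degrees(math.atan2(z, x))                # tilt from nose
        g_roll = math.degrees(math.atan2(y, math.hypot(x, z)))  # lean to target
        return g_pitch, g_roll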
3.5 Payload Communications
A total of three wireless links connect the airframe to the ground during a mission. The autopilot
communicates with the Virtual Cockpit ground station over a 900MHz link. The safety pilot uses
a standard 72 MHz RC controller and has authority over the airframe controls should anything go amiss. A 2.3 GHz coded orthogonal frequency division multiplexing (COFDM) video link transmits
live video to the imaging station. The transmitter used to maintain this video link was loaned to
UCSD by BMS, and allows video to be transmitted digitally instead of via analog signal. The video
transmitter transmits only NTSC-quality standard-definition video at 720x486 resolution and therefore acts as the primary bottleneck in video quality.
4 Ground Systems Design and Overview

4.1 Graphical User Interface Software
The ground station software is entirely custom written for this competition. It is encapsulated
into one program, referred to as the “Image Station.” The main purpose of the Image Station is
to provide a graphical user interface (GUI), figure 11, displaying essential flight information in an
intuitive and organized fashion. It is the processing core of the entire imaging pipeline and runs on
specialized hardware to provide complex computation in real time.
Figure 11: The two monitor graphical user interface developed for displaying and analyzing target data. The left
columns display telemetry data and warnings, in the middle are video options and the real time video stream, and the
right shows the plane and captured targets in Google Earth.
The image station displays a feed of live video from the airplane. Video is acquired through a
capture card which is agnostic to input format - this means that virtually any type of video device
can be connected to the system and it will perform reliably. The video data is constantly being
processed by the imaging pipeline, which first calculates a saliency map to extract small regions of
the image to be run through the more intensive object recognition algorithms. This data is saved into
a database and classified into one of three categories: "candidate regions," which contains interesting regions as determined by saliency; "possible targets," which contains autonomously identified targets; and "verified targets," which are possible targets that have been manually confirmed by a human operator.
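The three-stage classification can be pictured as a simple state progression; the schematic sketch below uses invented type names purely for illustration:

    from dataclasses import dataclass
    from enum import Enum

    class TargetState(Enum):
        CANDIDATE_REGION = 1  # flagged by the saliency pass
        POSSIBLE_TARGET = 2   # passed autonomous recognition
        VERIFIED_TARGET = 3   # confirmed by a human operator

    @dataclass
    class Region:
        frame_id: int
        lat: float
        lon: float
        state: TargetState = TargetState.CANDIDATE_REGION

        def promote(self):
            """Advance the region one stage through the pipeline."""
            if self.state is not TargetState.VERIFIED_TARGET:
                self.state = TargetState(self.state.value + 1)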
In addition to a heads-up display, airplane telemetry data is also displayed in a tabular fashion. All information is logged to disk for post-flight analysis. Telemetry data enters from a direct serial connection with the autopilot ground station. The image station requires no other connectivity to operate; it simply needs a video and data link. An operator can manually manipulate the gimbal using a gamepad. Moving the gamepad joystick causes an instantaneous response in the gimbal position, which can be observed by watching the video display or looking at the live telemetry data.
The image station possesses the capability to record video and quickly export images to disk. Video is automatically compressed and split into manageably sized files. The image station also interfaces with Google Earth, displaying the plane's current location, camera field of view, and identified
targets in an intuitive manner.
As the center of the computing system, the image station makes identified targets available to all computers on a local network, such that other systems can interface with and modify the list of identified targets (section 4.2.3).
Finally, the image station manages the system's automated search capabilities (sections 4.2.1 and 4.2.2), building an efficient search path, sending it to the autopilot, and reviewing the progress of
the search. At any time, the search can be paused or restarted without losing any progress, and it
can quickly and easily adapt to changing field shape.
4.2 Image processing

4.2.1 Image Rectification
An essential task for imagery captured from an aerial platform is to exactly determine the
physical location of any point in an image - a process known as georeferencing. In order to find
the latitude and longitude of a potential target, it was necessary to perform a series of coordinate
transformations to take the camera position vector to a local WGS 84 Cartesian coordinate system.
Under this WGS 84 coordinate system, actual physical locations can be derived for any point in the
image. To accomplish this, the following rotation matrix was constructed:
R = R_{camera,gimbal} R_{gimbal,airframe} R_{airframe,unrotated} R_{unrotated,WGS84 local}
where each R_{i,j} rotates one frame of reference into another coordinate system. The individual R_{i,j} were constructed using a series of quaternion transforms about the necessary axes of rotation, and change-of-basis matrices that converted between North-East-Down and North-West-Up coordinate systems.
The end result is a positional vector in the local WGS 84 coordinate system which can be used along with the positional information of the UAS to project any point in the image onto the earth and derive its physical location. Given the low cruise altitude, the simplifying assumption of a flat earth was made, causing negligible error (< 1 cm) and greatly reducing computational complexity.
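A minimal sketch of this final projection step follows, assuming the composed rotation R is available as a 3x3 matrix taking camera-frame vectors into the local WGS 84 frame (the names and frame conventions are illustrative, not the team's code):

    import numpy as np

    def project_pixel_to_ground(R, cam_pos, ray_cam):
        """Intersect a camera-frame pixel ray with the flat z = 0 ground.

        R: 3x3 rotation, camera frame -> local WGS 84 frame (z up).
        cam_pos: camera position [x, y, z]; z is height above ground.
        ray_cam: unit vector through the pixel, in the camera frame.
        """
        ray = R @ np.asarray(ray_cam, dtype=float)
        if ray[2] >= 0.0:
            raise ValueError("ray does not intersect the ground")
        t = -cam_pos[2] / ray[2]   # scale until z = 0 (flat-earth assumption)
        return np.asarray(cam_pos, dtype=float) + t * ray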
This information can further be used to create a homography - a transformation mapping points
on the image to points on the earth plane. This transformation removes the effects of projective
distortion and gives the image the appearance of being viewed from directly overhead. Without this
transformation, under projective perspective, shapes such as squares can appear as any arbitrary
quadrilateral. After rectifying the image, right angles and parallel lines are restored to how they
should look. This transformation is essential for performing automated target recognition.
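For illustration, a hedged OpenCV sketch of applying such a rectifying homography follows; the corner correspondences, scale factor, and file name are placeholders, where in practice the correspondences come from the georeferencing step above:

    import cv2
    import numpy as np

    # Four image corners and their georeferenced ground positions (meters).
    img_pts = np.float32([[0, 0], [719, 0], [719, 485], [0, 485]])
    gnd_pts = np.float32([[0, 0], [60, 5], [55, 45], [2, 40]])  # placeholder

    # Scale meters to output pixels, then warp to an overhead view.
    H = cv2.getPerspectiveTransform(img_pts, gnd_pts * 8.0)
    frame = cv2.imread("frame.png")  # placeholder file name
    rectified = cv2.warpPerspective(frame, H, (520, 400))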
4.2.2 Automated Search
This year an algorithm was developed to generate a map of the search area that distinguishes areas which have already been searched for targets. This map is rendered on the Image Station and is updated in real time. The algorithm forms a boundary of the search area from given GPS coordinates and generates a grid representation of the search area with 1 m² resolution. Elements of the matrix are marked as being either seen, not yet seen, or outside of the search area. The cost of generating this matrix is directly proportional to the size of the search area, resulting in O(n) algorithm efficiency. Updating and rendering the map are similarly O(n), resulting in an efficient algorithm which scales linearly with map size.
An additional algorithm was developed to generate the UAS search path that most efficiently scans the search area. This algorithm converts the visualization map into an undirected graph, where each node on the graph represents a 50 m² hexagonal section of land. Nodes that lie entirely outside the search area are removed, and nodes that contain areas both inside and outside the search area are repositioned within the search area. The time complexity of generating the graph is, remarkably, O(n), where n is the size of the search area. In order to generate the path, a modified depth-first search is performed that returns the most efficient flight path.
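A minimal sketch of the coverage-grid bookkeeping described above follows; the cell states, inside mask, and function names are assumptions for illustration:

    import numpy as np

    OUTSIDE, UNSEEN, SEEN = 0, 1, 2

    def make_grid(height_m, width_m, inside_mask):
        """Build a 1 m^2 coverage grid; inside_mask flags search-area cells."""
        grid = np.full((height_m, width_m), OUTSIDE, dtype=np.uint8)
        grid[inside_mask] = UNSEEN
        return grid

    def mark_seen(grid, footprint_cells):
        """Mark camera-footprint cells as seen; linear in footprint size."""
        for r, c in footprint_cells:
            if 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1] \
                    and grid[r, c] == UNSEEN:
                grid[r, c] = SEEN

    def coverage(grid):
        """Fraction of the search area viewed so far."""
        inside = np.count_nonzero(grid != OUTSIDE)
        return np.count_nonzero(grid == SEEN) / inside if inside else 1.0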
4.2.3 Automated Target Location and Identification
At standard definition the onboard camera captures 320 Mb of data per second, so an approach is needed to filter this data quickly and reliably. The chosen solution is saliency, a biologically inspired approach for finding regions of interest in an image. By imitating the methods a human eye uses to effortlessly identify interesting regions, the saliency algorithm identifies regions of each frame that stand out based upon several types of contrast, including color and edge intensity. Since the targets being searched for are strong-edged, well-defined shapes consisting of solid colors, they will inevitably contrast strongly with their background of mostly natural objects such as grass and foliage. Thus, for each frame of video the algorithm autonomously filters out an overwhelming proportion as "uninteresting" and does not waste time processing it. This allows the target identification to run its analysis in real time. The output of the saliency algorithm can be thought of as a heat map which displays more "interesting" areas as bright regions, while regions deemed "uninteresting" are darker, as seen in Figure 12.
Figure 12: Example of the Saliency algorithm running on a set of practice targets. The targets appear as bright
white regions in this noisy example, and these regions would be fed through the autonomous identification algorithms.
The saliency algorithm itself is implemented to run on a graphics processing unit (GPU), which
allows for massively parallel computation and real-time analysis for both standard and high definition
images. An additional advantage is that other CPU bound processes can be performed without the
overhead of saliency processing.
It is important to note that saliency does not make any subjective discrimination as to what
type of objects it finds “interesting” or not. The algorithm has no knowledge of what a target is,
what constitutes background noise, or what a human would like it to find interesting. It simply
uses biologically motivated conditions for finding a region “interesting” that match what the human
visual system would do given a similar situation. A lower saliency threshold was set such that the
algorithm labels some irrelevant regions as “interesting.” However, the lower threshold ensures that
relevant regions are not accidentally discarded.
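The team's GPU implementation is not reproduced here; as a rough CPU-side stand-in, the simple color-contrast scheme below produces a comparable heat map and thresholds it into candidate regions (OpenCV-based; the threshold value is an assumption):

    import cv2
    import numpy as np

    def saliency_map(bgr):
        """Heat map: distance of each pixel's Lab color from the mean color."""
        lab = cv2.cvtColor(cv2.GaussianBlur(bgr, (5, 5), 0), cv2.COLOR_BGR2LAB)
        dist = np.linalg.norm(lab.astype(np.float32) - lab.mean(axis=(0, 1)),
                              axis=2)
        return cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    def interesting_regions(bgr, thresh=96):
        """Threshold the heat map; return bounding boxes of bright blobs."""
        mask = (saliency_map(bgr) > thresh).astype(np.uint8)
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        return [tuple(stats[i, :4]) for i in range(1, n)]  # (x, y, w, h)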
Figure 13: Two samples of the autonomous target and shape recognition algorithm which show the shape and
alphanumeric being extracted from the full color image of the target.
From an operator's perspective, saliency makes searching for targets far more feasible in a time-constrained environment - it acts like a second set of eyes scanning over images and finding regions
of importance. By highlighting potential targets in real time, a human operator can perform other
tasks while watching the video stream without fear of missing a target.
Once these interesting regions are found via saliency, they are processed using a trained algorithm
that performs more complex analysis to autonomously determine target parameters such as location,
shape, letter, and color information.
4.3 Manual Target Review
The Client Apps offer a cross-platform user interface solution for target identification. The
application provides a viewport for displaying targets as they are captured in the video feed. The
user may verify and enter critical information about each target and submit this information to the
Image Station’s database. In the past, this application was offered as a native Windows application,
allowing only a single computer to verify the data. With the power of HTML5 and JavaScript, the application was expanded to support all computers and tablets with a web browser.
By expanding platform support for the Client Apps, we have seen viewing accuracy increase by 300% compared with a single native app shared by many users. This, combined with 5x faster result delivery, makes the Client Apps a very powerful tool. Target identification is now spread among
numerous individuals, thereby increasing the number of targets viewed and identified.
This fast cross-platform interface lessens the burden on the Image Station by distributing the
task of target verification to other devices. This is crucial as it lets the Image Station operator
continue to monitor its connections and assure the continued retrieval and transport of valuable
target information without interruption. The more tasks that are shifted away from the Image
Station, the more reliable the gathered intelligence. By operating the Ground Station and analyzing
targets simultaneously, the team has up to 5 times longer to view and identify targets, thereby
increasing final target identification accuracy.
4.4 Imaging Station to Autopilot Interface
The Virtual Cockpit Interface, a direct serial link between the image station and the autopilot
station running the VC, removes an extra modem link and gives the image station direct access to all
VC features and functions. The interface was written in Java, and is hosted on the autopilot station.
The VC Interface automatically establishes communication with the VC upon initialization. It
makes full use of the communication protocol published by Procerus Technologies, thus allowing
data from the autopilot to be requested in real time. This gives the imaging station the ability to
undertake any action available to the user of the VC, such as adjusting flight waypoints, sending and
requesting sensor updates, and ordering a takeoff or landing. Enabling the interface has put UCSD
AUVSI one step closer to a mission entirely controlled by the image station software.
5 Testing, Performance and Safety

5.1 Individual Systems Tests

5.1.1 Autopilot Testing
The UCSD team began the 2011-2012 year with a functioning autopilot that could perform
all of the mission objectives. Because the university was concerned about legal and liability issues
regarding autonomous flight testing, the primary method of testing the autopilot for most of the year was software and hardware in loop simulation. However, the team did perform test flights near the
end of the year to ensure that the autopilot still performed as expected.
5.1.2 Imagery Testing
To test the image systems in a realistic setting, the team constructed over 20 competition-style targets, and testing of the imagery systems was done in segments as functionality was developed. Initially, a camera was used to take pictures of targets from the roofs of buildings on campus to
collect sample target data. The camera was then statically mounted in the airframe to judge image
quality from the air. Once the gimbal was completed and installed in the airplane additional testing
was performed, which showed that targets could be identified from a reasonable altitude even without
zoom implemented. Finally, remote zoom on the camera was implemented, which allowed all targets
to be identified.
5.1.3 Airframe Testing
Additional flight verification was required following the modification of the Peppy airframe for the 2012 competition, including the additional weight added to support the joint for the removable tail. Tail assembly takes five minutes, requires only a screwdriver, and can be completed before the airframe is carted to the airfield for competition. Taxi testing showed that the aircraft does not tip over under maximum lateral forces, thus protecting the wingtips and propeller. The first flights were conducted without payloads at the empty flying weight in order to learn the aircraft's handling characteristics and expand its flight envelope. The control surfaces provided excellent control authority, and static ground tests confirmed that the addition of the payload unit left the CG well within its allowable envelope. Drag was reduced slightly by the addition of the hood cover. The empty aircraft (more susceptible to winds and gusts) was flown in crosswinds at 17 knots gusting up to 22.
5.1.4 Communications Testing
Three separate communication signals are present on the airframe, as well as one GPS receiver. Ensuring these signals retain high signal-to-noise ratios and do not interfere with one another is of paramount concern. Video is transmitted from the airframe by a 1 watt 2.3 GHz video downlink; autopilot and gimbal control is achieved with a 900 MHz 9XTend transceiver; and the RC safety pilot signal is received on the 72 MHz band. Multiple range tests were performed to guarantee that
both 900 MHz and 2.3 GHz signals would not decrease the strength of the RC signal and cause loss
of control of the airframe. In conjunction with the RC range tests, all communication signals were
tested to establish the best antenna orientation and positioning. These tests also revealed that the
addition of ferrite rings to the main power supply lines improved RC range by 100%.
5.2 Full Systems Tests
Full system tests were performed to verify that all system components could function together
and communications work without interference. Six full systems tests were performed locally in
San Diego at an RC field with the autopilot on board processing GPS and attitude data, but not
controlling the airplane. Targets were placed in the field for each test and as gimbal functionality was
increased, better images were collected. Video and stills from all flight tests were captured using the
ground station computer to be analyzed later for areas where the image systems could be improved.
In progressing towards a final competition-ready state, the team intends to perform fully autonomous practice missions. These final tests will focus on completing simulated setup and missions within the time constraints placed by the competition. Actual-size plywood targets will be set up,
and the team operating the ground systems will be tasked with locating them within the competition
time frame.
6 Acknowledgments

6.1 Sponsors
Partial funding was provided by the ASUCSD. This publication does not represent the views or
opinions of the ASUCSD.
6.2 Faculty and Staff
We would like to thank the following members of the UC San Diego community for all of the help
and guidance that they have given our team:
• Professor Ryan Kastner, Faculty Advisor, Computer Science and Engineering
• Professor John Kosmatka, Structural and Aerospace Engineering
• Dr. Mike Wiskerchen and Tehseen Lazzouni, California Space Grant Consortium
• Dr. David Woodhead, Broadcast Microwave Systems
• Mr. Chriss Cassidy, UCSD Design Studio
• UCSD AIAA Student Chapter
• UCSD IEEE Student Chapter
Special thanks go to Pedro for his amazing RC piloting skills and extreme patience.