Earth Remote Sensing - The Aerospace Corporation

Crosslink
Summer 2004 Vol. 5 No. 2

Contents
4 Earth Remote Sensing: An Overview
David L. Glackin
Spaceborne remote-sensing instruments are used for applications ranging from global
climate monitoring to combat-theater weather tracking to agricultural and forestry assessment. Aerospace has pioneered numerous remote-sensing technologies and continues to advance the field.
11 The Best Laid Plans: A History of the Manned Orbiting Laboratory
Steven R. Strom
In the mid to late ‘60s, an ambitious project to launch an orbital space laboratory for
science and surveillance came to dominate life at Aerospace.
16 The Infrared Background Signature Survey: A NASA Shuttle Experiment
Frederick Simmons, Lindsay Tilney, and Thomas Hayhurst
The development of remote-sensing systems requires an accurate understanding of the
phenomena to be observed. Aerospace research helped characterize space phenomena of interest to missile defense planners.
20 Active Microwave Remote Sensing
Daniel D. Evans
Active microwave sensing—which includes imaging and moving-target-indicating radar—
offers certain advantages over other remote-sensing techniques. Aerospace has been
working to increase the capability of this versatile technology.
27 Engineering and Simulation of Electro-Optical Remote-Sensing Systems
Stephen Cota
In designing remote-sensing systems, performance metrics must be linked to design
parameters to flow requirements into hardware specifications. Aerospace has developed tools that comprehensively model the complex interaction of these metrics and
parameters.
32 Data Compression for Remote Imaging Systems
Timothy S. Wilkinson and Hsieh S. Hou
Remote imaging platforms can generate a huge amount of data. Research at Aerospace has yielded fast and efficient techniques for reducing image sizes for more efficient processing and transmission.
40 Detecting Air Pollution From Space
Leslie Belsma
The use of satellite data for air-quality applications has been hindered by a historical lack of collaboration between air-quality and satellite scientists. Aerospace is well positioned to help bridge the gap between these two communities.
45 Synthetic-Aperture Imaging Ladar
Walter F. Buell, Nicholas J. Marechal, Joseph R. Buck, Richard P. Dickinson, David Kozlowski,
Timothy J. Wright, and Steven M. Beck
Aerospace has been developing a remote-sensing technique that combines ultrawideband coherent laser radar with
synthetic-aperture signal processing. The goal is to achieve high-resolution two- and three-dimensional imaging at long
range, day or night, with modest aperture diameters.
50 Commercial Remote Sensing and National Security
Dennis Jones
Aerospace helped craft government policy allowing satellite imaging companies to sell their products and services to foreign customers—without compromising national security.

Departments
2 Headlines
47 Bookmarks
50 Contributors
52 The Back Page

On the cover: Part of the Mojave Desert near Barstow, California, acquired by the Spaceborne Imaging Radar-C/X-Band Synthetic-Aperture Radar, which flew on the space shuttle in April 1994. Design by Karl Jacobs.
From the Editors
On a clear day, you can see forever. On a cloudy day,
you can still see a lot from a very great distance. Today’s remote-sensing systems sample Earth and its
environment more often with more spatial resolution
over more of the electromagnetic spectrum than ever before.
The Aerospace Corporation has been involved in spaceborne
remote sensing of Earth and its environment for more than 40
years. The corporate expertise literally spans the spectrum, from
X-ray, ultraviolet, visible, and infrared to microwave wavelengths
and frequencies. This expertise includes physics, phenomenology,
and sensing techniques as well as methods for storing, transmitting, and analyzing remote-sensing data.
Aerospace work in remote sensing has yielded considerable
benefits for the defense and intelligence communities. But while
Aerospace continues to focus on national security concerns, the
company has also come to play a significant role in applying
remote-sensing technologies to other areas of national interest.
For example, future generations of DOD’s Defense Meteorological Satellite Program (DMSP) and NOAA’s polar-orbiting environmental satellites will be merged into a new NOAA/DOD/
NASA program called NPOESS (National Polar-orbiting Operational Environmental Satellite System). This integrated system
will eventually boast some of the most diverse and sophisticated
sensors ever sent into orbit. Aerospace has been reviewing plans
for NPOESS in terms of both technology and policy. The goal is
to help effect a smooth transition while ensuring that the demands
of the military, scientific, and commercial sectors are appropriately balanced.
NPOESS is emblematic of a greater change within the
remote-sensing field, which has witnessed a remarkable increase
in capabilities outside the military sector. In fact, DOD has become the largest customer for commercial satellite imagery at
1-meter resolution—and this demand is prompting development
of even finer optical systems. At the same time, instruments such
as the Special Sensor Microwave Imager/Sounder (which flew
aboard the latest DMSP satellite) and the Conical-scanning
Microwave Imager/Sounder (which will fly on NPOESS) are
pushing the limits of satellite-based sensing. Synthetic-aperture
imaging ladar—an area in which Aerospace offers unequalled
expertise—may well usher in the next technology leap.
This issue of Crosslink presents a broad overview of Aerospace work in remote sensing, including historical programs,
dominant methodologies, information processing, policymaking,
and next-generation techniques. We hope it will provide an
interesting introduction while spotlighting some of Aerospace’s
visionary research in the field.
Headlines

For more news about Aerospace, visit www.aero.org/news/
A Ringside Seat
After years of traveling through the lonely depths of space, the Cassini spacecraft finally reached its destination this summer, surviving a critical insertion into near-perfect orbit around Saturn on July 1. Since then, Cassini has been transmitting remarkable images of the planet’s rings and principal moon, Titan. The success of this mission, managed for NASA by Caltech’s Jet Propulsion Laboratory (JPL), has given scientists around the world a cause for celebration—including some at Aerospace, who provided technical support during various phases of the program.

For example, from approximately 1995 through launch in 1997, Aerospace and Lincoln Laboratory jointly conducted an external independent readiness review of the satellite for NASA. James Gilchrist, Aerospace cochair of the review, said it encompassed the spacecraft design, most of the instruments built by U.S. manufacturers, and the Huygens probe (sponsored by the European Space Agency). Aerospace also conducted the independent review of the Cassini ground operations.

The review lasted more than two years and began with an early independent assessment of the trajectory design, which included an Earth flyby. This trajectory held potential risk because the spacecraft carried about 33 kilograms of radioactive plutonium dioxide to power its thermal generators.

Formal risk assessment was required because of the presence of this nuclear power source onboard the spacecraft, said Sergio Guarro, director of Aerospace’s Risk Planning and Assessment office. Guarro developed the risk assessment methodology to support the environmental assessment and launch approval process for the mission. Aerospace assisted with the risk assessment from early phases of the mission planning and development until launch approval. The importance of this work was recognized by NASA with a project award signed by the former administrator, Daniel Goldin.

William Ailor, director of the Aerospace Center for Orbital and Reentry Debris Studies, was chair of the Interagency Nuclear Safety Review Panel’s Reentry Subpanel for the Cassini mission. Ailor’s group focused on how well the material protecting the radioisotope would perform under reentry velocities approaching 20 kilometers per second—far beyond the reentry velocities from standard Earth orbits, which range closer to 7.5 kilometers per second.

Aerospace participated in launch readiness tests and the Titan IVB launch-vehicle processing and was instrumental in developing procedures to support the design, installation, and test of a modified Solid Rocket Motor Upgrade actuator. Aerospace supported integration of the payload, including special acoustic tests, thermal analysis, electromagnetic compatibility analysis, loads analysis, targeting, and software testing for the first Centaur launched on a Titan IVB.

In 1998 and 1999, at the request of JPL, Aerospace implemented a number of software enhancements to its Satellite Orbit Analysis Program (SOAP) to model the Cassini mission, said David Stodden, senior project engineer in the Software Assurance and Applications Department. Aerospace developed Cassini solid models and trajectories in 2002 and rendered them to help visualize maneuvers and scientific observation opportunities. JPL used SOAP for visualization and analysis of the June 11 Phoebe flyby, and Cassini is using it to visualize pointing and camera fields of view.

In October 2003, Aerospace also supported a review of the Saturn orbit insertion, the climax of Cassini’s long journey and the crux of mission success. “These maneuvers were performed very efficiently, so it appears that the spacecraft may have sufficient propellant to conduct an extended mission beyond the planned four years,” said David Bearden, Aerospace Systems Director, Jet Propulsion Laboratory Program Office. “Aerospace congratulates JPL on Cassini’s successful seven-year journey to Saturn and insertion into orbit, and looks forward to the tremendous scientific return during the coming years,” he said.
Satellite Sentries

Spurred by a need for greater “situational awareness” in space, the Air Force is moving ahead with development of the Space-Based Space Surveillance (SBSS) system. The Initial Operating Capability version of this system has been used to detect, track, identify, catalog, and observe man-made objects in space, day or night, in all weather conditions. The complete system will enable key warfighter decisions based on collection of data regarding military and commercial satellites in deep space and near-Earth orbits without the inherent limitations (e.g., weather, time of day, location) that affect ground systems.

“The SBSS system will provide the ability to find smaller objects, precisely fix and track their location, and characterize many objects in a very timely manner,” said Dave Albert, Principal Director, Space Superiority Systems, and Jack Yeatts, Future System Director. During the creation of the program, Aerospace performed key mission-assurance risk assessments for the Air Force Space and Missile Systems Center (SMC). During the technical requirements development and source selection, “Aerospace’s technical evaluations led to convincing risk-mitigation actions on the launch vehicle and the focal planes,” said Arthur Chin, SBSS Program Lead.

A near-term operational pathfinder, which will operate in low Earth orbit, has completed source selection and is scheduled for launch in June 2007 to significantly improve the current on-orbit capability. It will be launched by a Peacekeeper space-launch vehicle that is under SMC/Aerospace mission-assurance and launch-readiness review. The follow-on constellation will begin acquisition in 2005, with initial operational capability slated for 2012.

Navigating Europe

The United States and the European Commission signed a historic agreement covering the compatibility and interoperability of their respective satellite navigation services, the Global Positioning System and Galileo.

The “Agreement on the Promotion, Provision, and Use of Galileo and GPS Satellite-Based Navigation Systems and Related Applications” calls for the establishment of a common civil signal. As a result, civilian users will eventually enjoy more precise and reliable navigation services. At the same time, the agreement ensures that signals from Galileo (which is still in development) will not harm the navigation capabilities of U.S. and NATO military forces and equipment.

Aerospace has been working in recent years to help define the U.S. position with respect to Galileo—which could have evolved to rival, not complement, GPS. For example, Aerospace investigated the potential benefits of a shared signal and common reference frame and examined alternative approaches. Aerospace also identified candidate signals for Galileo that would be compatible with current GPS signals and facilitate future interoperability.

The United States and the European Union have shared technical analyses and information needed to implement the provisions of the new agreement.

Successful Launch for GPS

A GPS Block IIR satellite was successfully launched from Cape Canaveral aboard a Delta II rocket on June 23, 2004. The unit will replace an aging satellite as part of routine constellation management.

“The launch countdown for GPS IIR-12 was the smoothest one that I had ever seen,” said Wayne Goodman, General Manager, Launch Vehicle Engineering and Analysis. The mission was the 37th consecutive launch success for the Air Force Space and Missile Systems Center, he said.

The launch occurred on the fourth attempt; the first three were scrubbed because of thunderstorms. “On the second launch attempt, there was a concern that the vehicle may have been damaged by high winds,” said Goodman. Analyses performed by the launch contractor and reviewed by Aerospace validated that the vehicle was undamaged, he said. Visual inspections performed by the contractor and Aerospace also did not reveal any damage to the vehicle.

This was the 51st GPS satellite launched and the 40th carried on a Delta II. It marked the second of three GPS replacement missions scheduled for 2004. The next is slated for liftoff in late September.
Earth Remote Sensing: An Overview
Spaceborne remote-sensing instruments are used for applications
ranging from global climate monitoring to combat-theater weather
tracking to agricultural and forestry assessment. Aerospace has
pioneered numerous remote-sensing technologies and continues to
advance the field.
David L. Glackin
Although the first weather satellite, TIROS I, was launched in 1960, the field of satellite-based remote sensing of Earth really began to take form in the 1970s. The launches of Landsat-1 in 1972, Skylab in 1973, Nimbus-7 in 1978, and Seasat in 1978 set the stage for modern environmental remote sensing.

DOD, NOAA, and NASA have merged their separate polar-orbiting environmental satellite programs into a single program called NPOESS. Aerospace provides support in requirements development, system and payload specification and evaluation, systems engineering, mission operations planning, and acquisition and contract oversight for this interagency program.
During these years, the Defense Meteorological Satellite Program (DMSP) provided many scientists and engineers at
Aerospace the opportunity to investigate
new phenomenology and instrumentation.
For example, the first sensor to remotely
monitor the density of the upper atmosphere above 80 kilometers was conceived
and built at Aerospace. The first reported
analysis of spaceborne imagery of the aurora was done at Aerospace using DMSP
low-light visible imagery. When a
snow/cloud discrimination sensor flew
on DMSP in 1979, Aerospace
demonstrated that the combination of visible and shortwave
infrared imagery could be
used not only to discriminate snow from clouds,
but water clouds from
ice clouds as well.
Aerospace analyzed defense
satellite data on
the eruption of Mt. St. Helens in 1980 and
tracked the volcanic plume using stereo observations from two satellites. And in the
days before DMSP, Aerospace built the second ozone profiler ever to fly in space,
which flew in 1962.
Today, Aerospace work in remote sensing supports not only the Department of
Defense (DOD), but NASA, NOAA, and
other governmental agencies as well. In the
coming years, as these organizations seek to
coordinate their remote-sensing efforts,
Aerospace research and analysis will play
an important role in determining what types of systems are developed and deployed.
Remote Sensing in Perspective
Since the pioneering work of the 1970s, the
field of satellite environmental remote sensing has steadily evolved. Before 1990, only
about half a dozen nations owned environmental satellites, but since then, the number
has nearly quintupled. Earlier programs primarily involved civil and military systems
of high cost and complexity; more recently,
the focus has shifted to include missions
involving smaller satellites, greater commercial involvement, and lower complexity
and cost.
The civil, commercial, and military
communities all pursue environmental remote sensing activities, but these communities have different needs and objectives.
Civil institutions tend to focus on problems
such as monitoring and predicting global
climate change, weather patterns, natural
disasters, land and ocean resource usage,
ozone depletion, and pollution. Commercial
organizations typically invest in systems
with higher spatial resolution whose imagery can support applications such as mapping, precision agriculture, urban planning,
communications-equipment siting, roadway
route selection, disaster assessment and
emergency response, pipeline and powerline monitoring, real-estate visualization,
and even virtual tourism. Military users typically concentrate on weather monitoring
and prediction as it directly supports military operations. The military is also interested in high-resolution imagery, and has in
fact become the primary customer for commercial imagery at resolutions of 1 meter or
better.
Types of Instruments
Remote-sensing instruments fall into the
general classes of passive and active electro-optical and microwave sensors. Passive devices collect and detect natural radiation,
while active instruments emit radiation and
measure the returning signals. Electro-optical devices operate in the ultraviolet,
visible, and infrared spectral regions, while
microwave (and submillimeter- and
millimeter-wave) devices operate below the
far infrared.
Passive electro-optical instruments include multispectral imagers, hyperspectral
imagers, atmospheric profilers or sounders,
spectrometers, radiometers, polarimeters,
CCD cameras, and film cameras. Active
electro-optical instruments include
backscatter lidars, differential absorption
lidars, Doppler wind lidars, fluorescence
lidars, and Raman lidars (“lidar” is an
acronym for “light detection and ranging”).
Passive microwave instruments include imaging radiometers, atmospheric sounders,
synthetic-aperture radiometers, and
submillimeter-wave radiometers. Active
microwave instruments include radars,
synthetic-aperture radars (SARs), altimeters, and scatterometers.
Passive Electro-optical
Passive electro-optical multispectral imagers observe Earth’s natural thermal radiation or solar radiation that has been reflected and scattered back toward space.
A typical electro-optical sensor design: solar illumination reflects from the ground through the atmosphere to the sensor, where the optics collect photons and form an image, the CCD focal plane converts photons to electrical signals, an analog processor performs noise reduction and signal digitization, and a digital processor applies correction, calibration, and data compression before the multiplexed data are transmitted over a wideband serial digital downlink; the ground processing system then demultiplexes and decompresses the data and enhances and corrects the imagery for display and archiving. Aerospace creates end-to-end simulations to assist in sensor design, planning, and performance analysis.

The scanning optics can move in a cross-track “whiskbroom” fashion, or the motion of the spacecraft can simply carry the field of view along track in a “pushbroom” fashion. The radiation captured by the primary
mirror is transferred through a set of optics
and bandpass filters to one or more focal
plane arrays, where it is converted to electrical signals by a number of detectors.
These signals are then digitized and may be
compressed to reduce downlink bandwidth
requirements. Whiskbroom imagers can
scan a wide swath of the planet with relatively few detectors in the focal plane array,
while pushbroom imagers can be built with
no moving parts; each approach involves
trade-offs.
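To make the whiskbroom/pushbroom trade-off concrete, here is a minimal Python sketch of the underlying geometry. The altitude, detector pitch, focal length, and swath values are illustrative assumptions, not the specifications of any instrument discussed here.

    # Illustrative sketch only: ground sample distance (GSD) and the
    # detector-count implications of whiskbroom vs. pushbroom scanning.
    # All parameter values are assumptions, not any instrument's specs.
    ALTITUDE_M = 833e3      # assumed polar-orbit altitude
    FOCAL_LENGTH_M = 1.0    # assumed effective focal length
    PIXEL_PITCH_M = 30e-6   # assumed detector pitch
    SWATH_M = 3000e3        # assumed swath width

    # One detector of pitch p behind focal length f subtends p/f radians,
    # so its ground footprint at altitude h is roughly h * p / f.
    gsd_m = ALTITUDE_M * PIXEL_PITCH_M / FOCAL_LENGTH_M

    # A pushbroom line array must span the entire swath at once, while a
    # whiskbroom scanner sweeps a small detector set across the swath.
    pushbroom_detectors = round(SWATH_M / gsd_m)

    print(f"GSD ~ {gsd_m:.0f} m")
    print(f"pushbroom line array: ~{pushbroom_detectors} detectors, no moving parts")
    print("whiskbroom: a handful of detectors plus a scan mechanism")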
If the multispectral imagers are calibrated to quantitatively measure the incoming radiation (as indeed most are), they are
termed imaging radiometers. Such instruments typically detect radiation in a few
(less than 20) spectral bands. Multiple
wavelengths are almost always required to
retrieve the desired environmental phenomena. A single “panchromatic” wavelength
can be used purely for imaging at higher
spatial resolution across a broader spectral
band. Multispectral imagers are used to
study clouds, aerosols, volcanic plumes,
sea-surface temperature, ocean color, vegetation, land cover, snow, ice, fires, and many
other phenomena.
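As a minimal illustration of why multiple wavelengths matter, consider the normalized difference vegetation index (NDVI), a generic two-band retrieval that is offered here purely as an example; the reflectance values below are invented.

    import numpy as np

    # Toy two-band retrieval: NDVI = (NIR - red) / (NIR + red). Healthy
    # vegetation reflects strongly in the near infrared and weakly in the
    # red, so NDVI rises with vegetation density. Reflectances invented.
    red = np.array([[0.08, 0.30], [0.10, 0.25]])   # red-band reflectance
    nir = np.array([[0.45, 0.32], [0.50, 0.28]])   # near-infrared reflectance

    ndvi = (nir - red) / (nir + red)
    print(np.round(ndvi, 2))  # high values flag vegetated pixels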
Some of the sensors on NPOESS (National Polar-orbiting Operational Environmental Satellite System) include: VIIRS (Visible/Infrared Imager/Radiometer
Suite), which collects radiometric data of Earth's atmosphere, ocean, and land
surfaces; CMIS (Conical-scanning Microwave Imager/Sounder), which collects global microwave radiometry and sounding data; CrIS (Crosstrack Infrared Sounder), which measures Earth's radiation to determine the vertical
distribution of temperature, moisture, and pressure in the atmosphere; OMPS
(Ozone Mapping and Profiler Suite), which collects data for calculating the distribution of ozone in the atmosphere; ATMS (Advanced Technology Microwave Sounder), which provides observations of temperature and moisture
profiles at high temporal resolution; and ERBS (Earth Radiation Budget Sensor).
In contrast to multispectral imagers,
hyperspectral imagers typically cover 100
to 200 spectral bands, producing simultaneous imagery in all of them. Moreover, these
narrow bands are usually contiguous, typically extending from the visible through
shortwave-infrared regions. This makes it
easier to discriminate surface types by exploiting fine details in their spectral characteristics. Hyperspectral imagery is used for
mineral and soil-type mapping, precision
agriculture, forestry, and other applications.
A few hyperspectral imagers operate in the
thermal (mid- to long-wave) infrared, notably the Aerospace SEBASS (Spatially
Enhanced Broadband Array Spectrograph
System), an airborne instrument.
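One generic way to exploit those fine spectral details, offered as an illustration rather than as the method of SEBASS or any program named here, is to treat each pixel’s spectrum as a vector and classify it by its angle to library spectra:

    import numpy as np

    def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
        """Angle (radians) between two spectra treated as vectors.

        Smaller angles mean more similar spectral shapes; the measure is
        insensitive to overall brightness, which helps under varying
        illumination.
        """
        cos = np.dot(pixel, reference) / (
            np.linalg.norm(pixel) * np.linalg.norm(reference))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Invented 5-band spectra, for illustration only.
    library = {
        "dry soil":   np.array([0.22, 0.25, 0.28, 0.31, 0.33]),
        "vegetation": np.array([0.05, 0.08, 0.06, 0.45, 0.48]),
    }
    pixel = np.array([0.06, 0.09, 0.07, 0.42, 0.44])

    best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
    print(best)  # -> "vegetation"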
Profilers or sounders monitor several
frequencies across a spectral band characteristic of a particular gas (e.g., the 15-micron band characteristic of carbon dioxide). Typically operating in the thermal
infrared, they are most often used to measure the vertical profile (a mapping based on
altitude) of atmospheric temperature, moisture, ozone, and trace gases.
Spectrometers exploit
the spectral “fingerprints”
of environmental species,
providing much higher
spectral resolution than
multispectral imagers. They
use a grating, prism, or
more sophisticated method
(such as Fourier transform
spectrometry) to spread the
incoming radiation into a
continuous spectrum that
can be detected and digitized. Spectrometers are
typically used for measuring trace species
in the atmosphere or the composition of the
land surface.
The distinction between the various
classes of instruments is often blurred. For
example, a sounder might use bandpass filters to observe discrete spectral bands, or it
might employ a spectrometer to observe a
continuous spectrum from which the appropriate sounding frequencies can be extracted. Similarly, a hyperspectral imager
will typically use a spectrometer for
spectral discrimination (in which case, it is
known as an imaging spectrometer).
Non-imaging radiometers are typically used to study Earth’s energy balance. They measure radiation levels across the spectrum from the ultraviolet to the far infrared, with low spatial resolution. They can measure such quantities as the incoming solar irradiance at the top of the atmosphere and the outgoing thermal radiation caused by the sun’s heating of the planet. These are two of the principal quantities that determine the net heating and cooling of Earth.

Polarimeters, which can be imaging or nonimaging devices, exploit the polarization signature of the environment. The electromagnetic vector that characterizes the radiation from Earth can be linearly (or elliptically) polarized, depending on the physics of reflection and scattering. The resulting information can be used to study phenomena such as cloud-droplet size distribution and optical thickness, aerosol properties, vegetation, and other land surface properties.
The Remote-Sensing Spectrum
The portions of the electromagnetic spectrum that are most useful for remote sensing can be defined as follows: The ultraviolet
extends from approximately 0.1 to 0.4 microns, the visible from
0.4 to 0.7 microns, the near infrared from 0.7 to 1.0 microns,
the shortwave infrared from 1 to 3 microns, the midwave infrared from 3 to 5 microns, the long-wave infrared from 5 to 15
microns, and the far infrared from 15 to 100 microns. These
ranges are typically defined in terms of wavelength, but other
ranges can be defined in terms of frequency as well. Thus, the
submillimeter range encompasses wavelengths from 100 to
1000 microns or frequencies from 3 terahertz to 300 gigahertz. The millimeter range extends from 300 to 30 gigahertz
or 1 millimeter to 1 centimeter, and the microwave region from
30 to 1 gigahertz or 1 to 30 centimeters.
Within these spectral regimes, there are “window bands” of
low atmospheric absorption (in which imagers typically operate) and “absorption bands” of relatively high atmospheric absorption (in which sounders operate). There are relatively few
applications for remote sensing in the ultraviolet because of its
strong absorption by ozone below 0.3 microns (ozone monitoring is an obvious exception). The midwave infrared is unique
in that, during daytime, it is a confusing mix of reflected solar
and emitted thermal radiation. The submillimeter or terahertz
regime (between the electro-optical and microwave regimes) is
only beginning to be explored for remote-sensing purposes.
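These band definitions all follow from the relation c = wavelength x frequency. A short sketch, assuming only the speed of light, verifies the quoted correspondences:

    # Minimal helper relating the wavelength and frequency ranges quoted
    # above via c = wavelength * frequency.
    C = 2.998e8  # speed of light, m/s

    def wavelength_um_to_ghz(wavelength_um: float) -> float:
        """Convert a wavelength in microns to a frequency in gigahertz."""
        return C / (wavelength_um * 1e-6) / 1e9

    # Spot checks against the sidebar: 100 microns ~ 3 THz (3000 GHz),
    # 1000 microns (1 mm) ~ 300 GHz, 1 cm (10,000 microns) ~ 30 GHz.
    for um in (100.0, 1000.0, 10000.0):
        print(f"{um:8.0f} microns -> {wavelength_um_to_ghz(um):7.1f} GHz")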
Active Electro-Optical
A lidar sends a laser beam into Earth’s environment and measures what is returned via
reflection and scattering. This typically requires a large receiving telescope to capture
the returning photons. The returning signal
can be measured either by direct detection
or by heterodyne (coherent) detection. With
direct detection, the receiving telescope acts
as a simple light bucket, which means that
phase information is normally lost. With
heterodyne detection, the returning photons
are combined with the signal from a local
oscillator laser, which generates an intermediate (lower) frequency that is easier to detect while maintaining the frequency and
phase information.
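A toy numerical sketch of the heterodyne idea follows, with frequencies scaled far below real optical values so that it runs quickly; nothing here models an actual lidar:

    import numpy as np

    # Toy heterodyne-detection demonstration: mixing the return with a
    # local oscillator produces a beat at the intermediate frequency,
    # where the frequency (and, with complex mixing, phase) survives.
    fs = 1e6                          # sample rate, Hz
    t = np.arange(0, 1e-2, 1 / fs)
    f_return, f_lo = 120e3, 100e3     # assumed return and local oscillator

    mixed = np.cos(2 * np.pi * f_return * t) * np.cos(2 * np.pi * f_lo * t)
    # The product holds (f_return - f_lo) and (f_return + f_lo); a
    # bandwidth-limited detector keeps only the 20-kilohertz beat.
    spectrum = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    low = freqs < 60e3                # crude stand-in for a low-pass filter
    print(f"beat detected near {freqs[low][np.argmax(spectrum[low])]/1e3:.1f} kHz")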
Few lidars have ever flown in space,
owing to limitations involving high power,
high cost, and the availability of robust laser
sources. Lidar remote sensing is primarily
limited to aircraft (although the shuttle-based Lidar In-space Technology Experiment, or LITE, was quite successful). Lidars can potentially generate high-resolution vertical profiles of atmospheric temperature and moisture because the returns can be sliced up or “range gated” in
time (and thus space) if they are strong
enough. Lidar also has potential for profiling winds, determining cloud physics,
measuring trace-species concentration, etc.
Backscatter lidar is the simplest in concept: A laser beam scatters off of aerosols,
clouds, dust, and plumes in the atmosphere.
The data can be used to generate vertical
profiles of these phenomena, except where
the beam is absorbed by clouds. A related
device is the laser altimeter, which records
the backscatter from Earth’s surface to
measure features such as ice topography
and the vegetative canopy (e.g., the tops of
trees for biomass studies).
Differential absorption lidar (DIAL) transmits at two wavelengths, one near the center of a spectral absorption line of interest, the other just outside it. The difference in the returned signal can be used to derive species concentration, temperature, moisture, or other phenomena, depending on the spectral line selected. The differential technique requires no absolute calibration, so it’s relatively easy to achieve high accuracy (e.g., parts-per-million to parts-per-billion for species concentration).
Doppler lidar measures the Doppler shift of aerosols or molecules that are carried along with the wind. Thus, wind speed and direction can be determined if two separate views of each atmospheric parcel are acquired to measure velocity in the horizontal plane. In concept, this can be done with a conically scanning lidar and a large receiving telescope. The available aerosol backscatter is too low to measure the complete wind profile as desired (from the surface to 20 kilometers in altitude), but molecular scattering can be used to cover the aerosol-sparse regions. Strong competition exists in the United States between two schools of thought that propose using direct or heterodyne detection. Although wind lidar has been studied in the United States since 1978, it appears that the first Doppler lidar in space will be launched by the European Space Agency in 2007.

Fluorescence lidar is tuned to a spectral frequency that is absorbed by the species of interest, then reradiated at a different frequency, which is detected by a radiometer. A related technology, Raman lidar, exploits the Raman scattering from molecules in the air, a process in which energy is typically lost and the scattered light is reduced in frequency. The potential for this type of lidar to fly in space is remote. It is being used by Aerospace in a portable ground-based lidar for ground verification of atmospheric profiles from microwave instruments on DMSP.

Passive Microwave

Model of the Conical-scanning Microwave Imager/Sounder (center) with a model of the DMSP Special Sensor Microwave/Imager (right) and a microwave imager for the Tropical Rainfall Measurement Mission (left). CMIS, a multiband radiometer that will be deployed on NPOESS, integrates many features of heritage conical-scanning radiometers into a single radiometer. It will offer several new operational products (sea surface wind direction, soil moisture, and cloud base height) and quantifiable resolution and measurement range improvements over existing remotely sensed environmental products.

Passive microwave imaging radiometers (usually called microwave imagers) collect Earth’s natural radiation with an antenna and typically focus it onto one or more feed horns that are sensitive to particular frequencies and polarizations. From there, it is detected as an electrical signal, amplified, digitized, and recorded for the various frequencies and polarizations (linear or circular). The amount of radiation measured at different frequencies and polarizations can be analyzed to produce environmental parameters such as soil moisture content, precipitation, sea-surface wind speed, sea-surface temperature, snow cover and water content, sea ice cover, atmospheric water content, and cloud water content. Unlike visible imagers, microwave imagers can operate day or night through most types of weather. The natural microwave radiation from the environment is not dependent on the sun, and microwave radiation over broad ranges of frequencies is quite insensitive to water in the atmosphere.
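Operational algorithms often map such multichannel brightness temperatures to a geophysical parameter with statistical regressions. The sketch below illustrates the idea on synthetic data; the channels, coefficients, and “wind speed” are invented, not those of any flight algorithm:

    import numpy as np

    # Minimal sketch of a multichannel retrieval: fit a linear regression
    # from brightness temperatures (TBs) to a geophysical parameter, here
    # a made-up "wind speed". Everything below is synthetic.
    rng = np.random.default_rng(0)
    n = 500
    tb = rng.uniform(150.0, 280.0, size=(n, 3))    # synthetic TBs, kelvin
    true_coeffs = np.array([0.10, -0.05, 0.08])    # invented sensitivities
    wind = tb @ true_coeffs + 2.0 + rng.normal(0.0, 0.3, n)

    # Least-squares fit of channel coefficients plus an intercept.
    design = np.column_stack([tb, np.ones(n)])
    coeffs, *_ = np.linalg.lstsq(design, wind, rcond=None)
    print(np.round(coeffs, 3))  # recovers ~[0.10, -0.05, 0.08, 2.0]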
Microwave profilers or sounders, like electro-optical sounders, operate in several frequencies around a spectral band characteristic of a target gas. They are often used to measure the vertical profiles of temperature and moisture in the atmosphere. The oxygen band near 60 gigahertz, which becomes more or less opaque as a function of atmospheric temperature, is usually used for temperature sounding, while the water-vapor band at 183 gigahertz is typically used for moisture sounding. The advantage of microwave over electro-optical sounding is that it can be done through most forms of weather and cloud cover.

Passive microwave imagers and sounders generally operate at frequencies ranging from 6 to 183 gigahertz. Higher frequencies have recently been used in so-called submillimeter-wave radiometers for measuring cloud ice content. Lower frequencies, around 1 gigahertz, can be used to measure soil moisture and ocean salinity; however, such low frequencies are not always practical. For a given antenna size, spatial resolution decreases as the frequency decreases. Most microwave imagers are limited to a lower frequency of about 6 gigahertz because a large antenna would be required at 1 gigahertz to achieve acceptable resolution. This difficulty can be overcome through a technique known as aperture synthesis. In this concept, which has long been used in radio astronomy, the operation of a large solid dish antenna is simulated by using only a sparse aperture or “thinned-array” antenna. In such an antenna, only part of the aperture physically exists and the remainder is synthesized by correlating the individual antenna elements. This technique has been proven in aircraft flight demonstrations.
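The antenna-size argument is simple diffraction arithmetic. The sketch below, with an assumed orbit altitude and aperture (illustrative values only), shows why real-aperture imaging near 1 gigahertz from orbit is impractical:

    # Diffraction-limited beamwidth is roughly 1.22 * wavelength / D, and
    # the ground footprint is beamwidth * altitude. Values are assumed
    # for illustration, not any program's numbers.
    C = 2.998e8
    ALTITUDE_M = 833e3   # assumed polar-orbit altitude
    APERTURE_M = 2.0     # assumed real-aperture antenna diameter

    for f_ghz in (1.0, 6.0, 37.0, 183.0):
        wavelength = C / (f_ghz * 1e9)
        footprint_km = 1.22 * wavelength / APERTURE_M * ALTITUDE_M / 1e3
        print(f"{f_ghz:6.1f} GHz -> footprint ~ {footprint_km:6.1f} km")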
Active Microwave
Active microwave instruments can be broadly divided into real-aperture and synthetic-aperture radars. They all transmit microwaves toward Earth and measure what is reflected and scattered back. Some are interferometric, meaning that they exploit the signals that are seen from two somewhat different locations, which is a powerful means of elevation measurement. This can be done using two antennas separated by a rigid boom, or using a single antenna on a moving spacecraft that acquires data at two slightly different times, or using similar antennas on two separate spacecraft.

Real-aperture radars can be further categorized as atmospheric radars, altimeters, and scatterometers. Atmospheric radars are useful for studying precipitation and the three-dimensional structure of clouds. The use of more than one frequency is beneficial for separating the effects of cloud and rain attenuation from those of backscatter. Only one atmospheric radar is now flying in space (for measuring tropical rainfall), but others slated for launch include NASA’s CloudSat mission, which will perform the first 3-D profiling of clouds. This mission is important because clouds and aerosols are the primary unknowns in the global climate-change equation.

Altimeters measure surface topography, and radar altimeters are typically used to measure the surface topography of the ocean (which is not as uniform as one might think). They operate using time-of-flight measurements and typically use two or more frequencies to compensate for ionospheric and atmospheric delays. Altimeters have been flying since the days of Skylab in 1973. Aperture synthesis and interferometric techniques can also be employed in altimeters, depending on the application.
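The time-of-flight measurement itself is simple arithmetic, and the two-frequency correction exploits the fact that ionospheric delay scales as the inverse square of frequency. A sketch with invented numbers (the band pairing is a typical choice, not a specific mission’s):

    # Radar-altimeter arithmetic: range from round-trip pulse time, plus a
    # two-frequency ionospheric correction. All numbers are invented.
    C = 2.998e8

    round_trip_s = 5.582e-3                 # measured echo delay
    range_m = C * round_trip_s / 2.0        # one-way distance
    print(f"uncorrected range ~ {range_m:,.1f} m")

    # Ionospheric delay adds roughly k / f^2 to each apparent range, so
    # two frequencies let the frequency-dependent part be eliminated:
    f1, f2 = 13.6e9, 5.3e9                  # assumed Ku- and C-band pair
    r1, r2 = 836_741.80, 836_742.25         # invented apparent ranges, m
    iono_free = (f1**2 * r1 - f2**2 * r2) / (f1**2 - f2**2)
    print(f"ionosphere-corrected range ~ {iono_free:,.2f} m")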
Scatterometers are a form of instrument that uses radar backscattering from
Earth’s surface. The most prevalent application is for the measurement of sea surface
wind speed and direction. This type of instrument first flew on Seasat in 1978. A special class of scatterometer called delta-k
radar can measure ocean surface currents
and the ocean wave spectrum using two or
more closely spaced frequencies.
Synthetic-aperture radars also flew for the first time on Seasat. These radars sometimes transmit in one polarization (horizontal or vertical) and receive in one or the other. A fully polarimetric synthetic-aperture radar employs all four possible send/receive combinations. Synthetic-aperture radars are powerful and flexible instruments that have a wide range of applications, such as monitoring sea ice, oil spills, soil moisture, snow, vegetation, and forest cover.
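The synthetic-aperture payoff can be stated with two standard textbook approximations: a real aperture of along-track length D at range R resolves roughly (wavelength x R) / D, while a strip-map synthetic-aperture radar achieves roughly D / 2 independent of range. A sketch with assumed values:

    # Along-track (azimuth) resolution: real aperture vs. synthetic
    # aperture, using standard first-order approximations. The frequency,
    # range, and antenna length are assumptions for illustration.
    C = 2.998e8

    f_hz = 5.3e9                 # C-band, as flown in the SIR-C/X-SAR era
    wavelength = C / f_hz
    range_m = 850e3              # assumed slant range from orbit
    antenna_m = 10.0             # assumed along-track antenna length

    real_aperture_res = wavelength * range_m / antenna_m
    sar_res = antenna_m / 2.0
    print(f"real aperture    : ~{real_aperture_res/1e3:.1f} km along track")
    print(f"synthetic aperture: ~{sar_res:.1f} m along track")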
The prototype BASS (Broadband Array Spectrograph System) instrument being used to study cirrus clouds simultaneously with NOAA radars (in background), as part of NOAA’s Climate and Global Change Program.

Aerospace Support

Traditionally, environmental remote sensing activities at Aerospace supported military programs such as DMSP. In the early 1990s, that began to change. Aerospace support to NOAA (the National Oceanic and Atmospheric Administration) grew to include the GOES-NEXT series of geosynchronous weather satellites, AWIPS (the Advanced Weather Interactive Processing System), and risk assessment of a proposed spaceborne global wind-sensing system. At the same time, Aerospace conducted a series of independent reviews for NASA programs, including the Shuttle Imaging Radar, the Total Ozone Mapping Spectrometer, and the NASA Scatterometer, designed for ocean wind measurement. In 1992, Aerospace developed the concept for a DMSP digital data archive that was implemented by NOAA and the National Geophysical Data Center in 1994.
This portable ground-based lidar system uses Rayleigh and Raman scattering of light to generate vertical profiles of atmospheric temperature and water vapor. It is used to verify calibration of the sounding channels on environmental satellites in orbit.
The DOD’s next-generation DMSP
and NOAA’s next-generation POES (Polar-orbiting Operational Environmental Satellite) programs were officially merged by presidential directive in 1994, creating a new triagency NOAA/DOD/NASA program called NPOESS (National Polar-orbiting Operational Environmental Satellite System). NASA’s initial role was to
provide technology transfer. Aerospace began work on NPOESS with a small team in
1992 when the transition was first announced, studying issues such as whether
the needs of both NOAA and DOD could
be addressed by shared instruments. This
program provides a good example for comparing and contrasting the needs of the civil
and military communities in terms of requirements, instrument design, research,
and operations. Designing a single visible/
infrared imager that satisfies the civil community’s need for accurate radiometric
calibration and long-term stability and the
military’s need for high-quality imagery—
including nighttime visible imagery—has
been a particular challenge that Aerospace
has helped to address during the last
decade. During that period, NPOESS has
also become the follow-on to the Earth Observing System climate mission. NPOESS
will be an important Aerospace program for
many years to come, requiring support for
everything from basic physics to ground
systems.
As of 2004, Aerospace support in environmental remote sensing extends to the
Earth Science and Technology Directorate
of Caltech’s Jet Propulsion Laboratory,
NASA’s Earth Science Technology Office,
NASA’s Goddard Space Flight Center and
the U.S. Geological Survey on Landsat,
NOAA’s Office of Systems Development
on the future of the geosynchronous
weather satellite program, and NASA Goddard on the NPOESS Preparatory Project,
which is a bridge between the Earth Observing System and NPOESS. Aerospace
members serve on the federal Interagency
Working Group on Earth Observation and
the international Group on Earth Observation, assisting these bodies in their attempt
to coordinate future remote-sensing satellites and data. Aerospace further supports
the remote-sensing space policy community
by keeping tabs on the remote-sensing
plans of every nation.
Recent Developments at
Aerospace
In areas where the company has unique expertise, Aerospace constructs proof-of-concept instruments and collects data in
field tests. For example, BASS (the Broadband Array Spectrograph System) is a
patented infrared spectrometer for groundbased and airborne remote sensing. Under
the aegis of NOAA’s Climate and Global
Change Program, BASS has been used to
support efforts to combine infrared and
radar reflectivity studies of cirrus clouds to
understand their physical properties better.
Clouds and aerosols are the primary sources
of uncertainty in global climate-change
models, so improved understanding of their
physical properties will advance scientific
understanding of global change.
Aerospace has advanced the field of
hyperspectral remote sensing through an
evolution of BASS called SEBASS (Spatially Enhanced BASS). As mentioned earlier, hyperspectral instruments typically
span the visible through shortwave infrared.
SEBASS, on the other hand, operates in the
thermal infrared. Although a few other
hyperspectral instruments also cover this
range, SEBASS does so with greater sensitivity. Aerospace has developed several
other instruments that stem from the original BASS design.
Aerospace is also working on a new
remote-sensing technique known as SAIL
(synthetic-aperture imaging ladar). This
technique, still in its infancy, uses aperture
synthesis to achieve unprecedented spatial
resolution with a ladar (or laser radar). The
groundbreaking work by Aerospace on the
SAIL technique has the potential to one day
afford extremely high resolution imaging of
objects with ladar.
Conclusion
For more than 40 years, Aerospace has pioneered the design and development of systems for remote sensing of Earth. Aerospace researchers have worked on every
major type of instrument, as well as user requirements, system architecture, modeling
and simulation, image compression, image
processing, and algorithms for understanding the data, in support of programs managed by DOD, NASA, JPL, NOAA, and
others. Familiarity with these user communities puts Aerospace in a unique position to
help coordinate their efforts and ensure that
sensing systems keep pace with the customers’ changing needs and goals.
Further Reading
D. L. Glackin and G. R. Peltzer, Civil, Commercial, and International Remote Sensing
Systems and Geoprocessing (The Aerospace
Press and AIAA, El Segundo, CA, 1999).
H. J. Kramer, Observation of the Earth and Its
Environment: Survey of Missions and Sensors,
Fourth Edition (Springer-Verlag, 2002).
The Best Laid Plans: A History of the
Manned Orbiting Laboratory
In the mid to late ‘60s, an ambitious project to launch an orbital space laboratory for science
and surveillance came to dominate life at Aerospace.
Steven R. Strom
During one particularly momentous press conference on
December 10, 1963, Secretary of Defense Robert McNamara announced both the death of the Dyna-Soar space
plane and the birth of the Manned Orbiting Laboratory
(MOL). Like the Dyna-Soar, MOL was a farsighted Air Force program that explored the potential for piloted space flights. Like the
Dyna-Soar, it was cancelled before reaching its goal—but not before
making some important contributions in the field of spaceflight and
space-station technologies.
MOL had a profound influence on Aerospace for two important
reasons. First, in terms of sheer size, the MOL program office represented an enormous expenditure of corporate funds, human resources, intellectual capital, and effort. Second, its cancellation in
1969 had a deep psychological impact on all Aerospace personnel—
not just those who worked on it—because it was the first time that
the company was forced to make any sizeable reductions in workforce. The program’s termination represented a stark ending to the
large budgets and expansive optimism that had characterized
America’s space programs in the 1960s and foreshadowed leaner
budgets and lower expectations for the years to come.
Concept Development
By the early 1960s, the demise of Dyna-Soar already seemed imminent, and the Air Force was searching for a viable way to continue human activities in space. An orbiting space platform offered opportunities for human surveillance over the Soviet Union and China, which
was important because American reconnaissance capabilities were severely limited after Col. Francis Gary Powers and his U-2 plane were
brought down over Soviet territory in 1960. Remote-sensing satellites,
such as Corona, were still limited in their surveillance capabilities.
A few days before Secretary McNamara’s announcement, a team
of representatives from the Air Force Space Systems Division and
Aerospace flew to Washington, DC, to review several possible implementations of MOL. Consultation with other NASA and Department
of Defense (DOD) personnel produced a working sketch of the
program. Planners envisioned a pressurized laboratory module,
approximately the size of a small house
trailer, that would enable up to four Air
Force crewmembers to operate in a “shirtsleeve” environment. The laboratory would
be attached to a modified Gemini capsule
and boosted into near-Earth orbit by an upgraded Titan III. Astronauts would remain
in the capsule until orbit and then move into
the laboratory. In addition to military reconnaissance duties (still largely classified), the
astronauts would conduct a variety of scientific experiments and assess the adaptability
of humans in a long-duration space environment (up to four weeks in orbit). When their
mission was complete, they would return to
the capsule, which would separate from the
laboratory and return to Earth. Launch facilities would be located at Vandenberg Air
Force Base in California to permit launch
into polar orbit for overflight of the Soviet
Union.
Planners agreed that the use of existing
Gemini technologies would make MOL’s
acceptance easier for those in Congress
who were concerned about additional defense spending and those within the space
community who worried that a concurrent
Air Force space program could slow down
work on the Apollo program, possibly endangering the U.S. effort to beat the Soviets
to the moon. The press release announcing
the startup of MOL stressed cooperation
with NASA to emphasize that the Air Force
was not embarking on an entirely solo project: “The MOL program will make use of
existing NASA control facilities. These include the tracking facilities which have
been set up for the Gemini and other space
flight programs of NASA and of the Department of Defense throughout the world.
The laboratory itself will conduct military
experiments involving manned use of
equipment and instrumentation in orbit and,
if desired by NASA, for scientific and civilian purposes.” NASA continued to provide
a great deal of logistical support to MOL
over the course of the program’s lifetime.
A Quick Start
Following McNamara’s announcement,
Aerospace immediately began work as part
of the concept study phase. At the beginning of 1964, seven Aerospace scientists
and 19 engineers developed possible experiments for MOL and worked to define possible MOL configurations as well as vehicle
and subsystems concepts. On February 1,
1964, the Air Force Space Command
announced the creation of a special MOL
management office, headed by Col. Richard
Jacobson. Two days later, Aerospace initiated a major organizational restructuring,
with Pete Leonard appointed to lead the
newly formed Manned Systems Division.
The next month, Walt Williams came to
Aerospace from NASA to become vice
president and general manager of this new
division. By the end of the year, the number
of Aerospace technical staff members assigned to work directly on MOL had increased to 34. These researchers regularly
gave presentations and briefings on their
findings in Washington throughout 1964;
still, outside the Defense Department, MOL
lacked a committed core of government
supporters.
The Air Force assigned more research
contracts for the MOL laboratory vehicle in
early 1965, and Aerospace continued studies concerning the future of the military in
space. Although the first MOL crew was
scheduled to fly sometime between late
1967 and early 1968, full approval of the
program was contingent on the DOD’s
demonstrating a genuine national need to
deploy military personnel in space. To facilitate approval, the Defense Department affirmed that NASA’s lunar landing program
would remain the top priority and that duplicative programs would be avoided, with
the Air Force continuing its use of existing
hardware and facilities and cooperation
with NASA on MOL experiments.
The program finally received formal
approval from President Lyndon Johnson
on August 25, 1965. Johnson’s announcement included a budget of $1.5 billion for
MOL development. The MOL program
would enable the United States to gain
“new knowledge of what man is able to do
in space,” Johnson said, “and relate that
ability to the defense of America.” Johnson’s approval marked the formal recognition that the Defense Department had a
clear mandate to explore the potential applications of piloted spaceflight to support national security requirements.
Early Successes
Following official approval, the MOL program immediately began work on Phase I,
which extended from September 1, 1965, to
May 1, 1966. After working primarily with
the planning for MOL, including the design
concepts for the spacecraft, Aerospace now
had formal GSE/TD (general systems engineering/technical direction) for both the
spacecraft and the Titan IIIC launch vehicle
under contract to Air Force Space Systems
Division, commanded by Gen. Ben I. Funk.
Pete Leonard was appointed head of a new
MOL Systems Engineering Office, with
Walt Williams as his associate and William
Sampson as his assistant. The three were
collectively known as “the troika” by Aerospace employees. During Phase I, the Aerospace technical contingent working on
MOL more than doubled in size, from 80 to
190. The Air Force’s MOL program office
had a complex organizational structure,
with Gen. Bernard Schriever serving as program director in Washington, DC, and Brig.
Gen. Russell Berg, who reported directly to
Schriever, acting as deputy at the Space
Systems Division in El Segundo, California. To improve administrative efficiency,
Aerospace began colocating employees
from its MOL Systems Engineering Office
with members of the Air Force MOL program office in early 1966.
Aerospace Phase I activities were primarily directed toward firming up contractor work statements and duties and initiating contractor and in-house studies required
for system definition. Aerospace conducted
numerous cost analyses to verify the accuracy of contractor estimates for MOL components. About halfway through the first
phase, the Air Force and Aerospace received instructions to design MOL so that it
could also operate without an onboard
crew—just in case the Soviet Union objected to overflight of its territory by military personnel. Aerospace had already conducted automation tests and was able to
direct the contractors on necessary changes.
The alterations, however, added roughly
one ton to the space-station weight. As a result, Aerospace had to conduct additional
studies during the next year to determine
which subsystems could be reduced in mass
without harming the space station’s overall
performance.
The Phase II schedule called for a series of seven qualifying test launches of the
laboratory from the Western Test Range
beginning in April 1969, with the first piloted flight set for December 15, 1969.
Thus, it was an important milestone when
construction began on Space Launch Complex 6 (SLC-6) at Vandenberg on March 12,
1966. This was one of the most complex
construction projects ever attempted by the
Air Force at Vandenberg. Aerospace had a
major role in the launch site’s design and
construction as part of the company’s
GSE/TD responsibilities.
Construction began on Vandenberg’s SLC-6 in March 1966. This was one of the most complex construction projects ever attempted by the
Air Force at Vandenberg. With the cancellation of MOL, SLC-6 would have to wait several decades for its first successful launch.
In November 1966, MOL enjoyed a
much-needed success when a Gemini capsule, attached to a modified Titan II propellant tank (to simulate the laboratory), was
launched from the Eastern Test Range by a
Titan IIIC. One important purpose of this
launch was to test the stability of a hatch
door that had been cut into the heat shield
of the Gemini capsule, an addition that
would enable the astronauts to transfer directly from their capsule to the laboratory.
The capsule was ejected and recovered near
Ascension Island, and the heat-shield test
was declared a success. This test flight
marked the only occasion that the Titan
IIIC/MOL configuration was actually
flown.
By the end of 1966, MOL planners
were seeing genuine signs of progress, but
these were tempered by several negative
trends—most notably, the continued underfunding of the project and the concurrent
cost overruns. These budget problems
would only worsen as the program grew in
complexity and increasingly had to compete for funds with the Vietnam War.
A Growing Project
The principal MOL contractor and major
subcontractors were selected in early 1967.
Negotiations were somewhat protracted
because the government insisted on fixed-
price contracts. These contracts, intended to
save costs, only added to the work of Aerospace, which had to conduct numerous
studies to verify the pricing information
submitted by the contractors.
When Project Gemini successfully
concluded, 22 members of that program office were transferred to MOL, where their
expert knowledge of Gemini hardware
could be effectively used. Some veterans of
the Mercury and Gemini programs were
disappointed that they would not get to support the Apollo program, which would have
been a logical next step if the Air Force had
not decided to embark on its own piloted
space program. In February, Aerospace
made another organizational adjustment,
reflecting management’s belief that MOL
would remain a major component of the
company’s activities. Three directorates
were established under the aegis of the
MOL Systems Engineering Office: Engineering, led by Sam Tennant, who would
later serve as president of Aerospace; Operations, headed by Robert Hansen; and the
Planning, Launch Operations, and Test Directorate, led by Ben Hohmann, who had
achieved such great success with the Mercury and Gemini programs.
In a reflection of the growing bureaucratic and engineering complexity of MOL,
by May 1967, Aerospace had 28 MOL
working groups, including software management, environmental control and life
support, crew transfer, and ground-systems
coordination. The proliferation of bureaucracy, not only at Aerospace but in the Air
Force as well, sometimes made the transmission of information difficult. Joe
Wambolt, who served as the director of
launch operations in Ben Hohmann’s directorate, remembers that, “It was almost impossible to find out what another office was
doing. No one ever seemed to know the ‘big
picture’ of what was going on. A lot of people knew a great deal about what was happening in their particular offices, but the
only person who ever understood everything that was going on in the entire MOL
program, in my opinion, was Sam Tennant.”
A Shrinking Budget
A variety of problems surfaced in 1967.
The year began with the tragic Apollo 1 fire
on January 27, in which three astronauts
died testing their Apollo capsule on the
ground. The fire prompted several reviews
of the Aerospace decision to use a mixture
of 70 percent oxygen and 30 percent helium
onboard MOL, but as Ivan Getting, who
served as the president of Aerospace during
the life of the MOL program, noted in an
interview, the mixture proposed by Aerospace “was much safer from the standpoint
of ignition and fire” than the all-oxygen environment used by NASA inside Apollo 1.
Meanwhile, in March, the increasing
weight of the laboratory module forced the
Air Force to propose upgrading the Titan
IIIC. (The crew-rated version of Titan IIIC,
under development specifically for the
MOL program, was designated Titan IIIM.)
Much support for MOL came from the
Aerospace Titan program office, which was
assigned to study the proposed Titan IIIC
improvements.
Further financial woes arose later in
the year when details of the next federal
budget were released. The president allocated only $430 million for total MOL
spending, slightly more than half of the
$800 million that contractors said they
needed to complete their work. This drain
on MOL funding, caused by the escalating
costs of the Vietnam War, forced the Air
Force to push back scheduled MOL
launches.
Aerospace had been making recommendations for technical and schedule
changes to cut costs since the beginning of
1966—but by the fall of 1967, funding
problems became so severe that the Air
Force asked Aerospace to review the entire
program to identify its most important objectives and note measures that could save
money. This study was known formally as
Project Upgrade, while a concurrent technical audit conducted by Aerospace was
named Project Emily (the derivation of this
name is unknown). Project Upgrade eventually identified 22 major MOL objectives,
and in March 1968, Aerospace published a
new performance and requirements document that became the standard guide for
MOL contractors. The idea was to reduce
costs by eliminating requirements that
could not be traced to program objectives.
When specifics of the federal budget
for fiscal year 1969 began to appear in June
1968, further problems arose. The $515
million proposed for MOL—at least $100
million below estimates of the amount
needed—necessitated another series of
schedule changes. The first launch was still
set for late 1970, but the third was pushed
back three months. It was now planned for
MOL to be operational by 1971. By this
time, constant schedule changes and budgetary problems were affecting workforce
morale. Joe Wambolt recalls that, “No matter how hard we worked, we were always a
year away from launch. We just never
seemed to get ahead.” The 1969 budget also
forced Aerospace to cut the number of technical personnel working on MOL from 300
to 275. According to Air Force and Aerospace projections, roughly $700 million annually would be needed for the next few
years—but with the Vietnam War still raging, there was little likelihood of receiving
more than $500 million for each of the next
three fiscal years at least.
The Air Force asked Aerospace to conduct another series of technical reviews to
determine possible changes for the program
to accommodate the reduced budgets. In
January 1969, a new president, Richard M.
Nixon, was inaugurated, but there was little
likelihood that he would increase funding
for a program like MOL after campaigning
on a platform of greater restraint in federal
spending.
Artist’s conception of the MOL ascending into orbit. When this image was made, in 1964, planners expected to use a Titan IIIC to lift the laboratory.
Impending Disaster
Despite cutbacks and constant budget limitations, MOL still had the largest support of
any research and development program
within the DOD. Moreover, by 1969, the
program had made many significant advances, including substantial progress toward the completion of SLC-6 as well as
the development of the Titan IIIM launch
vehicle and various MOL subcomponents.
Fourteen pilots (eleven Air Force, two
Navy, one Marine) had already been selected as MOL astronauts and were in training. It still appeared to many Air Force and
Aerospace observers that a viable military
“man-in-space” program was on the verge
of implementation. Thus, with the approach
of June and the announcement of the 1970
fiscal budget looming, there was nervousness among MOL team members as to how
much funding the program would receive, but apparently no sense of impending
disaster.
On June 10, 1969, Ivan Getting was in
Washington, DC, attending a meeting of the
Vietnam Panel of the President’s Scientific
Advisory Board, when he heard the startling news that Defense Secretary Melvin
Laird had just told Congress that MOL had
been cancelled. In an effort to reduce costs,
President Nixon had opted to cut further
funding for MOL in favor of NASA’s much
more visible Skylab program, which was
also in development as a follow-up to
Apollo. Even though roughly $1.4 billion in
development funds had already been spent
on MOL, the projected cost increases, the
continuing advances in automated space
surveillance systems, and the lack of supporters outside the DOD made MOL an
easy target. “Regardless of the justice of the
decision,” Getting wrote in his autobiography, “the impact on Aerospace and its people was traumatic.” The Air Force was similarly stunned by Nixon’s decision, and the
official Air Force announcement of MOL’s
cancellation was made at the site of the
nearly completed SLC-6 at Vandenberg.
When the cancellation of MOL was
announced, nearly 600 Aerospace employees were working on the program. Besides
the 205 working in the MOL program office, this number included employees working in various support functions, such as the
50 technical staff members in the Titan office assigned to work on the Titan IIIM.
One out of every six members of Aerospace’s technical workforce was affected by
the cancellation. The fiscal year would end
on June 30, leaving only three more weeks
of funding for the Aerospace program office. MOL represented about 20 percent of
the work performed at Aerospace; job cuts
were inevitable.
Nonetheless, Getting refused to allow
the company to lose some of the country’s
most productive technical minds. Working
closely with Aerospace management and
the Air Force, he quickly initiated a process
of screening and reassigning MOL staff to
other Aerospace programs. The Air Force,
well aware of the quality of the Aerospace
MOL scientists and engineers, assisted the
transfer of some Aerospace personnel to
support other program offices. Still, there
were only so many slots available, and a
corporate-wide layoff took place over the
next several weeks. Even though these layoffs were not as severe as initially feared,
they did affect corporate morale. In the final
year of the 1960s, the boundless optimism
of that decade came to an abrupt halt for
many at Aerospace who wondered if their
programs might be axed next. It was, wrote
Getting, “a bitter pill.”
Fourteen pilots (eleven Air Force, two Navy, one Marine) were selected as MOL astronauts. A MOL fact sheet from early 1968 notes that, “in addition to their formal training in advanced aeronautics, they work as engineering consultants, providing the pilot’s view in the design of equipment. For example, in the past year tests have been successfully conducted by the crew members in a specially equipped jet aircraft flying parabolic arcs to demonstrate the capability of astronauts to transfer back and forth between the Gemini B and the laboratory in a weightless environment.”
The MOL Legacy
Though undeniably important in the history
of The Aerospace Corporation, MOL also
played a vital role in the history of the
American space effort. It remains, much
like the Dyna-Soar, one of the great “what-ifs” in the history of space exploration.
Had it not been terminated, MOL would
have been the first U.S. orbital space
station, and its crews would have been the
first to reach space from the Western Test
Range (a feat still unaccomplished).
Despite the contention in 1969 that
technology had overtaken the need for human observers in space, the same argument
originally used to support the presence of
MOL astronauts is used today to justify a
crew onboard the International Space Station. Some MOL experiments were eventually performed on Skylab missions, and
some of the reconnaissance systems were
later employed on the KH series of satellites. MOL’s use of Gemini technology, proposed at the time as a useful maneuver to
help the program win approval, has its admirers in the space community today because of the widespread perception that
Gemini hardware was able to perform its
tasks using relatively cheap, yet reliable,
technology. With renewed emphasis today
on the importance of space to U.S. military
efforts, more and more observers are looking back to the concepts first proposed 40
years ago by the advocates of MOL.
Further Reading
The Aerospace Corporation Archives, Manned
Orbiting Laboratory Collection, AC-073.
The Aerospace Corporation Archives, Orbiter
Collection, AC-005.
The Aerospace Corporation Archives, President’s Report to the Board of Trustees, Vol. II
(all quarterly reports published 1964–1970),
AC-003.
I. Getting, All in a Lifetime: Science in the Defense of Democracy (Vantage Press, New
York, 1989).
I. Getting, oral history interview, March 7,
2001.
Donald Pealer, “Manned Orbiting Laboratory
(Parts 1 and 2),” Quest: The History of Spaceflight Quarterly, Vol. 4, Nos. 2 and 3.
Space and Missile Systems Center, Historical
Archives, MOL files.
Joe Wambolt, oral history interview, May 27,
2004.
The Infrared Background Signature
Survey: A NASA Shuttle Experiment
The development of remote sensing systems requires an accurate understanding
of the phenomena to be observed. Aerospace research helped characterize
space phenomena of interest to missile defense planners.
Frederick Simmons, Lindsay Tilney, and
Thomas Hayhurst
SDIO, the Strategic Defense Initiative Organization (precursor of the Missile Defense Agency), conducted numerous experiments in
Missile Defense Agency), conducted numerous experiments in
the late 1980s to study phenomena related
to the passage of intercontinental ballistic
missiles through the upper atmosphere. Understanding such phenomena was considered a critical step in building systems to
detect and track such missiles.
In an effort to involve NATO allies in
its research, SDIO invited the West German
government to join in an experiment involving deployment of the Shuttle Pallet Satellite (SPAS-II), developed and flown by
West Germany in a prior research mission.
In its primary mode, deployed from the
cargo bay, it would transport sensors for remote observations and be retrieved once the
data were collected.
The Germans proposed installing an
infrared scanner and spectrometer on the
satellite to measure the radiance profiles of
the Earth limb, the bright background
against which a missile defense system
would have to discriminate midcourse targets. Hence, the experiment was termed
the Infrared Background Signature Survey, or simply IBSS.
A panel of scientists from several organizations (including Aerospace) was
assembled to review the plan. Their immediate reaction was that the German instrument was ill suited for the job. Moreover, they pointed out that SDIO was
already funding development of an instrument at the Air Force Geophysics Laboratory for that very purpose (a cryogenic infrared radiometer, which in fact flew on
the same shuttle mission as IBSS). Accordingly, the group began looking for
other experiments that could effectively
use the German instrument.
Aerospace recommended two experiments that were accepted by SDIO. The
first involved using the sensors aboard
SPAS-II to observe the plumes from the
shuttle’s orbital maneuvering system engines (OMS) and the primary reaction
control system thrusters (PRCS). These
engines would approximate the thrusters
that powered the various postboost vehicles of concern to SDIO.
The second experiment involved the
deployment of small canisters that would
release liquid rocket propellants to simulate the rupture of a missile tank by a
boost-phase interceptor. Characterization
of such propellant releases could provide
a basis for a missile defense system’s “kill
assessment.”
The plume observations were
planned and coordinated by the Institute
for Defense Analyses, with subsequent
analyses performed at Aerospace and
other organizations. The responsibility
for the propellant releases was given to
Aerospace.
Preparations and Deployment
Aerospace played a large role in the program as a whole by overseeing the integration of the IBSS payload into the orbiter and managing its orbital operations. The complex operations of this mission were planned and designed at the Aerospace Conceptual Flight Planning Center using the NASA Flight Design System software. Aerospace also helped develop crew procedures and flight-planning requirements to ensure that the astronauts carried out the experiments properly.
The IBSS experiments were conducted from shuttle flight STS-39, launched April 28, 1991, into a circular orbit of 260-kilometer altitude and 57-degree inclination. Aerospace engineers served as technical advisors for the director and manager for cargo operations. The various onboard activities required two full shifts of astronauts (Guion Bluford, Jr., now an Aerospace trustee, was a mission specialist for the accompanying payload on this flight).
After the shuttle was launched, several deviations from the nominal timeline posed great challenges—most notably, dealing with the effects of a change in launch date, a delayed SPAS-II deployment, increased allocation of data collected while the satellite was attached to the remote manipulator system, and a delay in the timing of the high-priority observations. Aerospace knowledge of the payloads and orbiter capabilities facilitated the successful implementation of contingency plans, mission timeline changes, and operational workarounds.
Aerospace assisted the team that continuously updated 12-hour timelines for the upcoming shifts of personnel on the ground and in the orbiter. Aerospace provided continuous support at NASA Johnson Space Center to ensure that the data collection requirements were adequately met. Aerospace personnel were on 12-hour shifts at consoles, supporting tests and helping in the experiment timeline replanning efforts.
The IBSS Shuttle Pallet Satellite being deployed from the bay of the orbiter Discovery by the remote grappler.
Visible image of the orbital maneuvering system plume recorded by the video camera aboard the Shuttle Pallet Satellite.
Orbital Burns
The postboost-vehicle simulation burns of
the OMS and PRCS engines were conducted with the thrust vectors in a direction
normal to the orbiter flight path. This orientation represented cross-range burns of a
postboost vehicle deploying its payload of
reentry vehicles. Each burn for observations
was followed by a “null” burn to maintain
orbital position. The orbiter remained behind SPAS-II to prevent exhaust products or
natural particles in the upper atmosphere
from contaminating the sensors. A total of
22 burns were made in the course of these
observations.
The design of the experiment was
based on the observation that rocket engines discharging into a rarefied atmosphere
while moving at high velocity create a
plume consisting of two components. The
“near-field” or “intrinsic-core” component,
localized near the nozzle exit (within a few
meters or tens of meters), is independent of
vehicle altitude and velocity and represents
a minimum observable infrared intensity.
Further from the nozzle, the plume interacts
with the atmosphere to form the “far field”
or “enhancement” of the total intensity. The
latter component is highly dependent on the
vehicle’s altitude as well as its attitude and
velocity with respect to the atmosphere. For
a missile in a rising trajectory, the apparent
enhancement peaks at about 100 kilometers
in altitude and 3 kilometers/second in velocity and then diminishes rapidly until
only the intrinsic core can be observed. For
that reason, the intrinsic core is sometimes
termed the “vacuum-limit.” The near-field
observations of the OMS and PRCS
plumes were made at a range of about 1
kilometer; those of the far fields required
separation of 10 kilometers.
Orientation of the orbiter and the Shuttle Pallet Satellite during the observation of the orbital maneuvering system burns. The plume was scanned with the 22-element detector array oriented as indicated. A total of 22 burns of the OMS and PRCS thrusters were made in these experiments.
The principal goals of the observations
were to measure the spatial distributions of
radiances in two spectral bands selected as
candidates for postboost-vehicle detection
in a defense system and to measure the
spectra for both components of both
plumes. Of particular significance were the
observations in the 4–5-micron region for
the emission from the characteristic bands
of carbon dioxide and carbon monoxide,
observations that can only be made in space
because of the blanketing effect of absorption by carbon dioxide in the atmosphere.
These plume observations led to two
significant discoveries. First, the constancy
of the far-field radiances in the expanding
plumes—up to 1600 meters from the nozzle exit in the OMS plume—implied that
the rate-controlling process was the influx
of highly reactive atomic oxygen, the principal species in the upper atmosphere. Second, the spectra of the far field indicated the
principal radiating species in the plume to
be carbon monoxide rather than carbon
dioxide, the latter being dominant in the
near field and previously thought to be in
the far field as well. Accordingly, these results provided a much better basis for estimating the infrared emission from postboost vehicles observable to space-based
sensors; such studies have recently been
performed at Aerospace in support of the
development of an advanced surveillance
system intended to replace that of the Defense Support Program.
Radiances in the orbital maneuvering system plume in the 4–5-micron region. The individual detectors in the 20-element array of the scanner show the variations in the near field, then coalesce into a single value in the far field.
Medium-wave infrared spectra of the near field of the orbital maneuvering system plume. The bands of carbon dioxide and carbon monoxide are both evident in the plume close to the nozzle exit. The narrowness of the bands is consequent to the very low temperatures resulting from the rapid expansion of the exhaust gases.
Medium-wave infrared spectra of the far field of the orbital maneuvering system plume. The quasi-periodic structure is due to “band heads” characteristic of changes in the vibrational energy of carbon monoxide consequent to the very energetic interaction with atomic oxygen entering the plume at the orbital velocity of more than 7 kilometers/second.
Propellant Releases
The propellant-release experiments were quite different in nature. A liquid propellant vented into a near vacuum will undergo flash evaporation. Part of the mass will expand rapidly as a cloud of vapor, which will interact with atomic oxygen in the upper atmosphere and produce chemiluminescent emission in the infrared; the rest will form a cloud of frozen particles embedded within the vapor cloud, which will strongly scatter sunlight. The propellant release observations were designed to evaluate the infrared properties of such clouds, which could impact the functioning of a missile-detection system. The propellants were transported in three canisters or “subsatellites” deployed in sequence from launchers in the orbiter bay; two contained about 25 kilograms of the fuels monomethylhydrazine and unsymmetrical dimethylhydrazine, and one contained about 6 kilograms of the oxidizer nitrogen tetroxide.
Prior to each chemical release, the orbiter would maneuver for a separation of about 100 kilometers for the observations. The subsatellites carried an optical beacon and a radar reflector to facilitate acquisition of the canister by the astronauts using the video camera aboard SPAS-II, thus ensuring the precise pointing of the other sensors. These subsatellites were designed and built by a defense contractor under the close supervision of Aerospace, with particular attention to NASA safety requirements.
These experiments required considerable planning for the orbital arrangements. In particular, the liquid propellants had to be released in sunlight and in view of the ground station, from which commands were sent to turn the optical beacon on and to open the propellant valves.
All three chemical release operations were successful. In each case, the astronaut in control was able to acquire the optical beacon with the video camera to optimize the pointing of the infrared sensors. The subsatellites discharged their contents upon command from the Western Test Range, which had been providing radar tracking and relaying position information to the orbiter. The video pictures of the releases and growths of the resultant clouds were relayed to the ground; the infrared data were recorded aboard SPAS-II and subsequently transmitted to Aerospace for analysis. The results of these experiments contributed immensely to the knowledge of such phenomena and their impact on missile surveillance. The success of the actual data collections was in great measure due to the early design of the experiment timelines. Many Aerospace people contributed to that planning.
A chemical release observation canister being deployed via the launch tube in the orbiter bay.
Video image of the release of monomethylhydrazine. The cloud had grown to about 4 kilometers in diameter; the bright spot in the center is a sun glint from the subsatellite body. The cloud was simultaneously scanned across the center with the radiometer. This provided the infrared radiance profiles in selected spectral bands.
Other Experiments
There were other important and productive experiments, conducted mainly by the Air Force Geophysics Laboratory, which provided valuable data in viewing terrestrial scenes, the Earth limb, and the orbiter environment, and observing effects resultant to the release of various gases from containers in the cargo bay. Particularly important were the observations of the “shuttle glow” seen by astronauts on previous flights (a phenomenon that has been attributed in part to the recombination of atomic oxygen in the atmosphere on the surfaces of the orbiter). It was also observed that the glow was considerably enhanced during and immediately following OMS and PRCS burns. A series of measurements of Earth backgrounds in the midwave infrared bands provided information much needed in the design of advanced space-based sensors for improved missile surveillance. Finally, some of the most spectacular images of auroras viewed from space were obtained during this mission.
Acknowledgements
Individuals from a number of organizations
played key roles in the planning and execution of the IBSS experiments. Among the
people at Aerospace who made significant
contributions are Ron Thompson, Larry
Sharp, Kitty Sedam, Jo-Lien Yang, Jim
Covington, and Linda Woodward.
Further Reading
L. Baker et al., “The Infrared Background
Signature Survey, Final Report,” SDIO Document, 29 January 1993.
F. Simmons, Rocket Exhaust Plume Phenomenology (The Aerospace Press and AIAA, El
Segundo, CA, and Reston, VA, 2000).
P. Albright et al., “Analysis of the IBSS Orbiter Plume Experiments,” Proceedings, JANNAF Plume Technology Meeting, Albuquerque (February 1993).
T. Hayhurst, “The Infrared Background Signature Survey Chemical Release Observation
Experiment Performance Report,” Aerospace
Report TOR-93(3083)-1 (November 1992).
F. Simmons, “Application of the IBSS Plume
Data for PBV Signature Estimates,” Aerospace Report TOR-2002(1033)-3 (March
2001).
Active Microwave
Remote Sensing
Daniel D. Evans
Active microwave sensing—which includes imaging and moving-target-indicating radar—offers certain advantages over other remote-sensing
techniques. Aerospace has been working to increase the capability of
this versatile technology.
Active microwave sensors are
radars that operate in the microwave region (1 to 30 gigahertz in
frequency, 1 to 30 centimeters in
wavelength). Unlike passive microwave
sensors, they provide their own illumination
and do not depend upon ambient radiation.
Microwaves propagate through clouds and
rain with limited attenuation. Thus, active
microwave sensors operate day or night, in
all kinds of weather.
Early radar systems involved a fixed
radar source that scanned a field of view to
track military targets, such as ships or airplanes. Current and proposed systems take
many more forms and can operate as cameras, generating high-quality images from
moving platforms. Research at Aerospace
has been helping to advance the capabilities
of microwave imaging and target-detection
systems and expand their practical use.
Image of the Los Angeles area from NASA’s Shuttle Radar Topographic Mapping project, with color-coding of topographic height.
Fundamentals
Pulsed radar operates by emitting bursts of
electromagnetic energy and listening for the
echo. The ratio of the pulse duration (the
transmission period) to the time between
pulses (pulse repetition interval) is a key design parameter known as the duty factor. A
higher duty factor lessens the peak power
requirement at the expense of eclipsing, or
the loss of returned signal energy when the
radar is in transmission mode.
Resolution in the range direction
(along the antenna boresight) can be determined by the pulse duration—the shorter
the pulse, the finer the resolution. In this
case, the range resolution would be the
pulse duration multiplied by half the speed
of light (to account for the round trip). One
difficulty associated with this approach is
that it would require extremely high and
typically unobtainable peak power to be
transmitted in a very short time to achieve
suitable resolution. This problem is avoided
through a technique known as pulse compression, which uses coded pulses or waveforms followed by signal processing. The
necessary processing is achieved by
matched filtering: The returned signals are
correlated with a bank of ideal signals
(matched filters) representing returns from
specific ranges illuminated by the radar.
Range resolution in this case is calculated
as the speed of light divided by twice the
bandwidth of the waveform. Therefore, resolution increases with the bandwidth of the
waveform: The wider the bandwidth, the
more precise the assumed location of the target must be to correlate the returned signal.
In this way, the peak power requirement
may often be reduced three orders of magnitude or more.
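These ideas are easy to illustrate numerically. The sketch below uses hypothetical values throughout (not the parameters of any system discussed in this article): it generates a linear-FM pulse, buries a delayed echo in noise, and recovers the target range by matched filtering, with the resolution following the c/(2B) rule.

```python
import numpy as np

c = 3.0e8        # speed of light, m/s
B = 50.0e6       # waveform bandwidth, Hz (assumed for illustration)
T = 20.0e-6      # pulse duration, s
fs = 2 * B       # complex sampling rate, Hz

# Linear-FM ("chirp") pulse: the frequency sweeps B hertz over T seconds.
t = np.arange(0, T, 1 / fs)
pulse = np.exp(1j * np.pi * (B / T) * t**2)

# Simulated echo: a weak copy of the pulse, delayed by the round trip to 12 km.
R_true = 12.0e3
delay = int(round(2 * R_true / c * fs))
rx = 0.01 * (np.random.randn(16384) + 1j * np.random.randn(16384))
rx[delay:delay + pulse.size] += 0.05 * pulse

# Matched filter: correlate the received signal against the transmitted pulse.
out = np.correlate(rx, pulse, mode="full")[pulse.size - 1:]
peak = int(np.argmax(np.abs(out)))
print(f"estimated range: {peak / fs * c / 2:.0f} m")    # ~12000 m
print(f"range resolution c/(2B): {c / (2 * B):.1f} m")  # 3.0 m
```

Even though the raw echo sits well below the noise, the compressed peak stands far above it, which is exactly the processing gain that lets peak power be traded for bandwidth.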
The processing gain associated with
pulse compression is achieved by exploiting
the coherent rather than random nature of
the transmitted pulse. In the classic “random walk” problem, every step from a
given starting point can go in any direction
with equal likelihood. After n steps, the
walker is not n paces from the starting
point, but a shorter distance averaging the
square root of n. Integrating n voltage vectors is analogous to taking n steps. If the
voltage vectors are coherent, they point in
the same direction—that is, they have the
same phase. If they are incoherent, they
have random directions, or random phase.
Power is the square of the magnitude of
voltage; consequently, n coherent signals
upon integration result on average in n
times the power of n incoherent signals.
Coherence, or lack thereof, is a key issue in
radar performance.
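The random-walk arithmetic can be checked directly. This toy calculation, an illustration rather than anything from the article, sums 1,000 unit-voltage vectors first with identical phases and then with random phases and compares the integrated powers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# n unit-voltage vectors with identical phase: they add along one direction.
coherent_power = abs(np.sum(np.ones(n))) ** 2   # exactly n**2

# n unit-voltage vectors with random phase: a two-dimensional random walk.
trials = [abs(np.sum(np.exp(1j * rng.uniform(0, 2 * np.pi, n)))) ** 2
          for _ in range(5000)]
print(coherent_power)      # 1,000,000
print(np.mean(trials))     # about 1,000, i.e., n times less power
```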
Similarly, when moving targets need to
be resolved in Doppler frequency, the necessary coherent processing is also performed by banks of matched filters. Assuming constant range rates, this is usually
implemented with a fast Fourier transform,
an algorithm for computing the Fourier
transform for discretely sampled data. This
type of processing is also key in imaging
radar: If one looks at a point p on the
ground through a telescope while flying
past it, the points surrounding p appear to
rotate about it. Doppler filtering exploits
this phenomenon.
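A minimal sketch of such a Doppler filter bank, again with made-up numbers: the FFT of the pulse-to-pulse samples peaks in the bin matching the target’s Doppler frequency, from which the range rate follows.

```python
import numpy as np

lam = 0.03     # wavelength, m (a 10-gigahertz carrier, assumed)
prf = 2000.0   # pulse repetition frequency, Hz
n = 64         # pulses processed coherently

# A target closing at 7.5 m/s imposes a pulse-to-pulse phase rotation at the
# Doppler frequency 2*v/lambda = 500 Hz.
v = 7.5
slow_time = np.arange(n) / prf
samples = np.exp(1j * 2 * np.pi * (2 * v / lam) * slow_time)

# The FFT acts as a bank of matched filters, one per Doppler frequency bin.
spectrum = np.fft.fft(samples)
f_est = int(np.argmax(np.abs(spectrum))) * prf / n
print(f"Doppler: {f_est:.0f} Hz -> range rate {f_est * lam / 2:.1f} m/s")
```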
Range (pulse) compression and
Doppler filtering result in coherent integration gain, an increase in the target signal
above the noise level. Coherent gain also results from the physics of antenna beam formation and reception. The gain of an antenna upon transmission and reception is
proportional to its area. In addition, the
strength of a target’s radar cross section is
determined by both the existence and the
coherence of the currents that are induced
when the target is illuminated by radar. If
the current or voltage vectors are coherent,
they have the same phase. If they are incoherent, they have random phases. In the
case of a parabolic dish antenna, signals
from a large distance arrive in phase along a
plane wave front. Rays parallel to the axis
of the antenna (i.e., its mechanical boresight) are reflected onto the focus, which,
because all paths are of the same length,
arrive in phase and thus combine coherently. Rays significantly off the mechanical
radar boresight are not coherent, nor do
they intersect at the focus.
With a parabolic antenna, signals from a large distance arrive in phase along a plane wave front. Rays parallel to the axis of the antenna are reflected onto the focus; because all paths are of the same length, these rays arrive in phase and thus combine coherently. Rays significantly off the mechanical radar boresight do not combine coherently, nor do they intersect at the focus. Likewise, upon transmission, a coherent beam is formed along the antenna boresight when radiation from the focus is reflected off the parabolic surface.
The “radar range equation” addresses
all of these concepts and other fundamental
physics. It predicts performance in terms of
signal-to-interference ratio based upon the
radar hardware, the distance to the target,
the target’s radar cross section, and the total
system noise. The equation recognizes five
primary factors that determine signal
strength: the density of radiated power at
the range of the target; the radar reflectivity
of the target and the spreading of radiation
along the return path to the radar; the effective receiving area or aperture of the antenna; the dwell time over which the target
is illuminated; and signal losses caused by
physical phenomena, such as conversion to
heat, and processing losses, such as result
from the weighting of data.
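Those five factors translate into a back-of-the-envelope calculator. The sketch below follows a standard monostatic form of the equation; every number is a placeholder chosen for illustration.

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K

def radar_snr(p_avg, gain, sigma, a_e, t_dwell, r, t_sys, loss):
    """Signal-to-interference ratio from the five factors of the range equation."""
    density_at_target = p_avg * gain / (4 * math.pi * r**2)      # W/m^2
    density_back_at_radar = density_at_target * sigma / (4 * math.pi * r**2)
    signal_energy = density_back_at_radar * a_e * t_dwell        # joules
    noise_energy = k * t_sys * loss                              # thermal noise
    return signal_energy / noise_energy

# Placeholder values: 1 kW average power, 35 dB antenna gain, 10 m^2 target,
# 1 m^2 effective aperture, 10 ms dwell, 100 km range, 500 K noise, 5 dB losses.
snr = radar_snr(1e3, 10**3.5, 10.0, 1.0, 10e-3, 100e3, 500.0, 10**0.5)
print(f"SNR = {10 * math.log10(snr):.1f} dB")
```

The two factors of r squared in the denominator are the signature feature: signal strength falls with the fourth power of range, which is why dwell time and aperture matter so much.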
The noise expressed in the radar range
equation primarily encompasses thermal
noise, which results from both ambient radiation and the receiver electronics. Interference can also occur from other
sources—for example, when a target is on
Earth’s surface, the radar return from the
surrounding surface and vegetation can
cause interference (commonly known as
ground clutter).
Another important concept in radar is
ambiguity, which can arise in several ways.
For example, if the pulse repetition frequency is increased to the extent that the returns from two or more pulses arrive simultaneously, then they will be inseparable.
This is known as a range ambiguity, and is
avoided by lowering the pulse repetition
frequency; however, the lower pulse-topulse sampling rate can cause Doppler ambiguities (a phenomenon related to the way
car and stagecoach wheels can appear to
rotate backward in movies). In the case of
imaging radars, the only way to simultaneously avoid both ambiguities is to illuminate a small enough area, which requires a
larger antenna.
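The basic trade is easy to quantify. Assuming a 3-centimeter wavelength purely for illustration, this fragment computes the unambiguous range and range rate for a few pulse repetition frequencies, showing how raising the PRF trades range ambiguity for Doppler coverage.

```python
c = 3.0e8     # speed of light, m/s
lam = 0.03    # wavelength, m (assumed)

def ambiguity_limits(prf):
    """Unambiguous range and range rate for a given pulse repetition frequency."""
    r_ua = c / (2 * prf)    # echoes from beyond this range fold onto nearer ones
    v_ua = lam * prf / 4    # range rates beyond +/- this limit alias in Doppler
    return r_ua, v_ua

for prf in (500.0, 2000.0, 8000.0):
    r_ua, v_ua = ambiguity_limits(prf)
    print(f"PRF {prf:6.0f} Hz: range < {r_ua / 1e3:7.1f} km, "
          f"range rate within +/-{v_ua:5.1f} m/s")
```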
Phased-array antennas are susceptible
to ambiguity in the form of so-called grating lobes. These antennas are composed of
arrays of small transmit/receive modules,
generally spaced about a wavelength apart.
They are particularly useful because they
allow steering of the antenna beam by applying a linear phase progression from element to element. Ambiguity occurs when
returns are received from two directions
such that an additional distance of half a
wavelength (one wavelength two ways)
occurs from module to module. As a result,
radiation is received in perfect coherence
from both directions. Grating lobes are suppressed by avoiding the illumination of targets in the direction of grating lobes. The
necessary narrowing of the antenna beam is
achieved by increasing the antenna size.
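A small numerical check, with hypothetical element spacing and wavelength, makes the effect visible: with elements spaced one wavelength apart, the array factor of a uniform linear array shows full-strength lobes well away from boresight.

```python
import numpy as np

lam = 0.03        # wavelength, m (assumed)
d = 1.0 * lam     # element spacing of about one wavelength
n = 32            # elements in a uniform linear array

theta = np.radians(np.linspace(-90.0, 90.0, 3601))
phase_step = 2 * np.pi * d / lam * np.sin(theta)    # phase change per element
af = np.abs(np.exp(1j * np.outer(np.arange(n), phase_step)).sum(axis=0)) / n

# Full-strength responses occur wherever d*sin(theta) is a whole number of
# wavelengths; for d = lambda, that means broadside plus lobes near +/-90 deg.
print(sorted(set(np.round(np.degrees(theta[af > 0.999])).tolist())))
```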
Synthetic-Aperture Radar
The beam from a radar—like the beam from a flashlight—will produce an elliptical illuminated region on the ground when directed downward. The higher the radar, the wider the ellipse—and, if the beam is scanned to form an image, the lower the resolution of the image. Synthetic-aperture radar (SAR) overcomes this difficulty by employing pulse compression to obtain high range resolution and synthesizing a large antenna width to obtain high azimuthal resolution. This aperture synthesis is achieved by coherently integrating the returned signal pulse-to-pulse as the radar moves along its path. The azimuth resolution attained in this manner is half a wavelength divided by the change in viewing angle during the aperture formation process. Thus, if the same angle is swept out at different altitudes, there is no loss in resolution.
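In equation form, the azimuth resolution is half a wavelength divided by the swept viewing angle. The arithmetic below uses hypothetical spaceborne numbers purely for illustration.

```python
import math

lam = 0.03            # radar wavelength, m (assumed)
r = 800.0e3           # range to the scene, m (a hypothetical spaceborne case)
aperture_len = 8.0e3  # distance flown while collecting the aperture, m

delta_theta = aperture_len / r                 # change in viewing angle, rad
print(f"azimuth resolution: {lam / (2 * delta_theta):.2f} m")   # 1.50 m

# Sweeping the same angle from a higher altitude leaves the resolution
# unchanged, unlike a scanning real-beam radar.
```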
Radar operates by transmitting pulses of electromagnetic energy and detecting the backscattered energy by listening during the time between pulse transmissions.
The classic coherent radar hardware architecture of a basic antenna with a single receive channel. In transmit mode, the exciter produces the signal, which flows to the high-power amplifier and transmitter before passing through the transmit/receive switch (the circulator) to the antenna. In receive mode, the detected signal passes from the antenna through the transmit/receive switch to the receiver, which consists of a low-noise amplifier, a mixer that converts the data to a lower intermediate frequency, a matched filter, and a detector and analog-to-digital converter.
An important variant of this technique is interferometric SAR. Here, in essence, two images are formed from slightly different geometries. Interferometry then provides estimates of surface height for each pixel, enabling the creation of terrain-elevation maps. Elevation accuracy for a given posting grid increases with radar resolution. The technique was first performed from space during the NASA Shuttle Radar Topographic Mapping (SRTM) project. This was a single-pass radar mission with an onboard antenna and an auxiliary antenna suspended from the shuttle by a long boom.
In addition to single-pass interferometry, double-pass interferometry is also possible. An important special case occurs when two voltage images (containing magnitude and phase) of the same area from the same instrument taken at the same viewing geometry are interfered or subtracted. Signals from targets that have not moved are cancelled, leaving only noise and signals from targets that have moved. Land deformations from earthquakes have been imaged in this way from space.
Another important variant is inverse SAR, which exploits the relative motion of the radar and the target, just as in standard SAR. Here, however, the target is moving, and its motion is critical because it is neither controllable nor known a priori. A classic application is the imaging of ships on the ocean for identification. Because a ship may be yawing, pitching, or rolling, inverse SAR can generate images of the ship’s side, front, or top. For any single attempt at imaging, however, neither the cross-range resolution nor even successful imaging can be predicted.
An emerging technique, still in its infancy, is synthetic-aperture imaging lidar, a variant of SAR employing extremely high frequencies. By operating at such high frequencies, it is theoretically possible to attain extremely fine resolution.
Synthetic-aperture radar (SAR) uses pulse compression to obtain high range resolution and synthesizes a large antenna width to obtain high azimuthal resolution. The unit vector in the azimuth direction lies in the plane in which the image is focused and is perpendicular to the projection of the range unit vector u into that plane. This aperture synthesis is achieved by coherently integrating the returned signal pulse-to-pulse as the radar moves along its path. The azimuth resolution attained in this manner is half a wavelength divided by the change in viewing angle during the aperture formation process. Thus, if the same angle is swept out at different altitudes, there is no loss in resolution.
Moving-Target Indication
Airborne SAR provides imagery for intelligence, surveillance, mission planning, bomb-damage assessment, navigation, and target identification. Targets include structures, cultural features, and stationary or slow vehicles with medium radar cross sections of roughly one to tens of square meters at short ranges of 10 to 100 kilometers. Fine location accuracy within a few meters is generally achieved.
One way to extend the military and intelligence usefulness of SAR is to combine
it with a complementary ground-moving-target-indicating (GMTI) radar mode that
detects moving targets on the ground in addition to the fixed targets imaged by SAR.
Specifically, GMTI data can be overlaid on
the SAR image; it can also be overlaid on a
road map or simply reported in terms of latitude and longitude. High-quality GMTI
systems require sophisticated hardware and
processing techniques.
Airborne GMTI radars provide wide-area battlefield surveillance. Targets include
personnel, vehicles, and aircraft. An average
target will have a radar cross section from
one to tens of square meters. Medium
ranges vary from 50 to 300 kilometers, and
target range rates vary from 3 to 100 knots.
Location accuracy varies from tens to hundreds of meters.
Airborne-moving-target-indicating
(AMTI) radars are used in early warning
systems and for aerial combat. Targets include aircraft and possibly missiles. Detection at ranges exceeding 700 kilometers is
possible. Targets with radar cross sections
less than 1 square meter and targets moving
at speeds from 100 knots to Mach 3 can
usually be detected as well as highly maneuverable targets accelerating at more than
9 g’s. Location is coarse—on the order of
kilometers. Systems deployed on airborne
interceptors, where both the radar and target
are moving, rely on a wide variety of specialized waveforms to address different scenarios. Waveforms exhibiting high pulse
repetition frequency (i.e., range-ambiguous
waveforms) are primarily used for air-to-air
detection. Waveforms with low pulse repetition frequency (i.e., Doppler-ambiguous
waveforms) are most attractive for air-to-surface radars. Waveforms with medium
pulse repetition frequency (exhibiting both
range and Doppler ambiguities) are also used.
The slower a target is, the longer it can
be observed without drifting outside its optimal range/Doppler detection cell. At one
extreme (e.g., SAR), the targets are motionless, and one can integrate long enough to
filter out everything but the target, maximizing the signal-to-interference ratio. Because
of the large amount of coherent integration
gain associated with range and Doppler
compression, relatively little power is required. At the other extreme (e.g., AMTI
radar), targets are moving and maneuvering
rapidly, permitting limited dwell time and
consequently limited range and Doppler
compression gain. The shortfall has to be
made up with increased power or a more
highly focused antenna beam (which in turn
requires a larger antenna).
High-resolution urban SAR image taken by Sandia National Laboratories for the Rapid Terrain Visualization Advanced Concept Technology Demonstration. Minor streaking shows azimuth (travel direction) is in the horizontal direction. Shadows from trees show illumination from the top of the image.
Space-Based Radar
Active microwave sensing has proved its
value in numerous airborne applications.
Aerospace has been assisting efforts to apply this technology to spaceborne assets as
well. The potential benefits are numerous.
For example, space-based radar would be
globally available and provide high-area-rate theater coverage, allowing continuous
theater surveillance, situation assessment,
and tracking, in any weather. Additionally,
spaceborne radars would not place pilots
and aircraft at risk. Long-range surface-to-air missile threats are pushing airborne
standoff operations further back. With
space-based radar, deep access into denied
areas would no longer be an impediment.
Deeper targeting would provide support for
new precision strike systems. Finally,
higher grazing angles would improve line-of-sight access (whereas with airborne assets, large areas can be obscured by mountains, for example).
On the other hand, for a given antenna
size, the long range to Earth can result in a
much larger beam footprint on the ground.
To avoid ambiguities, larger antennas are
then required to keep the illuminated area
from becoming too large. The large size of
such spaceborne antennas contributes to
cost and affects the affordability of potential
spaceborne SAR systems.
Determining the optimal use of spaceborne and airborne assets is no trivial task.
The potential use of multiple systems in
military conflicts is an area of study unto itself. Aerospace has supported detailed
analysis-of-alternatives studies to ask and
answer a host of important questions. For
example, two particular difficulties that existing systems do not completely address involve target identification and the proper association of detections from one
observation to the next to allow tracking.
Aerospace is conducting research to help
resolve these issues.
Future Science Applications
NASA recently completed its technology
planning for passive and active microwave
remote sensing of Earth for the next 10
years and will issue a comprehensive report
on its findings. NASA’s Earth Science Technology Office relied heavily on Aerospace
during the process, and Aerospace was
given responsibility for approximately 90
percent of the final product, working with
material generated by NASA, JPL, academia, and Aerospace. Responsibilities included the scientific foundation for the plan,
the instrument concepts and measurement
scenarios, the detailed technology development plan and technology roadmaps, and
cost estimates. The final report is the Earth
Science Technology Office’s first technology planning document whose recommendations are firmly rooted in science. It will
support funding requests submitted to the Congressional Office of Management and Budget and help prioritize the agency’s technology development program.
Synthetic-Aperture Radar
The cross-range or azimuth resolution of a scanning real-beam
radar is determined by the product of the range and the antenna
beamwidth in radians (this beamwidth is one wavelength divided
by the antenna width measured perpendicular to the boresight). On
the ground, the spaceborne or airborne radar beam spreads out
over a large area. Thus, to attain high resolution, the radar would
need a large antenna, or aperture, to obtain a narrower beam. The required aperture is typically so large that it cannot be
formed with an actual physical antenna.
Synthetic-aperture radars (SARs) synthesize a large aperture by coherently integrating the returned signal pulse-to-pulse as
the radar moves. The azimuth resolution attained in this manner is half a wavelength divided by the change in
viewing angle (in radians) during the aperture formation process
(twice that which would be achieved with a real aperture of the
same size). Thus, if the same change in viewing angle is maintained,
there is no loss in resolution upon moving to a higher altitude.
The formation of synthetic apertures and associated processing is
most easily understood from the standpoint of Doppler processing.
Consider a fixed radar pointing at a target on a rotating turntable,
where both the radar and the target lie on the same plane. Targets
moving toward the radar source will exhibit a positive Doppler shift,
while those moving away will exhibit a negative Doppler shift, proportional to their distance from the center, or hub, of the turntable.
Thus, subsequent to range compression, azimuth compression is efficiently achieved simultaneously for all targets at a given range by
Doppler processing with a fast Fourier transform. In a spaceborne
or airborne application, however, the radar moves while the target
remains fixed. In this case, the data must first undergo motion compensation to drive the range rate from the radar to a fixed motion-compensation point on the ground (the effective hub) to zero. The data then correspond to the case of a fixed radar illuminating a surface rotating about the motion-compensation point.
For stretch waveforms (employing a long
pulse that progresses linearly in frequency), both range and azimuth compression may be performed efficiently with fast Fourier transforms;
however, for higher resolutions, depth of field becomes increasingly
important. The depth of field denotes the horizontal and vertical
space over which fast Fourier transforms can be employed for
range and azimuth compression without loss of resolution and geometric distortion. One way to overcome a limitation in the depth of
field is to generate an image from pieces that are assembled into a
complete picture. When depth of field becomes a serious issue, a
“polar” transformation is usually applied to the data prior to compression. This linearizes the phase histories in range and azimuth so
a two-dimensional fast Fourier transform will properly compress the
data from a region typically orders of magnitude larger.
A chart of power-aperture and area rate versus dwell time for AMTI, coarse GMTI, fine GMTI, and SAR imaging: to the extent coherent integration gain is limited by target motion, the shortfall has to be made up with increased power or antenna gain (area).
Finding Moving Targets on the Ground
When viewed by a moving radar platform, fixed targets on the
ground lie within a particular Doppler bandwidth. One could simply infer that targets detected outside this bandwidth were moving;
however, this approach is generally far from adequate. Many moving targets on the ground may lie within the same bandwidth. A
more reliable approach takes advantage of a technique used to
suppress ground clutter, the fixed-target returns that interfere with the
moving-target returns.
Moving-target-indicating (MTI) radars can suppress ground clutter
by employing multiple phase centers—portions of the antenna that
act as independent antennas to form a so-called displaced phase-center antenna. The basic concept is to keep pairs of phase centers
motionless from pulse to pulse, simulating an antenna that stays
motionless in space. This has the effect of driving the Doppler bandwidth of clutter to zero so it can be cancelled upon subtraction of the data from these pulse pairs. With the background “removed,” all that remains are moving objects and noise. Moving targets will, however, suffer some amount of loss upon subtraction, depending upon their range rate and the difference in time between observations.
Still, after detection, the location of the target remains unknown. Multiple phase centers can, in an approximate sense, solve this problem by means of monopulse techniques. In classic airborne interceptor designs, the antenna is divided into portions along both azimuth and elevation. Amplitude or phase comparisons are made between returns from these subapertures to estimate the direction of arrival of the target signal. With a minimum of three phase centers, both the displaced phase-center antenna technique of clutter cancellation and monopulse techniques for location of targets can be combined to detect and locate targets on the ground.
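The displaced phase-center idea can be sketched in a few lines of simulation. The fragment below is an idealized illustration with hypothetical geometry and numbers (not any fielded system): two phase centers are spaced so that the aft center repeats, one pulse later, the position the forward center just occupied; subtracting the aligned pulses cancels the stationary scene while a slow mover survives.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.03       # wavelength, m (assumed)
prf = 1000.0     # pulse repetition frequency, Hz
vp = 100.0       # platform speed, m/s
d = vp / prf     # spacing so the aft phase center repeats, one pulse later,
                 # the position the forward phase center just occupied

n = 128
t = np.arange(n) / prf

# Stationary clutter: scatterers whose echo phase depends only on where the
# phase center is located, not on when the pulse was transmitted.
ang = rng.uniform(-0.05, 0.05, 200)    # scatterer angles off boresight, rad
amp = rng.standard_normal(200) + 1j * rng.standard_normal(200)

def received(x0):
    pos = vp * t + x0                  # phase-center track along the flight path
    clutter = (amp * np.exp(1j * 4 * np.pi / lam *
                            np.outer(pos, np.sin(ang)))).sum(axis=1)
    mover = 0.5 * np.exp(1j * 4 * np.pi / lam * 5.0 * t)   # 5 m/s at broadside
    return clutter + mover

fwd, aft = received(0.0), received(-d)

# DPCA subtraction: pulse k+1 of the aft channel occupied the same spot as
# pulse k of the forward channel, so stationary returns cancel exactly while
# the mover's pulse-to-pulse Doppler phase leaves a residue.
residue = aft[1:] - fwd[:-1]
print(f"power before cancellation: {np.mean(np.abs(fwd)**2):8.1f}")
print(f"power after DPCA:          {np.mean(np.abs(residue)**2):8.1f}")
```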
The plan supports future missions using microwave and near-microwave sensors
to measure precipitation, monitor freeze/
thaw cycles, perform interferometric SAR,
monitor ocean topography and river levels,
measure snow cover, measure polar ice and
ice thickness, measure atmospheric water
and ozone, monitor land cover and land use,
and measure biomass. The plan reflects a
trend toward the use of higher-altitude instruments for greater coverage and the development of onboard data-processing
hardware. The development of radiation-hardened radar hardware that can withstand
the harsher high-altitude radiation environment was thus part of this plan.
Aerospace also recently performed the
“Jupiter Icy Moons Orbiter High-Capability
Instrument Feasibility Study.” The purpose
was to assess the capability of a suite of instruments selected for the Jupiter Icy
Moons Orbiter, a proposed spacecraft that
would orbit three of Jupiter’s moons for extended observations. Building upon earlier
conceptualized instruments, Aerospace selected, designed, and evaluated a 35-gigahertz interferometric SAR and a 3-gigahertz
fully polarimetric SAR with penetration
into the shallow subsurface. The cross-polarized return from the latter instrument
would provide a measure of the multiple
scattering indicative of an icy regolith.
At the request of NASA, Aerospace
has also provided independent review of
progress in developing innovative microwave and near-microwave spaceborne
instruments and supporting hardware and
algorithms. This has recently included the
continuing development of a geostationary
sensor to serve the purpose of ground-based
NEXRAD weather radars; a sensor and
supporting algorithms to measure soil
moisture below vegetation canopies; an
advanced sensor and supporting algorithms
to measure ocean ice thickness and snow-cover characteristics; and an advanced
precipitation radar antenna and instrument.
Ancillary technology developments have
included lightweight scanning antennas,
high-efficiency transmit/receive modules,
and SAR processing algorithms.
Targets from a GMTI radar overlaid on an annotated map (approximately 120 by 120 kilometers) show a massive retreat of Iraqi forces in the first Gulf War. The radar employed the minimum of three phase centers to cancel clutter and detect and locate targets. Additional phase centers and space-time adaptive processing could be used to increase performance.
Acknowledgements
The author thanks Peter Johnson and Mike
Hardaway of the Joint Precision Strike
Demonstration Project Office for the SAR
image taken by Sandia National Laboratories for the Rapid Terrain Visualization Advanced Concept Technology Demonstration. The author also thanks Frank
Kantrowitz, Walter Shepherd, and Nick
Marechal of The Aerospace Corporation for
many illustrations used in this article.
Further Reading
W. G. Carrara, R. S. Goodman, and R. M. Majewski, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms (Artech House,
Boston, 1995).
J. W. Curlander and R. N. McDonough, Synthetic
Aperture Radar Systems and Signal Processing
(John Wiley and Sons, Inc., New York, 1991).
G. W. Stimson, Introduction to Airborne
Radar, Second Edition (Scitech Publishing,
Mendham, NJ, 1998).
Engineering and Simulation of
Electro-Optical
Remote-Sensing Systems
In designing remote-sensing systems,
performance metrics must be linked to
design parameters to flow requirements into
hardware specifications. Aerospace has developed
tools that comprehensively model the complex interaction
of these metrics and parameters.
Stephen Cota
Electro-optical remote-sensing systems are built to do specific jobs—
for example, to make meteorological measurements, to characterize
Earth’s climate, to track patterns of land
use, or to collect high-quality imagery. It is
the systems engineer’s task to determine
what characteristics a proposed system
must have to fulfill its mission. To do so, the
engineer must “flow down” requirements
from the mission level to the sensor as a
whole, from the sensor to its components,
and from components to subcomponents.
Aerospace has developed tools and expertise to facilitate this complex process.
Performance Goals
In most cases, top-level system performance must be expressed in quantitative
terms in order to flow down requirements.
In the case of a meteorological sensor, performance might be specified in terms of the
desired accuracy of surface reflectance or
surface temperature measurements. For an
imaging sensor, performance might be
specified in terms of a quantitative metric
such as the National Image Interpretability
Rating System (NIIRS), which grades images based on their usefulness in performing analytic tasks.
Once presented with a quantitative top-level requirement, the systems engineer determines which hardware would be suitable
based on standard performance metrics.
Such metrics are many and varied.
A large class of metrics appropriate to
radiometric and imaging systems are those
related to signal-to-noise ratio, which must
be high enough to confidently distinguish
the lowest signal of interest from spurious
features caused by electronic noise or the
inherent fluctuations of the signal. The
signal-to-noise ratio itself may be the preferred metric, or it may be replaced by a
“noise-equivalent” quantity, such as noise-equivalent delta reflectance or noise-equivalent temperature difference. A noise-equivalent quantity represents the input signal level required to achieve a signal-to-noise ratio of exactly 1. It is convenient
because it presents the noise in the units of
the signal. All of these metrics set constraints on the hardware. For example, to
maximize the signal, the detectors must be
large (to collect the most light possible) and
the optics must be highly reflective and fast
(i.e., have a low ratio of focal length to
aperture); to minimize noise, the detectors
must be cooled, stray light must be held to a
minimum, and so forth.
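As a worked illustration of a noise-equivalent quantity, using invented sensor numbers rather than those of any program mentioned here, the fragment below converts detector-level signal and noise into a signal-to-noise ratio and the corresponding noise-equivalent delta reflectance.

```python
import math

# Invented visible-band numbers: electrons collected per unit scene
# reflectance, plus detector noise terms, all per sample.
electrons_per_reflectance = 20000.0
scene_reflectance = 0.30

signal = electrons_per_reflectance * scene_reflectance   # photoelectrons
shot = math.sqrt(signal)                                 # photon shot noise
dark, read = 30.0, 20.0                                  # electrons rms
noise = math.sqrt(shot**2 + dark**2 + read**2)

snr = signal / noise
ne_delta_rho = noise / electrons_per_reflectance   # input giving SNR exactly 1
print(f"SNR = {snr:.0f}")
print(f"noise-equivalent delta reflectance = {ne_delta_rho:.4f}")
```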
Another class of metrics are those related to spatial resolution. For a system primarily concerned with the collection of radiometric information, often a simple parameter such as ground-sample distance—the size of a single pixel projected onto the ground—may be adequate. For systems requiring high image quality, other metrics of resolution must be computed, such as the relative edge response or the modulation transfer function. Both metrics characterize how diffraction and other inherent limitations of the optical system blur sharp features in a scene such as coastlines or cloud edges. The relative edge response directly measures how an infinitely sharp edge becomes softened, while the modulation transfer function is a Fourier-domain representation of how all edges are softened, whether inherently sharp or not. Like the signal-to-noise metrics, resolution metrics also place constraints on the hardware—and often, the constraints imposed by resolution oppose those imposed by the signal-to-noise metrics. For example, to achieve a certain ground-sample distance (e.g., 0.5 to 1 meter for the current generation of space-based commercial imagers), the detectors must be made smaller or the effective focal length must be lengthened, to the detriment of the signal-to-noise ratio; to achieve a high relative edge response or modulation transfer function, the optics must be large with minimal obstructions.
Complex Relationships
The relationship between top-level requirements and the standard performance metrics is seldom as simple as it first appears. For example, a system designed for classification of terrain might need to detect a change of 0.05 in surface reflectance in a given visible spectral band with an accuracy of 10 percent. At first glance, it might seem sufficient to start with a signal-to-noise ratio of 10 and divide that into 0.05 to derive a required noise-equivalent delta reflectance of 0.005. This value could then be used in selecting and configuring the hardware. In practice, however, variable atmospheric constituents such as water vapor and aerosols corrupt visible-band measurements—and because the levels of these constituents are not known a priori, they must be estimated using spectral bands in a water-vapor absorption region and in the short-wave infrared before the engineer can correct for them. Thus, the error in the visible band of interest becomes a function not only of its own noise-equivalent delta reflectance but also the noise-equivalent delta reflectances of those bands used to determine water vapor and aerosol levels. And this is just one of many error effects. Others include detector response variations, miscalibration, and band-to-band
misregistration, as well as errors introduced
by the approximations inherent in any practical water-vapor and aerosol retrieval algorithm. All of these can have a bearing on
the ability to detect a change of 0.05 in surface reflectance, and most can’t be related
to one another via closed-form equations.
Similar problems occur in imaging
systems. There are many ways to achieve a
NIIRS rating of 5, for example. The designer might simultaneously vary ground-sample distance, relative edge response,
and signal-to-noise ratio—to say nothing of
using sharpening filter coefficients to emphasize edges and contrast—all of which
affect image quality. Variations in detector
response and other artifacts such as spectral
banding (low-frequency variations in spectral response across a detector array) also
affect image quality. For multispectral imagery, band-to-band misregistration and
miscalibration must also be considered.
Again, in most cases, these effects cannot
be related to one another via closed-form
equations.
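The first-glance arithmetic, and the reason it falls short, can be made concrete in a few lines (a sketch with hypothetical numbers; the root-sum-square combination is one plausible assumption, not the method of any particular Aerospace tool):

```python
import math

# First glance: detect a reflectance change of 0.05 with 10-percent accuracy,
# so start with a signal-to-noise ratio of 10.
delta_reflectance = 0.05
snr = 10.0
ne_delta_rho = delta_reflectance / snr
print(ne_delta_rho)  # 0.005, the naive noise-equivalent delta reflectance

# In practice, the visible band also inherits error from the bands used to
# estimate water vapor and aerosols. Combining independent error terms in
# root-sum-square fashion (illustrative values for the other bands):
other_band_errors = [0.003, 0.004]
total = math.sqrt(ne_delta_rho**2 + sum(e**2 for e in other_band_errors))
print(total)  # about 0.0071 -- the naive budget was optimistic
```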
Because of the many complications involved in relating mission-level performance to lower-level system parameters, the
electro-optical systems engineer must usually build an end-to-end simulation for the
sensor. Such simulations typically start with
an image of much higher quality than the
proposed sensor is expected to produce;
they then transform the image by applying
models of the sensor’s modulation transfer
function, noise level, response uniformity
characteristics, and calibration accuracy.
This produces the expected output of the sensor. Each error source is introduced at
that point in the imaging process where it
would actually arise, thus replicating any
nonlinear interaction between error sources.
Finally, the results of the simulation can be
fed into algorithms used to extract data
from the image to determine actual error
levels. The systems engineer can then modify the electro-optical sensor’s parameters,
either to decrease the error, if the mission-level specification has not been met, or to
increase it (while reducing mass and power), if
the specification is met with excessive margin.

The flow of a typical PICASSO simulation is often called an “imaging chain”: input image (high resolution, high signal-to-noise ratio) → apply modulation transfer function (MTF) → resample image → add fixed-pattern and temporal noise → analog-to-digital conversion → output image. Because PICASSO is a modular code, it is possible to omit or reorder the steps, the only constraint being to maintain an imaging chain that is physically reasonable.
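A toy version of such an imaging chain is easy to sketch with numpy (our illustration, not PICASSO itself; the Gaussian MTF, noise levels, and 8-bit converter are all assumed values):

```python
import numpy as np

def simulate_sensor(truth, blur_sigma=1.5, undersample=2, gain=40.0,
                    read_noise=2.0, prnu=0.01, bits=8,
                    rng=np.random.default_rng(0)):
    """Toy imaging chain: blur -> resample -> add noise -> quantize."""
    # 1. Blur: apply a Gaussian MTF in the Fourier domain (a stand-in for the
    #    product of the optics, detector, and motion MTFs).
    fy = np.fft.fftfreq(truth.shape[0])[:, None]
    fx = np.fft.fftfreq(truth.shape[1])[None, :]
    mtf = np.exp(-2 * (np.pi * blur_sigma) ** 2 * (fx**2 + fy**2))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * mtf))
    # 2. Resample: take a value wherever a detector would sit in the focal plane.
    sampled = blurred[::undersample, ::undersample]
    # 3. Noise: shot noise, read noise, and fixed-pattern (response) variation.
    electrons = rng.poisson(np.clip(sampled, 0, None) * gain).astype(float)
    electrons *= 1.0 + prnu * rng.standard_normal(electrons.shape)  # fixed pattern
    electrons += read_noise * rng.standard_normal(electrons.shape)  # temporal
    # 4. Quantize with an ideal analog-to-digital converter.
    return np.clip(np.round(electrons / gain), 0, 2**bits - 1)

truth = np.random.default_rng(1).uniform(0, 200, (256, 256))  # stand-in input
print(simulate_sensor(truth).shape)  # (128, 128)
```

Because each error source enters at the point in the chain where it would physically arise, nonlinear interactions between sources (here, between shot noise and the fixed-pattern gain term) come along automatically.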
The Aerospace Corporation has written
and maintains several end-to-end simulations, each tailored for a specific class of
problems. Those having the broadest applicability include the Visible and Infrared
Sensor Trades, Analyses, and Simulations
(VISTAS) package; the Physical Optics
Code for Analysis and Simulation
(PHOCAS); and the Parameterized Image
Chain Analysis and Simulation Software
(PICASSO). Aerospace has used all three
of these tools to complete complex systems
engineering tasks for a variety of remote-sensing programs.
PICASSO: A Portrait
PICASSO is the most recent addition to the
Aerospace suite of electro-optical systems
engineering tools. It was designed to be
modular, machine independent, easy to use,
and easy to customize.
A PICASSO simulation begins with a
set of parameters describing the electro-optical sensor to be simulated. A high-quality image, taken in the sensor’s spectral
band and at the viewing geometry of interest, is used as the starting point. This input
image is meant to represent the real world
as seen through a perfect sensor (that is, one
with infinite resolution and infinite signal-to-noise ratio); it therefore should have
much better signal-to-noise ratio, resolution, and sample spacing than an image
from the sensor to be simulated.
In practice, it can be difficult to find
such an image because the sensor to be simulated is often intended to surpass existing
sensors of its class. The PICASSO analyst
can employ a number of strategies to overcome deficiencies in the input imagery. For
example, in simulating a space-based sensor, if no high-resolution imagery from
space can be found, the analyst might substitute an aircraft image and use an atmospheric modeling code to correct for transmission losses and path radiance effects that
would occur between the aircraft’s altitude
and space. Alternatively, the analyst might
take existing space-based imagery of marginally usable resolution and enhance it using standard image restoration techniques.
Synthetic images, produced from first-principles physics codes, have effectively
infinite signal-to-noise ratio and high resolution, but sometimes appear unrealistic,
particularly for vegetation or other natural
surface types. Synthetic images have precisely known surface and atmospheric properties, making them attractive for testing
algorithms that try to derive these quantities
(though such tests are seldom definitive because of the approximations and limitations
inherent in the models that translate these
properties into observed radiance).
Some imagery will have the requisite
resolution and signal-to-noise ratio, and yet
be an imperfect match to the spectral passband of the sensor to be simulated. In this
case, an atmospheric modeling code can
again be used to correct the imagery to the
desired passband. Imagery in the desired
passband can also be synthesized via a
weighted sum of hyperspectral images.
Similarly, imagery collected at sun and sensor angles different from those desired can
be converted back to reflectance values and
then translated to the desired geometry by
means of an atmospheric modeling code.
Once a suitable input image has been
selected or produced, PICASSO models
how the physical limitations of the proposed electro-optical system will affect it.
The first step in this process is to degrade
the input image’s resolution until it matches
that of the proposed sensor.
Any practical sensor will have a hard
limit on the resolution of its images imposed by the finite size of its optical system.
Diffraction off the sensor’s primary mirror
and other structures lying in the optical path
will cause inherently sharp features to diffuse and blur when projected onto the focal
plane. The characteristic blur pattern produced by a single, infinitely sharp point as
imaged by the sensor is known as the point-spread function. This point-spread function
can be convolved with the high-quality input image to model the effects of optical
diffraction. Often, it’s easiest to compute the
point-spread function from the modulation
transfer function, which is its Fourier transform.

Surface plots of the modulation transfer function and of the point-spread function for a perfect circular aperture illustrate the pair: the former is a Fourier-domain representation of how edges are softened by a sensor, while the latter describes the blur produced by an infinitely sharp point as imaged by a sensor.
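As a concrete illustration, the diffraction-limited MTF of a perfect circular aperture has a closed form, and the point-spread function can be recovered numerically with an inverse FFT (a sketch; the grid size and sample spacing are arbitrary assumptions):

```python
import numpy as np

def circular_aperture_mtf(rho):
    """Diffraction-limited MTF of a perfect circular aperture.
    rho is spatial frequency normalized by the cutoff frequency."""
    rho = np.clip(np.abs(rho), 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(rho) - rho * np.sqrt(1 - rho**2))

# Build a 2D MTF on a frequency grid, then inverse-transform to get the PSF.
n = 256
f = np.fft.fftfreq(n, d=0.25)            # sample spacing d is an assumed value
rho = np.hypot(f[:, None], f[None, :])   # radial spatial frequency
mtf = circular_aperture_mtf(rho)
psf = np.real(np.fft.ifft2(mtf))         # PSF: inverse Fourier transform of MTF
psf = np.fft.fftshift(psf)               # center the blur spot for display
print(psf.sum())                         # equals the MTF at zero frequency: 1.0
```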
Optics are not the only sensor elements
that degrade image quality. Resolution is
also lost through the use of focal-plane arrays to record the image. These arrays are
composed of detectors of finite size, and an
infinitely sharp feature on the ground can
generally appear no smaller in the final image than the size of the detector that collects
it. A common type of detector used in
visible imagers is the charge-coupled device
(CCD)—a monolithic array of silicon
detectors, each of which measures light by
collecting the charge produced by incident
photons. In addition to the resolution lost to
the finite size of the CCD detectors, resolution can be lost to the undesired diffusion of
charge between detectors. If the sensor moves
during its integration period—because of
orbital motion, jitter, or scanning motion—
the image will smear, just as it does when
an ordinary photographer with a handheld
camera moves while taking a picture. All of
these effects can be described by their own
unique modulation transfer functions, and
PICASSO accounts for each of them.
After degrading the input image to the
sensor’s resolution, PICASSO resamples
the image, taking a value from the blurred
input image at each location where a detector would reside in the sensor’s focal plane
and mapping it to the output image.
Along with resolution losses, the most
significant source of image degradation is
noise. PICASSO models several classes of
noise. Temporal white noise and flicker are
produced by random (often thermal)
processes in the sensor’s detectors and electronics and by the inherent variability in the
signal itself. In addition, the individual elements of a detector array often vary in their
response to a uniform source of light, giving
rise to fixed-pattern noise. If the sensor employs a two-dimensional focal-plane array,
the fixed-pattern noise will probably appear
random in a single image, but consistent
from one image to the next. If the sensor
employs a one-dimensional focal-plane array that is scanned to produce an image, the
fixed-pattern noise will give rise to streaks,
recognizable as pattern noise even in a single image. Other noise processes include
quantization noise, introduced by digitizing
the analog signal, and signal distortion,
caused by the nonlinearity of the analog-to-digital converter.
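The difference between temporal and fixed-pattern noise is easy to demonstrate (a sketch with illustrative noise levels, not measured detector statistics). For a scanned one-dimensional array, one gain error per detector repeats down every row of the image, producing the streaks described above:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 512, 512
scene = np.full((rows, cols), 100.0)      # a uniform source of light

# Temporal noise: independent for every pixel of every frame.
temporal = 2.0 * rng.standard_normal((rows, cols))

# Fixed-pattern noise for a one-dimensional (pushbroom) array: one gain error
# per detector, repeated down every row -> streaks along the scan direction.
detector_gain = 1.0 + 0.02 * rng.standard_normal(cols)
image = scene * detector_gain[None, :] + temporal

# Quantization by an ideal analog-to-digital converter.
dn = np.round(image).astype(np.int32)
print(dn.mean(axis=0).std())  # spread of the column means (~2 DN): the streaks
```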
These steps are usually part of the
PICASSO imaging chain regardless of the
electro-optical system modeled. The final
steps, representing ground processing and
data exploitation, vary considerably, depending on the application at hand. For an
imaging system, PICASSO will proceed to
model various techniques for enhancing
resolution, such as the use of sharpening
filters or nonlinear image restoration. For a
meteorological system attempting to retrieve surface reflectance data, PICASSO
would pass the output imagery to an atmospheric compensation algorithm.
The output of PICASSO is a representative image from the simulated sensor,
along with one or more figures of merit.
The figures of merit—signal-to-noise ratio,
NIIRS rating, relative edge response, error
in retrieved reflectance, etc.—can be compared with the sensor’s mission-level requirements. When they exceed or fall short
of requirements, the electro-optical systems
engineer may vary the sensor parameters
and rerun the simulation, searching for the
optimal parameter set. The number of trials
required to find an optimal design varies
with the complexity of the relationship between the metrics and the sensor parameters
upon which they depend.
Sometimes, the standard metrics do not
tell the whole story. This is because they
measure only particular aspects of sensor
performance, and do not reflect the effect
that some artifacts and distortions have on
performance. Although absent from the
metrics, these artifacts and distortions will
be present in the simulated imagery, allowing the systems engineer to continue with
the optimization process even in cases
where the metrics do not reflect the true
limitations of the system.
Conclusion

PICASSO and similar end-to-end simulation tools form a vital part of electro-optical systems engineering at Aerospace. These tools have been successfully applied to numerous remote-sensing programs since their inception and can be expected to form the basis of an ongoing robust systems engineering capability.
Target radiance reaching the sensor is corrupted by atmospheric attenuation due to water vapor,
aerosols, and other atmospheric constituents. Measuring the target radiance is further complicated
by unwanted radiance reaching the sensor over many paths: In addition to direct and diffuse sunlight
reflected from the target, the sensor will receive radiance scattered by the atmosphere, as well as
direct and diffuse sunlight first reflected off the target’s surroundings and then scattered by the
atmosphere into the sensor’s field of view. For many applications, it is necessary to correct for both
atmospheric attenuation and these so-called path radiance terms.
Acknowledgements
The end-to-end simulation codes discussed
here are the product of many years of work.
The author would like to acknowledge the
efforts of those who helped create them, especially Jabin Bell, Tim Wilkinson, Robert
A. Keller, Richard Boucher, Linda Kalman,
Mark Vogel, Rose of Sharon Daly, Tom
Trettin, Joe Dworak, Terence S. Lomheim,
and Mark Nelson.
Further Reading

E. Casey and S. L. Kafesjian, “Infrared Sensor Modeling for Improved System Design,” SPIE Vol. 2743: Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, pp. 23–34 (1996).

S. A. Cota, L. S. Kalman, and R. A. Keller, “Advanced Sensor Simulation Capability,” SPIE Vol. 1310: Signal and Image Processing Systems Performance Evaluation (1990).

D. G. Lawrie and T. S. Lomheim, “Space-Based Systems for Missile Surveillance,” Crosslink, Vol. 2, No. 1 (Winter 2000/2001).

J. C. Leachtenauer, “National Imagery Interpretability Rating Scales: Overview and Product Description,” ASPRS/ASCM Annual Convention and Exhibition Technical Papers: Remote Sensing and Photogrammetry, Vol. 1, pp. 262–272 (1996).

T. S. Lomheim and E. D. Hernández-Baquero, “Translation of Spectral Radiance Levels, Band Choices, and Signal-To-Noise Requirements to Focal Plane Specifications and Design Constraints,” SPIE Vol. 4486 (2001).

T. S. Lomheim, J. D. Kwok, T. E. Dutton, R. M. Shima, J. F. Johnson, R. H. Boucher, and C. J. Wrigley, “Imaging Artifacts Due to Pixel Spatial Sampling Smear and Amplitude Quantization in Two-Dimensional Visible Imaging Arrays,” SPIE Vol. 3701: Infrared Imaging Systems: Design, Analysis, Modeling, and Testing X, pp. 36–60 (1999).
DATA COMPRESSION FOR REMOTE IMAGING SYSTEMS
Remote imaging platforms can generate a huge amount of data. Research at Aerospace has yielded
fast and efficient techniques for reducing image sizes for more efficient processing and transmission.
Timothy S. Wilkinson and Hsieh S. Hou
Digital cameras for the consumer market can easily generate several megabytes of data per
photo. Even with a fast computer, files of this size can be difficult to
work with. Various compression techniques
(such as the familiar JPEG standard) have
been developed to reduce these file sizes for
easier manipulation, transmission, and storage. Still, requirements for image compression in the home pale in comparison to
those in remote sensing, where single images can be hundreds of times larger.
The IKONOS commercial imaging
satellite, for example, can collect data simultaneously in red, green, blue, and near-infrared wavelengths. With a swath width of
7000 pixels and a bit depth of 12 bits per
pixel, a relatively modest 7000-line scan
generates 294 megabytes of data. Moreover,
the data can be collected in less than a
minute, so many such images can be generated fairly quickly. Even with powerful
computers on the ground, data sets of this
size would be of limited use without effective compression techniques.
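The arithmetic behind that 294-megabyte figure is simple to verify:

```python
# IKONOS-style collection: 4 spectral bands, a 7000-pixel swath,
# a 7000-line scan, and 12 bits per pixel.
bands, swath, lines, bits = 4, 7000, 7000, 12
total_bytes = bands * swath * lines * bits / 8
print(total_bytes / 1e6)  # 294.0 megabytes
```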
Image compression techniques fall into
two broad classes: lossless and lossy.
Lossless algorithms reduce file size but
maintain absolute data integrity; when reconstructed, a losslessly compressed image
is identical, bit for bit, to the original. Lossy
techniques, on the other hand, allow some
distortion into the image data; in exchange,
they typically achieve much greater compression than a lossless approach.
As part of its research into advanced
remote imaging systems, Aerospace has
helped develop more powerful methods for
compressing and manipulating vast
amounts of digital data. While different
compression strategies exist for different
sources of imagery, most can be analyzed in
terms of just a few building blocks. The first
step is typically to transform the image to
eliminate the redundancy that is inherent in
any digital image. The transformed representation can then be quantized to better organize and prioritize the data. The quantized data can then be coded to reduce the
overall length of the representation. These
steps are perhaps easiest to explain in reverse, beginning with the coding process.
Coding
Coding is the process of converting one set
of symbols into another. Morse code, for
example, is a lossless mapping of the English alphabet into dashes and dots. A well-designed code will usually consider the
probabilities of occurrence of the source
symbols before assigning the output symbols. The most probable symbol is ascribed
the shortest code, while less probable
source symbols are assigned a longer code.
So for example, in converting a photo of
Mars into binary code, a red pixel might be
rendered as 01 and a green pixel as 100101;
a photo of a golf course might use just the
opposite conversion. The Huffman code is
the most common example of this type of
lossless scheme.
In the late ’80s, a technique known as
arithmetic coding was developed as an alternative to the Huffman code. The basic principle is to represent a series of source symbols as a single symbol of much shorter
length. The process is more complicated
because it requires computation by both the
encoder and the decoder (as opposed to
simple substitutions, as in Huffman
coding); however, arithmetic coders typically achieve better results.
Both Huffman and arithmetic codes
generally handle only one source symbol at
a time. Better results can be obtained by
considering repetition of symbols as well.
To characterize sequential occurrences of a
single symbol, a technique known as run-length encoding is especially useful. Run-length encoding replaces a series of consecutive identical symbols by a code for that
symbol along with a code representing the
number of occurrences (e.g., something like
“b20” for a series of 20 blue pixels). To
characterize repeated occurrences of multisymbol patterns, substitution and dictionary
codes are helpful. Such codes begin by
compiling a table of repeated symbol sequences and assigning one or more codewords to represent each one.
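Run-length encoding in particular takes only a few lines; this sketch emits (symbol, count) pairs in the spirit of the “b20” example (the pair representation is an illustrative choice):

```python
from itertools import groupby

def run_length_encode(symbols):
    """Replace runs of identical symbols with (symbol, count) pairs."""
    return [(sym, sum(1 for _ in run)) for sym, run in groupby(symbols)]

print(run_length_encode("b" * 20 + "rrg"))  # [('b', 20), ('r', 2), ('g', 1)]
```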
Lossless codes can be applied directly
to an input image and will achieve some
(modest) amount of compression. Most
often, though, their role is to shorten the
amount of information that must be carried
to represent the series of symbols produced
through quantization, the previous step in
the image-compression chain.
Quantization
Quantization is the process of limiting the
number of possible values that a variable
may assume. In its most general form, a
quantizer can be thought of as a curve that
specifies for every possible input value one
of a smaller set of output values. For example, ten similar shades of green might be
restricted to just one.
Quantization can fulfill several useful
roles in an image-compression algorithm.
Many coders require a finite range of possible source-symbol values for efficient operation. This is especially important where
values passed to the quantizer are more or
less continuous, as they might be when
floating-point arithmetic is used in the transform. In lossless data compression, input to
the coder must be in the form of integers,
because there is no way to assign a finite-length binary code to a real or floating-point
number. Quantization also allows some
measure of importance (or weight) to be
assigned to different types of symbols. For
example, suppose that the data to be sent to
the coder consists not of a raw image but its
Fourier transform. The human eye is relatively insensitive to changes in content at
high spatial frequencies (so a grassy lawn
appears relatively uniform to the casual
eye). An efficient quantizer might therefore
try to represent the low-frequency coefficients of the transform (e.g., the gradual
color variations in the lawn) with greater
fidelity than the high-frequency coefficients,
possibly with different quantizers. Thus, the
less important data at high spatial frequencies might be permitted a smaller set of possible output values than the more important
data at low spatial frequencies.

“digital imaging”

Letter    Fixed-length code    Variable-length code
d         000                  011
i         001                  11
g         010                  10
t         011                  0011
a         100                  010
l         101                  0010
m         110                  0001
n         111                  0000
Total code bits: 42 (fixed-length) versus 39 (variable-length)

A variable-length code reduces the number of bits required to represent a set of symbols. In this case, the symbols to be coded are the letters in the words “digital imaging.” Because there are eight different letters, a fixed-length code requires three bits per letter; with fourteen total letters, that code requires 42 bits for the representation. When it is recognized that some of the symbols occur more frequently than others (“i” occurs four times and “g” occurs three times, for example), the code length can be varied to shorten the overall number of bits. In this case, a Huffman code was used to shorten that length from 42 to 39 bits.
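The 39-bit total in the table can be reproduced with a small Huffman coder (a sketch; depending on how ties are broken, individual codewords may differ from those shown, but the optimal total length is the same):

```python
import heapq
from collections import Counter

def huffman_code_lengths(text):
    """Return the Huffman codeword length for each symbol in text."""
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)                       # tiebreaker so dicts are never compared
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)   # merge the two least probable subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}  # all depths grow by 1
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

text = "digitalimaging"                   # the fourteen letters, space removed
lengths = huffman_code_lengths(text)
counts = Counter(text)
print(sum(lengths[s] * counts[s] for s in counts))   # 39
print(3 * len(text))                                 # 42 with a fixed 3-bit code
```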
Quantization involves some information loss (except in the case of a “null”
quantizer, which maps every input value to
that same output value); therefore, a quantizer applied directly to an image can
achieve some compression. For example,
suppose that an image with a 12-bit range
of possible values, 0–4095, is divided by
16—that is, the quantizer takes the integer
portion of the input when divided by 16.
The effect is to reduce the range of possible
output values to 0–255, which can be represented with just 8 bits. Compression has
been achieved, but at the expense of some
distortion. Most commonly, however, quantizers are not used directly for image compression. Instead, they are used to restrict or
weight the range of values produced by a
transform.

A quantizer can implement the “floor” function with clipping: for inputs from zero to four, the output is the largest integer not greater than the input; inputs less than zero are assigned a value of zero, while inputs greater than or equal to four are assigned a value of three. Thus, a possibly infinite range of input values is restricted to one of just four possibilities.

Different quantizers can also be applied selectively to emphasize or ignore certain types of data. A fine quantizer, with relatively small input increments and a relatively large number of possible output values, might be appropriate for important data, such as low-spatial-frequency information derived from an image transform. A coarse quantizer, with fewer possible output values corresponding to larger input ranges, can retain relatively unimportant image data at low fidelity.
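Both quantizers described here are one-liners with numpy (illustrative input values):

```python
import numpy as np

# Floor-with-clipping quantizer: any real input maps to one of {0, 1, 2, 3}.
def floor_clip(x):
    return np.clip(np.floor(x), 0, 3).astype(int)

print(floor_clip(np.array([-1.7, 0.2, 2.9, 6.0])))   # [0 0 2 3]

# Divide-by-16 quantizer: a 12-bit range (0-4095) compressed to 8 bits (0-255),
# at the expense of some distortion.
image12 = np.array([0, 17, 4095])
print(image12 // 16)                                  # [  0   1 255]
```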
Transformation
Transformation is the process of decomposing an image to identify redundant and irrelevant information (e.g., pattern repetition,
contextual information). The primary
source of redundant information is the local
correlation or similarity of neighboring
pixel values. Several phenomena give rise to
such correlation. First, the world around us
is correlated. Over short distances, colors
tend to be uniform and textures appear regular. Abrupt changes in intensity and pattern
are typically perceived as sharp edges. Second, the process of collecting an image produces correlation. Even diffraction-limited
optics impose a blur on the scene being imaged; such a blur blends neighboring scene
values and increases pixel-to-pixel similarity.
Several different transforms take advantage of spatial correlation for image
compression. Some of the more common
include differential pulse code modulation,
discrete cosine transformation, and wavelet-based transformation.
Differential Pulse Code Modulation
The spatial correlation in digital images implies a high degree of pixel-to-pixel predictability. A tool known as differential
pulse code modulation or predictive coding
takes advantage of this phenomenon to
compress image data.
A simple differential pulse code modulation compression scheme compares the current input image sample X(n), n = 0, 1, …, with the previous sample X(n–1) to produce a residual prediction error R(n), using an initial prediction of X(–1) = 0. The residual error is then passed on to the quantization and coding functions to achieve image compression.
In its simplest form, the predictive coding transform processes pixels sequentially
and uses the value of one pixel to predict
the value of the next. The difference between the predicted and actual value—the
residual—is then quantized and coded. The
advantage of operating on the residuals, as
opposed to the original image samples, is
that small residuals occur more often than
large ones. Thus, a quantizer/coder combination can take advantage of this distribution of information to achieve significant
rate reduction.
In more complicated systems, the predictor involves several surrounding pixels
instead of a single pixel. Even greater efficiency can be obtained by allowing the predictor to vary as a function of local content.
In this way, the prediction residual can be
consistently minimized but with some expense incurred to communicate the state of
the local predictor.
The prediction residual can be coded
losslessly or quantized prior to coding, in
which case some loss is possible. The Rice
algorithm is probably the best known of the
lossless predictive coding schemes. In its
basic form, it uses a single pixel difference
predictor along with a null quantizer and a
set of adaptive Huffman codes to achieve
compression.
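A sketch of the simplest case, a previous-pixel predictor with a null quantizer (so the scheme stays lossless); this is our illustration of the general idea, not the Rice algorithm itself:

```python
import numpy as np

def dpcm_encode(samples):
    """Residuals R(n) = X(n) - X(n-1), with the initial prediction X(-1) = 0."""
    x = np.asarray(samples, dtype=int)
    return np.diff(x, prepend=0)

def dpcm_decode(residuals):
    """Invert the encoder by accumulating the residuals."""
    return np.cumsum(residuals)

row = [100, 101, 101, 103, 102, 99]
r = dpcm_encode(row)
print(r)                        # [100   1   0   2  -1  -3]: small residuals dominate
print(dpcm_decode(r).tolist())  # exact round trip: [100, 101, 101, 103, 102, 99]
```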
Discrete Cosine Transform
An important characteristic of an imaging
system is its frequency response, essentially
the input-to-output ratio for signals across a
frequency range. The frequency response
often functions as a blurring mechanism or
low-pass filter. When this response is superimposed on an image whose power spectral
density falls off exponentially, the resulting
digital image content is dominated by low
and middle spatial frequencies. Compression based on the discrete cosine transform,
as exemplified by the JPEG algorithm,
takes advantage of this phenomenon by
decomposing the image into a series of basis patterns of increasing spatial frequency.
For example, a 4 × 4 pixel region of an
image can be represented by 16 basis patterns, ranging from a completely uniform
block (the lowest spatial frequency) to a
checkerboard pattern (the highest spatial
frequency). An appropriately weighted sum
of these patterns can be made to represent
any selected region. Moreover, the
weights—that is, the transform coefficients—can be efficiently computed. The
coefficients provide a measure of the energy
present at different spatial frequencies in the
region. As the coefficient indices increase,
so do the spatial frequencies represented by
the associated basis patterns.

The basis patterns for a 4 × 4 pixel discrete cosine transform can be weighted to represent any possible input block of 4 × 4 pixels: the block I(m,n) is the sum of each coefficient c(k,l) times its basis pattern (with pattern values ranging from +1 to –1). The transform of an image or image block can be computed efficiently. The resulting coefficients are loosely ordered by spatial frequency, with low-spatial-frequency patterns at the upper left and high-spatial-frequency patterns at the lower right.
When the input region contains image
data, the low-frequency coefficients will
generally be of higher amplitude than the
high-frequency coefficients. In discrete cosine transform compression, the image is
first divided into regular blocks—8 × 8 pixels is a typical choice. Within each block,
the transform coefficients are typically
arranged according to frequency. A quantizer is then applied to the transform coefficients to limit their range of possible values
prior to lossless coding. For low spatial frequencies, which represent most of the energy in an image block, the quantization is
rather fine so that minimal distortions are
introduced. At higher spatial frequencies,
where there is less image energy and less
visual sensitivity to distortions, quantization
may be rather coarse.
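A sketch of block-based discrete cosine transform compression using scipy (the 8 × 8 block and the frequency-dependent step sizes are illustrative choices, not the actual JPEG quantization tables):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """Orthonormal 2D discrete cosine transform of one image block."""
    return dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))          # one 8 x 8 image block
coeff = dct2(block)

# Frequency-dependent quantization: fine steps at low spatial frequencies,
# coarse steps at high ones (the step grows with the coefficient indices).
i, j = np.indices((8, 8))
step = 1.0 + 2.0 * (i + j)                   # illustrative step sizes
quantized = np.round(coeff / step)           # this rounding is the lossy part

# Reconstruct to see the distortion the coarse steps introduced.
recon = idct(idct(quantized * step, norm='ortho', axis=1), norm='ortho', axis=0)
print(np.abs(recon - block).max())           # small compared with the 0-255 range
```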
Wavelets
One of the drawbacks of discrete cosine
transformation stems from the use of
transform blocks in which to compute the
frequency content of the image. When large
blocks are used, the transform coefficients
fail to adapt to local variations in the image.
When small blocks are used (as in JPEG),
the image can exhibit blocking artifacts,
which impart a mosaic appearance. Compression researchers and mathematicians
therefore sought a transform that would
provide both reasonable spatial frequency
isolation and reasonable spatial localization.
The result was the so-called “wavelet transform.”
Wavelets are functions that obey certain orthogonality, smoothness, and self-similarity criteria that mathematically are
rather esoteric. Those properties are significant for image compression, however, because when wavelets are used as the basis
of an image transform, the resulting function is highly localized both in spatial and
frequency content.
The wavelet transform can be thought
of as a filter bank through which the original image is passed. The filters in the bank,
which are computed from the selected
wavelet function, are either high-pass or
low-pass filters. The image is first filtered in
the horizontal direction by both the low-pass and high-pass filters. Each of the resulting images is then downsampled by a
factor of two—that is, alternate samples are
deleted—in the horizontal direction. These
images are then filtered vertically by both
the low-pass and high-pass filters. The resulting four images are then downsampled
in the vertical direction by a factor of two.
Thus, the transform produces four images:
one that has been high-pass filtered in both
directions, one that has been high-pass filtered horizontally and low-pass filtered vertically, one that has been low-pass filtered
horizontally and high-pass filtered vertically, and one that has been low-pass filtered in both directions.

The basic flow of a single stage of a wavelet transform requires a low-pass and high-pass filter pair. Operations are performed separably in both horizontal and vertical directions. All possible combinations of filter and orientation are represented, with a subsampling by a factor of 2 in each direction. The result is four sub-images, each of which is 1/4 the size of the input image. The sub-images passing through one or more high-pass filters retain edge content at the particular spatial frequency represented at this stage of the transform. The sub-image resulting from exclusive application of low-pass filters is a low-resolution version of the starting image and retains much of the pixel-to-pixel spatial correlation that was originally present.
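One stage of this filter bank is compact in code. The sketch below uses the Haar filter pair, the simplest wavelet choice, assumed here for brevity; averaging and differencing adjacent pixel pairs is equivalent to filtering followed by subsampling by two:

```python
import numpy as np

def haar_stage(img):
    """One 2D wavelet stage: filter and subsample by 2 in each direction,
    returning the LL, LH, HL, and HH sub-images (each 1/4 the input size)."""
    lo = lambda a, axis: (a.take(range(0, a.shape[axis], 2), axis) +
                          a.take(range(1, a.shape[axis], 2), axis)) / 2
    hi = lambda a, axis: (a.take(range(0, a.shape[axis], 2), axis) -
                          a.take(range(1, a.shape[axis], 2), axis)) / 2
    L, H = lo(img, 1), hi(img, 1)                    # horizontal pass
    return lo(L, 0), hi(L, 0), lo(H, 0), hi(H, 0)    # vertical pass

img = np.random.default_rng(0).uniform(0, 255, (256, 256))
ll, lh, hl, hh = haar_stage(img)
print(ll.shape)   # (128, 128): a reduced-resolution version of the input
# A full transform would recurse on ll, building the multiresolution pyramid.
```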
The high-pass filtered images appear
much like edge maps. They typically have
strong responses only where significant image variation exists in the direction of the
high-pass filter. The image that has been
low-pass filtered in both directions appears
much like the original—in fact, it’s simply a
reduced-resolution version. As such, it
shares many of the essential statistical properties of the original. In a typical wavelet
transform, the double-lowpass image at
each stage is decomposed one more time,
giving rise to a multiresolution pyramid representation of the original image. At each
level of the pyramid, a different spatial frequency is represented.
The transform result can then be quantized and coded to reduce the overall
amount of information. Low-spatial-frequency data are retained with greatest fidelity, and higher spatial frequencies can be
selectively deemphasized or discarded to
minimize the overall visual distortion introduced in the compression process.
Aerospace Research
Aerospace researchers were among the first
to examine wavelet image compression.
One early goal was a tool for progressive
image transmission. Developers envisioned
a server with a compressed image in its
database. A client could gain access to a
thumbnail or low-resolution version of the
image by requesting an appropriate level of
the wavelet pyramid from the server. The
client could use the spatial localization
properties of the transform to isolate a particular region of interest and incrementally
improve its quality. The tool never advanced
beyond the prototype stage, but did provide
a vision for the level of image interactivity
that wavelets might provide.
In the mid-1990s, the popularity of
wavelets expanded into government circles.
Encouraged by the increasing availability of
commercial JPEG products and excited by
the prospect of increased compression
efficiency, the government sponsored various efforts geared toward establishing a
standard based on wavelets. Aerospace began participating in the U.S. committee that
worked under the aegis of ISO (the International Organization for Standardization) to
develop a standard that would improve
JPEG, primarily by offering superior image
quality at low bit rates.
An image compressed using JPEG2000 but with two different data orderings, “progressive by SNR” and “progressive by resolution,” shown at 0.12, 0.38, and 1.25 bits, with the final image at 3.9 bits. Either of these orders can be produced without expanding and recompressing the data: Once the data are coded, a parsing application can reorder the data by moving coded pieces and altering a small number of header values appearing in the code stream. Final quality can be controlled by a simple truncation of the resulting code stream.

JPEG2000
In 1997, ISO issued a call for proposals for
the standard that would become known as
JPEG2000. The algorithm that was ultimately selected involves applying a spatial
wavelet transform to an image, quantizing
the resulting transform coefficients, and
grouping them by resolution level and spatial location for arithmetic coding. During
JPEG2000 development, Aerospace made
several contributions with an eye toward
maximizing the standard’s utility for its
government customer. For example, to enable future expansion of the standard while
maintaining backward compatibility, Aerospace helped develop an extensible code
stream syntax. This syntax also allowed the
standard to be split into two parts. Part I
contains the basic JPEG2000 algorithm, of
interest to the vast majority of users. Part II
contains extensions to Part I that in most
cases are significantly more complex and
may be tailored to specific groups of users.
Of particular interest to the remote-sensing
community is the ability to handle multispectral and hyperspectral images that may
contain tens or hundreds of correlated
bands. In fact, Aerospace led development
of a Part II extension that adds the ability to
implement a spectral transform in addition
to the spatial wavelet transform to provide
increased flexibility and compression efficiency for these potentially huge images.
The major advantage of JPEG2000
over other image coding systems is its flexibility in handling compressed data. When
JPEG2000 is used to organize and code a
wavelet-transformed image, the resolution
levels within the wavelet pyramid, the spatial frequency directions within a level, and
the spatial regions themselves are split into
code blocks that are all coded independently.
Within any code block, data are encoded
from most significant bit to least significant
bit. A construct known as layering allows a
certain number of significant bits from all
of the code blocks to be indexed together.
These groupings provide great flexibility in
accessing and ordering data for specific
purposes.
For example, one common implementation, known as “progressive by SNR,” orders the coded data within a file so that the
signal-to-noise ratio of the resulting image
is improved as rapidly as possible. High-amplitude features, such as sharp edges and
high-contrast regions, stand out earliest—
that is, at the lowest bit rates. Such a representation might be useful to someone who
wants to identify major features in an image, regardless of their size or extent, as
quickly as possible. Another configuration,
known as “progressive by resolution,” organizes coded data by resolution level, with
lowest resolution first and highest resolution last. In this case, the ordering is most
useful for someone who needs “the big picture” first. It can be thought of as zooming
in on the image instead of building it up
from its prominent components. Either of
these configurations can be produced without expanding and recompressing the data;
once the data are coded, a parsing application can reorder the data by moving coded
pieces and altering a small number of
header values appearing in the code stream.
Final quality can be controlled by a simple
truncation of the resulting code stream.
Such flexibility brings home the promise of JPEG2000’s future—compress only
once, but support a wide range of users.
Granted, JPEG2000 compression is more
complicated than other methods—it requires 3–10 times more operations than
JPEG, for example. But many useful data
orderings become possible using relatively
simple parsing applications. As a result,
highly interactive client/server sessions involving imagery are enabled. In fact, an additional part of the JPEG2000 standard,
known as the JPEG2000 Interactive Protocol (JPIP), will provide a general syntax for
client/server interactions involving
JPEG2000 images.
Examples of the basic performance of JPEG2000 relative to JPEG. The selected image is a scene
from the NITF test suite that is highly stressing for many image compression algorithms. The image
quality produced by the two algorithms at 1 bit per pixel is similar. For many images, JPEG2000 generally provides slightly better image quality than JPEG when the compressed rates are 1–2 bits per
pixel. At 0.5 bits per pixel, the JPEG2000 image is a distinct improvement, particularly with respect
to the transform block boundaries that appear in JPEG. At 0.25 bits per pixel, the JPEG image begins to look like a mosaic, whereas JPEG2000 provides a more gracefully degrading blur across the
scene. The improvement doesn’t come free, however; JPEG2000 implementation is roughly 3–10
times more complex than JPEG.
The Fast Lifting Scheme
The process of computing a wavelet transform by passing the input pixels through filters followed by subsampling is generally
inefficient for large data sets. Aerospace has
been working with a process known as “lifting,” which can reduce the computation to
an algorithm involving simple multiplication and addition.
The conventional lifting scheme involves factorizing the wavelet transform
matrix into several elementary matrices.
Each elementary matrix results in a lifting
step. The process involves several iterations
of two basic operations. The first is called
the prediction step, which considers a predicted pixel in relation to the weighted average of its neighboring pixels and calculates
the prediction residual. The second is called
the update step, which uses the prediction
residual to update the current pixel so that
the prediction residual becomes smaller in
the next iteration. This lifting factorization
reduces the computational complexity of
the wavelet transform almost by half. It also
allows in-place calculation of the wavelet
transform, which means that no auxiliary
memory is needed for computations. On the
other hand, the number of lifting steps can
affect performance. The fidelity of integer-to-integer transforms used in lossless data
compression depends entirely on how well
they approximate their original wavelet
transforms. A wavelet transform with a
large number of lifting steps would have a
greater approximation error, mostly from
rounding off the intermediate real result to
an integer at each lifting step.
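The predict/update pattern is easiest to see in the Haar case (a sketch of generic integer-to-integer lifting, not the specific Aerospace factorization):

```python
import numpy as np

def haar_lift_forward(x):
    """Integer-to-integer Haar transform via lifting (copies are made here for
    clarity; the scheme also permits fully in-place computation)."""
    s, d = x[0::2].copy(), x[1::2].copy()
    d -= s                # predict: each odd sample is predicted by its neighbor
    s += d // 2           # update: make the low-pass channel track the pair mean
    return s, d

def haar_lift_inverse(s, d):
    """Undo the lifting steps in reverse order -- exactly, in integers."""
    s = s - d // 2
    d = d + s
    out = np.empty(s.size + d.size, dtype=int)
    out[0::2], out[1::2] = s, d
    return out

x = np.array([10, 12, 9, 9, 40, 38], dtype=int)
s, d = haar_lift_forward(x)
print(s, d)                          # low-pass averages and small residuals
print(haar_lift_inverse(s, d))       # exact round trip: lossless
```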
Using a different factorization of the
wavelet transform matrix, Aerospace developed a new lifting method that substantially
reduces the number of lifting steps in lossless data compression. Consequently, it significantly reduces the overall rounding error incurred in converting from real numbers to integers at each lifting step. In addition, the new lifting method can be made to
adapt to local characteristics of the image
with less memory usage and signal delay.
For lossless data compression, it’s almost
twice as fast as the conventional wavelet
transform method and is quite suitable for
fast processing of multidimensional hyperspectral data.
Comparison of image quality derived from the modulated lapped transform (MLT) and JPEG using the discrete cosine transform (DCT): with the modulated lapped transform, more high-frequency terms are saved from quantization. Consequently, the quality of the reconstructed image is superior for the same compression ratio, as is evident when the images are enlarged.
The Modulated Lapped Transform
Although wavelet transforms do not exhibit
the same blocking artifacts as discrete cosine transforms, they can blur image edges
in lossy compression. A newer approach,
the lapped transform, combines the efficiency of the discrete cosine transform with
the overlapping properties of wavelets. In recent studies, lapped transforms have been shown to outperform both techniques in preserving image quality. One particular type of lapped transform, the modulated lapped transform, is being investigated at Aerospace.

Terrain categorization is an important example of machine exploitation using multispectral images. Based on pattern recognition techniques, the decompressed image pixels are classified into groups representing different terrain and land covers. A comparison of the effects of compression artifacts on classification errors for a five-band subset of a Landsat scene, compressed 8 to 1, shows that the image compressed by the modulated lapped transform yields a better categorization score (89 percent correct) than the JPEG (81 percent correct).
The modulated lapped transform employs both a discrete cosine transform and a
bell-shaped weighting (window) function,
which resembles the low-pass filter function in a wavelet transform. This window
function and the discrete cosine transform
operate on two adjacent blocks of pixels
successively. The window function modulates the input to the discrete cosine transform, allowing the blocks to overlap prior
to actual transformation. Thus, the modulated lapped transform achieves the high
speed of the discrete cosine transform without the blocking artifacts. The weighting of
the input data by the bell-shaped window
function causes the high-frequency terms to
roll off much faster than they would using a
discrete cosine transform alone. Thus, more
of them are saved from quantization (a
source of loss). Consequently, the quality of
the reconstructed image is better than that
achieved through a discrete cosine transform
for the same compression ratio.
Another important point is that the
modulated lapped transform has more than
two channels at the onset of transformation,
whereas the wavelet transform has only two
and needs to progressively split the low-pass
channel as it goes to lower resolution levels.
Thus, the modulated lapped transform operates faster than a wavelet transform.
Most applications of modulated lapped
transform, including the Aerospace-patented
version, are for lossy image compression;
however, using a different formulation and
the fast lifting method, Aerospace researchers
have derived lossless and near-lossless
modulated lapped transform algorithms. In
addition, the lossless compressed data have
excellent resistance to error propagation because of the block structure. Given its error
resistance and fast processing properties, the
lossless and near-lossless modulated lapped
transform would be suitable for compressing multidimensional hyperspectral data in
remote-sensing applications.

Aerospace Contributions to Image Data Compression Techniques

Aerospace has developed several new and alternative high-quality image-data compression techniques during the last 20 years, mostly for use with multispectral and hyperspectral imaging systems. A few highlights include:

1. Split-Radix Discrete Cosine Transform, U.S. Patent 5,408,425, 1995.
2. Modulated Lapped Transform Method, U.S. Patent 5,859,788, 1999.
3. Merge and Split of Fourier Transformed Data, U.S. Patent pending.
4. Merge and Split of Hartley Transformed Data, U.S. Patent pending.
5. Merge and Split of Discrete Cosine Transformed Data, U.S. Patent pending.
6. Merge and Split of Discrete Sine Transformed Data, U.S. Patent pending.
7. Merge and Split of Karhunen-Loeve Transformed Data, U.S. Patent pending.
8. Merge and Split of Generalized Transformed Data, U.S. Patent pending.
9. Multiple Description Transmission and Resolution Conversion of Compressed Data, Aerospace Invention Disclosure, 2002.
10. Lossless Discrete Cosine Transform with Embedded Haar Wavelet Transform, Aerospace Invention Disclosure, 2002.
11. Lossless Modulated Lapped Transform with Embedded Haar Wavelet Transform, Aerospace Invention Disclosure, 2002.
12. Extended Haar Transform and Hybrid Orthogonal Transform, Aerospace Invention Disclosure, 2003.
13. New Lifting Scheme in Wavelet Transforms, Aerospace Invention Disclosure, 2004.
14. Fast Adaptive Lifting Scheme in Wavelet Transforms, Aerospace Invention Disclosure, 2004.
Conclusion
As both the spatial and spectral resolutions
of remote image sensors increase, the
amount of data they collect continues to
grow. On the other hand, the available
communication channels for faithfully
transmitting the data to ground are becoming scarce. There is therefore a real need to
develop new data-compression techniques
that can support storage and transmission of
images of varying resolution and quality. In
addition, the compression operations must
be fast and require little power for instantaneous processing.
The trend is toward more interactive
manipulation of imagery. The wavelet
transform that underlies the JPEG2000
standard for still images and the transform
that underlies the MPEG4 standard for
video compression share many common
properties. With their successful integration
in the future, new data-compression techniques should be able to process huge datasets with high fidelity and speed.
JPIP will be suitable for use on the
Internet, and interactive and customizable
Web-based applications could begin appearing soon. Such a development will hold
special interest for the remote-sensing community as a tool to minimize dissemination
delays.
An important military standard, the
National Imagery Transmission Format
(NITF), provides a file structure to support
not only image data but also associated support data and other graphical information.
The integration of JPEG into NITF provided government users with increased image-compression capabilities and made it
possible to use commercially available
products. JPEG2000 is being incorporated
into NITF version 2.1 and will significantly
enhance these capabilities.
Ultimately, each imagery source and
provider presents a unique set of requirements and challenges. Through continued
involvement with the JPEG committee,
Aerospace will provide valuable insight
into the effective integration of JPEG2000
into government and military systems. Similarly, further investigation into the fast lifting scheme and the modulated lapped transform will ensure the usability of more
comprehensive remote-imaging systems.
Detecting Air Pollution from Space
The wildfires that ravaged southern California in 2003 not only scarred
the landscape but also dumped pollutants into the air. These fires provide
an example of how satellite data can reveal the impact of intense local
sources of air pollution on air quality on a regional or even global scale.
This true-color image was taken by the Moderate Resolution Imaging
Spectroradiometer (MODIS) on NASA’s EOS Aqua satellite and clearly
shows the smoke plumes of ten raging fires. MODIS data can assist in
monitoring the transport of aerosolized pollutants.
SeaSpace Corporation
The use of satellite data for air-quality
applications has been hindered by a
historical lack of collaboration between
air-quality and satellite scientists.
Aerospace is well positioned to help bridge
the gap between these two
communities.
Leslie Belsma
Satellite data have traditionally been underexploited by the air-quality community. The Environmental Protection Agency (EPA),
together with state and regional air-quality agencies, relies instead
on an extensive ground-based network to monitor and predict urban air quality. Recent and planned technological advancements in remote
sensing are demonstrating that space-based measurements can be a valuable tool for forecasting air quality, providing information not available
from traditional monitoring stations. Satellite data can aid in the detection,
tracking, and understanding of pollutant transport by providing observations over large spatial domains and at varying altitudes. Satellites can be
the only data source in rural and remote areas where no ground-based
measurements are taken. Satellite data can be used qualitatively to provide
a regional view of pollutants and to help assess the impact of events such as
biomass burning or dust transport from remote sources. Space-based data
can also be used quantitatively to initialize and validate air-quality models.
The Aerospace Corporation has a long history of support to meteorological satellite programs such as the Defense Meteorological Satellite Program (DMSP), the Polar-orbiting Operational Environmental Satellites (POES), and the Geostationary Operational Environmental Satellites (GOES). More recently, this support has extended to environmental satellite systems such as NASA's Earth Observing System (EOS) and the future
National Polar-orbiting Operational Environmental Satellite System
(NPOESS), which will merge DMSP and POES weather satellites into an
integrated environmental observation system. These systems could play a
prominent role in forecasting and improving air quality over urban centers.

NCAR/University of Toronto MOPITT
The MOPITT sensor (Measurement Of Pollution In The Troposphere) aboard NASA's EOS Terra satellite is designed specifically to measure carbon monoxide profiles and total column methane (a hydrocarbon). Carbon monoxide, produced as a result of incomplete combustion during burning processes, is a good indicator of atmospheric pollution. This false-color MOPITT image shows the atmospheric column of carbon monoxide resulting from the southern California wildfires of 2003, on a scale of 0.0 to 4.2 × 10^18 mol/cm². Yellow and red indicate high levels of pollution (gray areas show where no data were taken, probably because of cloud cover). Pollutants can be seen spreading over the western states and into the Pacific Ocean.
Federal Regulations
The Clean Air Act gives EPA the authority
to regulate emissions that cause air pollution. Accordingly, the agency sets national
ambient air-quality standards (NAAQS) for
six air pollutants: carbon monoxide, nitrogen dioxide, ozone, lead, sulfur dioxide,
and particulate matter under 10 microns.
The EPA has also set standards recently for
fine particulates under 2.5 microns.
All states must comply with the
NAAQS. Those that do not can be denied
federal funding for highways and other
projects. The EPA requires that each state
have a plan to implement federal smog-reduction laws. States therefore need to
model the weather as well as the transport,
dispersion, and chemical and physical
transformation of pollutants to determine
the impact of emission sources and set regulatory policy. The EPA provides guidelines
for regulatory modeling, but while these
models are quite sophisticated, they make
little use of ground or space-based measurements to improve forecast accuracy.
Many air-quality agencies issue continuous operational air-quality forecasts,
which are based on ground-based measurements and predicted weather conditions.
Meteorological satellite data are used in the
generation of weather forecasts, but no
space-based pollution data are used in predicting air quality. Recently, NOAA and
EPA entered an agreement to provide national forecasts of ozone and fine particulates. Under the terms of this agreement,
EPA will model emission sources and
NOAA will run the national weather-forecast and air-quality models continuously. This represents a new era in air-quality modeling in which satellite data will
become essential for establishing the background and boundary conditions necessary
to forecast air quality operationally on a
national scale.
The Air We Breathe
The EPA sets national ambient air-quality standards for six air
pollutants: carbon monoxide, nitrogen dioxide, ozone, lead, sulfur dioxide, and particulate matter.
Carbon monoxide is a colorless, odorless, poisonous gas produced through incomplete combustion of burning materials. Nitrogen oxides—a byproduct of burning fuels in vehicles, power
plants, and boilers—are one of the main components (together
with ozone and particulates) of the brownish haze or smog that
forms over congested areas; they also precipitate as acid rain.
Ozone in the stratosphere plays an important role, shielding the
planet from ultraviolet radiation; it’s far less desirable in the troposphere, where it can irritate lungs and impair breathing. Sulfur oxides come from transportation sources as well as from the
burning of fossil fuels in nontransportation processes such as oil
refineries, smelters, paper mills, and coal power plants; they
also contribute to acid rain. Hydrocarbons, also known as
volatile organic compounds (VOCs), contribute to smog; tropospheric ozone forms when oxygen in the air reacts with
VOCs in the presence of sunlight. Particulates—tiny solid or liquid particles suspended as smoke or dust—come mainly from
construction, agriculture, and dusty roads as well as industries
and vehicles that burn fossil fuels.
Mobile sources (cars and trucks) account for more than 70 percent of the emissions that cause smog and 90 percent of the
emissions that lead to higher levels of carbon monoxide. Industrial processes that burn fossil fuels also contribute to air pollution. While electric power plants in the Los Angeles basin are
relatively clean, nationwide, they produce two-thirds of all sulfur
dioxide, more than one-third of nitrogen oxides, and one-third of
the particulate matter released into the air.
15
10
Steve Palm, ICESat/NASA Goddard Space Flight Center
Height (kilometers)
The Geoscience Laser Altimeter System
aboard NASA’s ICESat satellite measures backscattered light to determine
the vertical structure of clouds, pollution,
and smoke plumes in the atmosphere.
The observation here, taken October
28, 2003, shows the thick smoke
plumes emanating from the California
wildfires. The image represents a vertical slice of Earth’s atmosphere along
the satellite path, as shown by the green
line superimposed on a MODIS image
(insert) taken 7 hours earlier. The
zigzag features are the smoke plumes
from the fires rising up as high as 5 kilometers. The thin features toward the
upper right are high-level cirrus clouds.
The large black feature jutting up above
sea level is the mountain range separating Santa Barbara from the San
Joaquin Valley. Note the low-lying pollution over San Joaquin.
Space-Based Data Sources
The combination of measurements from
current and planned environmental satellite
sensors that monitor the troposphere will
play an increasingly important role in explaining pollutant chemistry and transport
processes in the lower atmosphere. Satellite-based measurements of ozone, sulfur
dioxide, nitrogen dioxide, carbon monoxide, and aerosols have been compared with
EPA ground-based data to demonstrate the
potential benefit of satellite data in tracking
emissions and their transport.
A European project has demonstrated
the use of Landsat and SPOT satellite data
to provide a relative quantitative scale of urban air pollution—specifically, fine particulates and sulfur dioxide. The third NASA
EOS satellite, Aura, is designed to study
Earth’s ozone, air quality, and climate; it
houses one sensor designed specifically to
measure trace gases in the troposphere.
Numerous satellite sensors can detect at
least some type of aerosols—including smoke
plumes from fires—and can thus provide a basis for deriving emissions estimates. Soil
moisture can be detected from space and is an
essential piece of information for estimating
how much dust (a type of particulate matter) is
contributing to atmospheric haze. Satellite imagery has traditionally been used to characterize land cover to estimate biogenic emissions,
and this imagery is now being used more
directly to derive biogenic emissions through
an inverse analysis retrieval technique.
Several satellite missions designed to
detect stratospheric ozone can also provide
information on tropospheric ozone levels.
There is much potential benefit in combining these initial efforts by scientists to monitor air quality from space with the remote-sensing retrieval and calibration
technologies developed at Aerospace to
support defense satellite programs.
NPOESS
NPOESS marks a new era in advanced
environmental monitoring from space.
Though air-quality agencies did not play a
role in defining requirements for the baseline system, many of the eleven baseline
sensors aboard NPOESS will provide data
directly applicable to monitoring air pollutants. For example, the Visible Infrared
Imaging Radiometer Suite will provide accurate aerosol detection. In addition, the
NPOESS mission will include an instrument dedicated to aerosol detection, the
Aerosol Polarimeter Sensor, which is
scheduled to fly for the first time aboard a
NASA mission in 2007. Thermodynamic
soundings of high spatial resolution will be
provided by the Crosstrack Infrared
Sounder; these soundings can contribute to
the detection of trace-gas concentrations.
The Ozone Mapping and Profiler Suite
consists of a nadir system for both total column ozone and profile ozone observations,
as well as a limb system for profile ozone
observations at high vertical resolution.
In support of the NPOESS program,
Aerospace leads the technical support for
the acquisition of many of these sensors and
the algorithms to retrieve various environmental parameters. The Crosstrack Infrared
Sounder, Visible Infrared Imaging Radiometer Suite, and Ozone Mapping and
Profiler Suite sensors will fly for the first
time in 2006 on the NPOESS Preparatory
Project satellite, an NPOESS risk-reduction
mission that will also provide a bridge
between NASA’s EOS Aqua and Terra and the first NPOESS satellite in 2009. The higher resolution and more timely data from NPOESS will enable more accurate short-term air-quality forecasts and warnings.

Extreme Living

Even with improvements resulting from regulation, 80 million people in the United States are still breathing air that does not meet at least one EPA air-quality standard. The EPA Web site maps noncompliant or “nonattainment” areas by pollutant type based on the ground-based monitoring network. For example, the Los Angeles region is categorized as “extreme” nonattainment for 1-hour ozone levels (the only region in the country with an “extreme” designation) and “serious” nonattainment for carbon monoxide and particulate matter.

Conclusion

Just as new satellite data have helped advance the science of weather prediction, so can they assist the science of air-quality forecasting. The amount of satellite data available is going to increase substantially in the coming years, including information about pollutant concentrations not well measured previously. Aerospace is working to help air-quality agencies fully exploit the wealth of current and planned space-based environmental data to improve air-quality forecasting.

SeaSpace Corporation

The “aerosol optical depth” is one of the many products from the MODIS sensor aboard the EOS Terra and Aqua satellites. These data can dramatically improve the manual detection of pollution by the air-quality forecaster. This space-based view enables researchers to monitor air pollution events over extended periods and geographical areas. A relationship between MODIS aerosol optical depth and ground-based hourly fine-particulate measurements (particles down to 2.5 microns) allows the MODIS data to be used qualitatively and quantitatively to estimate EPA air-quality categories.
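To make the caption’s last point concrete, the sketch below shows one way such a relationship might be applied: fit ground-based PM2.5 against coincident MODIS aerosol optical depth (AOD), then use the fit to bin new retrievals into categories. The matched data, fitted coefficients, and category thresholds are all invented for illustration and are not taken from the article or from EPA regulations.

```python
# Schematic AOD-to-PM2.5 sketch. All numbers are hypothetical.
import numpy as np

# Invented matched observations: MODIS AOD vs. hourly PM2.5 (micrograms/m^3)
aod = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.80])
pm25 = np.array([4.0, 8.0, 15.0, 28.0, 41.0, 65.0])

slope, intercept = np.polyfit(aod, pm25, 1)     # simple linear fit

def category(aod_value):
    """Estimate PM2.5 from AOD and bin it into an illustrative category."""
    pm = slope * aod_value + intercept
    if pm <= 35.0:
        return pm, "good/moderate"
    if pm <= 55.0:
        return pm, "unhealthy for sensitive groups"
    return pm, "unhealthy"

pm, label = category(0.4)
print(f"AOD 0.4 -> PM2.5 ~ {pm:.0f} ug/m^3 ({label})")
```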
Further Reading
J. Engel-Cox, A. Haymet, and R. Hoff, “Review and Recommendations for the Integration of Satellite and Ground-based Data for
Urban Air Quality,” Air & Waste Management
Association Annual Conference and Exhibition, San Diego, CA (2003).
J. Fishman, A. E. Wozniak, and J. K. Creilson,
“Global Distribution of Tropospheric Ozone
from Satellite Measurements Using the Empirically Corrected Tropospheric Ozone
Residual Technique: Identification of the Regional Aspects of Air Pollution,” Atmospheric
Chemistry and Physics, 3, 893–907 (2003).
“Mapping of Urban Air Quality,” Centre d’Energétique Web site, http://www-cenerg.cma.fr/Public/themes_de_recherche/teledetection/title_tele_air/mapping_of_urban_air/view (accessed May 12, 2004).
D. Neil, J. Fishman, and J. Szykman, “Utilization of NASA Data and Information to Support Emission Inventory Development,”
NARSTO Emission Inventory Workshop: Innovative Methods for Emission Inventory Development and Evaluation, Austin, TX (2003).
“State of the Art in EO Methods Related to
Air Quality Monitoring,” ICAROS (Integrated
Computational Assessment via Remote Observation System) Web site, http://mara.jrc.it/
orstommay.html (accessed May 12, 2004).
Synthetic-Aperture Imaging Ladar
Aerospace has been developing a remote-sensing technique
that combines ultrawideband coherent laser radar with
synthetic-aperture signal processing. The goal is to achieve
high-resolution two- and three-dimensional imaging at
long range, day or night, with modest aperture diameters.
Walter F. Buell, Nicholas J. Marechal, Joseph R. Buck,
Richard P. Dickinson, David Kozlowski, Timothy J. Wright,
and Steven M. Beck
Conventional optical imagers, including imaging radars, are limited in spatial resolution by the
diffraction limit of the telescope
aperture. As the aperture size increases, the
resolution improves; as the range increases,
resolution degrades. Thus, high-resolution
imaging at long ranges requires large telescope diameters. Imaging resolution is further dependent on wavelength, with longer
wavelengths producing coarser spatial resolution. Thus, the limitations of diffraction
are most apparent in the radio-frequency
domain (as opposed to the optical domain,
for example).
A technique known as synthetic-aperture radar was invented in the 1950s to
overcome this limitation: In simple terms, a
large radar aperture is simulated or synthesized by processing the pulses emitted at
different locations by a radar as it moves,
typically on an airplane or a satellite. The
resulting image resolution is characteristic
of significantly larger systems. For example, the Canadian RadarSat-II, which is slated to fly at an altitude of about 800 kilometers, has an antenna size of 15 × 1.5
meters and operates at a wavelength of 5.6
centimeters. Its real-aperture resolution is
on the order of 1 kilometer, while its
synthetic-aperture resolution is as fine as 3
meters. This resolution enhancement is
made possible by recording the phase history of the radar signal as it travels to the
target and returns from various scattering
centers in the scene. The final synthetic-aperture radar image is reconstructed from
many pulses transmitted and received during
a synthetic-aperture evolution time using sophisticated signal-processing techniques.
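As a back-of-the-envelope check, the standard diffraction relations can be applied to the RadarSat-II numbers just quoted. The sketch below uses only figures from the text plus the textbook strip-map azimuth limit of half the antenna length; the finer 3-meter figure cited above corresponds to modes that dwell on the scene longer than the basic strip-map case, so this is order-of-magnitude confirmation rather than a system specification.

```python
# Order-of-magnitude check of the RadarSat-II figures quoted in the text,
# using standard diffraction relations (not mission documentation).
WAVELENGTH = 0.056   # meters (5.6-centimeter wavelength)
RANGE = 800e3        # meters (approximate orbital altitude)
ANTENNA = 15.0       # meters (long dimension of the 15 x 1.5 meter antenna)

real_res = WAVELENGTH * RANGE / ANTENNA   # lambda * R / D: ~3 kilometers
sar_az_res = ANTENNA / 2.0                # strip-map azimuth limit: D / 2

print(f"real-aperture resolution: {real_res:.0f} m")        # ~2987 m
print(f"strip-map SAR azimuth limit: {sar_az_res:.1f} m")   # 7.5 m
```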
Aerospace is investigating ways to apply the techniques and processing tools of
radio-frequency synthetic-aperture radars to
optical laser radars (or ladars). There are
several motivations for developing such an
approach in the optical or visible domain.
The first is simply that humans are used to
seeing the world at optical wavelengths.
Optical synthetic-aperture imagery would
potentially be easier for humans to interpret, even without specialized training. Second, optical wavelengths are around 10,000 times shorter than radio-frequency wavelengths and can therefore provide much finer spatial resolution and much faster imaging times. Finally, unlike passive imagery, ladar, like radar, provides its own illumination and can generate imagery day or night. Despite some early laboratory experiments, optical synthetic-aperture imaging has not been realized before because of the extreme difficulty of maintaining comparable phase stability and wide-bandwidth waveforms in the optical domain, especially at the high laser powers required for long-range operation. Advances in both laser technology and signal processing are now at a stage where such systems may be realizable.
The SAIL concept: A platform with a transmit/
receive module evolves a synthetic aperture by
moving some distance while illuminating a target some distance away and receiving the scattered light. The transmitted light has a wide-bandwidth waveform imposed upon it. The illuminating spot size at the target is the same as
the diffraction-limited resolution of the transceiver optic. This “real-aperture” resolution is proportional to the wavelength of the transmitted
light and the range to the target, and inversely
proportional to the transceiver optic size. The
synthetic-aperture imaging resolution along the
direction of travel is determined by the diffraction limit of the synthetic aperture and is inversely proportional to the change in azimuth angle as seen by an observer at the target, and therefore inversely proportional to the length of the evolved synthetic aperture. The range resolution is the speed of light
divided by twice the transmit bandwidth.
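In symbols, with λ the wavelength, R the range to the target, D the transceiver aperture diameter, L the evolved synthetic-aperture length, Δθ ≈ L/R the azimuth angle subtended at the target, B the transmit bandwidth, and c the speed of light, the caption’s three resolutions take the standard synthetic-aperture forms (a summary of the relations stated in prose above, not additional system detail):

$$ \rho_{\mathrm{real}} \approx \frac{\lambda R}{D}, \qquad \rho_{\mathrm{azimuth}} \approx \frac{\lambda}{2\,\Delta\theta} \approx \frac{\lambda R}{2L}, \qquad \rho_{\mathrm{range}} = \frac{c}{2B}. $$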
[System schematic diagram: a waveform generator drives the laser; the transmit path passes through a circulator to the moving transceiver, and the return signal is mixed with the local oscillator and digitized (A/D); an HCN wavelength reference and trigger channel, plus a reference channel, feed the signal-processing and image-formation stages.]
System schematic. The signal is fed into an optical splitter with one percent directed to a molecular wavelength reference (labeled “HCN,” for hydrogen cyanide) to ensure that each chirped pulse begins at the same optical frequency. The remaining signal passes through another optical splitter with one percent directed to the local oscillators and the reference arm. The main part of the transmitted light passes through the optical circulator and on to the transceiver optics, which direct the laser beam to the target. The reflected signal passes back through the transceiver optics and an optical circulator and is mixed with the optical local oscillator for heterodyne conversion at the detector. The electrical signal from the detector is then fed to an analog-to-digital converter and on to the computer. The transceiver aperture is translated in 50-micron steps with a stepping translation stage to create the synthetic aperture, and one frequency-swept pulse is emitted at each transceiver position.
The Aerospace Approach
The Aerospace experimental approach is
called synthetic-aperture imaging ladar, or
SAIL. The SAIL concept is best envisioned
in terms of a platform with a transmit/
receive module moving on a trajectory, illuminating a target and receiving the scattered
light. The spot size at the target is determined by the diffraction limit of the transceiver optic; this corresponds to the imaging
resolution for a conventional imager. The
imaging resolution in the direction of sensor
motion is determined by the diffraction limit
of the synthetic aperture, a function of the
synthetic-aperture length developed during
a period of flight. The resolution in the
range direction is determined by the bandwidth of the transmitted waveform.
Unlike the resolution of a real-aperture
imager, the attainable resolution of a
synthetic-aperture system is essentially
independent of range. Of course, nothing
comes for free, and SAIL operation at
longer ranges requires greater laser power.
Some of the earliest “synthetic-aperture”
experiments in the optical domain were performed in the late 1960s and demonstrated
inverse synthetic-aperture imaging of a
point target simply swinging on a pendulum
(“inverse” in the sense that the target, rather than the platform, is in motion). Recent efforts at MIT’s Lincoln Laboratory have included the use of an Nd:YAG microchip laser to demonstrate inverse synthetic-aperture imaging in one dimension with
conventional diffraction-limited imaging in
the other dimension (using a high-aspect-ratio aperture) to produce two-dimensional
images. The Naval Research Laboratory has
also demonstrated fully two-dimensional
inverse SAIL imaging of a translated target
using 10 nanometers of optical bandwidth
at a wavelength of 1.55 microns. The SAIL
images obtained at Aerospace represent the
first true optical synthetic-aperture images
made using a moving transmit/receive aperture, as well as the first SAIL image from a
diffuse scattering target.
Experimental Setup
The Aerospace experiments employ 1.5-micron semiconductor and fiber laser transmitters and related components commonly
found in the commercial fiber-optic telecom
industry. This approach was motivated by
several considerations. To begin with, researchers were interested in SAIL system
design, image formation, and phenomenology—not component development. Operation at 1.5-micron wavelengths allows the
researchers to apply both the commercial
telecom component base and expertise
gained in other photonics research activities
at Aerospace. Furthermore, the 1.5-micron
wavelength is in the nominally eyesafe
wavelength regime and relatively close to
the visible region of the spectrum (where
people are used to seeing the world). Finally, researchers sought to maintain scalability to long-range operation without significant technology or design changes, and
1.5-micron fiber laser technology is compact, efficient, relatively robust, and potentially scalable to high-power operation.
The SAIL image formation process
requires measurement of the phase history
of the returned ladar signals throughout the
synthetic-aperture formation time, just as in
synthetic-aperture radar. This is accomplished using coherent (heterodyne) detection, wherein the return signals are optically
mixed with a stable local oscillator. The
local oscillator acts as an onboard optical
phase reference, and when the return signal
and local oscillator are superimposed on a
photodetector, the resulting electrical signal
contains information about the phase and
frequency difference between them. In
Aerospace experiments, the local oscillator
is derived from the same laser as the transmitted pulses and has the same waveform
imposed upon it.
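The toy simulation below illustrates the essence of this dechirp-style heterodyne measurement: a local oscillator and a delayed return that share the same linear frequency sweep mix down to a beat note whose frequency is proportional to range. All numbers are made up for illustration and are not the experiment’s parameters; physically, the mixing happens on the photodetector before digitization, so only the low-frequency beat has to satisfy the A/D sampling limit.

```python
# Toy dechirp/heterodyne sketch: a shared chirp waveform on the local
# oscillator (LO) and the return; their product yields a beat note at
# K * tau, which encodes the round-trip delay tau. Illustrative values only.
import numpy as np

c = 3.0e8                 # speed of light (m/s)
sweep_bw = 1.0e12         # swept optical bandwidth (Hz), assumed
T = 0.2                   # sweep (pulse) duration (s), assumed
K = sweep_bw / T          # chirp rate (Hz/s)
fs = 200e3                # A/D sample rate (Hz), assumed
R = 2.5                   # range to target (m), lab scale

t = np.arange(0.0, T, 1.0 / fs)
tau = 2.0 * R / c                                  # round-trip delay

lo = np.exp(1j * np.pi * K * t**2)                 # chirped LO phase
ret = np.exp(1j * np.pi * K * (t - tau)**2)        # delayed return
beat = lo * np.conj(ret)                           # heterodyne product

spec = np.abs(np.fft.fft(beat))
freqs = np.fft.fftfreq(len(beat), 1.0 / fs)
f_beat = abs(freqs[int(np.argmax(spec))])

print(f"expected beat: {K * tau:.0f} Hz, measured: {f_beat:.0f} Hz")
print(f"range estimate: {f_beat * c / (2.0 * K):.3f} m")
```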
The first stage of development focused
on a laboratory-scale demonstration, which
permits easy system modifications and
allows researchers to build knowledge of
image formation and phenomenology. Because the ranges involved are fairly short
(roughly 2–3 meters) and the targets quite
small (from a few millimeters to a few centimeters), the laboratory-scale system requires extremely high spatial resolution.
Range resolution is inversely proportional
to the transmission bandwidth, which
means that very large optical bandwidths
are required. In this sense, the lab-scale
work is actually more challenging than
Signal Processing for SAIL
The Aerospace SAIL experiments used a series of pulses in which the
optical frequency was swept quasi-linearly in time over a bandwidth
greater than 1000 gigahertz. The linearity and stability of such
broadly tunable sources are quite poor, leading to significant phase
errors. To handle these errors, Aerospace researchers developed
new digital signal processing techniques for mitigating the waveform
instability problem and applied nonparametric phase gradient techniques for the pulse-to-pulse phase errors.
First, for each pulse, a Fourier transform is applied to the real values obtained from both the target and reference channels. The result is a conjugate symmetric spectrum whose sample index corresponds to the range or travel time relative to the propagation time through the “local oscillator” fiber path. Only those frequencies corresponding to echoes from the target and the reference channels are saved. This is known as a windowing operation. An inverse Fourier transform is applied to convert the windowed data back to the time or range wave-number domain. A sequence of candidate phase-error corrections, derived from the reference channel, is applied to the target channel. For each candidate, a range-domain “sharpness” metric is computed to measure the range focus. The peak of the sharpness-metric curve corresponds to the reference-channel phase-error scale factor where best focus is observed.

Next, the sharpness-metric curves are averaged over the pulse index to obtain a composite sharpness-metric curve. The peak in the composite curve corresponds to the reference-channel phase-error scale factor providing the best range focus for all of the pulses in aggregate. For each pulse, the scaled phase-error correction from the reference channel is applied to the target channel, using the scale factor determined from the composite sharpness metric. At this point, most of the range-phase error has been removed for each pulse.
A range Fourier transform is then applied to each phase-error-corrected pulse from the target channel. At this point, the data is
range compressed. Phase-gradient autofocus techniques are then
applied to the range-compressed target data to obtain a nonparametric estimate of the pulse-to-pulse phase errors. The correction is
then applied to the range-compressed target data. The azimuth
Fourier transform is then applied to the range-compressed data to
obtain a SAIL image. The azimuth focus can now be further refined
via reestimation of the azimuth quadratic phase error. Similarly, the
range focus can be refined via reestimation of the range quadratic
phase error, resulting in the final focused SAIL image.
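The sketch below mimics the heart of the procedure just described: scanning candidate scale factors for the reference-channel phase-error correction and scoring each by a range-domain sharpness metric. The signal model, the shape of the phase error, and the particular metric (sum of squared power) are stand-ins chosen for illustration; the actual Aerospace processing chain is described only in prose above.

```python
# Schematic sharpness-metric scan with hypothetical data (not flight code).
import numpy as np

rng = np.random.default_rng(0)
n = 4096                                      # samples per pulse (assumed)
t = np.linspace(0.0, 1.0, n, endpoint=False)

# Hypothetical reference-channel phase-error measurement (radians), and a
# target echo at range bin 300 blurred by that same error at scale 1.0.
phi_ref = 40.0 * np.sin(2.0 * np.pi * 3.0 * t) + 15.0 * t**2
true_scale = 1.0
target = np.exp(1j * (2.0 * np.pi * 300.0 * t + true_scale * phi_ref))
target += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def sharpness(profile):
    """Range-domain sharpness metric: sum of squared power."""
    p = np.abs(profile) ** 2
    return float(np.sum(p * p))

# Scan candidate scale factors for the reference-channel correction and
# keep the one whose range profile is sharpest.
scales = np.linspace(0.0, 2.0, 201)
scores = [sharpness(np.fft.fft(target * np.exp(-1j * s * phi_ref)))
          for s in scales]
best = scales[int(np.argmax(scores))]
print(f"best-focus scale factor: {best:.2f} (true value: {true_scale})")
```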
[Four-panel figure. Top left: normalized power (decibels) versus range/frequency index, comparing the ideal impulse response (IPR, no phase errors) with the measured, defocused response. Top right: phase error (degrees) versus time (seconds) against the ideal phase. Bottom left: power (decibels) versus range index before and after IPR compensation, with a spurious response marked. Bottom right: pulse-to-pulse phase error (degrees) versus pulse index.]
A Fourier transform applied to a single pulse would ideally produce a single narrow peak, the theoretical system impulse response function, as shown by the red curve (top left). Instead, a broad (highly defocused) curve is obtained (orange curve). This discrepancy is caused by the many tens of thousands of degrees of phase error (top right). Applying a single-pulse phase-error correction algorithm results in a well-focused one-dimensional “range image,” as shown in the next figure (bottom left). The upper curve is the range image before phase-error correction, and the lower curve is the well-focused result after. At this point, although the range is well focused, the azimuth direction is completely out of focus because of pulse-to-pulse phase errors. These errors, as shown in the final figure (bottom right), can be measured and thus corrected for, leading to the high-quality SAIL images on the following two pages.
SAIL image of a triangle
cut from a piece of retroreflective material. The target
is tilted away from the
observer at 45 degrees to
create the range depth,
which is why the laser spot
size appears elliptical. The
SAIL image thus displays
range in the vertical
dimension and cross-range
or azimuth in the horizontal
dimension. Inset is a close-range photograph of the target triangle; the white lines on the target do not
have the retroreflective material and appear as dark lines in the SAIL image.
The fuzzy image above the figure is a “beam-scan” image, where the range
has been focused but the azimuth processing has not been performed. The
beam-scan image represents the real-aperture diffraction-limited spot scanned
across the target shape; it gives an impression of the image quality and
resolution that would be obtained in the absence of aperture synthesis. Phase-gradient autofocus techniques were used to estimate and remove the pulse-to-pulse phase errors, which, if not removed, would render the image
unrecognizable. The dark diagonal lines through the target image are less
than a millimeter wide. As a result of “laser speckle,” this image has a signal-to-noise ratio of about 1. The eye ignores this speckle noise, and the image is
still highly interpretable. The focus is reasonably good, although the number
of SAIL range resolution elements per illuminating spot size is only about 60.
This result indicates that synthetic-aperture ladar imaging is possible despite the
nonlinearity and instability of the laser waveform.
longer-range applications. The transmitter
for the lab-scale system is a commercial
tunable external-cavity semiconductor laser
capable of producing a nominally linear
frequency-swept waveform with a bandwidth of 30 nanometers (more than 1 terahertz). Compared with the RadarSat-II
waveform mentioned earlier, the system can
achieve range resolution on the order of
10,000 times finer.
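The conversion behind that comparison is worth making explicit. A 30-nanometer sweep at a 1.5-micron center wavelength corresponds to roughly 4 terahertz of optical bandwidth (consistent with the “more than 1 terahertz” figure), and range resolution follows from c/2B:

```python
# Converting the quoted 30-nanometer optical sweep into bandwidth and
# range resolution (numbers from the text; the conversion is standard).
C = 3.0e8        # speed of light (m/s)
LAM = 1.5e-6     # center wavelength (m)
DLAM = 30e-9     # swept wavelength range (m)

bandwidth = C * DLAM / LAM**2        # B = c * dlambda / lambda^2
range_res = C / (2.0 * bandwidth)    # c / (2B)

print(f"bandwidth: {bandwidth / 1e12:.1f} THz")        # ~4 THz
print(f"range resolution: {range_res * 1e6:.0f} um")   # tens of microns
```

Set against meter-class radar range resolution, tens of microns is indeed roughly four orders of magnitude finer.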
The linearity and stability of such
broadly tunable sources are quite poor, leading to significant phase error in the linear
frequency-modulated waveform as well as
residual amplitude modulation. This is a
common dilemma: tunable sources are not
highly stable, and stable sources are not
generally tunable. To overcome this problem, the Aerospace team used a reference
channel to directly monitor optical phase
errors induced during waveform generation.
These measured errors were corrected using
a phase-error compensation algorithm. In
the first implementations, the optical length
of the reference arm was constrained to precisely match the target range. Such a constraint would seriously limit the operational
utility of a SAIL system, because the reference channel would have to be retuned
every time the system pointed to a new
These images demonstrate the possibility of
managing image
focus regardless of
discrepancies between the reference
channel and target
channel ranges. In
the top image, the
reference and target
channels are carefully matched (see
range plot at left). In
the lower image, the
reference channel
path length does not
match the target channel path length (net mismatch approximately
1 meter; range to target is approximately 2–3 meters). A digital
compensation technique involving a special “sharpness” metric
nonetheless produces a well-focused image of the triangle target. This
image demonstrates that SAIL can employ a fiber-optic reference
channel to produce focused two-dimensional images even in cases
where the range to target is either uncertain or variable.
target. To remove this constraint, Aerospace
developed new algorithms for intrapulse
phase-error correction to handle arbitrary
mismatch between the reference arm and
the target path length.
Observations
The first experiment generated an image of
a triangle cut from a piece of retroreflective
material. The target was tilted away from
the observing platform at 45 degrees to create the range depth. The resulting SAIL image displays range in the vertical direction
and cross-range or azimuth in the horizontal
dimension. The Aerospace reference-channel
design, coupled with phase-gradient autofocus techniques, helped estimate and
remove the intrapulse and pulse-to-pulse
phase errors, which would render the image
unrecognizable. Despite the incidence of
“laser speckle” (inherent in any coherent
imaging technique), the image was still
highly interpretable to the human eye, and
the focus was reasonably good. This initial
result indicated that synthetic-aperture ladar
imaging is possible despite the instability of
the laser waveform.
The next experiment sought to demonstrate the possibility of achieving focused
imagery in the presence of large waveform
errors regardless of range mismatch in the
reference channel. (Previous work at the
Naval Research Laboratory had demonstrated a SAIL image, but the implementation required exact matching of the reference and target arms.) A discrepancy of
approximately 1 meter between the length
of the reference channel path and the length
of the target channel path was introduced. A
specialized digital compensation technique
resulted in a well-focused image of the triangle target. This phase of the experiment
demonstrated that SAIL can employ a fiber-optic reference channel to produce focused
2-D images even in cases where the range
to target is either uncertain or variable (as in
a system that points to various targets at differing ranges).
In the course of their experiments, the
researchers noticed very faint artifacts in the
imagery. Logarithmic representations of image intensity revealed “ghost” images above
and below the main image. These ghosts
were traced to residual amplitude-modulation ripple in laser intensity. (With the
source of the artifacts identified, researchers
can now develop an algorithm to correct for
them.) These results demonstrated a move
beyond simply “making pictures” to doing
detailed image-quality analysis.
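The mechanism is easy to reproduce numerically: multiplying a return by a small periodic intensity ripple adds symmetric sidebands offset by the ripple frequency, and those sidebands focus into faint displaced copies of the scene. The sketch below is a one-dimensional toy model with an assumed 1 percent ripple, not data from the experiment.

```python
# Toy illustration of ghost images from residual amplitude-modulation
# ripple: the ripple puts sidebands around the target return.
import numpy as np

n = 2048
t = np.arange(n) / n
tone = np.exp(2j * np.pi * 200 * t)                # "true" target return
ripple = 1.0 + 0.01 * np.cos(2 * np.pi * 40 * t)   # 1% AM ripple (assumed)

spec = np.abs(np.fft.fft(tone * ripple)) / n
main = 20 * np.log10(spec[200])    # main return at bin 200
ghost = 20 * np.log10(spec[240])   # sideband ("ghost") at bin 200 + 40
print(f"ghosts are {main - ghost:.0f} dB below the main return")
```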
In this image, the triangle target is shown as the
logarithm of the image intensity to bring out very
faint features (and artifacts) in the image. Faintly
visible are two “ghost” images above and
below the main image. (The log-intensity
grayscale has even been slightly saturated to make the ghosts, which are about 35–40 decibels fainter than the main image, visible.) These ghosts
are caused by residual amplitude-modulation
ripple in the laser intensity, as can be seen in the
inset figures at right. Thus, they can be removed
by applying a suitable correction.
Researchers then experimented with a
larger, more complex target made of the
same patterned retroreflective material as
the triangle target. A transparency was
placed in front of the target to serve as a
crude “phase-screen” between the target
and the transceiver. The new target was also
larger than the diffraction-limited illuminating spot size, so the image had to be formed
by scanning the laser spot in five strips across
the target and tiling the results. The image
quality was somewhat degraded because of
the transparency, but the pattern of the
retroreflective material was clearly visible.
The final SAIL experiment used a target with both a diffuse (non-retroreflecting)
surface and a specular surface. With the target leaning away at 45 degrees, the diffuse-scattering surface appears bright, but the
specular surface returns very little light to
the receiver because it is reflected away.
The image was reasonably well focused,
and smooth edges in the target were just
visible. This image represented the first optical synthetic-aperture image of a diffuse-scattering object.
Conclusion
These first laboratory steps demonstrated
the proof of concept for SAIL imagery and
will allow Aerospace to develop refinements
to the signal-processing algorithms. Of course, many real-world complications will arise in transferring these techniques to airborne or spaceborne platforms. For example, because SAIL is inherently a narrow-field-of-view technique (like looking down a soda straw), real-world implementations will require robust methods for tiling many small patches to form large composite images. Other concerns include atmospheric turbulence, unmodeled platform motion, target motion, and pointing control. The next step, currently under way, is the development of a rooftop test bed to explore some of these issues. In conjunction with the SAIL project, Aerospace has developed a balanced, phase-quadrature laser vibrometer to monitor line-of-sight optical phase errors during the SAIL image formation process.

This image shows a more complex target. It’s made of the same patterned retroreflective material as the triangle targets (also tilted at 45 degrees) placed behind a transparency with the sailboat image laser-printed on it. The target, about 1 centimeter tall, is larger than the diffraction-limited illuminating spot size (shown schematically at right). Thus, the image was formed by scanning the laser spot in five strips across the target and tiling the results (only three strips were used in this example). The image quality is somewhat degraded (note the streaks at the top of the sails) because of the transparency (which can be viewed as a crude “phase-screen” between the target and the transceiver); nonetheless, the pattern of the retroreflective material is clearly visible, as is the wavy water below the sails.
The Defense Advanced Research Projects Agency and the Air Force Research
Laboratory have initiated a program called
SALTI (synthetic-aperture ladar tactical imaging) aimed at a proof-of-concept airborne
demonstration to generate high-resolution
2-D and 3-D SAIL imagery combining the
interpretability of electro-optical imaging,
the long-range day-or-night access of high-altitude X-band synthetic-aperture radar,
and the exploitability of 3-D ladar. SAIL has also been proposed for imaging and mapping planets such as Mars.

The final SAIL image captures a target with both a diffuse-scattering surface and a specular surface. A close-range photo of the target is inset. The diffuse-scattering surface appears bright, but the specular surface returns very little light to the receiver because it is reflected away at 45 degrees. The image is reasonably well focused, and the edge of the smooth metal surface (somewhat scuffed) surrounding the “circle A” is also barely visible. This image represents the first optical synthetic-aperture image of a diffuse (non-retroreflecting) object.
Further Reading
W. F. Buell, N. J. Marechal, R. P. Dickinson, D.
Kozlowski, T. J. Wright, J. R. Buck, and S. M.
Beck, “Synthetic Aperture Imaging Ladar: Lab
Demo and Signal Processing,” Proceedings of
the 2003 Military Sensing Symposia: Active EO
Systems (2003).
W. F. Buell, N. J. Marechal, D. Kozlowski, R. P.
Dickinson, and S. M. Beck, “SAIL: Synthetic Aperture Imaging Ladar,” Proceedings of the 2002 Military Sensing Symposia: Active EO Systems (2002).
T. J. Green et al., “Synthetic Aperture Radar
Imaging with a Solid-State Laser,” Applied Optics, Vol. 34, p. 6941 (1995).
M. Bashkansky et al., “Two-Dimensional Synthetic Aperture Imaging in the Optical Domain,”
Optics Letters, Vol. 27, pp. 1983–1985 (2002).
C. V. Jakowatz, D. E. Wahl, P. H. Eichel, D. C.
Ghiglia, and P. A. Thompson, Spotlight-Mode
Synthetic Aperture Radar: A Signal Processing
Approach (Kluwer Academic Publishers,
Boston, 1996).
A. V. Jelalian, Laser Radar Systems (Artech House, Boston, 1992).
R. L. Lucke and L. J. Rickard, “Photon-Limited
Synthetic Aperture Imaging for Planet Surface
Studies,” Applied Optics, Vol. 41, pp. 5084–5095
(2002).
Commercial
Remote Sensing and
National Security
Aerospace helped craft government policy allowing
satellite imaging companies to sell their products and
services to foreign customers—without compromising
national security.
Dennis Jones
In February 2002, Colombian President
Andres Pastrana appeared on his nation’s television and declared an end to
peace negotiations with the Revolutionary Armed Forces of Colombia, an insurgent group that the government had been
fighting for decades. In supporting his decision, Pastrana held up satellite photographs
of clandestine road networks developed in
the demilitarized zone in the south of
Colombia—a violation, he argued, of the
two-year-old peace process. The photos he
held up for his nation and the world to witness were not declassified images from a
Colombian military satellite, nor were they
from any U.S. defense system. They were
purchased from a U.S. commercial satellite
company.
Pastrana’s display of commercial satellite imagery received little notice in the media, which was naturally more concerned
with his policy announcements. It was,
however, one of the most dramatic manifestations of a policy signed by President
Clinton in 1994, Presidential Decision
Directive-23, U.S. Policy on Foreign Access
to U.S. Remote Sensing Capabilities. Aerospace had a role in implementing that policy
and later helped shape the directive that
would succeed it.
A Landmark Directive
Presidential Decision Directive-23 (or
PDD-23, as it is commonly known) had its
roots in the Land Remote Sensing Policy
Act of 1992, which established the terms
for civil and commercial remote sensing in
the U.S. Government. The act designated
the National Oceanic and Atmospheric Administration (NOAA) as the chief regulatory agency for the commercial remote-sensing industry and outlined the general
terms and conditions required to obtain a license to operate a remote-sensing satellite
in the United States. These included, for example, the submission of on-orbit technical
characteristics of the proposed system for
NOAA review. The act also stipulated that a
licensee “operate the system in such a manner as to preserve the national security of
the United States and to observe the international obligations of the United States.”
These conditions required the government
to investigate the ambiguous nexus between
technology development and national security and decide on the best course of action.
Accordingly, Aerospace began conducting
research and analysis to assist the investigation and decision-making process.
PDD-23 was in many ways a response
to the end of the Cold War. At that time, major manufacturers of classified satellite systems feared that the elimination of the Soviet Union as an adversary would lead to
reduced government spending for national
technical architectures. These companies
lobbied the administration to permit commercialization of previously classified satellite imaging capabilities as a means to sustain the satellite industrial base and promote
U.S. products and services overseas.
The Clinton directive sought to balance
the need to protect sensitive technology
from proliferating while advancing the fortunes of U.S. companies that desired to enter this new market. The policy tilted, albeit
slightly, toward national security by including provisions for the suspension of commercial operations by the Secretary of
Commerce (in consultation with the Secretaries of Defense and State) when U.S.
troops were at risk or when the nation faced
a clear and present danger. The Clinton policy included general guidelines for licensing
commercial capabilities, supporting the
goal of maintaining the U.S. industry’s lead
over international competitors. The policy
refrained from articulating a clear set of operating capabilities, leaving it to the interagency process to make licensing determinations on a case-by-case basis.
Aerospace would come to play a major
role in facilitating this interagency process.
When the government asked for assistance
in interpreting and implementing PDD-23,
Aerospace supported the drafting of a new
remote-sensing policy for the Director of
Central Intelligence and assisted in the creation and implementation of the CIA’s Remote Sensing Committee, chaired by the
National Reconnaissance Office (NRO) and
the National Geospatial-Intelligence
Agency (formerly the National Imaging
and Mapping Agency, or NIMA). Aerospace also assisted the NRO in fulfilling its
role as the licensing coordinator for the intelligence community and oversaw Department of Defense (DOD) implementation
actions, shepherding numerous licensing
and export control issues through the DOD
clearance process. In addition, Aerospace
analyzed thousands of space technology
export license requests and dozens of commercial operating license actions and provided timely, in-depth analysis of foreign
remote-sensing capabilities to ensure the
balance between commercial competitiveness and national security protection was
maintained.
A New Directive

The Clinton policy was a watershed for the remote-sensing community. It allowed U.S. companies to build, launch, and operate high-resolution satellites with frequent-revisit, high-data-rate collection and, in some cases, regional tasking, downlinking, and nearly instantaneous processing. The policy heralded a new era in which almost any consumer with available resources—from governments to private citizens—could purchase high-resolution images of almost any point on Earth.

Still, the commercial market for such imagery—both domestic and international—did not materialize as rapidly or as broadly as anticipated. Pastrana’s display highlighted both the promise and frustrations of the burgeoning industry: The images and their derived products had immediate applications, but the widespread adoption by repeat or large-volume customers was slow to develop. Thus, the sustainability of the commercial satellite imaging industry still was less than certain nearly a decade after its inception.

The Bush administration sought to change that with the approval of a new National Security Presidential Directive on U.S. commercial remote sensing in April 2003. This policy provided a strong government rationale for procuring high-resolution imagery from U.S. providers, established a framework for international access to high-resolution remote-sensing technology, encouraged civil departments and agencies to integrate high-resolution data into daily operations, and more clearly delineated U.S. Government roles and responsibilities regarding the commercial industry. This was the president’s first major policy directive under the auspices of a comprehensive National Security Council review of space policy matters. Aerospace played a key role in supporting the DOD, NRO, and intelligence community in assisting the National Security Council in the drafting, coordination, and eventual approval of the new directive.

The Bush administration’s policy, like that of the Clinton administration, sought both to advance and protect U.S. national security and foreign policy interests “through maintaining the nation’s leadership in remote sensing space capabilities.” However, the Bush directive went further by suggesting that “sustaining and enhancing the U.S. remote sensing industry” would also help achieve that goal. In other words, a strong U.S. commercial remote-sensing industry could be good for business and good for national security.

The Bush administration’s policy also offered a more aggressive U.S. Government approach to commercial remote sensing by defining what role commercial imagery would play in satisfying government requirements. The most fundamental shift was in mandating that the government “rely to the maximum practical extent on U.S. commercial remote sensing space capabilities for filling imagery and geo-spatial needs for military, intelligence, foreign policy, homeland security, and civil users.”

With that role for commercial imagery delineated, the policy established how the government would realign its own imagery collection efforts to meet national needs—for example, by focusing on more challenging intelligence and defense requirements and advanced technology solutions. PDD-23 never specified a defined mission for U.S. commercial imagery.
Space Imaging
This commercial 1-meter resolution satellite image of the Kandahar
airfield in Afghanistan was collected on April 23, 2001, by Space
Imaging’s IKONOS satellite. Military aircraft parked in revetments
are visible off the west end of the runway (image is displayed with
north up). A commercial aircraft is visible parked near the terminal.
Space Imaging
This commercial 1-meter resolution satellite image of the Kandahar airfield in Afghanistan
was collected on Oct. 10, 2001, by Space Imaging’s IKONOS satellite. The damage visible to the airfield's runway, taxiway and revetments is evidence of the precision delivery
of the coalition ordnance. There is no visual evidence of damage to noncritical areas. Ordnance impacts are especially evident when compared to a “before” image of the same
airfield taken by the IKONOS satellite on April 23, 2001. The Kandahar airfield is located
southeast of the city of Kandahar. IKONOS travels 680 kilometers above Earth’s surface
at a speed of more than 28,000 kilometers per hour. It’s the world’s first commercial high-resolution remote-sensing satellite. Image is displayed with north up.
Space Imaging
Satellite image showing where Saddam Hussein was captured at Ad Dawr in Iraq. The
location of the inset is in the upper left-hand corner of the larger image.
Satellite image of the area where U.S. forces struck on the first night of the Iraq war (Dora Farm), believed to be Saddam Hussein’s bunker that night.
The Government Role
According to the Bush administration’s policy on commercial remote sensing, the U.S. Government will:
• Rely to the maximum practical extent on U.S. commercial remote sensing space capabilities for filling imagery and
geospatial needs for military, intelligence, foreign policy, homeland security, and civil users;
• Focus U.S. Government remote sensing space systems on meeting needs that cannot be effectively, affordably,
and reliably satisfied by commercial providers because of economic factors, civil mission needs, national security concerns,
or foreign policy concerns;
• Develop a long-term, sustainable relationship between the U.S. Government and the U.S. commercial remote
sensing space industry;
• Provide a timely and responsive regulatory environment for licensing the operations and exports of commercial remote
sensing space systems; and
• Enable U.S. industry to compete successfully as a provider of remote sensing space capabilities for foreign governments
and foreign commercial users, while ensuring appropriate measures are implemented to protect national security and
foreign policy.
Interestingly, the Clinton directive contained no references to “geo-spatial” needs at all. The world of commercial remote sensing and the U.S. Government’s adoption of it as a critical source of information had clearly evolved.

Aerospace personnel supported the development of the Bush administration’s National Security Presidential Directive from its inception through approval, including interagency debate and coordination. Even now, Aerospace personnel are assisting in the policy’s implementation. The same Aerospace offices that helped implement PDD-23 were tapped once again for their understanding of that directive as well as for their insight into foreign governmental and commercial technology developments in the remote-sensing marketplace. The Aerospace policy cadre assisted the government in ensuring that the new policy was cognizant of all aspects of commercial remote-sensing policy history and lessons learned. Aerospace personnel also ensured that policy makers possessed a strong appreciation for the technical aspects and national security and commercial implications of the new remote-sensing policy.

Aerospace is leading a major effort to develop the Sensitive Technologies List, which will provide the State Department, in its lead role for export control, with comprehensive information about space technologies and serve as a guide for deciding export licenses for space systems, components, and technologies. Aerospace continues to support the government in the negotiation and implementation of government-to-government agreements and other international frameworks concerning the transfer of sensitive space technology. The Aerospace team also supports U.S. delegations in their consultations with foreign allies on the adoption of effective national policies and regulatory regimes to manage the operation and possible proliferation of space technology.

Aerospace helped the NRO and National Geospatial-Intelligence Agency develop a strategy in 1999 that would integrate commercial imagery into current and future architectures. In 2001, Aerospace again assisted with further policy research and technical analysis to support an update of the strategy; both versions called for significant top-line funding increases for commercial imagery purchases, integration, and readiness. These efforts culminated in the ClearView and NextView programs. Under the ClearView program, the National
Geospatial-Intelligence Agency agreed to
purchase a minimum level of imagery data
over a five-year period; several contracts
have been awarded for satellite imagery
with nominal ground sampling distances of
one meter or less. NextView moves beyond
the commodity-based approach of commercial imagery acquisition and seeks to ensure
access, priority tasking rights, area coverage, and broad licensing for sharing imagery with all potential mission partners.
Initial contracts will provide ground sampling distance down to half a meter.
Conclusion
Developments in commercial remote sensing have required Aerospace to adapt its traditional strengths to assist the U.S. Government in crafting and implementing sound
policy for the benefit of the national security and commercial space communities.
The National Geospatial-Intelligence
Agency, for example, has asked Aerospace
to help its Commercial Imagery Center
manage the NextView program and provide
advice and guidance in the creation of a
branch office to manage commercial imagery policy, plans, and strategy. Through
these and other efforts, Aerospace will continue to help U.S. defense intelligence agencies define, communicate, and fulfill their
critical geospatial imaging needs.
Bookmarks: Recent Publications and Patents by the Technical Staff
Publications
S. Alfano, “Determining Probability Upper
Bounds for NEO Close Approaches,”
2004 Planetary Defense Conference:
Protecting Earth from Asteroids (Orange
County, CA, Feb. 23–26, 2004), AIAA
Paper 2004-1478.
P. E. Andersen, L. Thrane, H. T. Yura, A. Tycho, and T. M. Jorgensen, “Modeling the
Optical Coherence Tomography Geometry Using the Extended Huygens–Fresnel
Principle and Monte Carlo Simulations,”
Saratov Fall Meeting 2002: Optical
Technologies in Biophysics and Medicine
IV (Oct. 14, 2003), SPIE, Vol. 5068, pp.
170–181.
M. J. Barrera, “Conceptual Design of an Asteroid Interceptor for a Nuclear Deflection Mission,” 2004 Planetary Defense
Conference: Protecting Earth from Asteroids (Orange County, CA, Feb. 23–26,
2004), AIAA Paper 2004-1481.
J. D. Barrie, P. D. Fuqua, B. L. Jones, and N.
Presser, “Demonstration of the Stierwalt
Effect Caused by Scatter from Induced
Coating Defects in Multilayer Dielectric
Filters,” Thin Solid Films, Vol. 447–448,
pp. 1–6 (Jan. 30, 2004).
J. Camparo, “Fluorescence Fluctuations
from a Multilevel Atom in a Nonstationary Phase-Diffusion Field: Deterministic Frequency Modulation,” Physical Review A: Atomic, Molecular, and
Optical Physics, Vol. 69, No. 1, pp.
013802/1–11 (2004).
E. T. Campbell and L. E. Speckman, “Preliminary Design of Feasible Athos Intercept Trajectories,” 2004 Planetary Defense Conference: Protecting Earth from
Asteroids (Orange County, CA, Feb.
23–26, 2004), AIAA Paper 2004-1454.
V. Chobotov, “The Space Elevator Concept
as a Launching Platform for Earth and
Interplanetary Missions,” 2004 Planetary
Defense Conference: Protecting Earth
from Asteroids (Orange County, CA, Feb.
23–26, 2004), AIAA Paper 2004-1482.
J. G. Coffer, B. Sickmiller, and J. C. Camparo, “Cavity-Q Aging Observed Via an
Atomic-Candle Signal,” IEEE Transactions on Ultrasonics, Ferroelectrics and
Frequency Control, Vol. 51, No. 2, pp.
139–145 (Feb. 2004).
D. DeAtkine, J. McLeroy, and J. Steele,
“Department of Defense Experiments on
the International Space Station Express
Pallet,” 42nd AIAA Aerospace Sciences
Meeting and Exhibit (Reno, NV, Jan.
5–8, 2004), AIAA Paper 2004-0442.
M. El-Alaoui, R. L. Richard, M. Ashour-Abdalla, and M. W. Chen, “Low Mach
Number Bow Shock Locations During a
Magnetic Cloud Event: Observations and
Magnetohydrodynamic Simulations,”
Geophysical Research Letters, Vol. 31, p.
L03813 (Feb. 4, 2004).
J. S. George, R. Koga, K. B. Crawford, P.
Yu, S. H. Crain, and V. T. Tran, “SEE
Sensitivity Trends in Non-hardened High
Density SRAMs with Sub-micron Feature Sizes,” IEEE Radiation Effects Data
Workshop Record (Monterey, CA, Jul.
21–25, 2003), pp. 83–88.
D. L. Glackin, J. D. Cunningham, and C. S.
Nelson, “Earth Remote Sensing with
NPOESS: Instruments and Environmental Data Products,” Proceedings of the
SPIE—The International Society for Optical Engineering, Vol. 5234, No. 1, pp.
123–131 (2004).
T. J. Grycewicz, T. S. Lomheim, P. W. Marshall, and P. LeVan, “The 2002 SEEWG
FPA Roadmap,” Focal Plane Arrays for
Space Telescopes (San Diego, CA, Jan.
12, 2004), SPIE, Vol. 5167, pp. 31–37.
S. G. Hanson and H. T. Yura, “Complex
ABCD Matrices: an Analytical Tool for
Analyzing Light Propagation Involving
Stochastic Processes,” 19th Congress of
the International Commission for Optics:
Optics for the Quality of Life (Nov. 18,
2003), SPIE, Vol. 4829, pp. 592–593.
C. B. Harris, P. Szymanski, S. Garrett-Roe,
A. D. Miller, K. J. Gaffney, S. H. Liu,
and I. Bezel, “Electron Solvation and Localization at Interfaces,” Physical Chemistry of Interfaces and Nanomaterials II
(San Diego, CA, Dec. 8, 2003), SPIE,
Vol. 5223, pp. 159–168.
A. R. Hopkins, R. A. Lipeles, and W. H.
Kao, “Electrically Conducting Polyaniline Microtube Blends,” Thin Solid
Films, Vol. 447–448, pp. 474–480 (Jan.
30, 2004).
J. Huang, S. Virji, B. H. Weiller, and R. B.
Kaner, “Nanostructured Polyaniline
Sensors,” Chemistry, A European Journal, Vol. 10, pp. 1314–1319 (2004).
D. M. Kalitan, M. J. A. Rickard, J. M. Hall,
and E. L. Petersen, “Ignition Measurements of Ethylene-Oxygen-Diluent Mixtures with and without Silane Addition,”
42nd AIAA Aerospace Sciences Meeting
and Exhibit (Reno, NV, Jan. 5–8, 2004),
AIAA Paper 2004-1323.
H. I. Kim, P. P. Frantz, S. V. Didziulis, L. C.
Fernandez-Torres, and S. S. Perry, “Reaction of Trimethylphosphate with TiC and
VC(100) Surfaces,” Surface Science, Vol.
543, No. 1–3, pp. 103–117 (Oct. 1, 2003).
B. F. Knight and M. K. Hamilton, “A Proposed Multidimensional Analysis Function,” Technologies, Systems, and Architectures for Transnational Defense II
(Orlando, FL, Aug. 12, 2003), SPIE, Vol.
5072, pp. 80–89.
R. Koga, S. H. Crain, J. S. George, S. LaLumondiere, K. B. Crawford, C. S. Yu, and
V. T. Tran, “Variability in Measured SEE
Sensitivity Associated with Design and
Fabrication Iterations,” IEEE Radiation
Effects Data Workshop Record (Monterey, CA, Jul. 21–25, 2003), pp. 77–82.
T. Langley, R. Koga, and T. Morris, “Single-Event Effects Test Results of 512 MB
SDRAMs,” IEEE Radiation Effects Data
Workshop Record (Monterey, CA, Jul.
21–25, 2003), pp. 98–101.
D. Lynch, “Comet, Asteroid, NEO Deflection Experiment (CANDE)—An
Evolved Mission Concept,” 2004 Planetary Defense Conference: Protecting
Earth from Asteroids (Orange County,
CA, Feb. 23–26, 2004), AIAA Paper
2004-1479.
D. Lynch and G. Peterson, “Athos, Porthos,
Aramis & Dartagnan—Four Planning
Scenarios for Planetary Protection,” 2004
Planetary Defense Conference: Protecting Earth from Asteroids (Orange
County, CA, Feb. 23–26, 2004), AIAA
Paper 2004-1417.
V. N. Mahajan, “Zernike Polynomials and
Aberration Balancing,” Current Developments in Lens Design and Optical Engineering IV (San Diego, CA, Nov. 3,
2003), SPIE, Vol. 5173, pp. 1–17.
C. J. Marshall, S. C. Moss, R. E. Howard, K.
A. LaBel, T. J. Grycewicz, J. L. Barth,
and D. Brewer, “Carrier Plus: a Sensor
Payload for Living With a Star Space Environment Testbed (LWS/SET),” Focal
Plane Arrays for Space Telescopes (San
Diego, CA, Jan. 12, 2004), SPIE, Vol.
5167, pp. 216–222.
T. N. Mundhenk, N. Dhavale, S. Marmol, E.
Calleja, V. Navalpakkam, K. Bellman, C.
Landauer, M. A. Arbib, and L. Itti, “Utilization and Viability of Biologically Inspired Algorithms in a Dynamic Multiagent Camera Surveillance System,”
Intelligent Robots and Computer Vision
XXI: Algorithms, Techniques, and Active
Vision (Providence, RI, Oct. 1, 2003),
SPIE, Vol. 5267, pp. 281–292.
D. Pack, B. B. Yoo, and E. Tagliaferri, “Satellite
Sensor Detection of a Major Meteor
Event in the United States on 27 March
2003: The Park Forest, Illinois Bolide,”
2004 Planetary Defense Conference:
Protecting Earth from Asteroids (Orange
County, CA, Feb. 23–26, 2004), AIAA
Paper 2004-1407.
G. Peterson, “NEO Orbit Uncertainties and
Their Effect on Risk Assessment,” 2004
Planetary Defense Conference: Protecting Earth from Asteroids (Orange
County, CA, Feb. 23–26, 2004), AIAA
Paper 2004-1420.
G. Peterson, “Delta-V Requirements for
DEFT Scenario Objects (Defined
Threat),” 2004 Planetary Defense Conference: Protecting Earth from Asteroids
(Orange County, CA, Feb. 23–26, 2004),
AIAA Paper 2004-1475.
L. B. Rainey, Space Modeling and Simulation: Roles and Applications Throughout
the System Life Cycle (The Aerospace
Press and American Institute of Aeronautics and Astronautics, Inc., El Segundo,
CA, 2004).
D. D. Sawall, R. M. Villahermosa, R. A.
Lipeles, and A. R. Hopkins, “Interfacial
Polymerization of Polyaniline Nanofibers Grafted to Au Surfaces,” Chemistry of Materials, Vol. 16, No. 9, pp.
1606–1608 (2004).
T. Scalione, H. W. Swenson, F. De Luccia,
C. Schueler, J. E. Clement, and L. Darnton, “Post-CDR NPOESS VIIRS Sensor
Design and Performance,” Sensors, Systems, and Next-Generation Satellites VII
(Barcelona, Spain, Feb. 2, 2004), SPIE
Vol. 5234, pp. 144–155.
S. S. Shen, “Spectral Quality Equation Relating Collection Parameters to Object/
Anomaly Detection Performance,” Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX (Orlando, FL, Sept.
24, 2003), SPIE, Vol. 5093, pp. 29–36.
P. L. Smith, M. J. Barrera, E. T. Campbell,
K. A. Feldman, G. E. Peterson, and G. N.
Smit, “Deflecting a Near-Term Threat—
Mission Design for the All-Out Nuclear
Option,” 2004 Planetary Defense Conference: Protecting Earth from Asteroids
(Orange County, CA, Feb. 23–26, 2004),
AIAA Paper 2004-1447.
R. L. Thornton, R. L. Phillips, and L. C. Andrews, “Laser Communications Utilizing
Molniya Satellite Orbits,” Free-Space
Laser Communication and Active Laser
Illumination III (San Diego, CA, Jan. 27,
2004), SPIE, Vol. 5160, pp. 292–301.
S. Virji, J. Huang, R. B. Kaner, and B. H.
Weiller, “Polyaniline Nanofiber Gas Sensors: Examination of Response Mechanisms,” Nano Letters, Vol. 4, No. 3, pp.
491 (Mar. 1, 2004).
R. L. Walterscheid, G. Schubert, and D. G.
Brinkman, “Acoustic Waves in the Upper
Mesosphere and Lower Thermosphere
Generated by Deep Tropical Convection,” Journal of Geophysical Research
A: Space Physics, Vol. 108, No. A11
(Nov. 2003).
C. C. Wang and D. J. Sklar, “Metric Transformation for a Turbo-Coded DPSK
Waveform,” Journal of Wireless Communication and Mobile Computing, Vol. 3,
No. 5, pp. 609–616 (Aug. 2003).
B. H. Weiller, P. D. Fuqua, and J. V. Osborn,
“Fabrication, Characterization, and Thermal Failure Analysis of a Micro Hot
Plate Chemical Sensor Substrate,” Journal of the Electrochemical Society, Vol.
151, No. 3, pp. H59–H65 (Feb. 5, 2004).
J. E. Wessel, R. W. Farley, and S. M. Beck,
“Lidar for Calibration/Validation of
Microwave Sounding Instruments,” Lidar Remote Sensing for Environmental
Monitoring IV (San Diego, CA, Dec. 29,
2003), SPIE, Vol. 5154, pp. 161–169.
S. Yongkun and N. Presser, “Tunable
InGaAsP/InP DFB Lasers at 1.3 µm
Integrated with Pt Thin Film Heaters
Deposited by Focused Ion Beam,”
Electronics Letters, Vol. 39, No. 25, pp.
1823–1825 (Dec. 11, 2003).
C. C. Yui, G. M. Swift, C. Carmichael, R.
Koga, and J. S. George, “SEU Mitigation
Testing of Xilinx Virtex II FPGAs,”
IEEE Radiation Effects Data Workshop
Record (Monterey, CA, Jul. 21–25,
2003), pp. 92–97.
H. T. Yura and S. G. Hanson, “Variance of
Intensity for Gaussian Statistics and Partially Developed Speckle in Complex
ABCD Optical Systems,” Optics Communications, Vol. 228, No. 4–6, pp. 263–270
(Dec. 15, 2003).
Patents
S. Alfano, F. K. Chan, M. L. Greer, “Eigenvalue Quadric Surface Method for Determining When Two Ellipsoids Share
Common Volume for Use in Spatial Collision Detection and Avoidance,” U.S.
Patent No. 6,694,283, Feb. 2004.
This computationally efficient analytical
method can determine whether two
quadric surfaces have common spatial
points or share the same volume. The
technique can be used to assess the risk of
collision between two orbiting bodies: the future state of each object is represented by
a covariance-based ellipsoid, and these
ellipsoids are then analyzed to see
whether they intersect. If so, then a collision risk is indicated. The method involves adding an extra dimension to the
solution space, providing an extra dimensional product matrix. The eigenvalues
from this matrix are examined to identify
any that are associated with degenerate
quadric surfaces. If any are found, they
are further examined to identify those
that are associated with intersecting degenerate quadric surfaces. The method
provides direct share-volume results
based on comparisons of the eigenvalues,
which can be rapidly computed. The
method can also be used to determine
whether two ellipses only appear to share
the same projected area based on viewing angle.
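For illustration, this style of eigenvalue test can be sketched with the published quadric-pencil separation condition (two distinct positive real roots of det(λA + B) = 0 indicate disjoint ellipsoids), which is similar in spirit to, though not necessarily identical with, the patented method; the sphere geometries and tolerance below are assumed for demonstration.

```python
import numpy as np

def quadric(M, c):
    """Homogeneous 4x4 matrix Q of the ellipsoid (x - c)^T M (x - c) = 1,
    arranged so that X^T Q X < 0 for interior points X = (x, 1)."""
    Q = np.zeros((4, 4))
    Q[:3, :3] = M
    Q[:3, 3] = Q[3, :3] = -M @ c
    Q[3, 3] = c @ M @ c - 1.0
    return Q

def separated(A, B, tol=1e-7):
    """Quadric-pencil test: the roots of det(lambda*A + B) = 0 are the
    eigenvalues of -inv(A) @ B; two distinct positive real roots mean
    the two ellipsoids share no common volume."""
    lam = np.linalg.eigvals(-np.linalg.inv(A) @ B)
    pos = np.sort(lam[np.abs(lam.imag) < tol].real)
    pos = pos[pos > tol]
    return pos.size >= 2 and (pos[-1] - pos[0]) > tol

# Two unit spheres: centers 3 m apart (disjoint) vs. 1 m apart (overlapping).
A = quadric(np.eye(3), np.zeros(3))
print(separated(A, quadric(np.eye(3), np.array([3.0, 0.0, 0.0]))))  # True
print(separated(A, quadric(np.eye(3), np.array([1.0, 0.0, 0.0]))))  # False
```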
R. B. Dybdal and D. D. Pidhayny, “Method
of Tracking a Signal from a Moving Signal Source,” U.S. Patent No. 6,731,240,
May 2004.
Signals emanating from a moving source
can be tracked by exploiting estimated
variations in the motion of the source.
Signal strength values are measured at
open-loop-commanded angular offsets
from the a priori estimated signal position and used to correct the antenna’s
alignment. In the case of an orbiting
satellite, the estimated satellite location
is computed from the satellite’s ephemeris. The signal sampling at the angular
offsets varies with the anticipated dynamics of the satellite’s motion as observed from the antenna’s location. The
commanded angular offsets are along
and orthogonal to the direction of the
signal source motion—i.e., in-track and
cross-track. Signal power measurements
are used not only to correct the antenna
direction but also to support decisions on
when to revalidate the step-track
alignment.
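A toy version of this offset-sampling correction, with an assumed Gaussian beam pattern and illustrative probe offset and loop gain (a sketch, not the patented algorithm), might look like:

```python
import numpy as np

BEAMWIDTH = 1.0  # half-power beamwidth in degrees (assumed)

def rx_power(pointing, source):
    """Relative received power of a Gaussian beam vs. pointing error."""
    err = np.linalg.norm(pointing - source)
    return np.exp(-2.776 * (err / BEAMWIDTH) ** 2)

def step_track(pointing, source, offset=0.2, gain=2.0):
    """One cycle: probe +/- offsets along the in-track and cross-track
    axes and move toward the stronger side of each pair."""
    correction = np.zeros(2)
    for axis in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
        p_plus = rx_power(pointing + offset * axis, source)
        p_minus = rx_power(pointing - offset * axis, source)
        correction += gain * offset * (p_plus - p_minus) / (p_plus + p_minus) * axis
    return pointing + correction

pointing = np.array([0.3, -0.2])  # a priori (ephemeris-based) estimate, deg
source = np.zeros(2)              # actual signal direction
for _ in range(6):
    pointing = step_track(pointing, source)
print(pointing)  # the pointing error shrinks toward zero each cycle
```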
S. W. Janson, J. E. Pollard, C-C. Chao,
“Method for Deploying an Orbiting
Sparse Array Antenna,” U.S. Patent No.
6,725,012, April 2004.
A cluster of small, free-flying satellites
can be kept in rigid formation despite
natural perturbing forces. Orbital parameters are chosen so that each satellite occupies a node in a spatial pattern that revolves around a real or fictitious central
satellite in a frozen inclined eccentric
Earth orbit. When the cluster’s plane of
rotation is inclined 60 degrees relative to
the central satellite’s orbit plane, the
cluster appears to rotate like a wheel.
Otherwise, the radial distances between
the center point and the satellites
lengthen and shorten twice per orbit
around Earth, and the cluster moves as a
nonrigid elliptical body. In all cases, the
shape of the formation is maintained and
all satellites return to their initial position
once per revolution around Earth. Fuel-efficient microthrusting is all that’s
needed to maintain the formation for
long periods. The technique is useful for
positioning satellites as the elements in a
sparse-aperture array, which can have an
overall dimension from tens of meters to
thousands of kilometers. The satellites
remain spatially fixed with respect to
each other within a fraction of the average interelement spacing, eliminating the
possibility of intersatellite collisions
while providing a slowly changing
antenna sidelobe distribution.
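The 60-degree wheel geometry can be reproduced with the linearized Clohessy-Wiltshire equations for motion relative to a circular reference orbit: choosing the cross-track amplitude to be sqrt(3) times the radial amplitude yields a circular relative path in a plane inclined 60 degrees to the orbit plane, so the cluster turns rigidly. A minimal sketch follows (circular reference orbit and sizes assumed; the patent itself uses a frozen inclined eccentric orbit):

```python
import numpy as np

n = 2 * np.pi / 5400.0   # mean motion, rad/s (~90-minute orbit, assumed)
A = 100.0                # radial oscillation amplitude in meters (assumed)

def node_position(t, phase):
    """Bounded Clohessy-Wiltshire solution: radial x, along-track y,
    cross-track z. A cross-track amplitude of sqrt(3)*A makes the
    relative orbit a circle of radius 2*A in a plane inclined
    60 degrees to the orbit plane."""
    x = A * np.sin(n * t + phase)
    y = 2 * A * np.cos(n * t + phase)
    z = np.sqrt(3) * A * np.sin(n * t + phase)
    return np.array([x, y, z])

phases = 2 * np.pi * np.arange(4) / 4  # four satellites spaced on the wheel
for t in (0.0, 1000.0, 3000.0):
    nodes = [node_position(t, p) for p in phases]
    radius = np.linalg.norm(nodes[0])
    gap = np.linalg.norm(nodes[0] - nodes[1])
    print(f"t={t:6.0f} s  radius={radius:.1f} m  neighbor gap={gap:.1f} m")
# Both the radius and the neighbor spacing stay constant: the cluster
# rotates like a rigid wheel, as described above.
```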
R. Kumar, “Adaptive Smoothing System for
Fading Communication Channels,” U.S.
Patent No. 6,693,979, Feb. 2004.
This adaptive smoother enhances the performance of radio-frequency receivers
despite the amplitude variations caused by
ionospheric scintillation or any
other amplitude-fading mechanism. The
system can compensate for the loss due to
fading of coherently modulated communication and navigation signals, including
GPS. It employs an adaptive phase-lock
loop based on a Kalman filter to provide
phase estimations, a high-order estimator
to compute rapidly varying dynamic
amplitude, and an adaptive fixed-delay
smoother to provide improved code-delay
and carrier-phase estimates. Simulations show a performance improvement of 6–8 decibels when the adaptive smoother employs all three components and confirm that, operating under realistic channel-fade rates, it can compensate for the tracking loss caused by amplitude fading.
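A stripped-down illustration of the phase-lock-loop portion, with an assumed random-walk phase/frequency model and a fade-adaptive measurement variance (the patented system adds the high-order amplitude estimator and fixed-delay smoother on top of this):

```python
import numpy as np

rng = np.random.default_rng(1)

T = 1e-3                                   # sample period, s (assumed)
F = np.array([[1.0, T], [0.0, 1.0]])       # phase/frequency dynamics
Q = np.diag([1e-6, 1e-4])                  # process noise (assumed)
H = np.array([[1.0, 0.0]])                 # we observe phase only
sigma2 = 0.05                              # complex-noise variance (assumed)

x = np.array([0.0, 0.0])                   # state: [phase (rad), rate (rad/s)]
P = np.eye(2)
a_hat, alpha = 1.0, 0.05                   # smoothed amplitude estimate

true_phase, true_rate = 0.5, 30.0          # truth to be tracked
for k in range(2000):
    true_phase += true_rate * T
    fade = 0.2 + 0.8 * abs(np.sin(2 * np.pi * 0.5 * k * T))  # amplitude fade
    z = fade * np.exp(1j * true_phase) + np.sqrt(sigma2 / 2) * (
        rng.standard_normal() + 1j * rng.standard_normal())

    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Fade-adaptive measurement variance: deep fades inflate R, so the
    # loop leans on the dynamics model instead of the noisy phase detector.
    a_hat = (1 - alpha) * a_hat + alpha * abs(z)
    R = sigma2 / (2 * max(a_hat, 1e-3) ** 2)
    # Update with the wrapped phase-detector output.
    innov = np.angle(z * np.exp(-1j * x[0]))
    K = P @ H.T / (H @ P @ H.T + R)
    x = x + (K * innov).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"phase error: {np.angle(np.exp(1j * (x[0] - true_phase))):.4f} rad")
print(f"rate estimate: {x[1]:.2f} rad/s (truth {true_rate})")
```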
S. S. Osofsky, P. E. Hanson, “Adaptive Interference Cancellation Method,” U.S.
Patent No. 6,724,840, April 2004.
Developed for use as part of a wideband
communications receiver, this adaptive
system isolates and cancels unwanted signals having predetermined frequency, amplitude, and modulation criteria. The system works by continuously scanning a
frequency bandwidth. Detected signals are
parameterized, and these parameters are
compared with the definition of an undesired signal stored in a microcontroller.
When an undesirable signal is detected at
a particular frequency location, a reference path gets tuned to that location. The
reference path serves to isolate the undesired signal, which is then phase-inverted,
amplified, and vector-summed with the input signal stream, which is delayed for coherent nulling. The unwanted signal is
suppressed in the composite signal, leaving only the desired signal. The microcontroller also monitors and adjusts the
reference path to adaptively minimize any
residual interfering signal and respond to
changes in interference. The system operates over 100–160 megahertz and can generate wideband nulls over a 5-megahertz
bandwidth with a 15-decibel attenuation
depth or narrowband nulls of 30 decibels.
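A baseband caricature of the scan-detect-cancel loop may help fix ideas (the patent implements this at radio frequency with a tuned analog reference path; the frequencies, signal levels, and LMS step size here are assumed):

```python
import numpy as np

rng = np.random.default_rng(7)

fs = 1.0e6                        # sample rate, Hz (assumed)
N = 10000
t = np.arange(N) / fs

desired = 0.1 * rng.standard_normal(N)               # wideband desired signal
f_jam = 100.0e3                                      # interferer (unknown to rx)
x = desired + np.exp(2j * np.pi * f_jam * t)         # composite input

# 1. Scan the band: locate the spectral line matching the "undesired" profile.
spec = np.fft.fft(x)
f_est = np.fft.fftfreq(N, 1 / fs)[np.argmax(np.abs(spec))]

# 2. Tune a reference path to the detected frequency.
ref = np.exp(2j * np.pi * f_est * t)

# 3. Adapt a complex weight so the inverted, scaled reference nulls the
#    interferer when summed with the input (LMS; wideband signal survives
#    because it is uncorrelated with the reference tone).
w, mu = 0.0 + 0.0j, 1e-3
y = np.empty(N, dtype=complex)
for n in range(N):
    y[n] = x[n] - w * ref[n]           # cancelled output
    w += mu * y[n] * np.conj(ref[n])   # LMS weight update

before = 10 * np.log10(np.mean(np.abs(x[-1000:]) ** 2))
after = 10 * np.log10(np.mean(np.abs(y[-1000:]) ** 2))
print(f"composite power before/after cancellation: {before:.1f} / {after:.1f} dB")
```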
R. P. Patera, G. E. Peterson, “Vehicular Trajectory Collision Avoidance Maneuvering Method,” U.S. Patent No. 6,691,034,
Feb. 2004.
Applicable to aircraft, launch vehicles,
satellites, and spacecraft, this analytic
method assesses the risk of an object
colliding with another craft or debris and
determines the optimal avoidance maneuver. Screening out bodies that pose
no risk, the method first determines when
the vehicle will come close enough to a
foreign object to raise the possibility of a
collision. It then determines the probability of collision. If the probability exceeds
a predetermined threshold, the method
then determines an avoidance maneuver,
charting the direction, magnitude, and
time of thrust to bring the probability below the threshold using the least propellant possible. The method uses various
processes, including conjunction determinations through trajectory propagation, collision probability prediction
through coordinate rotation and scaling
based on error-covariance matrices, and
numerical searching for optimal avoidance maneuvers.
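The collision-probability step is commonly evaluated by projecting the combined position covariance into the encounter plane and integrating the resulting two-dimensional Gaussian over the combined hard-body circle. A minimal numerical sketch, with all values assumed for illustration:

```python
import numpy as np

def collision_probability(miss, cov, radius, ngrid=401):
    """miss: 2-vector miss distance in the encounter plane (meters);
    cov: 2x2 combined position covariance in that plane;
    radius: combined hard-body radius (meters)."""
    u = np.linspace(-radius, radius, ngrid)
    xx, yy = np.meshgrid(u, u)
    inside = xx**2 + yy**2 <= radius**2            # hard-body disk mask
    pts = np.stack([xx + miss[0], yy + miss[1]], axis=-1)
    cinv = np.linalg.inv(cov)
    quad = np.einsum('...i,ij,...j->...', pts, cinv, pts)
    pdf = np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    cell = (u[1] - u[0]) ** 2
    return float(np.sum(pdf[inside]) * cell)       # Riemann-sum integral

miss = np.array([120.0, 40.0])                     # predicted miss, meters
cov = np.array([[200.0**2, 0.0], [0.0, 80.0**2]])  # combined covariance
print(f"Pc = {collision_probability(miss, cov, radius=20.0):.2e}")
```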
J. Penn, “X33 Aeroshell and Bell Nozzle
Rocket Engine Launch Vehicle,” U.S.
Patent No. 6,685,141, Feb. 2004.
This work defines a class of launch vehicles characterized by one or more rocket
stages, each attached to a separate stage
that supplies liquid propellant. The rocket
stages use X33 aeroshell flight-control surfaces and can be equipped with three, four,
or five bell-nozzle engines. Each rocket
stage can be a booster or an orbiter with a
payload bay. The feeding stage can be an
external tank (no engine) or a core stage
with two bell-nozzle engines and a payload
bay. One possible configuration would be a
launch vehicle having a single orbiter with
five engines attached to an external tank.
Another version would be a three-engine
orbiter with a four-engine booster, both
attached to an external tank; in this case,
the four-engine booster augments both the
thrust and the propellant-load capability of
the system, thereby increasing payload
capacity. In a third form, a four-engine orbiter with a four-engine booster is attached
to an external tank. Alternatively, two X33
four-engine boosters can be attached to a
core stage to provide ultraheavy lift.
Contributors
Earth Remote Sensing: An Overview
David L. Glackin is Senior Engineering Specialist in the Sensing and Exploitation Department, where he specializes in remote
sensing science and technology and in solar astronomy.
He came to Aerospace from JPL in 1986 and has supported a range of NASA, JPL, NOAA, DOD, and White
House programs. These include DMSP, NPOESS, and
the Interagency Working Group on Earth Observation.
He holds an M.S. in astrogeophysics from the University of Colorado. He is the author of Civil, Commercial,
and International Remote Sensing Systems and Geoprocessing, published
by The Aerospace Press/AIAA ([email protected]).
The Infrared Background Signature Survey
Frederick S. Simmons retired from Aerospace in 1998, after 50 years in
the industry. He continues as a consultant for the Space-Based Infrared Systems programs. Since joining Aerospace in 1971, he served as principal investigator for a
program of missile observations from a U-2 aircraft and
project engineer for Project Chaser and the Multispectral Measurement Program. As an advisor to DARPA,
he coordinated studies under the Plume Physics and
Early Warning Programs of the Strategic Technology
Office and served as consultant for the Teal Ruby. As an advisor to SDIO
and BMDO, he served on the Phenomenology Steering and Analysis
Group and was principal investigator for several experiments involving
infrared observations. He is the author of Rocket Exhaust Plume Phenomenology, published by The Aerospace Press/AIAA. He received The Aerospace Corporation President’s Award in 1983 (frederick.s.simmons@
aero.org).
Lindsay Tilney, Senior Project Engineer, is responsible for developing technical courses for members of technical staff and their primary customers. She has more than 19 years of experience in the aerospace industry, including satellite software design and analysis, flight planning for space shuttle payloads, ground system design, on-orbit testing, and satellite system software modeling and simulation. She holds a B.S. in mathematics and computer science from UCLA. She joined Aerospace in 1986 ([email protected]).
Thomas Hayhurst is Director of the Sensing and Exploitation Department in the Electronic Systems Division, which supports the development of electro-optical remote-sensing payloads with end-to-end system performance modeling and engineering trade studies. He joined the Aerospace Chemistry and Physics Laboratory in 1982 and began by studying phenomena that produce infrared emissions in space and their effects on space surveillance systems. In 1991, he joined the Sensing and Exploitation Department and shifted focus toward electro-optical sensor design and system performance issues. He has a Ph.D. in physics from the University of California at Berkeley ([email protected]).
Engineering and Simulation of Electro-Optical Remote-Sensing Systems
Steve Cota, Senior Project Leader, Sensor Engineering and Exploitation Department, is responsible for assessing sensor performance
for civil and national security programs. He has led the
PICASSO project since its inception and has been active
in applying atmospheric modeling codes to sensor performance problems. He served as an advisor during the
source selection for the NPOESS Visible/Infrared Imager/
Radiometer Suite and Aerosol Polarimetry Sensor. He
joined Aerospace in 1987 and worked in the area of sensor
performance modeling and image exploitation until 1990. After a brief term
at Martin Marietta Astronautics, he returned to Aerospace in 1992 to support
the Systems Planning and Development Department. From 1994 until 1998,
he supported the Air Force Program Executive Officer for Space. He has a
Ph.D. in astronomy from Ohio State University ([email protected]).
Active Microwave Remote Sensing
Daniel D. Evans, Senior Engineering Specialist, Radar and Signal Systems
Department, has more than 30 years of experience in radar
phenomenology, radar processing, radar mode design, and
radar systems. He joined Aerospace in 1997 and received a
Corporate Individual Achievement Award in 2002 for development of a detection algorithm in support of the corporation’s national security mission. Evans often serves as
an independent reviewer for NASA technology development programs. He has a Ph.D. in mathematics from
UCLA and an M.B.A. from California State University, Los Angeles (daniel.
[email protected]).
Detecting Air Pollution from Space
Leslie O. Belsma of Space Support Division supports both the DMSP and
NPOESS program offices. She also manages an Internal
Research and Development project to demonstrate the use
of satellite data to improve high-resolution weather forecasting in support of air-quality and homeland-security applications. She promotes the use of satellite data for airquality applications through presentations to the civil
air-quality community. A retired Air Force weather officer
with an M.S. in aeronomy from the University of Michigan, she joined Aerospace in 1999 ([email protected]).
Commercial Remote Sensing and National Security
Dennis Jones is Director, Center for Space Policy and Strategy, in Aerospace’s Rosslyn office. Prior to joining Aerospace, he
served as imagery analyst in the Central Intelligence
Agency and served on the White House Drug Policy Office’s National Security staff. From 1994 to 2000, he supported the NRO Office of Policy in the execution of its international and commercial responsibilities, including
commercial remote-sensing policy development and implementation. He has also worked for a commercial
remote-sensing company as well as for U.S. defense and intelligence programs. He holds a Master of Governmental Administration degree from the
University of Pennsylvania ([email protected]).
Synthetic-Aperture Imaging Ladar
Walter F. Buell, Manager of the Lidar and Atomic Clocks Section of the
Photonics Technology Department, is Principal Investigator for Synthetic Aperture Ladar programs at Aerospace. His research interests also include laser cooling
and trapping of atoms, atomic clocks, laser remote
sensing, and quantum information physics. He has
published more than 25 papers in atomic, molecular,
and optical physics and holds three patents. He has a
Ph.D. in physics from the University of Texas at Austin
([email protected]).
Nick Marechal, Senior Project Leader, Radar and Signal Systems Department, has 17 years of experience in synthetic-aperture radar. His experience includes signal processing, system performance predictions, and topographic
mapping. He has worked in the area of moving-target
indication and has authored the risk-mitigation plan for
the Space Based Radar Program in the area of topographic mapping. Working in the field of launch vehicle debris detection and characterization, he developed
signal processing techniques to minimize Doppler ambiguity artifacts associated with radars having low pulse repetition frequency. He holds a
Ph.D. in mathematics from UCLA and is a Senior Member of IEEE. He
joined Aerospace in 1988 ([email protected]).
Richard Dickinson is Senior Engineering Specialist, Radar and Signal
Systems Department, Sensor Systems Subdivision. His
primary research includes synthetic-aperture radar
(SAR) system and image-quality analysis, SAR processing, digital signal processing, and data analysis.
He joined Aerospace in 1989 to work in the Image Exploitation Department. He has been involved in sensor
system modeling, radar system performance analysis,
and associated signal processing tasks. He has an M.A.
in mathematics from UCLA ([email protected]).
Steven Beck, Director of the Photonics Technology Department, has been with Aerospace for 20 years. He was responsible for development of a mobile lidar system for application to Air Force and Aerospace needs. In 1998, he was named Senior Scientist and served on the staff of the Senior Vice President in charge of Engineering and Technology, where he administered the corporate R&D program. He has a Ph.D. in chemical physics from Rice University ([email protected]).
David Kozlowski is a Research Scientist in the Photonics Technology Department. His research activities involve high-speed photonic components for both analog and digital applications. He has worked on fiber-optic memory loops, secure communications, and optical synthetic-aperture imaging ladar. He has a Ph.D. in electrical engineering from Lancaster University ([email protected]).
Timothy J. Wright is an Associate Member of the Technical Staff in the Photonics Technology Department. He has experience in programming data acquisition and instrument control systems and digital signal analysis, as well as an interest in robotics. He has a B.S. in computer science and computational physics from St. Bonaventure University ([email protected]).
Joseph R. Buck joined Aerospace as a Member of the Technical Staff in the Photonics Technology Department in 2003. His research interests include quantum optics, quantum information theory, and microsphere optical resonators. His research activities involve synthetic-aperture lidar, laser vibrometry, and quantum limits and effects in laser remote sensing. He has a Ph.D. in physics from California Institute of Technology ([email protected]).
Data Compression for Remote Imaging Systems
Timothy S. Wilkinson, Senior Engineering Specialist, Sensing and Exploitation Department, joined Aerospace in 1990. He
has worked on many aspects of end-to-end sensor simulation, including sensor modeling, on-board compression, exploitation algorithm development and analysis,
and compression for distribution to primary and secondary users. He is vice-chair of the U.S. Joint Photographic Experts Group committee. He is involved in
analysis of ground processing algorithms for several
remote-sensing systems. He received a Ph.D. in electrical engineering
from Stanford University ([email protected]).
Hsieh S. Hou is Senior Engineering Specialist in the Sensing and Exploitation Department. He has more than 30 years of experience
in the research and development of digital image processing systems and is internationally known for contributions
in digital image scaling and fast transforms. Since joining
the Sensor Systems Subdivision in 1984, he has led independent analyses and development efforts in the areas of
image data compression and onboard signal processing for
many satellite ground support systems, including DSP,
DMSP, and NPOESS. He has consulted for NASA, NOAA, and ESA on similar projects and has served as referee for the National Science Foundation.
He has a Ph.D. in electrical engineering from the University of Southern California and holds six patents. He is a fellow of SPIE and a Life Member of
IEEE ([email protected]).
The Back Page
Jupiter’s Newest Satellite
Peering through his homemade telescope nearly 400 years ago, Galileo first laid
eyes on the four largest moons of Jupiter, now known as Io, Europa,
Ganymede, and Callisto. His observations caused a notorious stir among
his contemporaries, forcing a profound shift in the accepted model of
the cosmos.
It’s only fitting, then, that Galileo’s namesake spacecraft should cause
an equal sensation by indicating that these moons might hold vast saltwater oceans beneath their icy surfaces. Indeed, data from the Galileo
craft suggest that liquid water on Europa made contact with the surface in geologically recent times and may still lie relatively close to
the surface. If so, Europa could potentially harbor life.
Based on this possibility, NASA is developing ambitious
plans for a new mission—the Jupiter Icy Moons Orbiter, or
JIMO—that would orbit Callisto, Ganymede, and Europa to
investigate their makeup, history, and potential for sustaining
life. Sending a spacecraft halfway across the solar system is
hard enough, but getting it into and out of three separate lunar orbits will be a tremendous feat, requiring a significant
amount of energy. Thus, JIMO will be a new type of
spacecraft, driven by nuclear-generated ion propulsion.
The technology will be challenging, but the rewards will
be significant: An onboard reactor could support an impressive suite of instruments far superior to anything
that could be sent using traditional solar and battery
power. It could even be used to beam power to a probe
or lunar lander.
Aerospace has been lending its technical expertise
to the JIMO project. For example, as part of the High-Capability Instrument Concept study, Aerospace
helped develop a baseline design for a suite of instruments that can take advantage of the large power supply
to achieve high sensitivity, spatial resolution, spectral
resolution, duty cycle, and data rates. The candidate instruments included a visible and infrared imaging
spectrometer, a thermal mapper, a laser altimeter, a multispectral laser surface-reflection spectrometer, an interferometric synthetic-aperture radar, a
polarimetric synthetic-aperture
radar, a subsurface radar sounder,
and a radio plasma sounder. In addition to generating basic specifications for each instrument, Aerospace explored a number of design
options to delineate critical tradeoffs. Driving technologies for each
instrument type were identified, as well
as an estimate of the
needed development
time. The laser spectrometer, for example, is
an entirely new instrument, and the multispectral selective-reflection
lidar is based on capabilities that are available in the
industry but do not exist in a single design.
Aerospace also performed the coverage analysis for JIMO, including verification of
maximum revisit times for various inclinations and
altitudes and access coverage for the entire moon of Europa. The Revisit program, a software tool developed by
Aerospace, was used for the visualization and computation. Key results from the study included an analysis of the
fields of view needed to achieve the desired mapping coverage. In some cases, the analysis prompted a change in
sensor configuration to accommodate sunlight constraints.
This analysis also helped define duty cycles that would reduce the amount of data being sent back to Earth without
compromising overall performance.
In a related effort, Aerospace engineers analyzed the
telecommunications needed for the return of data from the
JIMO instruments—and derived a target specification of
roughly 233 megabytes per second. Key considerations included loss of communication due to blockage from
Jupiter, the sun, and the Jovian moons and the enormous
amount of sensor data (even with onboard processing) that
will need to be sent. Aerospace provided three system
options: direct radio-frequency communication using a 3- or 5-meter dish at 35 gigahertz; laser communication using
multiple lasers in the terahertz band; and radio-frequency
communication via a relay satellite trailing JIMO.
One particular challenge facing JIMO is the harsh radiation environment. Jupiter has trapped proton and electron belts, much like Earth; however, the Jovian trapped
electron environment is much more severe. Planning for
this environment will require some new approaches because the most problematic particle around Jupiter is the
high-energy electron—not the proton, which is the primary
concern around Earth. Aerospace analyses indicate that the
radiation challenges are not insurmountable: If commercial
integrated circuits continue to evolve at their present rate,
they should allow significant improvements in radiation
hardness and better protection for both analog and digital
flight electronics, including focal planes. Better inherent
radiation resistance, along with
proper shielding design, should allow
JIMO to survive. Still, JIMO will
need to overcome the data corruption
that will occur as sensitive imagers
and spectrometers attempt to collect
data in the midst of this severe
radiation.
As part of the conceptual mission
studies, Aerospace performed independent
cost estimates for various configurations and design iterations. The main trades consisted of varying
power-conversion types, nuclear-reactor types, and power
levels. The cost analysis emphasized technology forecasting, risk, radiation hardening, schedule penalty, calibration
of the primary contractor’s historical programs, safety
specifications, and responsiveness to other program-management and engineering issues. After each design iteration, the Aerospace and contractor teams met to reconcile
their cost estimates. This proved especially valuable because Aerospace was able to influence contractor cost
estimates and, in certain cases, the contractor’s cost
methodology.
NASA hopes to launch JIMO early in the next
decade—and it will probably take another six years to
reach its destination. So, it will be a while before scientists crack the secrets of Jupiter’s frozen moons. In the
meantime, Aerospace will continue to support the program
as needed, joining NASA and other organizations in honoring and advancing
Galileo’s great
legacy.
Crosslink
Summer 2004 Vol. 5 No. 2

Editor in Chief: Donna J. Born
Editor: Gabriel Spera
Guest Editor: Thomas Hayhurst
Contributing Editor: Steven R. Strom
Staff Editor: Jon Bach
Art Director: Richard Humphrey
Illustrator: John A. Hoyem
Photographer: Mike Morales

Editorial Board: Malina Hills, Chair; David A. Bearden; Donna J. Born; Linda F. Brill; John E. Clark; David J. Evans; Isaac Ghozeil; Linda F. Halle; David R. Hickman; Michael R. Hilton; John P. Hurrell; William C. Krenz; Mark W. Maier; Mark E. Miller; John W. Murdock; Mabel R. Oshiro; Fredric M. Pollack; Jeffrey H. Smith; John R. Wormington

Board of Trustees: Bradford W. Parkinson, Chair; Howell M. Estes III, Vice Chair; William F. Ballhaus Jr.; Richard E. Balzhiser; Guion S. Bluford Jr.; Donald L. Cromer; Daniel E. Hastings; Jimmie D. Hill; John A. McLuckey; Thomas S. Moorman Jr.; Dana M. Muir; Ruth L. Novak; Sally K. Ride; Robert R. Shannon; Donald W. Shepperd; K. Anne Street; John H. Tilelli Jr.; Robert S. Walker

Corporate Officers: William F. Ballhaus Jr., President and CEO; Joe M. Straus, Executive Vice President; Wanda M. Austin; Stephen E. Burrin; Marlene M. Dennis; Jerry M. Drennan; Lawrence T. Greenberg; Ray F. Johnson; Gordon J. Louttit; John R. Parsons; Donald R. Walker; Dale E. Wallis

The Aerospace Corporation
P.O. Box 92957
Los Angeles, CA 90009-2957
Copyright © 2004 The Aerospace Corporation. All rights reserved. Permission to copy or
reprint is not required, but appropriate credit must be given to The Aerospace Corporation.
Crosslink (ISSN 1527-5264) is published by The Aerospace Corporation, an independent,
nonprofit corporation dedicated to providing objective technical analyses and assessments
for military, civil, and commercial space programs. Founded in 1960, the corporation operates a federally funded research and development center specializing in space systems architecture, engineering, planning, analysis, and research, predominantly for programs managed
by the Air Force Space and Missile Systems Center and the National Reconnaissance Office.
For more information about Aerospace, visit www.aero.org or write to Corporate Communications, P.O. Box 92957, M1-447, Los Angeles, CA 90009-2957.
For questions about Crosslink, send email to [email protected] or write to The Aerospace Press, P.O. Box 92957, Los Angeles, CA 90009-2957. Visit the Crosslink Web site at
www.aero.org/publications/crosslink.