Innovative Technology for Computer Professionals
February 2005
http://www.computer.org

Cover features: Behind the Scenes at Micro-37 • Creating and Protecting Digital Worlds • Ender’s Game Redux
Innovative Technology for Computer Professionals
February 2005, Volume 38, Number 2
COMPUTING PRACTICES
26 Local Search: The Internet Is the Yellow Pages
Marty Himmelstein
The proposed Internet-Derived Yellow Pages provides a framework
for combining Internet-derived content with the trust and fairness
that characterize the printed Yellow Pages.
COVER FEATURES
GUEST EDITOR’S INTRODUCTION
36 Nanoscale Design & Test Challenges
Yervant Zorian
The silicon-scaling revolution presents a plethora of challenges as
technology progresses into the nanoscale era.
43 Robust System Design with Built-In Soft-Error
Resilience
Subhasish Mitra, Norbert Seifert, Ming Zhang, Quan Shi,
and Kee Sup Kim
A system’s susceptibility to transient errors increases in advanced
technologies, making the incorporation of effective protection
mechanisms into chip designs essential. A new design paradigm
reuses design-for-testability and debug resources to eliminate such
errors.
53 Transistor-Level Optimization of Digital Designs
with Flex Cells
Rob Roy, Debashis Bhattacharya, and Vamsi Boppana
The flex-cell approach provides an optimally tuned set of building
blocks for integrated circuit design when optimality is measured
using metrics such as clock speed, die size, and power consumption.
ABOUT THIS ISSUE
Unlike previous generations, 90-nanometer design and beyond presents new, and
sometimes unforeseen, challenges: very high design and tooling costs, high transistor
leakage causing power management issues, and environmentally induced soft errors.
In this issue, we take a look at efforts by the design and test community to answer
these challenges to Moore’s law.
Cover design and artwork by Dirk Hagner
63 Hardware/Software Interface Codesign for
Embedded Systems
Ahmed A. Jerraya and Wayne Wolf
A codesign approach will enable the integration of hardware and
software components in heterogeneous multiprocessors. The
authors analyze the evolution of this approach and define a
long-term roadmap for future success.
RESEARCH FEATURE
71 A New Framework for Power Estimation of
Embedded Systems
Claudio Talarico, Jerzy W. Rozenblit, Vinod Malhotra,
and Albert Stritter
A proposed modular framework for assessing power consumption
of embedded systems early in the design cycle can be extended to
any performance metric and uses a high level of abstraction, leading
to a faster execution time.
IEEE Computer Society: http://www.computer.org
Computer: http://www.computer.org/computer
[email protected]
IEEE Computer Society Publications Office: +1 714 821 8380
OPINION
8
At Random
Behind the Scenes at Micro-37
Bob Colwell
NEWS
14
Industry Trends
Developments Advance Web Conferencing
David Geer
19
Technology News
Telecom Carriers Actively Pursue Passive Optical Networks
George Lawton
22
News Briefs
One-Handed Keyboard Helps Mobile and Disabled Workers ■
US Increases Quota for Controversial Visas ■ Engineers Begin
Addressing “Talking Spam”
MEMBERSHIP NEWS
81
Computer Society Connection
86
Call and Calendar
COLUMNS
95
Entertainment Computing
Ender’s Game Redux
Michael Macedonia
99
Invisible Computing
Creating and Protecting Digital Worlds
Bill N. Schilit and Roy Want
104
The Profession
The Profession and the Big Picture
Neville Holmes
NEXT MONTH: Smart Things and Places
DEPARTMENTS
Membership Magazine of the IEEE Computer Society
4 Article Summaries
6 Letters
12 32 & 16
40 IEEE Computer Society Membership Application
79 Products
80 Bookshelf
88 Career Opportunities
94 Advertiser/Product Index
COPYRIGHT © 2005 BY THE INSTITUTE OF ELECTRICAL AND ELECTRONICS
ENGINEERS INC. ALL RIGHTS RESERVED. ABSTRACTING IS PERMITTED WITH CREDIT
TO THE SOURCE. LIBRARIES ARE PERMITTED TO PHOTOCOPY BEYOND THE LIMITS OF
US COPYRIGHT LAW FOR PRIVATE USE OF PATRONS: (1) THOSE POST-1977 ARTICLES THAT CARRY A CODE
AT THE BOTTOM OF THE FIRST PAGE, PROVIDED THE PER-COPY FEE INDICATED IN THE CODE IS PAID
THROUGH THE COPYRIGHT CLEARANCE CENTER, 222 ROSEWOOD DR., DANVERS, MA 01923; (2) PRE-1978 ARTICLES WITHOUT FEE. FOR OTHER COPYING, REPRINT, OR REPUBLICATION PERMISSION, WRITE
TO COPYRIGHTS AND PERMISSIONS DEPARTMENT, IEEE PUBLICATIONS ADMINISTRATION, 445 HOES
LANE, P.O. BOX 1331, PISCATAWAY, NJ 08855-1331.
Innovative Technology for Computer Professionals
Editor in Chief: Doris L. Carver, Louisiana State University, [email protected]
Associate Editors in Chief: Bill N. Schilit, Intel; Kathleen Swigger, University of North Texas
Computing Practices: Rohit Kapur, [email protected]
Special Issues: Bill Schilit, [email protected]
Perspectives: Bob Colwell, [email protected]
Research Features: Kathleen Swigger, [email protected]
Web Editor: Ron Vetter, [email protected]
Area Editors
Computer Architectures: Douglas C. Burger, University of Texas at Austin
Databases/Software: Michael R. Blaha, OMT Associates Inc.
Graphics and Multimedia: Oliver Bimber, Bauhaus University Weimar
Information and Data Management: Naren Ramakrishnan, Virginia Tech
Multimedia: Savitha Srinivasan, IBM Almaden Research Center
Networking: Jonathan Liu, University of Florida
Software: H. Dieter Rombach, AG Software Engineering; Dan Cooke, Texas Tech University

Column Editors
At Random: Bob Colwell
Bookshelf: Michael J. Lutz, Rochester Institute of Technology
Embedded Computing: Wayne Wolf, Princeton University
Entertainment Computing: Michael R. Macedonia, Georgia Tech Research Institute
IT Systems Perspectives: Richard G. Mathieu, St. Louis University
Invisible Computing: Bill N. Schilit, Intel
Security: Bill Arbaugh, University of Maryland
Standards: Jack Cole, US Army Research Laboratory
The Profession: Neville Holmes, University of Tasmania

Advisory Panel
James H. Aylor, University of Virginia
Thomas Cain, University of Pittsburgh
Ralph Cavin, Semiconductor Research Corp.
Ron Hoelzeman, University of Pittsburgh
Edward A. Parrish, Worcester Polytechnic Institute
Ron Vetter, University of North Carolina at Wilmington
Alf Weaver, University of Virginia

2004 IEEE Computer Society President
Carl K. Chang, [email protected]

CS Publications Board
Michael R. Williams (chair), Michael R. Blaha, Roger U. Fujii, Sorel Reisman, Jon Rokne, Bill N. Schilit, Nigel Shadbolt, Linda Shafer, Steven L. Tanimoto, Anand Tripathi

CS Magazine Operations Committee
Bill Schilit (chair), Jean Bacon, Pradip Bose, Doris L. Carver, Norman Chonacky, George Cybenko, John C. Dill, Frank E. Ferrante, Robert E. Filman, Forouzan Golshani, David Alan Grier, Rajesh Gupta, Warren Harrison, James Hendler, M. Satyanarayanan
Editorial Staff
Scott Hamilton, Senior Acquisitions Editor, [email protected]
Judith Prow, Managing Editor, [email protected]
Chris Nelson, Associate Editor
James Sanders, Senior Editor
Linda World, Senior Editor
Lee Garber, Senior News Editor
Bob Ward, Membership News Editor, [email protected]
Mary-Louise G. Piner, Staff Lead Editor
Bryan Sallis, Manuscript Assistant

Design
Larry Bauer, Dirk Hagner

Production
Larry Bauer

Administrative Staff
Angela Burgess, Publisher
Dick Price, Assistant Publisher
David W. Hennage, Executive Director
Sandy Brown, Business Development Manager
Marian Anderson, Senior Advertising Coordinator
Georgann Carter, Membership & Circulation Marketing Manager
Circulation: Computer (ISSN 0018-9162) is published monthly by the IEEE Computer Society. IEEE Headquarters, Three Park Avenue, 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1314; voice +1 714 821 8380; fax +1 714 821 4010;
IEEE Computer Society Headquarters, 1730 Massachusetts Ave. NW, Washington, DC 20036-1903. IEEE Computer Society membership includes $19 for a subscription to
Computer magazine. Nonmember subscription rate available upon request. Single-copy prices: members $20.00; nonmembers $94.00.
Postmaster: Send undelivered copies and address changes to Computer, IEEE Membership Processing Dept., 445 Hoes Lane, Piscataway, NJ 08855. Periodicals Postage Paid
at New York, New York, and at additional mailing offices. Canadian GST #125634188. Canada Post Corporation (Canadian distribution) publications mail agreement
number 40013885. Return undeliverable Canadian addresses to PO Box 122, Niagara Falls, ON L2E 6S8 Canada. Printed in USA.
Editorial: Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author’s or firm’s opinion. Inclusion in Computer does not
necessarily constitute endorsement by the IEEE or the Computer Society. All submissions are subject to editing for style, clarity, and space.
ARTICLE SUMMARIES
Local Search: The Internet
Is the Yellow Pages
pp. 26-34
Marty Himmelstein
The Internet is not meeting its potential for delivering geographically
oriented information. Sometimes the
information people seek is on the Internet, but the tools for locating it are inadequate. In other cases, our industry has not
developed the counterparts needed to
replace traditional delivery methods such
as the printed Yellow Pages.
The Internet Yellow Pages, currently
the main source of local content on the
Internet, are reliable, but they are also
shallow, slow to change, centralized, and
expensive. Their primary data sources
are printed telephone directories. They
do not use the Internet’s resources in any
meaningful way.
Geosearch, a geoenabled search engine
that lets people search for Web pages that
contain geographic markers within a
specified geographic area, demonstrates
that the Internet is a rich source of local
content. It also demonstrates the many
advantages that postal addresses have as
a key for accessing this content, especially
when the content pertains to the activities of daily life.
Robust System Design with
Built-In Soft-Error Resilience
pp. 43-52
Subhasish Mitra, Norbert Seifert, Ming
Zhang, Quan Shi, and Kee Sup Kim
Soft errors, also called single-event upsets, are radiation-induced transient errors caused by neutrons generated from cosmic rays and alpha particles generated by packaging material.
Traditionally, soft errors were regarded
as a major concern only for space applications. Yet, for designs manufactured at
advanced technology nodes—such as 90
nm or 65 nm—system-level soft errors
occur more frequently than in previous
generations.
Chip designers must address soft
errors very early, starting from the product definition phase and continuing
through the architecture planning, circuit
design, logic design, and postlayout
phases. The effects of soft errors in
sequential elements such as flip-flops,
latches, and combinational logic must be
evaluated, and effective protection mechanisms incorporated into the design.
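(As a purely generic illustration of one classic protection idea, triple modular redundancy with a majority vote, and not the built-in soft-error resilience scheme the article itself proposes, a minimal sketch follows; the replica values are invented.)

# Conceptual sketch only: triple modular redundancy with a majority voter.
# This is a textbook soft-error mitigation, not the article's BISER design.

def majority_vote(a, b, c):
    """Return the value that at least two of the three replicas agree on."""
    return (a & b) | (a & c) | (b & c)

# A transient upset flips one replica's bit; the voter still returns the
# correct value because the other two replicas agree.
correct, upset = 1, 0
print(majority_vote(correct, correct, upset))  # -> 1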
Transistor-Level Optimization of
Digital Designs with Flex Cells
pp. 53-61
Rob Roy, Debashis Bhattacharya, and
Vamsi Boppana
Over the years, it has become commonplace to perform various
forms of manual intervention on
designs generated using automated
flows. The quest to overcome the limitations of standard-cell-based design methods leads naturally to the creation of
new design- and context-specific cells—
designated flex cells—during the process
of optimizing a given digital design. Flex-cell-based design optimization automates the creation of tactical cells.
The flex-cell approach, either alone or
in combination with standard cells, provides an optimally tuned set of building
blocks for the target IC design, which
measures optimality against accepted and
quantifiably definable metrics such as
clock speed, die size, and power consumption. By allowing manipulation
of the transistor-level structures, flex
cells open up a new dimension in the
optimization of automatically created
designs.
Hardware/Software Interface
Codesign for Embedded Systems
pp. 63-69
Ahmed A. Jerraya and Wayne Wolf
Technological evolution—particularly shrinking silicon fabrication
geometries—is enabling the integration of complex platforms in a single
system on chip. In addition to specific
hardware subsystems, a modern SoC
also can include one or several CPU sub-
systems to execute software and sophisticated interconnects.
Mastering the design of these embedded systems challenges both the system
and semiconductor houses that used to
apply a software- or hardware-only strategy. In addition to classic software and
hardware, SoC engineers must design
hardware-dependent software and software-dependent hardware. Codesigning
these HW/SW interfaces requires a new
kind of engineer who understands both
hardware and software design.
Providing SoCs consisting of an assembly of processors executing tasks concurrently will require design methodologies
to focus on selecting and using either programmable or dedicated processors in
place of the gates and arithmetic logic
units that current methods use.
A New Framework for Power
Estimation of Embedded Systems
pp. 71-78
Claudio Talarico, Jerzy W. Rozenblit,
Vinod Malhotra, and Albert Stritter
Among the many metrics used to
characterize the quality of an
embedded system-on-chip design,
power consumption has emerged as one
of the most important. This is largely due
to the proliferation of mobile battery-powered computing devices, the increasing speed and density of CMOS (complementary metal-oxide semiconductor)
VLSI (very large-scale integration) circuits, and continuous shrinking of the
transistor feature size of deep-submicron
technologies.
The authors have developed a technique that derives power figures from the
execution of high-level models. This technique makes it possible to assess embedded SoC designs much earlier in the design
cycle, contributing to sounder decisions
throughout the entire development
process and leading to a faster execution
time. To validate their methodology, the
authors applied it to a peripheral core—a
baud rate generator—and compared the
results with those obtained using a gate-level approach.
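(The general idea of deriving power figures from high-level models can be pictured with a hedged sketch like the one below: count the operations an executable model performs and weight each by an assumed per-operation energy. The operation names and energy figures are invented for illustration and are not the authors' framework.)

# Generic sketch of power estimation from a high-level model trace.
# All operation types and per-operation energies below are assumptions.

ENERGY_PER_OP_NJ = {"alu": 0.8, "mem_read": 2.5, "mem_write": 3.1}  # assumed costs

def estimate_energy_nj(op_counts):
    """op_counts maps an operation type to how often the model executed it."""
    return sum(count * ENERGY_PER_OP_NJ[op] for op, count in op_counts.items())

trace = {"alu": 120_000, "mem_read": 30_000, "mem_write": 12_000}
print(f"estimated energy: {estimate_energy_nj(trace):.1f} nJ")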
LETTERS
SYSTEM RECOVERY ALTERNATIVES
The authors of “Recovery-Oriented
Computing: Building Multitier Dependability” (George Candea et al.,
Nov. 2004, pp. 60-67) mentioned several techniques for recovering from
inevitable system failures, including
microreboot and system-level undo.
I agree in principle that “A first-line-of-defense recovery mechanism should
be low cost and low overhead, with a
good probability of repairing the problem…” In the event of a hardware or
software failure, the basic objectives
are getting the system back to normal
operation in the shortest time and
restoring individual files and folders as
quickly as possible.
In some cases, a reboot can remedy
minor problems and serve as “a universal form of recovery for many software
failures, even when the exact causes of
failure are unknown.” However,
unidentified problems frequently recur,
resulting in a complete system malfunction. In this circumstance, rebooting
merely postpones the inevitable.
The traditional tape backup technology is increasingly becoming a secondary option because the stored files
can be corrupt or blank when restored.
Moreover, manually rebuilding and
restoring data from tape backups or
reinstalling it from scratch can take
hours, which is not acceptable for most
businesses these days.
Disk-to-disk backup with system
imaging offers a viable option for
ensuring against system failure.
This technology can store either
individual files or entire directories
without requiring backup software. IT
administrators can use online server
imaging to create a full image of the
system and to perform frequent disk-to-disk backup of data files.
With system imaging, backups evolve
from merely being a disaster recovery
option to become an integral part of the
information management process.
Hong-Lok Li
Vancouver, B.C.
[email protected]
The authors respond:
Great observations. Microrebooting
aims to complement—rather than
replace—existing, more expensive
high-availability techniques such as
redundancy and failover. It is not a
cure-all.
Our paper on “crash-only software”
(www.stanford.edu/~candea/papers/)
presents the design of systems optimized for shutdown-by-crashing. In
these systems, the effectiveness of
microrebooting is maximized. Failures
whose root causes recur deterministically—a problem for any form of
recovery—are handled by recognizing
the repeating failure pattern in a recovery manager and employing an alternate form of recovery.
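(One way to picture that idea, as a rough sketch only and not the authors' actual recovery manager: count repeated failure signatures and escalate once a cheap recovery keeps failing. The signature format, threshold, and recovery actions below are invented for illustration.)

# Illustrative sketch: escalate when the same failure signature keeps recurring.
from collections import Counter

class RecoveryManager:
    def __init__(self, escalation_threshold=3):
        self.failure_counts = Counter()
        self.escalation_threshold = escalation_threshold

    def handle_failure(self, component, error_type):
        signature = (component, error_type)
        self.failure_counts[signature] += 1
        if self.failure_counts[signature] < self.escalation_threshold:
            return f"microreboot {component}"          # cheap first-line recovery
        return f"full restart of subsystem hosting {component}"  # alternate recovery

manager = RecoveryManager()
for _ in range(4):
    print(manager.handle_failure("checkout-service", "NullPointerException"))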
Extensive logging at runtime and
recovery time provides the means to
identify impending larger problems. In
this case, if microrebooting can delay
a problem, operators gain time to prepare for handling it when it strikes.
All these topics and more are covered at the URL provided above.
THE RELEVANCE OF COMPUTER
SCIENCE RESEARCH
Reading the December The Profession
column (Simone Santini, “Determining
Computing Science’s Role,” pp. 128,
126-127) brings up several questions:
Is this column relevant? Is computer science relevant? Are universities relevant?
The author illustrates no reasons for
or benefits from pure computer science
research. He illustrates no accomplishments of this research in the past
half century.
Why should society be obligated to
pay the salaries of pursuers of pure
research? There are nonscientists who
do accomplish things, some of which
have a huge impact on society—not all
of them economic. Since computer science is still a field where a lone
researcher could accomplish something significant, perhaps pure computer science researchers should retreat
into their garages to do their work.
The author bemoans the fact that
economic factors are driving research,
while his essential complaint is that not
enough money is going into pure
research. This seems rather self-contradictory.
He is unhappy about universities
churning out professionals who can
actually contribute to society. God forbid! I’m glad this is happening at long
last, with the captains of academia
reluctantly coming out of their caves
to see the light of reality.
Let’s remember that computer science was born because Turing wanted
to help the war effort, not because he
had too much idle time on his hands.
This column appears to exist either
simply to be a voice for disillusioned
academia or to generate controversy.
If this is indeed a column about “the
profession,” let’s hear from some professionals. Otherwise, print something
else that might be useful because useful information is the only thing I’ll
pay for.
Aditya Garg
[email protected]
The author responds:
In what sense are the terms “relevant” and “significant” to be taken
here? Economically relevant? Industrially significant? Nothing else?
If I agreed on the exclusivity of these
criteria, I could simply resort to the
many examples of technical (and economic) breakthroughs generated by
pure research that probably never
would have happened in the frantically
application-oriented world of industrial development.
To stay on familiar ground: Turing
published his Entscheidungsproblem
(the “Turing machine” paper) article
in 1936, well before World War II. Its
origins are indeed in the mathematical
literature—in one of the 23 unsolved
problems that Hilbert outlined at the
first mathematical congress in 1900.
Other examples, from Fourier to
Maxwell, abound: Creativity often—
although not always, of course—
comes to life as an epiphenomenon of
unattached culture. Industrial research
is useful and important, but it can’t be
our only model.
But to artificially restrict ourselves
to this monochromatic reading of the
term “relevant” would miss my point:
Computers probably would have been
developed even without Turing and
Von Neumann (another pure mathematician). But the point is that to
reduce the entire human experience to
industrial productivity would be to
deny our intellectual richness.
One reason that people work is to
carve a space in their life in which they
can pursue their own interests.
Education—not just in universities—
should take this essential fact into
account. If industrial ethics is the only
thing that matters, if the only purpose
of education is to make us productive,
then the education system fails to give
us the intellectual tools needed to fully
experience an essential aspect of our
lives.
The idea that production is the
mover of history is a bit Marxist, but
even Marx’s ideal man would work in
a factory in the morning, play violin in
the afternoon, and go to the theater in
the evening. Even the most econocentric view includes a cultural life, in the
discipline of one’s choice.
I don’t complain that not enough
money is going to research; I do complain that not enough public money is
going there. Universities do have too
much private money—witness the
gigantism and scientific poverty of
many a university project.
Finally, one goal of my column was
indeed to generate controversy, for a
discipline without controversy is condemned to wither and die.
We welcome your letters. Send them to
[email protected].
The 2005 IEEE International Conference on Information Reuse and
Integration (IEEE IRI-2005)
Knowledge Acquisition and Management
August 15-17, 2005
Las Vegas Hilton, Las Vegas, Nevada, USA
http://www.cs.fiu.edu/IRI05
Sponsored by the IEEE Systems, Man and Cybernetics Society
This year's conference theme addresses all aspects of Knowledge Acquisition and Management as they relate to the design, implementation, and
maintenance of large-scale systems. This theme was selected to reflect the interdependency among AI, multimedia, networking, software and systems
engineering, telecommunications, etc. within the context of reuse and integration. The IEEE International Conference on Information Reuse and Integration
will feature contributed as well as invited papers. Theoretical and applied papers are both included in this call. The conference program will include special
sessions and open forum workshops.
Instructions for Authors:
Papers reporting original and unpublished research results pertaining to the above and related topics are solicited. Full paper manuscripts must be in English
of length 4 to 6 pages (using the IEEE two-column template). Submissions should include the title, author(s), affiliation(s), e-mail address(es), tel/fax
numbers, abstract, and postal address(es) on the first page. Papers should be submitted at the conference web site: http://www.cs.fiu.edu/IRI05. If web
submission is not possible, manuscripts should be sent as an attachment via email to one of the Program Chairs listed on IRI 2005 web site on or before the
deadline date of March 31, 2005.
The attachment must be in .pdf (preferred) or Word (.doc) format. The subject of the email must be “IEEE IRI 2005 Submission.” Papers will be selected based
on their originality, timeliness, significance, relevance, and clarity of presentation. Authors should certify that their papers represent substantially new work
and are previously unpublished. Organizers of prospective special sessions and panels are invited to submit proposals and should contact one of the Program
Chairs directly as soon as possible, but no later than January 31, 2005. Paper submission implies the intent of at least one of the authors to register and
present the paper, if accepted. Authors of selected papers that are also presented at the conference will be invited to submit expanded versions of their papers
for review for publication in the appropriate IEEE SMC Transactions.
Important Dates:
January 31, 2005: Proposals for special sessions, panels, tutorials, and workshops
March 31, 2005: Paper submission deadline
May 20, 2005: Notification of acceptance
June 15, 2005: Camera-ready due
July 1, 2005: Conference registration
August 15-17, 2005: Conference
AT RANDOM

Behind the Scenes at Micro-37
Bob Colwell

Over the years, I have attended or helped organize approximately 17.7 cajillion conferences. Some were too big, or too commercial, or too impersonal. Others sported a program that was heavy on impressive titles and inspired formatting, but unencumbered by actual useful technical content. Some were incredible: The attendees bounced from one utterly engrossing technical discussion to another, with the top engineers and researchers in the field expounding in their best take-no-prisoners styles.
Surely that is what conferences are all about—placing very smart, intense people in close enough proximity to generate sparks.

A SPEAKER’S QUANDARY
The Supercomputing 90 conference in New York City has a special place in my memory.
I was delivering the last of the papers I had written at Multiflow—Multiflow having exited the business a few months prior. About 10 minutes into my talk, the doors at the back of the ballroom burst open, and a New York City fireman, bedecked in full firefighting regalia replete with hat and axe, hustled down the main aisle and stopped. He said, “There is a fire on the roof of this building. Has anyone smelled any smoke in this room?”
The mental gap between the intricacies of how Multiflow’s machines worked and the implications of sitting in a building that is on fire was too wide for most of the audience to bridge. But a few looked at each other, quickly reached a consensus, and said, “No.” Whereupon the fireman said, “Well, you can either evacuate the building or not—your choice,” and left the room.
This left me, as the guy with the microphone in his hand, in something of a quandary. The options seemed to range from complete denial (“Fireman? What fireman?”) to tearing off my mic and racing for the doors.
After a few seconds, I decided that if we were in any real danger, the fireman would have been a bit more direct with his advice, and I opted to carry on. When I announced my decision, not one member of the audience got up and left.
The day got even stranger about two hours later, when I entered the men’s room one floor up, only to find several policemen in it. Startled, I asked one of them if there was something I should know about this particular facility. He said, “Nah, there was a shooting in here about an hour ago, and we’re just following up on that. You wouldn’t happen to know anything about that, would you?” Trying not to look guilty, I said no, and eagerly volunteered to find another restroom elsewhere.

BEWARE OF E-MAIL MESSAGES PUTTING YOU IN CHARGE
In January 2003, I got an e-mail message asking if I’d be willing to be general co-chair of the 2004 Micro-37 conference, along with Kevin Skadron of the University of Virginia. I bravely replied, “Maybe.” After checking around a bit, and enlisting the formidable organizing skills of my wife, Ellen, I accepted.
The first question was where to hold
the conference.
When you work for an organization
that sends you to conferences, sometimes they will look askance at the
venue. For example, places where the
postcards have a lot of hula skirts on
them are sometimes deprecated, but if
they agree to send you, your travel
expenses are covered.
If you are an independent consultant
like me, you can readily give yourself
approval for travel, but you end up
footing the bill. Since my wife would
also be attending, the travel costs would
be double.
This line of reasoning strongly suggested that Portland, Oregon, where I
happen to live, was a great place to have
the conference.
Now we needed a hotel. I quickly
ruled out cold-calling all of the local
hotels, and did what I always do when
in doubt: I Googled. It turned out that
the city has an online conference planning utility. I guessed at the number of
attendees, and ended up with a list of
about five hotels that could handle a
conference of our size on the dates we
needed.
Ellen and I visited three of the candidate hotels in the summer of 2003.
It quickly became clear that all three
really wanted our business; apparently
Portland isn’t always the city of choice
for conferences in December.
The hotel we ended up choosing
really put on a full-court press. They
lined up all of the hotel staff for our
visit, each prepared to demonstrate
their unique contributions to our
conference.
We especially enjoyed interrogating
the chef who makes the desserts. We
were very thorough about checking
out the chocolate chip cookies. No
detail is too small if you’re planning a
world-class conclave.
CONFERENCE COMMITTEE
ASSIGNMENTS
The main thing that any conference
must get right is the paper selection
process. This issue was quickly dispensed with by putting John Shen and
Antonio González in charge of it.
Organizing committees always have
a few nerve-wracking months between
issuing the call for papers and receiving
enough submissions to make the event
worth holding. If you’ve ever hosted a
social event, the feeling you have just
after sending out the invitations is the
same: “What if nobody comes?”
Input from the Micro steering committee and good judgment and connections from Kevin Skadron rounded out
the rest of the conference committee.
FINANCES
Among the most important aspects of any conference are its finances. At first blush, this would not seem to be any great mystery: Multiply the number of conference attendees by the registration fee, add in whatever corporate donations have been successfully wheedled, and subtract the expenses. In fact, that is the right algorithm, but, unfortunately, the data needed to execute it is not available at the required times.
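(In code form, the arithmetic really is that simple; the hard part is that every input is a moving target. A minimal sketch, with every figure below a hypothetical placeholder rather than Micro-37’s actual numbers:)

# Hedged sketch of the conference budget arithmetic described above.
# All figures are placeholders; in practice none of these inputs is known
# accurately until long after commitments have been made.

def conference_balance(attendees, registration_fee, donations, expenses):
    """Return the projected surplus (positive) or shortfall (negative)."""
    income = attendees * registration_fee + donations
    return income - expenses

# Guess early, then revise as registrations and sponsor checks actually arrive.
print(conference_balance(attendees=250, registration_fee=400,
                         donations=15_000, expenses=110_000))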
Take-away gifts
It is customary to give each conference attendee a take-away gift. This is often a T-shirt, backpack, laser pointer, flashlight, coffee mug, and so on, marked with the conference name and adorned with the logos of the corporate sponsors.
Our local arrangements chair, Srikanth Srinivasan, devoted many hours to investigating the various options and proposed that we give each attendee two handouts: a polo shirt and a collapsible umbrella.
Was he finished with this job? Definitely not. How many shirts? What sizes? Embroidered how and where? What color? Too many shirts, and it would cost the conference a lot of money; too few, and late registrants would pay the full registration fee and receive no shirt in return.
I was not particularly worried about having too many shirts. They were not terribly expensive, and I figured we could donate any extras to help local homeless relief efforts. Fortunately, though, someone pointed out that it was unclear how having Portland’s homeless population sporting shirts that said “Micro-37” would be interpreted by the public or by prospective future Micro attendees.

Corporate support
One piece of helpful information I received at the beginning of our conference preparations was that the difference between a successful conference and an outstanding one depends on the degree of direct corporate support available.
I promised to cajole my network of corporate friends in our pursuit of an outstanding conference. In the end, Intel, Cadence, IBM and Texas Instruments came through for us.
Special thanks go to Intel for their outstanding support of this and many other conferences, both directly and indirectly. Besides contributing many conference committee members (and conference attendees), Intel also made substantial cash donations. Intel even helped us avoid the odious prospect of renting four $1,000 VGA projectors at $300 per day by lending the projectors as well.

ARTS AND CRAFTS
As we arrived at the conference for the first day of workshops, we suddenly realized that we didn’t have any signs saying “Micro-37.” Ellen and I took on the challenge of rectifying this omission.
I first searched the Internet using keywords like signs and banners and found that I could either buy something from Microsoft or I could find a font I liked and just print the letters out one by one and paste them to something.
For the next few hours, we printed the letters, cut them out with a razor blade, and then glued them to a standard foam board from an art store.
It was a surreal experience, staying up well past midnight, gluing paper letters to a foam core board so the signs could be used at one of the highest tech conferences in the world the next day.
DOING THE MATH
Ellen also worked out how many
meals the conference would provide,
what types they would be (buffet versus plated), what food would be available at each, how many types of diets
the food would need to cover, how
many breaks would be catered, and
so on.
Guessing how many people would
be there for lunch at the tutorials was
more of a challenge than we expected.
It’s a complex equation of how many
people have actually registered, how
many you think might show up at the
door, how many might go out to lunch
on their own, and how many show up
early for the conference and figure
they’ll just drop in for lunch. Then we
realized that we forgot to include the
organizers in the totals.
Luckily the hotel staff was very experienced, and they always had more
food at the ready. Watch out for what
type of lunch you order, though.
Choosing a particular type of lunch
can cause the hotel staff to assume a
sit-down plated experience, and you
could find them rearranging the main
conference room from classroom seating to round-table restaurant seating
just before the morning’s technical session is about to begin. (I wish that were
only hypothetical.)
Getting charged for everything
The hotel was very accommodating but, as with most hotels, they charge for each thing they do. Need an extension cord for the day? Five dollars, please. So when I noticed that a woman who was not associated with our conference was busy stuffing bottles of Coke into her bag, I marched over and looked interested in her activities. Without saying a word, she reversed direction, put the bottles back, and casually strolled away.

Managing markdowns
During the last days of the conference, we realized we had surplus umbrellas and polo shirts and decided to try to “move the merchandise” by selling them for reduced prices.
The people who attend Micro are not stupid. One of them said, “First, the shirts were $20. Now they’re $10. I’m thinking that tomorrow they’ll be $5, and you’ll be giving them away at the end. I’m going to wait until then.”
Who knew that a technical conference really needs someone with retail experience to plan out when and how to mark down the leftover merchandise?
One woman who was not associated with Micro came up to the registration table and inquired about the leftover umbrellas, for which we had lowered the price to $5. She was very enthused about the umbrellas until she saw the corporate logos printed on them. We don’t know which of our sponsors’ logos had this effect on her, but when she opened up one of them, it was clear that the geek factor exceeded her threshold of tolerance, and she declined to make a purchase.
OUR EXCURSION
There are many nice things to see
and do within a reasonable bus drive
of Portland. We have the beautiful
Oregon coast, Mt. St. Helens, the
Columbia Gorge with its picturesque
waterfalls, Mt. Hood with its hiking
trails and skiing, and the Columbia
and Willamette rivers on which boat
tours are available. The only constraints to planning an excursion for
the conference attendees were that in
December in Portland, it gets dark
early and it rains a lot.
After considering many different
possibilities, Sri and Ellen eventually
settled on having a dinner at the
Evergreen Aviation Museum in
McMinnville, Oregon. The museum is
the home of Howard Hughes’s gigantic wooden airplane, “The Spruce
Goose,” as well as an SR-71 spy plane
and numerous helicopters and fixed-wing planes of every size and shape.
After a great deal of research and discussion, we found a caterer and
selected a menu that would be appropriate for a wide range of diners with
different diet requirements, including
vegetarian, vegan, and kosher.
Then we needed to decide on a program for the evening.
Planning the program
The SR-71 is one of my favorite airplanes, and its chief designer, Kelly
Johnson, is one of my engineering
heroes. When we visited the museum
to make arrangements for the excursion, I smiled when I saw how they had
arranged the SR-71: It was nestled
under the right wing of the Spruce
Goose. This was ironic because, as
Johnson states in his autobiography,
he and Hughes were contemporaries.
However, Johnson had a low opinion
of Hughes’s flying skills, and, on at
least one occasion, he attempted to
wrest the controls away from Hughes,
an action Hughes did not appreciate.
Since the excursion was going to be
at the museum, I had the bright idea
that we should hire an SR-71 pilot to
speak to our group about his experiences with that plane.
Discussions with a prospective pilot/
speaker, selected from among several
who advertise their services on the
Internet, made two things apparent:
• Short of a sponsor who would be
willing to directly underwrite a
few thousand more dollars for
such a speaker, we couldn’t afford
this.
• Being in the midst of a war that is
quite unpopular overseas may not
be an appropriate time to subject
international conference attendees
to a lecture by a representative of
the American military.
I reluctantly dropped this idea.
A treasure hunt challenge
Prior to dinner, the museum had
some clue sheets available for attendees
who like the challenge of participating
in a treasure hunt. Ellen bought prizes
at the museum’s gift shop to give to the
first three finishers with the right
answers.
We were a little concerned that more
than three participants might have the
correct answers. But when Ellen
graded the clue sheets, it was clear to
whom the prizes would go: Only three attendees had signed names on their sheets.
I still think Micro attendees are inordinately intelligent, but their treasure-hunt results do not constitute evidence
for this contention.
“Free” entertainment
The after-dinner entertainment that
we finally decided on was free—me,
playing the guitar. If this seems
unlikely, check the Micro-37 Web site
(www.microarch.org/micro37) for
photographic evidence.
Although it was a bit daunting to
play with various aircraft looming over
my shoulder, and the acoustics of the
gigantic hall were such that I really
couldn’t hear myself all that well, the
gig went off just fine. At least it did
until someone decided to play a practical joke in the middle of my solo ver-
sion of Steely Dan’s “Josie.” When
they opened my guitar case and started
throwing coins into it, I found that
“Josie” is even more difficult to play
when I’m laughing.
After the conference was over, the
IEEE informed us that if any music is
played at an IEEE event, either live or
prerecorded, royalties are due to both
BMI and ASCAP, which together hold
nearly all of the music copyrights in
existence. It isn’t a huge amount of
money, and I don’t begrudge the musicians and copyright holders their due,
but it still seems weird that my playing
for free actually cost the conference
some money.
Overall, Micro-37 went very smoothly and the organizing committee got many thanks and congratulations. We learned a lot: Remember the organizers when estimating lunch attendees for the workshops and tutorials. Keep an eye on your Coke bottles. Get Kevin Skadron as your co-chair and Sri for local arrangements if possible. Make sure Ellen is available to help with organization. Don’t sing, unless you can prove you wrote the music yourself. And one more thing: Lock your guitar case. ■

Bob Colwell was Intel’s chief IA32 architect through the Pentium II, III, and 4 microprocessors. He is now an independent consultant. Contact him at [email protected].

SCHOLARSHIP MONEY FOR STUDENT LEADERS
Student members active in IEEE Computer Society chapters are eligible for the Richard E. Merwin Student Scholarship. Up to ten $4,000 scholarships are available. Application deadline: 31 May.
Investing in Students
www.computer.org/students/

INTERNATIONAL CONFERENCES, SAN DIEGO, USA, JUNE 2005
Call for papers now underway:
• International Conference on Computer Science and its Applications (June 28-30)
• International Advanced Database Conference (June 28-30)
• International Conference on Data Management for Real-Life Problems in Biomedicine (June 27-30)
Please visit www.conferencehome.com for details.
32 & 16 YEARS AGO
FEBRUARY 1973
SOCIAL EFFECTS (p. 8). “The National Science Foundation
(NSF) has announced formation of a Computer Impact on
Society section to support research designed to help better
understand the impact computers have on our way of life.
“The new section, within the Office of Computing
Activities, is headed by Dr. Peter G. Lykos and includes …
[a] Computer Impact on Organizations program … and a
Computer Impact on the Individual program …”
COMPUTER COMMUNICATIONS (p. 13). “The challenge that
confronts us in the 1970’s is not to design and program a
computer system to produce reports that enable individuals
to perform functions, but to design and program a computer-communications network which is characterized by
computer-controlled performance of the functions. The
computer-communications network of the 1970’s should
communicate, monitor, compute, control, and maintain cognizance of all its elements while collecting data on-line for
off-line processing.”
INDUSTRY TREND (p. 19). “The computer industry trend is
definitely towards more data communications. New generation computer systems will incorporate the philosophy of
real-time remote introduction of information and commands,
and the associated responses to the remote terminals.”
COMMUNICATIONS PROTOCOL (p. 31). “Bit-oriented control
procedures offer a number of advantages over character-oriented procedures and can provide a compatible set of procedures for a wide range of communication system applications.
“The Primary/Primary class of procedures is particularly
suited to computer-to-computer applications using both
point-to-point link configurations and network configurations. These procedures are relatively straightforward, yet
provide all of the functionality required for initialization,
data transfer and error recovery. The concept of separate
Primary and Secondary functions in each station is used as
a basis for describing the control procedures. Although system implementation is not necessarily constrained to follow this Primary/Secondary organization, it does provide a
good logical basis for system design.”
HONEYWELL (p. 35). “Two computers in the small-scale
price/performance class have been introduced by Honeywell
Inc., marking the second significant expansion of its Series
2000 computer line since it was announced early in 1972.”
“The new models will compete on a price-performance
basis with computers renting from $2,000 per month and
up, such as the IBM System 3, Burroughs 1700, NCR 50
and Univac 9200, Honeywell said.”
“The Model 2020 central processor is an entry-level computer with a basic main memory of 24,576 characters that
can be increased in six increments to a maximum of 65,536
characters. Cycle time is 2.75 microseconds per character.
… Three read/write channels are standard; a fourth is
optional on larger 2020 models.”
“The Model 2030 central processor is a small-scale computer that provides multiprogramming capabilities usually
found on larger, more expensive computer systems. The
processor’s main memory of 40,960 characters can be
expanded in five increments to a maximum of 98,304 characters. Cycle time is 2.0 microseconds per character. Six
read/write channels are standard.”
HITACHI COMPUTER SYSTEM (p. 36). “Hitachi, Ltd. has completed and delivered two unusually large scale computer
systems to Tokyo University. The Model 8800 Systems execute 5 million instructions per second and have a memory
capacity of 8 megabytes. The systems include the following
capabilities:
1. Shared memory multiple processors. …
2. Virtual memory, both segmentation and paging.
3. High speed cache memory of 32 kilobytes.”
FLEXIBLE DISC FILE (pp. 38-39). “Memorex Corporation
has announced the 651 Flexible Disc File for the OEM market. The 651 has a faster access time, increased capacity, a
write protect feature and the highest data transfer rate of
any competitively priced flexible disc file. …”
“An enhanced version of the Memorex 650, the 651 features track-to-track access time of 10 milliseconds with 10
ms settle time. This faster positioning time, along with the
80 ms average latency time, provides improved data
throughput.
“Data can be formatted in either sector or index mode
starting with 132 byte records with 32 records per track (64
tracks) up to 1 record of 4,880 bytes/track making a maximum capacity of 312,500 bytes (2,500,000 bits).”
BRAILLE (p. 39). “A computer at the American Printing
House for the Blind already has translated 1,000 books and
magazines from English into Braille for the nearly half-million sightless persons in the United States.
“Donated by the International Business Machines
Corporation, the computer has produced more than
398,000 Braille plates from which more than 30 million
pages of Braille have been printed.
“Finis Davis, Vice President and General Manager of the
114-year-old, non-profit organization, said, ‘We not only are
producing literature in Braille with the help of the IBM 7040,
but also are conducting intensive research into using it to produce … musical scores and math formulae in Braille, expanding opportunities for sightless persons in these two fields.’”
URBAN GAME (p. 40). “A new Urban Game to be developed
at Carnegie-Mellon University will aim to show future
urban managers how a city really works, instead of how it
ought to work.
“Mr. Jacob Belkin, co-principal investigator and project
manager of the Game, says, ‘The effects of new policies—
for example, urban renewal—have frequently been different from those intended because planners and managers
have had to rely on their perception of how an urban system ought to behave, without really understanding how the
system does behave.’”
FEBRUARY 1989
GRAPHICAL USER INTERFACES (p. 8). “Graphical user interfaces for workstation applications are inherently difficult
to build without abstractions that simplify the implementation process. To help programmers create such interfaces,
we considered the following questions: What sort of interfaces should be supported? What constitutes a good set of
programming abstractions for building such interfaces?
How does a programmer build an interface given these
abstractions? Practical experience has guided our efforts to
develop user interface tools that address these questions.”
OPTIMIZATION AND ARCHITECTURE (p. 49). “Novel methods
of code optimization must influence computer architecture
design. From a software viewpoint, an architectural design
approach needs to identify the improvement in performance
due to any new feature. For example, if a software technique reduces the number of loads and stores in an average
program, it alleviates the need for super-fast memory systems in hardware. In a more general vein, if code optimizers work well on simple instructions and poorly on complex
instructions, it becomes hard to justify complex instructions
in hardware.”
ADA COMPILER TECHNOLOGY (p. 52). “Ada is becoming a
language of choice for large software projects, but compilers and other language tools may not have kept pace. This
article discusses the key technical issues involved in producing high-quality Ada compilers and related support
tools. It also addresses some important problems that compiler designers face—for example, determining which deficiencies of existing Ada systems can be attributed to the
language and which are simply hard-to-implement features
or unresolved issues in Ada compiler technology.”
COMPUTING AS A DISCIPLINE (p. 63). “As ACM enters its
42nd year, an old debate continues. Is computer science a science? An engineering discipline? Or merely a technology, an
inventor or purveyor of computing commodities? What is
the intellectual substance of the discipline? Is it lasting or will
it fade within a generation? Do core curricula in computer
science and engineering accurately reflect the field? How can
theory and lab work be integrated in a computing curriculum? Do core curricula foster competence in computing?”
A DISCIPLINE MATURES (p. 72). “At Snowbird 88, the
impression emerged that the discipline of computing was
maturing and coming into balance. Many of the problems
that plagued the discipline in the late 1970s and 1980s had
been solved or alleviated. It was time for the discipline to
cease its largely inward-looking activities and branch outward. As an enabling technology for other disciplines, computing should take a more active role in articulating how
its own needs, concerns, and basic research impact other
disciplines and in collaborating with other disciplines in the
evolution of computing applications.”
STANDARDS (p. 78). “Movement toward worldwide information networking and the vast capabilities it will provide
are being hampered by a lack of agreement on standards
that permit equipment and software to interact, according
to Irwin Dorros of Bellcore.
“‘Without universal internetworking standards, there is
no information age,’” the article quotes Dorros as saying.
“‘Interface standards are essential to control what could be
chaos in a world of many suppliers, many users, many functions, and many technologies.’”
DEC PERSONAL COMPUTERS (p. 96). “Digital Equipment
entered the PC arena with the announcement of a new family of personal computers, the DECstation 210, 316, and
320. The new IBM-compatible computers run MS-DOS
Version 3.3 applications. They come in two configurations:
as a basic system with a 3 1/2-inch floppy drive and VGA
graphics adapter, or as a system with VGA graphics adapter,
SCSI interface, and hard disk drive.”
KEYSTROKE SECURITY (p. 97). “A software package called
Electronic Signature Lock, produced by a company of the
same name, assigns users electronic signatures based on their
unique keystroke dynamics and typing patterns. It reportedly denies system access to unauthorized users even when
they have the proper passwords.
“The software analyzes the total typing pattern to identify users. It uses a statistical filtering routine to analyze
the patterns and determine the probability of proper identity.”
MEMORY MANAGEMENT (p. 98). “The Advanced Product
Division of Fujitsu Microelectronics offers VLSI implementation of a 25-MHz memory management unit and
floating-point controller for the company’s S-25 (MB86901)
Sparc RISC processor.
“The MMU features 4 Gbytes of virtual address space and
64 Gbytes of physical address space. Up to three levels of
page tables support a 4-Kbyte page size.”
Editor: Neville Holmes; [email protected].
INDUSTRY TRENDS
Developments
Advance Web
Conferencing
David Geer
Web conferencing has become an increasingly desirable option for businesses that don’t want
to spend the time and
money it takes to fly employees
around the world for meetings that
don’t require face-to-face contact.
“The real key is reducing the time it
takes to make a decision,” explained
Gerry Kaufhold, principal analyst with
In-Stat/MDR, a market research firm.
As Figure 1 shows, global revenue
from Web conferencing services increased 70 percent—from $450 million to $765 million—between 2002
and 2004 and will reach $1.5 billion
by 2007, according to Kaufhold.
Traditionally, Web conferencing has
been offered as a service hosted by a
third-party provider. However, three
recent developments are changing the
face of the technology.
Manufacturers such as eDial, Juniper Networks, Nortel Networks, and
Polycom have begun selling Web conferencing server-based software systems that companies can run themselves on their internal network, dedicated servers, or network appliances.
This approach enhances security, control over collaborative operations, and
integration with existing communications infrastructures.
Vendors such as Cisco Systems, with
its MeetingPlace application, are now
combining service- and server-based
conferencing—with some functions
handled by in-house servers and others by service providers—into hybrid approaches that try to provide the best of both worlds.
Meanwhile, the Internet Engineering Task Force (IETF) is working on Web conferencing protocols that promise greater flexibility, interoperability, and accessibility.

SERVICE-BASED WEB CONFERENCING
Web conferencing enables audio, video, document, and image exchanges among multiple participants over the Internet via technologies such as Webcams; Internet telephony; desktop sharing, which lets users remotely view and control one another’s desktops; whiteboarding; text messaging; and chat.
As with instant messaging (IM), with Web conferencing applications, users can determine who is online and available for a meeting via presence technologies.
Web conferencing provides security—via encryption, digital certificates, and authentication—using Secure Sockets Layer technology; proprietary SSL-based approaches; or transport-layer security, a successor to SSL.
Typically, providers such as MeetingBridge and WebEx have offered Web conferencing as a service. With these services, users pay providers for each conference and then call up meetings via a Web browser. The providers supply the bandwidth, interface, and tools such as those for coordinating schedules, sending participation instructions, and checking browser settings for compatibility. They also offer technical support to participants.

Advantages
Participating in a service-based conference costs considerably less than
buying a Web conferencing server.
Third-party services also eliminate the
need for companies to have their own
personnel to run, maintain, and fix the
conferencing infrastructure.
These factors are ideal for companies
that want to test the technology before
buying their own server or that have
small budgets and little or no IT staff.
Because the services are easily scalable, users can buy as much conferencing as they need at any time,
explained Andy Nilssen, senior analyst
with Wainhouse Research, a market
analysis firm.
In addition, services, with their
clearly defined fees, are good for businesses that pass all or some of the conferencing costs on to clients or partners.
Disadvantages
In some cases, frequent users can
spend more paying for conferences one
at a time than buying their own equipment. In addition, Web conferencing
services pose a potential security risk
because they bypass the normal security policies that a company’s IT personnel establish.
Web conferencing services bypass
policies because they require application sharing between participants,
which enables system access from the
outside. Also, conferencing-system
bugs can create security vulnerabilities.
Published by the IEEE Computer Society
TEAM LinG - Live, Informative, Non-cost and Genuine!
SERVER-BASED WEB
CONFERENCING
Avaya’s Web Conferencing Server,
Juniper’s Secure Meeting, and Polycom’s WebOffice, shown in Figure 2,
are examples of server-based Web conferencing systems.
In server-based approaches, companies operate their own Web conferencing systems. Their conferencing servers
create and maintain a tightly coupled,
real-time communications infrastructure under their control.
Tightly coupled systems have a central point of control and authorization
to enforce conference rules about matters such as who has access to applications, explained Alan Johnston, cochair
of the IETF’s Centralized Conferencing
Working Group and an MCI distinguished technical member.
Client software communicates with
the conferencing server in various ways.
For example, said Bernadette Golas,
Avaya’s director of product marketing
for conferencing solutions, "Users connect to Avaya Web Conferencing
Servers through a Java client that they
download through their Web browser.”
As with service-based systems, servers
automatically check each potential conference participant’s availability, identify those who can take part, and record
the conference for later use by those
who can’t attend.
The servers enable a meeting using
videoconferencing, Internet telephony,
IM, document collaboration, and
other communications tools available
to participants via their browser-based
interface.
Figure 2. With Polycom’s WebOffice server-based conferencing system, a company
running its own equipment can start a session on an internal LAN or WAN with participants
from inside the organization or over the Internet with participants from outside the
enterprise. All participants work over Web browsers. The presenter who is leading the
conference is sharing two pages of a document with other participants.
Advantages
The server-based approach lets companies, not service providers, decide
whether and when to upgrade conferencing systems. Companies also can
integrate their Web conferencing
servers with the rest of their communications infrastructure, which offers
multiple benefits.
For example, users could work with
their company's single-sign-on capabilities to log in once for multiple network activities, including Web conferencing. Integration with a company's Web directory lets users easily find and work with in-house participants.
Participants can use existing network tools, and businesses can avoid
running and managing multiple infrastructures. Integrating Web conferencing into familiar infrastructures
also shortens the IT personnel’s learning curve.
Server-based Web conferencing is
more secure, in part because companies control the entire system. In addition, operations based on a company’s
single system result in fewer potential
vulnerabilities than operations involving both service providers’ and users’
systems.
With server-based Web conferencing, companies are aware of security
breaches in their own systems more
quickly than if third-party providers
operate the conferencing application.
Disadvantages
Companies that run their own conferencing servers must use their own
bandwidth for group meetings, which
can be a problem for small businesses.
Server-based Web conferencing entails software licensing fees and large
equipment costs, including servers; load
balancers, which determine the switches
to which the system should connect new
conference participants to balance the
overall switch load; multipoint control
units to manage multiuser conferences;
and audio bridges for those joining a
session only by phone.
Also, staff members must learn how
to administer and manage server-based
systems.
HYBRID SYSTEMS
Some companies are providing
hybrid Web conferencing systems designed to let users work with the
best of both service- and server-based
approaches.
“A carrier might remotely manage a
company’s onsite server,” said eDial
CEO Scott Petrack. This could enhance
management and keep companies from
having to oversee the system.
For example, along with providing
service-based conferencing, Raindance
Communications will manage servers
located in companies’ offices, according
to Wainhouse’s Nilssen.
Some equipment vendors are providing services and servers. For example,
Cisco can manage some of a company’s
conferencing-related servers by moving
them outside security applications,
including the firewall. The customer
manages the servers that are behind its
security applications.
With this approach, people within a
company stay on the secure corporate
network, and people outside the company, who present more of a security
risk, don’t get inside the security applications, explained Troy Trenchard,
Cisco’s director of product management for rich-media communications.
Companies with their own conferencing servers can benefit from working with service providers, who
frequently can better tie in participants
outside the enterprise and also can
offer enhanced reliability and redundancy for some conferencing services.
Along with their advantages, hybrid
services can entail both server-based
Web conferencing’s initial deployment
costs and service-based conferencing’s
ongoing service charges.
STANDARDS
Traditionally, parties to a Web conference have communicated with one
another via proprietary application
program interfaces. To conduct Web
conferences, users on different platforms that don’t share APIs must spend
considerable time and money making
their systems interact.
Providing more interoperability,
which will make conferencing available to larger groups of people regardless of platform, will require standards.
IETF Web conferencing standards
are based on its Session Initiation
Protocol for initiating and managing
interactive communication sessions
involving multimedia elements. SIP, a software-implemented protocol, handles session setup, call routing, authentication, call-parameter negotiation, and call transfer and termination.
The IETF has formed the Centralized Conferencing (XCON) Working
Group (www.ietf.org/html.charters/
xcon-charter.html)—which includes
companies such as Cisco, Lucent
Technologies, and MCI—to develop a
standardized suite of protocols for
tightly coupled server- or service-based
or hybrid multimedia conferences that
require strong security and authorization. The suite includes the IETF’s
SIMPLE (SIP for Instant Messaging
and Presence Leveraging Extensions)
and IP telephony standards.
XCON will replace many of the
hard-to-use proprietary APIs that currently support multivendor interoperability, said the IETF’s Johnston.
Instead, the standard would create a
few interoperable APIs. In addition to
enhancing interoperability, XCON
would eliminate the time and money
companies spend to develop proprietary APIs.
XCON systems have a standardized
client and conferencing server. The
server enforces and manipulates
media-usage policies, including media-composition rules that govern the
union of different media such as voice,
video, and IM during a session,
according to Johnston.
The IETF’s Media Policy Control
Protocol defines the controls available
to participants and the conference-server administrator for manipulating
the media policy applicable to a specific session.
The media server includes mixers
that combine and properly mix video,
audio, and other streams and distribute them to participants. This is done
in either high-capacity digital signal
processors or lower-capacity, lower-cost software.
XCON also works with a conference
policy, a set of rules that control various aspects of a session, including the
access that conference administrators,
participants, and applications have to
various media and the ways that they
can communicate with one another.
For example, for a particular session,
XCON can include a list of users
authorized to participate in a conference. A node can also use XCON protocols to query a conference server to
learn what media types, such as
Internet telephony or video, it supports.
Conference administrators or participants could use the IETF’s Conference Policy Control Protocol to manipulate the conference policy.
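To make the policy idea concrete, the following minimal Python sketch models the kinds of checks such a server might perform. The class, field, and user names are illustrative assumptions, not the actual XCON or Conference Policy Control Protocol encoding.

from dataclasses import dataclass, field

@dataclass
class ConferencePolicy:
    """Illustrative conference policy: who may join and which media are allowed."""
    authorized_users: set = field(default_factory=set)
    allowed_media: set = field(default_factory=lambda: {"audio"})

    def may_join(self, user: str) -> bool:
        # Only users on the authorization list may enter the conference.
        return user in self.authorized_users

    def supports(self, media_type: str) -> bool:
        # A node could ask the server which media types a conference supports.
        return media_type in self.allowed_media

# Example: a session open to three named participants using voice, video, and IM.
policy = ConferencePolicy(
    authorized_users={"alice", "bob", "carol"},
    allowed_media={"audio", "video", "im"},
)
print(policy.may_join("alice"))    # True
print(policy.supports("video"))    # True
print(policy.may_join("mallory"))  # False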
The IETF expects to finish the XCON protocols late this year or early next year.
Vendors have been working
together to offer wider conferencing interoperability. They will
probably adopt XCON within the
next 12 to 18 months, according to the
IETF’s Johnston.
In the future, Wainhouse’s Nilssen
predicted, companies will embed Web
conferencing technology in their workflow applications, such as those used
with sales or customer-relationship
management. This would let users easily initiate conferences if desirable in
conjunction with sales or CRM activities.
“I’m bullish,” said Nilssen. “I’m very
optimistic about Web conferencing as a
technology. I think it will become
embedded in the base fabric of how we
communicate and conduct business.”
However, the way people use Web conferencing could affect its marketplace performance.
Said eDial’s Petrack, “If you ask people in the enterprise to identify the least
productive thing they do, most will
respond ‘going to meetings.’ Any system or service that is centered on meetings, rather than collaborative work,
is always going to be a frustrating
niche product.” ■
David Geer is a freelance technology
writer based in Ashtabula, Ohio. Contact him at [email protected].
Editor: Lee Garber, Computer,
[email protected]
TECHNOLOGY NEWS
Telecom Carriers
Actively Pursue
Passive Optical
Networks
George Lawton
As optical networking has
become more affordable, it
has moved from the network
backbone to wide and metropolitan area networks and
then to the local loop. To take advantage of this, telecommunications
providers have begun experimenting
with various fiber-based technologies
to find ones that offer high performance at low cost. One of the leading
candidates has been touted for years
as the wave of the future: passive optical networks.
PONs—considered “passive” because the network infrastructure
between the carrier backbone and the
customer includes no active electronic
elements—enable the deployment of
relatively inexpensive, high-bandwidth,
point-to-multipoint, voice, video, and
data networks either to or near customer premises. These networks could
be used for Internet access, metropolitan area networks, or corporate LANs.
A PON could serve as a first-mile
technology for connections between
carriers and users. In the short run,
proponents foresee PONs competing
favorably with cable-modem and DSL-based broadband networks, which
also carry voice, video, and data.
The three main PON flavors are
based on asynchronous transfer mode
(ATM), Ethernet, and the Gigabit PON
Encapsulation Method (GEM).
Japan has been leading the way with
significant Ethernet-based deployments, while US carriers, in their early
PON adoption, are focused on ATM.
US companies such as BellSouth,
Southwestern Bell, and Verizon; Asian
providers such as Japan’s NTT; and
European carriers such as British
Telecom are deploying PONs. However, the technology faces significant
challenges to widespread access, such
as the cost of deploying the networks
and uncertainty about how the market
will evolve.
DRIVING DEMAND FOR
PON TECHNOLOGY
British Telecom made the initial PON
trial deployments in the late 1980s.
Raynet began selling a PON-based system in 1989, but the technology lost
traction as carriers moved to higher-performance, active, single-mode fiber
approaches, according to Paul
Shumate, a PON pioneer and now
executive director of the IEEE Lasers
and Electro-Optics Society (LEOS).
Fiber-optic networks have since
become easier to install. In addition,
optics’ cost has dropped, making the
technology, formerly practical only for
telecommunications carriers’ backbones, more feasible for deployment to
consumers and companies, said Ed
Szurkowski, director of Lucent Technologies’ Optical Data Networks
Research Department.
For these price-sensitive markets,
lower-cost PONs make more sense than
traditional active optical networks. And
PONs promise much more bandwidth
than DSL or cable-modem technology.
The higher bandwidth would be particularly useful to established telecommunications providers, who want the
new revenue that fast video and data
services could generate. These providers are losing voice-related business
to cell phones and Internet telephony,
said Gary Lee, chair of the PON
Forum and president and CEO of
FlexLight Networks.
However, providers want to offer
their customers attractively priced services, and PONs, by eliminating active
electronic components, are less expensive than other optical technologies.
PONs are also more reliable because
they don’t depend on intervening electronics that could fail, and they are easier to upgrade because there are no
active electronics to replace.
Additionally, regulatory factors are
making PONs more attractive investments for telecommunications carriers.
The US Federal Communications
Commission recently eliminated
requirements that let smaller, newer,
competing carriers inexpensively use
the optical networks that established
regional companies build.
This has given the regional companies
more incentive to invest in fiber, explained Denise Koenig, a spokesperson
for telecommunications carrier SBC.
International regulations are going
even further, noted Craig Easley, chair
of the Ethernet in the First Mile
Alliance (EFMA), which supports and
helps market Ethernet PONs and other
Ethernet-based first-mile technologies,
including those using active-optical- and copper-based approaches. In
Japan and some parts of Europe, he
explained, governments offer carriers
tax credits covering the cost of building
new fiber networks.
HOW PONS WORK
The key to PONs is their ability to
transmit signals over fiber without
using active electronic components.
Active optical networks contain
electronic components such as regenerators—which convert signals from
optical to electronic and then back
to optical—and routers. PONs, on the
other hand, use only optical components.
A PON consists of an optical line termination (OLT), a central distribution transceiver at the service provider's local facilities that serves multiple optical network units (ONUs), which are transceivers at the user's home or office.
Both OLTs and ONUs convert binary
data streams into an optical format
that laser beams can carry.
When OLTs send a signal to an
ONU, the latter converts it into separate video, voice, and data streams.
This means the PON doesn’t need the
intelligence to do so, which reduces
network complexity.
OLTs and ONUs can contain components such as analog-digital processors, fiber-optic ports to connect to a
PON system, electronic ports to plug
into an Ethernet adapter for accessing
data from Ethernet systems, software
for managing the flow of traffic
between the PON system and other
networks, and chipsets with lasers for
transmitting optical signals.
Because multiple ONUs share a single OLT port and optical feeder, the
PON system needs a sophisticated time
division multiplexing (TDM) algorithm to separate multiple signals on
the same fiber so that they don’t interfere with one another. This eliminates
the traffic collisions that would cause
many applications to fail.
PONs use a splitter to divide a single
optical signal into several signals identical to the original. The system then
passes the signals down fibers to or
near individual user premises. Because
each node receives the entire original
signal, the system uses encryption to
enable a node to decode only the part
it is supposed to receive. Each node
uses its own laser to transmit signals
upstream.
Some PONs can be configured to
change bandwidth allocations to individual ONUs on the fly, depending on
users’ needs. This can enable carriers
to more fully utilize their systems and
also provide bigger users with more
bandwidth, a potentially revenue-generating service.
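The following minimal Python sketch illustrates the flavor of such dynamic allocation by dividing one upstream frame's time slots among ONUs in proportion to their reported demand. The slot count and the proportional rule are assumptions for illustration, not any vendor's actual dynamic bandwidth allocation algorithm.

def allocate_upstream_slots(demands, total_slots=100):
    """Split a frame's upstream time slots among ONUs in proportion to demand.

    demands: dict mapping ONU id -> requested bandwidth (arbitrary units).
    Returns: dict mapping ONU id -> number of slots granted this frame.
    """
    total_demand = sum(demands.values())
    if total_demand == 0:
        return {onu: 0 for onu in demands}
    # Proportional grant, rounded down; leftover slots go to the busiest ONUs.
    grants = {onu: (req * total_slots) // total_demand for onu, req in demands.items()}
    leftover = total_slots - sum(grants.values())
    for onu in sorted(demands, key=demands.get, reverse=True)[:leftover]:
        grants[onu] += 1
    return grants

# Example: three ONUs sharing one OLT port; the busiest ONU gets most of the frame.
print(allocate_upstream_slots({"onu1": 10, "onu2": 70, "onu3": 20}))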
PON TYPES
There are two ways to distinguish
PONs: how close they are deployed to
customers and the protocol they use to
carry data.
FTTN versus FTTP
Some telecommunications carriers,
such as BellSouth and SBC, are running
fiber to the neighborhood (FTTN), to
a box near homes or offices, and then
over existing high-speed copper wiring
to individual premises. Other providers,
such as Verizon, are establishing fiber
to the premises (FTTP), particularly for
new homes or offices that don’t already
have copper-wire connections.
Building fiber all the way to premises
is more expensive than building fiber
to a neighborhood. However, because
FTTP networks use only fiber, which
lasts longer than copper wiring, they
promise more long-term reliability and
savings, noted analyst Richard Mack
of KMI Research, an optical-networking consultancy.
PON protocols
The PON provides the physical layer
for transmitting the signals. To carry
the data, the systems use technologies
such as ATM or Ethernet, which offer
various advantages in data-transfer
efficiency, redundancy, quality of service, complexity, and cost.
ATM and broadband PONs. APONs
were the first ATM-based PONs and
provide voice and data, but not video,
services. They offer speeds of 155 to
622 Mbits per second downstream
and 155 Mbps upstream, and support
16 or 32 nodes and dynamic upstream
bandwidth allocation. APON deployment is phasing out as carriers move
to broadband PON systems, which
can handle video.
BPON technology is an APON
extension that moved the wavelength
for sending data and audio from 1,550
to 1,490 nanometers, thereby opening
up the 1,550-nm band for video,
explained Dave Cleary, vice president
of advanced technology at Optical
Solutions, a PON equipment manufacturer.
Transceiver improvements have provided the increased power and sensitivity required for handling video
without significant added cost.
Michael Howard, an analyst with
Infonetics Research, estimated that 75
percent of PON systems in the US,
where most telecommunications carriers already use ATM, are BPON based.
Ethernet PONs. EPONs support data
rates of 1.244 Gbps in each direction
shared among 16 to 256 nodes. The
EFMA’s Easley predicted next-generation EPON equipment will support
data rates up to 10 Gbps. EPONs are
based on the IEEE 802.3ah—Ethernet
in the First Mile—standard.
The main attraction of Ethernet in
PONs is that most companies already
use the technology in their corporate
networks, said Easley.
Howard estimated that 75 percent
of PON systems in Asia, where most
telecommunications carriers don't use ATM, are Ethernet based.
Gigabit PON. GPON supports data
rates of 622 Mbps to 2.488 Gbps
downstream and up to 2.488 Gbps
upstream, shared by 16 to 128 nodes.
The higher speeds enable carriers to
split the bandwidth to serve a greater
number of nodes than APONs or
BPONs.
According to Howard, industry
observers see GPON as the successor
to BPON because it is faster and more flexible, supporting ATM, Ethernet, and
TDM on the same network.
With GPON, all services are mapped
onto the PON using either ATM or
GEM, a variant of Sonet’s (synchronous
optical network’s) generic framing procedure. GEM lets a GPON link carry
both Ethernet and TDM traffic and also
adds QoS and recovery capabilities.
Carriers are deploying GPONs
throughout the world. Two major
GPON equipment manufacturers are
FlexLight and Optical Solutions.
Figure 1. KMI Research, a market analysis firm, projects that the number of US homes deploying fiber-to-the-premises networks and the number of homes with FTTP access will grow dramatically during the next few years (figures in thousands, 1999-2009).

PONS' CONS

The major challenge to PON adoption, particularly for carriers using copper-based networks, is the significant labor- and engineering-related deployment costs, according to the IEEE LEOS' Shumate.
He said these costs have become a more significant adoption barrier in recent years, as competition from broadband providers has made telecommunications carriers want to recover their deployment costs more quickly than in the past.
Better integration of components and other manufacturing advances in PON chipsets have reduced costs by making lasers and analog-digital processors less expensive per unit of bandwidth.
Vendors are working on integrating all PON-transceiver components into single chipsets, which would significantly reduce systems' size and cost, noted Armando Pereira, vice president of the Optical Business Unit at Centillium Communications, a provider of broadband network access products that has already developed an integrated PON chipset.
Carriers will face a challenge in
learning the most efficient and cost-effective way to phase out their old
copper networks and phase in PONs,
added Lucent’s Szurkowski.
Provisioning network services will
pose another hurdle for PONs, according to Pereira. He said asymmetric
DSL’s success has been largely due to
customers’ ability to install their own
modems when they subscribe for service. Provisioning fiber-based services,
on the other hand, involves different
tools and more sophisticated skills
than connecting copper wires.
Analysts are expecting strong
growth for fiber deployments.
Infonetics’ Howard said that as
the cost of fiber continues to drop and
gets close to that of copper, it will make
sense to switch to optics. “In 20 to 30
years,” he said, “everything will go to
fiber.”
KMI projects the number of US
homes with access to FTTP-based networks will climb from 2 million in
2004 to 23.5 million in 2009, as Figure 1 shows. Many of the new fiber
deployments will involve PONs.
Future wavelength PONs will be
based on dense wavelength division
multiplexing, a technology that puts
data from different sources together on
an optical fiber, with each signal carried on its own light wavelength. These
PONs, which are still a few years off,
would provide a separate wavelength
of light for connections to and from
each node.
Competition from present and
future network-access technologies
could pose the greatest challenge to
PONs’ long-term success, Shumate
said. If PONs can’t compete on price,
potential users will choose cable, wireless, and other broadband technologies
instead.
George Lawton is a freelance technology writer in Brisbane, California.
Contact him at [email protected].
Editor: Lee Garber, Computer,
[email protected]
NEWS BRIEFS
One-Handed Keyboard
Helps Mobile and
Disabled Workers
A company has developed a
keyboard designed to be used
with one hand. The FrogPad
thus could help mobile workers who don’t always have
both hands free or disabled people
who have the use of only one hand.
The 20-key FrogPad, which measures 3.5 × 5.5 inches, was invented
by translator Kenzo Tsubai, who
cofounded FrogPad Inc. and oversees
its R&D.
The FrogPad has 15 full-size keys—
F, A, R, W, P, O, E, H, T, D, U, I, N, S,
and Y—that the company claims cover
86 percent of the English language.
There are also space, number, symbol,
enter, and shift keys. FrogPad researchers also considered other languages’ character frequencies when
designing the keyboard.
In deciding on the keyboard’s layout,
explained CEO Linda Marroquin,
researchers incorporated aspects of
Dvorak theory, an alternative to the
principles behind the traditional
QWERTY keyboard (named after the
first six keys from the left in the top
row of letters), and took into account
the ways people naturally use their
hands.
For example, she noted, FrogPad
places 86 percent of the alphabet under
the middle three fingers. Vowels are
placed under the index finger.
A switch modifies the keys to give
them additional characters and functions, including those of the 10-number keypad used on calculators and
phones.
FrogPad uses a proprietary algorithm to understand commands produced by pressing multiple keys at the
same time or sequentially, Marroquin
noted. Also, each key can produce one
character on the downstroke and
another on the release. These techniques let FrogPad produce many symbols with just 15 character keys.
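FrogPad's algorithm is proprietary, but the general idea of chorded key decoding is easy to sketch. In the illustrative Python below, every mapping is invented for the example and does not reflect FrogPad's actual layout or logic.

# A generic chord-decoding illustration; the key-to-character table is invented
# for this example and does not reflect FrogPad's actual proprietary mapping.
CHORD_MAP = {
    frozenset(["F"]): "f",
    frozenset(["F", "SPACE"]): "q",      # a base key plus a modifier key
    frozenset(["A", "SHIFT"]): "A",
    frozenset(["O", "NUMBER"]): "5",     # a number layer sharing the letter keys
}

def decode_chord(keys_down):
    """Map the set of keys held down together to a single output character."""
    return CHORD_MAP.get(frozenset(keys_down))

print(decode_chord(["F"]))           # f
print(decode_chord(["F", "SPACE"]))  # q
print(decode_chord(["A", "SHIFT"]))  # A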
To help users keep track of their
work, each key’s color changes to
white, green, or yellow depending on
which of its available characters it represents at the time.
Marroquin claims the FrogPad’s
design requires less learning time than
QWERTY keyboards. She said studies
with college students indicate new
users require an average of 10 hours to
learn to input 40 words per minute,
compared to the 56 hours needed with
QWERTY keyboards.
FrogPad can be connected for use
with PCs, Macs, smart phones, or
PDAs, either wirelessly via Bluetooth
or via a Universal Serial Bus cable,
depending on the product. The keyboard is currently in beta testing for
the mobile devices.
Currently, Marroquin said, the keyboard is primarily used as an assistive
technology for people with disabilities.
However, the company says it is
actively pursuing other uses. ■
US Increases Quota for Controversial Visas
The US Congress has raised the
quota for the controversial H-1B
visas, which domestic companies
use to hire foreign technology workers, by 20,000. The additional visas
are only for foreign employees with US
graduate degrees.
The Congress granted this one-year
increase to the H-1B quota after
lobbying by technology companies.
Companies received all of the original
65,000 visas allotted for 2005 shortly
after 1 October 2004, the first day
of the current federal fiscal year. The
quota was filled so quickly because
companies could apply for the visas up
to six months in advance.
Chris Bentley, spokesperson for
the US Citizenship and Immigration
Services Bureau, part of the Department of Homeland Security, said his
agency will begin reviewing requests for
the 20,000 new visas on 8 March.
The US grants H-1B visas to skilled
foreign workers, many in the computer-technology field. Under the program, employers must pay foreign
workers the prevailing wage for their
jobs and show that qualified domestic
workers aren’t available.
To remain competitive in the global
marketplace, proponents say, US companies need H-1B visas to hire the most
skilled workers wherever they hail
from, especially when there aren’t
enough native-born workers and university graduates with math, science,
and engineering skills.
“Failure to do so would hamper our
long-term competitiveness and ultimately cost the country jobs,” argued
Jeff Lande, senior vice president of the
Information Technology Association
of America (ITAA), a trade group for
US IT companies. He said the H-1B
program doesn’t threaten US workers
because visa holders represent a very
small percentage of all domestic IT
employees.
One of the chief lobbying groups for
the recent H-1B quota increase was
Compete America, a coalition that
includes the ITAA and companies such
as Hewlett-Packard, Microsoft, Motorola, and Texas Instruments.
H-1B opponents say some companies favor the visas because they want
to use them to hire foreign workers for
lower long-term salaries than they
would have to pay equally qualified US
workers, said George McClure, a
retired engineer and former chair of the
IEEE-USA’s Career and Workforce
Policy Committee. The IEEE-USA,
which represents the career and public
policy interests of the IEEE’s US members, has been a leading H-1B critic.
McClure said US technology companies’ hiring of foreign workers also discourages US students from pursuing
engineering degrees.
During the technology-economy
boom of the 1990s, the computer industry lobbied for higher H-1B quotas.
Congress passed the American Competitiveness in the 21st Century Act,
which increased the visa cap to
195,000 for fiscal years 2001 through
2003. However, the quota returned to
65,000 for fiscal year 2004. Some businesses said they would cope with lower
H-1B caps by outsourcing work to
other countries.
Lande said the 20,000 additional
H-1B visas will help US companies and
thereby save engineering jobs in the
long run.
McClure, on the other hand, said,
“We object to the graduate-student
exemption. It’s an end run through the
back door to get around the quota.
The exemption disproportionately
harms American engineers." ■

Disposable Cell Phone Cover Turns into a Sunflower

A Bermuda-based company has developed a clip-on cell phone cover that will grow a sunflower when thrown out. This biodegradable cover could ease some of the burden that discarded cell phones and their accessories place on landfills, said Peter Morris, project manager for Pvaxx Research & Development.
Pvaxx has been working with Motorola to develop a nontoxic polyvinyl-alcohol plastic polymer that bacteria will consume and that thus biodegrades when discarded into soil. Polymers are typically resistant to bacteria and thus not biodegradable, noted Morris.
Researchers at the UK's University of Warwick figured out how to embed a sunflower seed in the cover. Morris explained about Pvaxx's covers, "When they decompose, they produce ammonia, then nitrates, then nitrites. This is plant food." The nitrites serve as fertilizer for the embedded sunflower seed.
According to Morris, the polymer material, which could be rigid or flexible depending on the application, could be used for various types of electronics and other products.
Pvaxx is licensing its technology to select manufacturers—including Motorola—which plan to begin releasing products using the new material this year.
Engineers Begin Addressing “Talking Spam”
Engineers with Qovia, an Internet
telephony management company,
have demonstrated the feasibility
of sending “talking spam” to thousands of phones and have developed
an application designed to eliminate
the threat. Experts say spam over
Internet telephony, also called SPIT,
could become a problem as Internet
telephony becomes more popular.
To see if the threat was possible,
explained Choon Shim, Qovia’s chief
technology officer and vice president
of engineering, he challenged a
research engineer to initiate a SPIT
attack using a laptop. “In two hours,”
Shim said, “he took down a call server
and filled up its voice mail server. This
is dangerous.”
The use of IP networks—which
spammers already know how to
exploit—and Internet telephony make
it easy and inexpensive to send voice
messages directly to many handsets via
unsolicited bulk calls or directly to
voice mailboxes via unsolicited voice
mails.
Most Internet telephony systems
work with software phones, which
spammers can hack into and use for
sending out SPIT. Spammers can also
develop SPIT transmission code that
appears on the network to be an
authorized phone. They use the code
to send large volumes of unsolicited
mail to Internet phones.
Qovia engineers wrote software
that, in simulations, sent up to 2,000
SPIT messages per minute. For the
demonstration, Shim said, a spitbot
was used to harvest destination
addresses. “It was easy because these
are normally kept in an unprotected
storage location in a call server,” he
explained. Engineers then wrote a
script using a simple Java Telephony
Application Programming Interface
(JTAPI) to generate calls.
The researchers subsequently developed a prototype application that
could keep Internet telephony users
safe from SPIT.
Blocking SPIT requires more than
reading message content or a subject
line, as is the case with programs that
stop unsolicited e-mail. Qovia’s application identifies SPIT in various ways.
For example, it recognizes when a
source is sending out many transmissions of a fixed length—indicating the
type of prerecorded messages usually
found in Internet telephony spam. It
also flags multiple messages sent out
more quickly than a human could
transmit them—indicating the machinegenerated calling frequently used with
SPIT.
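Both heuristics, an inhumanly fast calling rate and many transmissions of identical length, can be sketched in a few lines. The Python below is only an illustration; the thresholds are assumptions, not Qovia's actual parameters.

from collections import defaultdict

# Per-source call history: list of (timestamp_seconds, duration_seconds) tuples.
history = defaultdict(list)

def record_call(source, timestamp, duration):
    history[source].append((timestamp, duration))

def looks_like_spit(source, window=60, max_calls=30, min_distinct_durations=3):
    """Flag a source that calls inhumanly fast or keeps sending fixed-length messages.

    The thresholds are illustrative guesses: 30 calls in a minute is faster than
    a person could dial, and near-identical durations suggest a prerecorded message.
    """
    calls = history[source]
    if not calls:
        return False
    latest = max(t for t, _ in calls)
    recent = [(t, d) for t, d in calls if latest - t <= window]
    too_fast = len(recent) > max_calls
    fixed_length = (len(recent) >= 10 and
                    len({round(d) for _, d in recent}) < min_distinct_durations)
    return too_fast or fixed_length

# Example: a bot sending the same 8-second recording twice a second gets flagged.
for i in range(120):
    record_call("sip:spammer@example.net", timestamp=i * 0.5, duration=8.0)
print(looks_like_spit("sip:spammer@example.net"))  # True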
Using TAPI or JTAPI as an interface
to control calls, Qovia’s application
blocks or filters 95 percent of SPIT,
explained Shim, who declined to provide further details. Qovia plans to
incorporate the new technology into its
Internet telephony security software
later this year.
Victor Kouznetsov, senior vice president for mobile solutions at security
vendor McAfee, said experts haven’t
identified any real-world SPIT cases
yet.
This is primarily because there aren’t
enough Internet telephony users to
make SPIT attacks worthwhile, explained senior analyst Joe Laszlo with
Jupiter Research, a market-analysis
firm. However, Internet telephony
usage is increasing rapidly. By the end
of 2004, half a million US households
used the technology, according to
Laszlo. By 2009, Jupiter expects about
12 million US users.
Unlike some other cyberthreats,
such as viruses and worms, spam is driven by economic incentives, Shim said.
It is a cheap and effective marketing
tool, he explained. The industry should
be careful about SPIT because seven
years ago, he noted, some Internet and
e-mail experts thought e-mail spam
was a dubious threat that would never
materialize. ■
Editor: Lee Garber, Computer;
[email protected]
News Briefs written by Linda Dailey
Paulson, a freelance technology writer
based in Ventura, California. Contact
her at [email protected].
COMPUTING PRACTICES
Local Search: The Internet
Is the Yellow Pages
The proposed Internet-Derived Yellow Pages
aggregate, annotate, and certify Web content
for use in geographically oriented searching.
The IDYP provides a framework for
combining Internet-derived content with the
trust and fairness that characterize the printed
Yellow Pages, still the predominant source
of consumer-oriented local information.
Marty Himmelstein
Long Hill Consulting, LLC

Every day, millions of people use their
local newspapers, classified ad circulars,
Yellow Pages directories, regional magazines, and the Internet to find information pertaining to the activities of daily
life: nearby places, local merchants and services,
items for sale, and happenings about town.
The Internet is not meeting its potential for delivering this type of geographically oriented information. Sometimes the information that people seek is
on the Internet, but the tools for locating it are inadequate. In other cases, our industry has not developed the counterparts to replace traditional delivery
methods such as the printed Yellow Pages.
The trends that point to the rapid growth of geographically oriented search, known as local search,
are unmistakable. The most important predictor of
the intensity of an individual’s Internet usage is the
availability of a broadband connection. As of early
2004, 55 percent of all US adult Internet users had
access to such a connection.1 Further, the number
of adult Americans who had broadband Internet
connections at home increased 60 percent from the
same time in 2003, to 24 percent.
Broadband access makes the Internet a pervasive,
“always-on information appliance.”2 People with
high-speed access do more things on the Internet,
and they do them more frequently. The Internet has
always been used to support local activities, ranging from Yellow Pages searches, mapping, and vacation planning to researching products prior to purchasing them in a nearby store. Ubiquitous broadband access will serve to increase users' expectations
for better support for all types of location-based
computing.
On the search side, a market study of 5,000
online shoppers conducted by TKG and Bizrate.com found that 25 percent of the responders'
searches were for merchants “near my home or
work.”3 A recent Forrester Group study found that
65 percent of online shoppers researched a product
online before purchasing it offline.4
On the content side, at least 20 percent of Web
pages include one or more easily recognizable and
unambiguous geographic identifiers, such as a
postal address. Many of these Web pages have
locally relevant content; Web authors don’t put
addresses on pages haphazardly. This content is
already on the Internet despite the lack of an overarching mechanism for accessing it.
On the revenue side, US businesses spend $22 billion annually on local advertising, $14 billion of
which is for the Yellow Pages, but only a sliver is
for the Internet. Greg Sterling, managing editor for
the Kelsey Group, a research firm that provides
Yellow Pages metrics, puts the upper limit of advertisers worldwide who purchase paid search slots on
the Internet at 250,000, but few of the slots are for
local content.5 Contrast this with the more than 12
million small and medium businesses (SMBs) in the
US, and another 20 million or so in other developed countries. The predominant market for SMBs
is local: 60 percent of businesses in the US report
that 75 percent of their customers come from
within a 50-mile radius.5
LOCAL SEARCH TODAY AND TOMORROW
Local search today is discussed in the context of
paid listings—the advertisements that appear near
the algorithmically computed, or natural, results
search engines return in response to user queries.
However, paid listings and their variants are not
the bedrock upon which local search will be built.
To see why, we only need to examine the original
and still predominant local search tool, the printed
Yellow Pages.
The Yellow Pages have many shortcomings, but
they also have two virtues that are indispensable
for local search: They are both trustworthy and
inclusive—they contain at least minimal information on all businesses.
Paid listings do not provide the infrastructure
for replicating these core Yellow Pages virtues—in
fact, the value of paid listings is that they are the
opposite of fair. Rather, to reach the widest audience, paid listings require a stratum of YP-like data
beneath them, and the richer that stratum is the
better.
The challenge for the local search community,
then, is to facilitate the creation of this stratum of
data. It must create better ways to collect and disseminate geographically oriented information
about the activities of daily life. To meet this challenge, local search must supplant both the printed
Yellow Pages and the current generation of Internet
Yellow Pages (IYP)—a transplanted direct-mail
mailing list—as a means for gathering and presenting consumer-oriented business information.
In ways that are readily evident, the Internet can
furnish richer content than the Yellow Pages, but
it cannot yet duplicate its orderliness and fairness.
And fairness is the crucible by which local search
will be judged. If users don’t trust local search, it
won’t matter how much better than the Yellow
Pages it is. People won’t depend on it.
People use the Yellow Pages occasionally, but
they are involved in local activities continually. It is
therefore natural for local search to reflect the range
of activities in which people participate. For example, much of our local activity has a temporal component. The Internet has the potential to provide
access to transient local information more efficiently than older distribution mediums. A definition of local search that encompasses its temporal, commercial, and noncommercial aspects is that "local search tells me what is located within 100 miles from here and what is happening within 100 miles from here."
Broadly speaking, there are two sources of local content on the Internet. Offline-derived local content originates from other, usually older, sources, but is distributed on the Internet. The IYP is the primary source of offline-derived local content on the Internet.
Internet-derived local content is gathered directly
from the Internet. While many searches return
pages with local content, to date only a few systems have attempted to gather and present content
that is specifically relevant for local search.
Geosearch, a joint project between Vicinity and
Northern Light, was the first large-scale effort to
derive local content directly from the Internet.
Currently, content aggregators, such as the various city and vacation guides that abound on the
Internet, provide some of the best local content.
These aggregators have good information for popular categories, such as lodging and restaurants,
and they rely on the IYP to fill gaps in their coverage. While they are good sources for some types
of content, they do not provide a mechanism for
replacing either the print or Internet Yellow Pages.
The Internet-Derived Yellow Pages provide a
framework for local search that incorporates the
trustworthiness associated with the Yellow Pages
without jettisoning the potential for distributed,
unencumbered content creation that is the
Internet’s inherent strength. The IDYP uses the
Internet for both content distribution and content
aggregation. Aggregation is a more significant challenge than distribution, but one that is not adequately addressed by the local search community.
The IDYP’s ken is wider than commerce, but
local search’s first requirement is to be a better
Yellow Pages.
GEOSPATIAL PROXIMITY SEARCHING
All varieties of local search require the ability to
find information associated with locations within
a given distance of a specified search center, known
as geospatial proximity searching. Preparing data
sources for proximity searching requires several
steps.
The first step is to locate text that the geoenabled
application can map to a physical location. This step
is easy for the IYP because it is a simple structured
database with defined meanings for each field.
For Internet-derived content, the problem is trickier because text with geographic significance can be anywhere on a page.
The second step is to transform a location's textual designation into physical coordinates on Earth's surface. As the "Detecting Geographic Content in Text Documents" sidebar describes, the topic of detecting geographic content within text documents has generated interest in both the commercial and research sectors.
In the developed world, a street or postal
address is the most common way to refer to
a location, particularly for local search. Geocoding
applications attempt to resolve a group of tokens
into a pair of geographic coordinates, usually
expressed as latitude and longitude. Along with
each pair of coordinates, a geocoder also returns a
value that represents the quality of the returned
geocode. The best geocodes are accurate to within
a few meters; less specific coordinates usually refer
to the centroid of a larger region. Geocoding databases are large and dynamic, like the street networks they represent.
Efficiently processing proximity queries against
large data sets, such as a nationwide business directory of 14 million businesses, or the Internet,
requires spatial access methods. The basic idea of
SAMs is to map a two-dimensional (or n-dimensional) coordinate system—in this case latitude and
longitude—onto a single-dimensional coordinate
system. By doing so, a region on Earth’s surface can
be denoted with a single attribute, a spatial key,
instead of the four attributes that are necessary to
describe a bounding rectangle: the x and y coordinates of the rectangle’s lower left and upper right
corners.
Spatial keys are computed as the data set to be
geoenabled is being built. In the case of a business
directory, the spatial key for each business is stored
as an additional field along with other fields for the
business. If a search application is indexing unstructured text, it adds the spatial keys as additional
terms to the index it builds for the page. Unstructured text, such as Web pages, could require
several spatial keys because they may contain several addresses.
At search time, to determine the set of businesses
or Web pages that satisfy a proximity query, the
search application maps the user’s search center
and desired search radius to a set of spatial keys
that cover the area to be searched. It then adds these
spatial keys to the user’s nongeographic query
terms so that they can be compared to the precomputed spatial keys stored with the dataset being
searched.
The search application refers to the user’s nongeographic terms to determine which data items
within the radius match the user’s main search criteria. Proximity searching algorithms can order
results by distance, so results closer to the search
center are listed before those farther away.
Ordering is routine for IYP applications, but can
be problematic for Web pages because of the potentially large number of pages that may need to be
sorted. An example of a paraphrased Geosearch
query is: “Return Web pages that are about hot-air
balloon rides and which contain postal addresses
or telephone numbers within 100 miles of 10 Main
Street, Poughkeepsie, NY.”
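A simple grid-based encoding shows how spatial keys can turn such a query into ordinary term matching. The Python sketch below is a simplification for illustration, using an assumed fixed-size grid rather than Vicinity's actual spatial access method, but it captures the indexing and query steps described above.

import math

CELL_DEG = 0.5  # grid cell size in degrees; an assumption for illustration

def spatial_key(lat, lon):
    """Collapse a two-dimensional coordinate into a single indexable token."""
    return f"geo_{int(lat // CELL_DEG)}_{int(lon // CELL_DEG)}"

def covering_keys(lat, lon, radius_km):
    """Enumerate the grid cells that cover a circle around the search center."""
    deg = radius_km / 111.0          # roughly 111 km per degree of latitude
    cells = int(math.ceil(deg / CELL_DEG))
    keys = set()
    for i in range(-cells, cells + 1):
        for j in range(-cells, cells + 1):
            keys.add(spatial_key(lat + i * CELL_DEG, lon + j * CELL_DEG))
    return keys

# Indexing time: a page mentioning an address near Poughkeepsie, NY gets an
# extra index term alongside its ordinary words.
page_terms = {"hot-air", "balloon", "rides", spatial_key(41.7, -73.93)}

# Query time: "hot-air balloon rides within 100 miles of Poughkeepsie" becomes
# the user's terms combined with every spatial key that covers the search area.
query_keys = covering_keys(41.7, -73.93, 100)
matches = bool(page_terms & query_keys) and {"balloon", "rides"} <= page_terms
print(matches)  # True

Results that match a covering key can then be sorted by their true distance from the search center, as the article notes, so closer pages appear first.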
INTERNET-DERIVED LOCAL CONTENT
In 1998, a research group at Vicinity developed
a prototype system to geoenable Web content.
Vicinity modified the spatial access methods it
developed for its IYP and business locator products
to work as a software component in conjunction
with search applications. In 1999, Vicinity teamed
up with Northern Light to broaden its experiment
to include the general Web corpus. Microsoft purchased Vicinity in December 2002.
Geosearch was publicly available from April
2000 until March 2002 from both Northern Light
and Vicinity’s MapBlast property. During this time,
Geosearch processed about 300 million distinct
Web pages.
The experience with Geosearch provided the
basis for two observations:
• the Internet is already a rich source of local
content, and
• local information on the Internet possesses certain characteristics that simplify the job of
aggregating it.
The basic idea of Geosearch is that it transforms
location information in text documents into a form
that search engines can use for efficient proximity
searching. Its first step is to scan documents to recognize text patterns that represent geocodable
entities.
Geosearch avoids semantic text analysis, preferring to leave the determination of a document’s subject matter to the information analysis algorithms
of the search application with which it works.
Geosearch relies on the fact that a significant portion of the content that is valuable for local search contains well-formed postal addresses, landline telephone numbers, or both.
Detecting Geographic Content in Text Documents
The topic of detecting geographic content within text documents has generated interest in both the commercial and
research sectors.
Commercially available products
Google Local (local.google.com) scans Web pages for US and
Canadian addresses and North American Numbering Plan telephone numbers. Whereas Geosearch used location purely as a
filter, Google adds an extra step of trying to correlate the
address information on Web pages with IYP data.
Most local search offerings combine IYP data with more in-depth content from vertical content aggregators, but so far,
Google is the only search engine that uses the Geosearch
approach of deriving local content directly from Web pages.
One sure way to determine whether a search product obtains
local content directly from the Internet is to do an idiosyncratic
search for which there is unlikely to be any IYP data. For example, both Geosearch and Google Local return results for “worm
composting in Thetford, VT”—others do not.
MetaCarta’s Geographic Text Search (www.metacarta.com) is
a commercially available product that uses a place-name directory
in combination with context-based analysis to determine the presence of geographic content. It will, for example, assign a location
to the phrase “three miles south of Kandahar.” GTS is appropriate for corpora that might have geographic content but not the
obvious markers of postal addresses or telephone numbers.
Research
Content-based searching for location information requires
identifying tokens that might have a geographic meaning.
Systems that use place-name directories, called gazetteers, need
to check the gazetteer for every token in a document. A token
that is in the gazetteer must then be disambiguated to see if it
really represents a location, and if so, which one. This process
can be costly.
Systems based on standardized addresses typically look first
for postal codes. Tokens that look like postal codes are rare,
so few trigger additional work. Then, since the sequence of
tokens in an address is rigidly constrained, it is not difficult to
determine if a potential postal code is in fact part of an address.
Efficiency might not be a concern for some document collections, but it is if the collection is the Internet.
Web-A-Where,1 a gazetteer-based system that associates
geography with Web content, uses several techniques to resolve
ambiguities. Ambiguities are classified as geo/geo (Paris, France
or Paris, Texas) or geo/non-geo (Berlin, Germany or Isaiah
Berlin). The system also assigns a geographic focus to each
page—a locality that a page is in some way “about.”
Junyan Ding and coauthors2 analyzed the geographic distribution of hyperlinks to a resource to determine its geographic
scope. As expected, their analysis showed that The New York
Times has a nationwide geographic scope. However, so does
the San Jose Mercury News because readers across the country follow this California newspaper’s technology reports.
These authors also estimated a resource’s geographic scope by
using a gazetteer to examine its content.
In contrast to Ding and coauthors, Geosearch and Google
Local rely on a user-centric approach to determine geographic
scope because they allow users to specify the search radius of
a query.
Kevin McCurley3 discussed using addresses, postal codes,
and telephone numbers to discover geographic context. Remco
Bouckaert4 demonstrated the potential of using the low-level
structure of proximate tokens, such as postal addresses, to perform information extraction tasks.
Organizing Web content for local search
With the exception of the work by Dan Bricklin,5 relatively
little has been written about organizing existing Web content
for local search. Bricklin proposed the small and medium business metadata (SMBmeta) initiative as a way for enterprises to
present essential information about themselves consistently on
the Web. The idea is to create an XML file at the top level of a
domain that contains basic information about the enterprise.
Since SMBmeta files have a consistent location, name, and
structure across Web sites, search applications can easily find
and interpret the files.
In a perfect virtual world—a Web presence for all businesses,
the willing participation of search engines to promulgate the
use of metadata standards, and no spam—the original
SMBmeta initiative would offer a simple way to disseminate
information about local businesses.
In lieu of this, Bricklin proposed the SMBmeta ecosystem,
which sketches some control mechanisms that are similar to
IDYP’s trusted authorities. Upon request, a registry returns a list
of the domains it knows about that have SMBmeta data. A
proxy maintains the equivalent of the smbmeta.xml file for
domains that do not have their own files. An affirmation
authority performs the policing functions.
Meeting the IDYP goal of creating an Internet version of the
printed Yellow Pages requires leveraging the political and organizational infrastructure of trusted authorities. Rather than
replicating the capabilities of the Yellow Pages, SMBmeta’s goal
is to help small and medium businesses establish a Web presence. However, the two share the approach of annotating Web
content with structured information to make it more accessible for various search applications.
References
1. E. Amitay et al., “Web-a-Where: Geotagging Web Content,” Proc.
27th Int’l Conf. Research and Development in Information
Retrieval, ACM Press, 2004, pp. 273-280.
2. J. Ding, L. Gravano, and N. Shivakumar, “Computing Geographical Scopes of Web Resources,” Proc. 26th VLDB Conf., Morgan
Kaufmann, 2000; www1.cs.columbia.edu/~gravano/Papers/
2000/vldb00.pdf.
3. K. McCurley, “Geospatial Mapping and Navigation of the Web,”
Proc. 10th Int’l Conf. WWW, ACM Press, 2001, pp. 221-229.
4. R. Bouckaert, “Low-Level Information Extraction: A Bayesian
Network-Based Approach,” 2002; www-ai.ijs.si/DunjaMladenic/
TextML02/papers/Bouckaert.pdf.
5. D. Bricklin, “The SMBmeta Initiative,” 2004; www.smbmeta.org.
The presence of one or more addresses is a
hint about a document’s subject that the
designers of a search application’s relevance
ranking algorithms can use as they see fit.
One advantage to this approach is that
Geosearch is portable. It is a software component that is inserted at a convenient point
into a search application’s workflow.
Address recognizers
Geosearch address recognizers detect US-conformant addresses consisting of at least a postal
code and a preceding state, Canadian postal codes
and a preceding province, and North American
Numbering Plan (NANP) telephone numbers (US,
Canada, Caribbean). Canadian postal codes are
particularly well-suited for local search because
they have a short but unique format, and, especially
in urban areas, they map to accurate latitudes and
longitudes.
Geosearch scans all pages for address data. Using
brute force to search for US addresses is justified
by the fact that such addresses or telephone numbers are found on more than 20 percent of pages.
To internationalize Geosearch, it might be necessary to develop heuristics to determine what types
of addresses to look for on a page. Address formats
vary by country, and searching for an exhaustive
set on each page could be too time-consuming.
Upon finding an address, the address recognizer
forwards what it considers the relevant tokens to
the geocoder so that it can assign geographic coordinates to the presumed address. Because geocoding is usually expensive compared to scanning, the
address recognizer works to reduce the number of
false addresses it sends to the geocoder.
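A stripped-down recognizer for these patterns can be built from regular expressions. The Python sketch below handles only a US state abbreviation followed by a ZIP code and NANP-style telephone numbers; the patterns are simplified assumptions and are far less thorough than Geosearch's recognizers.

import re

# A state abbreviation followed by a ZIP or ZIP+4, e.g. "NY 12601" or "VT 05001-1234".
US_STATE_ZIP = re.compile(
    r"\b(A[LKZR]|C[AOT]|D[EC]|FL|GA|HI|I[DLNA]|K[SY]|LA|M[EDAINSOT]|"
    r"N[EVHJMYCD]|O[HKR]|PA|RI|S[CD]|T[NX]|UT|V[TA]|W[AVIY])\s+"
    r"(\d{5})(-\d{4})?\b")

# NANP telephone numbers such as (845) 555-0123 or 845-555-0123.
NANP_PHONE = re.compile(r"\b\(?([2-9]\d{2})\)?[\s.-]?([2-9]\d{2})[\s.-]?(\d{4})\b")

def find_geographic_hints(text):
    """Return candidate (state, ZIP) pairs and phone numbers found in a page."""
    addresses = [(m.group(1), m.group(2)) for m in US_STATE_ZIP.finditer(text)]
    phones = ["-".join(m.groups()) for m in NANP_PHONE.finditer(text)]
    return addresses, phones

sample = "Balloon rides at 10 Main Street, Poughkeepsie, NY 12601, call (845) 555-0123."
print(find_geographic_hints(sample))
# ([('NY', '12601')], ['845-555-0123'])

Only tokens that match these rare, rigid patterns trigger a call to the geocoder, which is the efficiency argument the article makes for requiring well-formed addresses.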
Geosearch observations
For the two years that Geosearch was publicly
available, and for the preceding year, Vicinity
researchers used these techniques to closely
observe Internet-derived local search and identify
its strengths, weaknesses, and future opportunities. As a large-scale proof of concept, Geosearch
exceeded their expectations.
Local data permeates the Web. When Vicinity
researchers embarked on developing Geosearch in
1998, they evaluated sets of Web pages provided
by several popular search engines and portals. On
a consistent basis, more than 20 percent of these
pages contained either a well-formed US or
Canadian address or NANP telephone number.
This percentage remained constant for the two
years Geosearch was available on the Internet.
Although Geosearch only looked for North
American addresses, the pages it examined were
not restricted to North America. Therefore, the percentage of pages with a well-formed address from
some country is certain to be higher than the 20
percent that Geosearch found.
Well-formed addresses are the rule, not the exception.
The efficiency of the address recognition process
was a concern to all of the engineering groups the
Vicinity researchers worked with. By requiring a
well-formed address, the researchers eliminated
fruitless work examining text around tokens that
marginally looked like part of an address but were
not, such as “Chicago-style pizza.” It turns out that
requiring the combination of a state and postal
code is not much of a sacrifice.
In most cases, addresses on Web pages conform to
the postal standards used for the delivery of land mail.
Occasionally, a postal code that a group of addresses
shared was factored out of individual addresses and
placed at the start of a table. Overwhelmingly, however, when Web authors include an address, they make
the effort, aided by habit, to include one that is well
formed. Thus, by promulgating addressing standards
for the efficient delivery of land mail, national postal
services have made a major contribution toward
geoenabling the Web.
If telephone numbers are excluded, Geosearch
found at least one address on 15 percent of pages.
Some enterprises use a telephone number as the primary contact point. Plumbers, for example, serve a
geographic area, and they rely on a phone number
rather than a storefront to establish a local presence. As a counterexample, customer service
phone numbers are probably not interesting for
local search. Nationwide customer support numbers, however, are often toll free, and Geosearch
did not consider these.
Sometimes the address recognizer found a telephone number, but not an accompanying address
that was in fact available. In these cases, the presence of a phone number could trigger more intensive scanning of the surrounding text for an
address. The basic results were so encouraging,
however, that we did not consider additional
work on the address recognizer to be a high priority.
Addresses are keys to rich exogenous content. For
most people, an address provides enough information to build a mental image of a location in a
familiar neighborhood or to use as an index for
finding the location on a map. An address is not
directly usable for the distance computations and
the mapping and routing applications that location-based computing requires. This is the job of
geocoding applications that associate an address
with a physical location on Earth’s surface.
The databases that these applications use represent significant intellectual capital. For example,
the US street network product from Geographic
Data Technology, a leading provider of geocoding
databases, contains more than 14 million addressed
street segments, postal and census boundaries,
landmarks, and water features. The company
processes more than one million changes for this
database each month (www.geographic.com/home/
productsandservices.cfm).
An address is the key that associates this rich vein
of exogenous information with Web content.
Addresses are proportionally more valuable for
local search because they are computationally easy
to detect.
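As a rough illustration of why geocoding matters: once an address resolves to coordinates, distance computations reduce to simple arithmetic such as the haversine formula. The geocode stub below is hypothetical and merely stands in for a commercial geocoding database.

```python
# Sketch: an address becomes usable for distance queries only after geocoding.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in statute miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959.0 * asin(sqrt(a))  # 3959 miles: mean Earth radius

def geocode(address):
    # Hypothetical stand-in for a real geocoding database lookup.
    sample = {"123 Main St, Burlington, VT 05401": (44.4759, -73.2121)}
    return sample.get(address)
```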
Addresses are metadata. The WWW Consortium
defines metadata as machine-understandable information for the Web (www.w3.org/metadata). To
date, attempts to incorporate metadata into search
engine relevancy metrics have not gone well.
HTML metatags are ignored because they are
either misused or used fraudulently, and metadata
standards have no value if they are disregarded. It’s
interesting to envision the semantic Web that metadata enables, but it’s not yet ready for prime time.
These concerns are not persuasive for local
search. Geosearch works because of the anomalous
but fortunate circumstance that the metadata it
depends on is already pervasive on the Internet. An
address is metadata; its definition predates the Web,
but its structure is portable to it.
Pages with addresses tend to be good quality.
Organizations that put postal addresses on Web
pages see the Internet not as a frivolity, but as a
way to convey information. Even when a page
with an address is unappealing, a quick glance at
the site usually leads a user to conclude that the
authors don’t know how to create a good Web
presence, not that they are swindlers or kids with
too much time on their hands.
Local search is about more than commerce. Internet
content reflects what people do—and they do more
than shop: They have hobbies, seek like-minded
individuals, look for support in times of stress.
Sometimes when people do shop, either from preference or necessity, they are not looking for the
closest chain store. They are looking for the practitioner of a local craft—a scrimshaw artist in
Nova Scotia—or for some activity that is not quite
mainstream—worm composting or home
solar power generation.
The individual constituencies for the activities people pursue on a daily basis might be small, but taken together they comprise much of the regional information people search for. Some of the most satisfying Geosearch results were for idiosyncratic local content: breast cancer support groups, bird sightings, first-edition rare books, maple syrup (in Vermont), Washington Irving (in Tarrytown, New York).
One hundred years ago the Sears catalog was an
innovation for distributing information about
mainstream consumer goods. Improvements since
then have been around the edges. The overlooked
promise of local search is that it makes niche information not routinely found in mail-order or
Internet catalogs, the Yellow Pages, or on television
or radio, easy to come by. In this it is unrivaled by
previous distribution media.
OFFLINE-DERIVED LOCAL CONTENT
In contrast to Internet-derived local content, the
data that characterizes the Internet Yellow Pages is
broad, uniform, shallow, and slow to change. This
data wends a circuitous route from initial compilation to its final destination in IYP listings.
List compilation vendors, whose traditional customers use their products for business mailing lists,
sales lead generation, and other direct mail and telemarketing applications, furnish IYP data. The compilers’ main data sources are printed telephone
directories, which are converted to digital information with OCR devices.
InfoUSA, a leading provider of premium lists,
compiles its US list of 14 million businesses from
5,200 phone directories (www.infousa.com). The
company augments this phone book data with secondary data sources such as annual reports, SEC filings, business registrations, and postal service
change-of-address files. The compilers verify the
information they gather by calling businesses.
InfoUSA makes 17 million such telephone calls
annually.
The IYP is slow to incorporate new and changed
information, a shortcoming that is inherent in the
source of its data, since telephone books are published annually. List vendors do generate periodic
update files, but these updates are not free, and the
effort required to merge them into the IYP is not
trivial.
More fundamentally, staying current is an elusive goal for decentralized information that is compiled centrally. Telephone directories are out
of date even at the moment of publication.
Then, list vendors must correlate changes from their incoming data streams—the 5,200 directories, telephone verification calls, change-of-address files, and so forth. Each periodic update includes only a fraction of the changes in a vendor’s possession, and it includes no changes that have occurred but are still in the pipeline.
Another problem with using compiled lists
as a source for the IYP is that the consumer is not
its primary market. The lists are flat structures without sufficient expressive power to convey the hierarchical and variable structure of many enterprises,
specifically those with multiple external points of
contact.
This missing information corresponds to some
of the most dog-eared pages in printed directories:
individual departments and physicians in hospitals
and medical practices, group practices of all sorts,
and municipal information. Even if this deficiency
were somehow fixed, IYP service providers would
still need to reflect the richer structure in the online
databases they build from the compiled lists.
For all their shortcomings, the compiled lists
from which the IYP is derived are authoritative and
trusted sources of business information—characteristics that are not duplicated elsewhere. The
clerks making those calls provide real value. Even
if the information in the IYP already exists on the
Internet, or will sometime soon, it is in a chaotic
form, and there is no repeatedly reliable way to
access it. The value of the compiled lists is data
aggregation, an area in which local search has not
yet contributed.
DECENTRALIZED DATA GATHERING:
THE INTERNET-DERIVED YELLOW PAGES
The central challenge for local search is to move
the job of aggregating and verifying local information closer to the sources of knowledge about that
information. The human and electronic knowledge
about local information is decentralized—geographically localized—and the Internet is a decentralized medium. Having decentralized tools for
gathering this data is desirable as well.
Trusted authorities
The IDYP is a directory of local businesses, similar to the IYP but richer in content. The essential
difference is that the information in the IDYP is
derived directly from the Internet, not from offline
sources.
The IDYP’s viability depends on intermediaries,
trusted authorities who can vouch for the information the IDYP provides and can perform the role
of content aggregator for entities without a direct
Web presence. Organizations that have relationships of trust with both the public and the entities
whose information they are certifying or creating
can perform this gatekeeper role.
Two examples of organizations that can serve as
gatekeepers are those based on geography, such as
a chamber of commerce, and those based on market segment, such as a trade organization. A primary function of both types of representative
organizations is to collate and disseminate accurate information on behalf of their members. Both
types of organizations are often conversant with
Web technology, and they can function as proxies
for constituents who don’t have their own Web
presence. While there are 14 million businesses
in the US—most of them small—chambers of
commerce and trade organizations number in the
thousands.
Preventing fraudulent interlopers from compromising the integrity of their constituents’ information is also in the best interests of these gatekeepers.
For a chamber, this is conceptually and practically as simple as ensuring that each member it verifies or submits to the IDYP does indeed have a shop on Main Street.
The first function of a trusted authority is either
to submit information to the IDYP on behalf of a
member or to certify information a member has
directly submitted to the IDYP. The second function, policing, is aimed at minimizing the amount
of fraudulent or misleading data that makes its way
into the IDYP.
Proxy mode. In proxy mode, trusted authorities
are intermediaries for members who want a presence in the IDYP but do not want to interact
directly with the Internet.
For example, a licensed hotdog-stand vendor with
no interest in using the Internet would work with a
representative at the chamber of commerce to get
the right information into the IDYP. A hypothetical
entry for this vendor would indicate that the stand
is open from two hours before an event until one
hour afterwards, provide the stand’s location in the
stadium, and state the type and price of the hotdogs,
drinks, and condiments he sells. If, at the last minute,
the vendor finds he will not be at an event, he can ask
his contact to update his IDYP entry. This example
is contrived, of course—but try finding information
on street vendors in the Yellow Pages.
Authenticate mode. A trusted authority uses a Web
interface to help create the structured information
the IDYP requires for the members under its
purview. The trusted authority releases this information to an IDYP server, at which point it
becomes generally available. An entity can directly
submit information to an IDYP server as long as
the submittal refers to at least one valid registration
with at least one trusted authority.
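A minimal sketch of this acceptance rule, with hypothetical class and field names, could look like the following; it assumes only that a submittal carries (authority, member) registration pairs:

```python
# Hypothetical sketch of the acceptance rule described above: a direct submittal
# is accepted only if it cites at least one valid registration with a trusted
# authority. The classes and fields are illustrative, not a specification.
from dataclasses import dataclass, field

@dataclass
class Submittal:
    business_name: str
    registrations: list = field(default_factory=list)  # (authority_id, member_id) pairs

class IDYPServer:
    def __init__(self, trusted_authorities):
        # trusted_authorities: {authority_id: set of valid member_ids}
        self.trusted_authorities = trusted_authorities
        self.entries = []

    def accept(self, submittal):
        """Accept a submittal only if some trusted authority vouches for it."""
        vouched = any(
            member_id in self.trusted_authorities.get(authority_id, set())
            for authority_id, member_id in submittal.registrations)
        if vouched:
            self.entries.append(submittal)
        return vouched
```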
Policing
Unlike a purely virtual search, the subject matter
in local search has a physical existence that can be
confirmed. Therefore, local search is more resistant
to fraud than are purely virtual searches. In the
IDYP model, if no trusted authorities vouch for a
business, it will not be included in the IDYP. Still,
we must assume that efforts will be made to game
the system and that some businesses will be tempted
to misrepresent themselves.
The Internet’s potential to provide assurances
about local enterprises exceeds that of current
directory services. It isn’t possible to rely on the
Yellow Pages to provide guidance about a business’s reputation. The IDYP, however, can augment
its information with various data sources, such as
Better Business Bureaus, independent reviews, and
public data. In addition, the IDYP can use practices
that have become popular on the Internet for rating products, services, and sellers.
IDYP OPERATION MODES
Geosearch found that at least 20 percent of Web
pages include an overt geographic qualifier. Even if
every local enterprise eventually registers with a
trusted authority, the Web will still contain much
local content that is not known to the IDYP.
Geosearch’s strength is that it finds local content
in place, without requiring Web authors to change
their routines for publishing that content. To integrate its content with local content on the Web that
is not part of the IDYP, the IDYP supports two
modes of operation. In one mode it supplies local
search metadata to authorized applications; in the
other, it is a stand-alone directory application.
Local search metadata
In the local search metadata mode, the IDYP
makes its content available to subscribing applications. Subscribers are bound to use IDYP data in
conformance with the policies and standards the
IDYP sets forth. Trusted authorities and individual
businesses can also specify directives on how subscribers use their information.
As a part of the page indexing process, a sub-
scribing search application seeks associated
IDYP information for the page it is indexing.
If the page is authorized for local search, the search application includes some portion of the IDYP metadata in the index it builds for the page. In this way, IDYP data is incorporated into the general Web corpus.
A URL provides the connection between IDYP data and data on the Web. When an
enterprise or its trusted authority creates its
IDYP entry, it specifies a Web page address
with which the IDYP entry is associated. This
is the page to which the search application
adds the IDYP metadata.
For a member who doesn’t have a direct Web
presence, the trusted authority creates one or more
pages that contain formatted content derived from
the member’s IDYP entry. The trusted authority
either establishes a domain for the member or guarantees that the pages it creates for the member have
persistent URLs.
A trusted authority might choose to generate
pages for all its members. This would allow it to
establish a consistent look and feel for its constituents. IDYP pages generated for members that
already have established Web sites would contain
links back to these existing pages.
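A sketch of this indexing step, assuming hypothetical dictionary-based structures for both the index and the IDYP lookup:

```python
# Sketch of the metadata mode described above: at indexing time the search
# application asks the IDYP for metadata keyed by the page URL and merges it
# into its own index record. All names and fields here are hypothetical.
def index_page(url, page_text, idyp_lookup, index):
    """Add one page to the index, augmented with IDYP metadata if available."""
    record = {"url": url, "terms": page_text.lower().split()}
    metadata = idyp_lookup.get(url)      # e.g., {"category": ..., "hours": ...}
    if metadata is not None:
        record["idyp"] = metadata        # the page is "known to the IDYP"
    index[url] = record
    return record
```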
The IDYP provides an imprimatur for pages that
are relevant for local search. To accommodate
pages with local content unknown to the IDYP,
search applications can support either strict or nonstrict local searches.
In strict mode, the search application only considers pages that are known to the IDYP. In nonstrict mode, the search application uses its own
heuristics for gauging which pages are relevant, and
can return a mixed set of pages, some known to the
IDYP, some not. If the search application tags the
results that are known to the IDYP, users can decide
for themselves how important the IDYP imprimatur is. It will be more valuable for Yellow Pages-like searches, less so for idiosyncratic ones.
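Continuing the same hypothetical sketch, strict and nonstrict modes differ only in whether pages unknown to the IDYP are filtered out or merely left untagged:

```python
# Sketch of strict versus nonstrict local search over the index built above.
def local_search(index, query_terms, strict=False):
    results = []
    for record in index.values():
        if not all(term in record["terms"] for term in query_terms):
            continue
        known_to_idyp = "idyp" in record
        if strict and not known_to_idyp:
            continue                       # strict mode: IDYP-known pages only
        results.append({"url": record["url"], "idyp_certified": known_to_idyp})
    return results
```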
Stand-alone local directory
Given the popularity of search engines and portals as user interfaces, observers might anticipate
that the IDYP’s main role is to provide metadata
for these applications. However, as a self-contained
local directory, the IDYP can provide powerful features that are difficult to incorporate into a general-purpose search engine. Important, too, is that the
IDYP should not depend on any particular search
application for its promulgation. Its status as a
stand-alone application ensures its independence.
Standard data representation. IDYP information is
represented in XML. In addition to a standard core
of attributes, industry groups can define customized extensions—known as XML schemas. An
“hours of operation” attribute, for example, is part
of the standard core, since virtually all businesses
use it—although today’s IYP does not include
even this basic information. The XML Schema for
restaurants should allow queries about the catch
of the day at the local seafood house.
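A hypothetical IDYP entry, with invented element names (the article does not define the actual schema), might be assembled as follows using Python's standard XML library:

```python
# Illustrative only: a sketch of an IDYP entry with a standard core attribute
# ("hours of operation") plus an industry-specific restaurant extension.
import xml.etree.ElementTree as ET

entry = ET.Element("idyp-entry", id="example-001")
ET.SubElement(entry, "name").text = "Harborside Seafood House"
ET.SubElement(entry, "address").text = "12 Wharf Rd, Halifax, NS B3J 1A1"
ET.SubElement(entry, "hours-of-operation").text = "Tue-Sun 11:00-22:00"  # standard core

restaurant = ET.SubElement(entry, "restaurant-extension")                 # industry extension
ET.SubElement(restaurant, "catch-of-the-day").text = "haddock"

print(ET.tostring(entry, encoding="unicode"))
```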
Rich categorization hierarchy. The business categorization schemes used by the print and Internet
Yellow Pages are cursory. The Internet has
spawned much work on commerce-oriented
ontologies and user interfaces that are broadly
applicable to local search.
Local search query language. A rich stratum of
metadata will facilitate the construction of a
local search query language with more expressive power than the Boolean keyword languages
that current search engines use.
The parlance of local search is constrained—a variation of who, what, where, when, and how much:
Who provides what service or product? Where is the
provider located? When can I see the product? How
much does it cost? For example: “Where can I buy
stylish children’s clothing on sale, within 10 miles
of home, open late on Saturday evening?”
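Such a query maps naturally onto a small structured object rather than bare keywords; the field names below are illustrative only, not a proposed standard:

```python
# Hypothetical structured form of the who/what/where/when/how-much query.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocalQuery:
    what: str                      # service or product, e.g., "children's clothing"
    where: str                     # anchor location, e.g., "home"
    radius_miles: float = 10.0
    when: Optional[str] = None     # e.g., "open late Saturday evening"
    max_price: Optional[float] = None
    on_sale: bool = False

query = LocalQuery(what="stylish children's clothing", where="home",
                   radius_miles=10.0, when="Saturday evening", on_sale=True)
```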
Short update latency. The time interval between an
enterprise making a change and having that change
reflected in the IDYP is brief, converging on instantaneous. We can define “the last croissant” heuristic, which states that the IDYP reaches optimal
efficiency when an urban bakery can use it successfully to advertise a sale on its remaining bakery items 30 minutes before closing.
Geosearch, a geoenabled search engine that
allows people to search for Web pages that
contain geographic markers within a specified geographic area, demonstrates that the
Internet is a rich source of local content. It also
demonstrates the many advantages that postal
addresses have as a key for accessing this content,
especially when the content pertains to the activities of daily life. Postal addresses are ubiquitous,
unambiguous, standardized, computationally easy
to detect, and necessary for accessing the rich and
precise content of geocoding databases.
The Internet Yellow Pages, currently the main
source of local content on the Internet, are reliable,
but they are also shallow, slow to change, centralized, and expensive. Their primary data sources are
printed telephone directories. They do not use the
Internet’s resources in any meaningful way.
Local search today provides a poor user experience because it does little more than package old
data for a new medium. The richest source of local
content can and should be the Internet itself, but
marshaling this resource requires developing an
infrastructure such as the Internet-Derived Yellow
Pages to organize and manage its content.
The IDYP is a structured database that relies on
trusted authorities, such as chambers of commerce
or trade associations, to certify the information it
contains. The IDYP can function either as a standalone directory or as a source of metadata for
search applications. A search application uses IDYP
metadata to augment the information it maintains
for Web pages that have local content. In this way,
local search metadata is integrated into the general
Web corpus. ■
Acknowledgments
The author thanks the many people involved with making Geosearch happen, including the following—from Vicinity: Jeff Doyle, Charlie Goldensher, Jerry Halstead, Dwight Aspinwall, Dave Goldberg, Faith Alexandre, Darius Martin, Kavita Desai, and Eric Gottesman; and from Northern Light: Marc Krellenstein, Mike Mulligan, Sharon Kass, and Margo Rowley.
Marty Himmelstein is a software consultant. His
interests include database systems, spatial databases, and Web technologies. He received an MS
in computer science from SUNY Binghamton. He
is a member of the IEEE Computer Society and the
ACM. Contact him at [email protected].
GUEST EDITOR’S INTRODUCTION
Nanoscale Design & Test Challenges
The silicon-scaling revolution presents a plethora of
challenges as technology progresses into the nanoscale
era. To meet these challenges, the design and test
community has banded together to improve design
automation and find solutions that will optimize
performance at every level.
Yervant Zorian, Virage Logic Corp.
The silicon-scaling revolution is quite real
and persistent. As we move to each new
technology node, we attain a 50 percent
area reduction and a 30 percent performance increase. The continuous scaling
presents a plethora of design challenges as we
progress into the nanoscale era, which imposes the
need for additional design processes, such as design
for manufacturability and power management, and
introduces much higher mask and tooling costs.
Moreover, the bizarre vagaries of nanoscale technologies put a heavy burden on the test community,
as scaling beyond 90 nanometers greatly extends
process complexity and exacerbates leakage faults
and soft errors. At the same time, process variants
include new dielectrics, multiple oxide and metal
layers, multiple voltage thresholds, and smaller
noise margins, so that a product engineer faces serious yield implications.
The increasing number of available transistors is
leading designers to incorporate even more on-chip
functionality in the form of large embedded memories, base I/Os, and a variety of signal and protocol processing blocks. This is far outpacing designer
productivity and is greatly increasing design complexity. Finally, with fabs expected to cost on the
order of $3.5 billion and with skyrocketing reticle
costs, successful companies must ship in high volumes with increased yields and amortize their
design costs over multiple product lines by adopting, integrating, and reusing silicon-aware intellectual property blocks from qualified vendors.
To this end, the design community is working
together to further design automation and improve
design flows—be it silicon-aware IP design and
delivery or hardware and software automation that
lets designers work with higher-level languages and
abstractions that hide the underlying process complexities and allow performance, power, and area
optimization at every level. Similarly, the test community is looking beyond bolt-on test approaches
to solutions such as infrastructure IP for embedded
test, diagnosis, and repair. To maximize manufacturing yield, the infrastructure IP functions must be
optimally tuned to the design under test.
DESIGN AND TEST COMMUNITY
To face these challenges, the design and test community has organized itself into several professional and business-oriented organizations. As the
“Test Technology Technical Council” sidebar
describes, the TTTC, a professional organization
sponsored by the IEEE Computer Society, serves
the worldwide test community with a wide range of
activities, including educational programs, conferences, workshops, and standards.
The EDA Consortium is a business-oriented
organization that represents 100 electronic design
automation companies. The consortium seeks to
identify and address issues that are common among
these companies and the customer community they
serve. By focusing on commonality and promoting
cooperation, the consortium augments the effectiveness of design automation tools and services.
Established in 1994, the Fabless Semiconductor
Association serves the design and test community
by supporting the ongoing, symbiotic relationship
between fabless semiconductor companies and
their suppliers, including semiconductor foundries,
IP providers, electronic design automation vendors,
and design service houses. The FSA facilitates productive business partnerships, disseminates relevant data, and promotes the growth of the fabless
business model.
These vibrant organizations are working together
to address the many challenges that face the industry as we move to 90 nanometers and below. We are
proud to be associated with Computer’s Nanoscale
Design & Test issue showcasing some of the exciting ideas that keep our industry ahead of Moore’s
law.
IN THIS ISSUE
This issue features three articles describing various advanced aspects of design and test.
In “Robust System Design with Built-In Soft-Error Resilience,” Subhasish Mitra and colleagues
address the increasingly prevalent problem of soft
errors or single-event upsets. Transient errors
caused by terrestrial radiation pose a major barrier
to robust system design, especially as chip sizes
shrink and system susceptibility to error increases.
The authors describe a number of soft-error protection techniques, including a strategy for using
on-chip scan design-for-testability resources for
soft-error protection during normal operation.
“Transistor-Level Optimization of Digital
Designs with Flex Cells” by Rob Roy and colleagues explores another extremely important subject: the increasing need to reuse IP in today’s chip
designs. The use of precharacterized and siliconverified standard cells is driven by the need to create and verify large digital circuits without having
to verify the circuit’s behavior at the transistor level,
which is simply too resource intensive to be commercially viable for most designs. On the other
hand, the quality of such automated standard-cell-based designs has been poor at best, running slower
by a factor of 6 and consuming more area by a
factor of 10. The quest to overcome these limitations leads naturally to the creation of new design- and context-specific cells—designated flex cells—
during the process of optimizing a given digital
design. Designers then use these cells via a combination of register-transfer-level coding style and
synthesis directives.
Finally, the technical evolution we are witnessing today—particularly shrinking geometries—is
enabling the integration of complex platforms in a
single system on chip, and SoCs with more than
100 processors could become commonplace.
Compared with conventional ASIC design, such a
multiprocessor SoC requires a fundamental change
in chip design. In “Hardware/Software Interface
Codesign for Embedded Systems,” Ahmed A.
Jerraya and Wayne Wolf propose an interface-based HW/SW codesign methodology that takes advantage of IP blocks. Working at higher levels of abstraction, the productivity of a designer who can generate only 100 lines of Hardware Description Language code per day is higher if those lines represent large blocks rather than logic gates. The ultimate goal of this methodology is to design both hardware and software at all abstraction levels.
Test Technology Technical Council
The Test Technology Technical Council is a volunteer professional organization sponsored by
the IEEE Computer Society. The TTTC’s goals
are to contribute to the professional advancement of the test community, help its members solve engineering problems in electronic test,
and help advance the state of the art in test technology.
Through its more than 30 sponsored conferences and workshops,
the TTTC serves as the primary source of knowledge about electronic
test. Other TTTC efforts include its worldwide test technology educational program (TTEP); five geographically distributed regional
groups; and a Web site, newsletter, and electronic broadcasts.
The TTTC is actively involved in identifying emerging topic areas
in test technology. It initiates corresponding technical committees and
nurtures the creation, adoption, and implementation of standards.
Emerging topics that the TTTC currently covers include defect-based
testing; debug and diagnosis; infrastructure IP; testing of FPGAs, memories, analog, and microprocessors; and test technology for embedded
cores, boards, and system-on-chip, system-in-package, and electronic
systems.
TTTC membership is open to all individuals directly or indirectly
involved in test technology at a professional level. TTTC members pay
no dues or fees.
To learn more about TTTC offerings and membership benefits, visit
http://tab.computer.org/tttc.
Signature Conferences
Three signature conferences serve the global
design and test community.
Design, Automation, and Test in Europe
As the most comprehensive
European conference and exhibition event, DATE brings
together academic researchers,
industry specialists, users, and vendors in the fields
of design, automation, and test of electronic circuits and embedded systems.
The endless quest for faster, cheaper, and safer
electronic products that consume less power—particularly for the growing consumer and communications markets—dictates generating increasingly
complex designs in continually shrinking time
scales. Design automation is a strategic technology
for modern electronic systems, ranging from simple ASICs via embedded IP cores to large systems
on chip made of heterogeneous processors communicating via on-chip networks. Testing these
complex electronic systems is becoming a key factor in enabling the final quality goals, and it has an
increasing impact on the overall cost of product
development.
In addition to the regular paper sessions, DATE
organizes interactive presentations for novel
“ongoing work,” offers a Designers’ Forum
devoted to the specific needs of designers, and provides a complete embedded software track.
DATE’s educational day consists of tutorials and
master courses. Five workshops follow the conference’s conclusion. A PCB Symposium, fringe
meetings, a University Booth, and social events
during the conference offer a wide variety of
opportunities to exchange information about relevant issues for the design, automation, and test
communities. Special themes for DATE 05, scheduled for 7-11 March 2005 in Munich, are dedicated to automotive electronics and biochips.
The Executive Track, initiated for the first time
in 2004, offers presentations by CEOs, CTOs, VPs,
and other senior executives from EDA and semiconductor and system houses. Focusing on business and industry, a presentation theater located
on the exhibition floor offers visitors and conference delegates a fresh view of economics, European
strength in the industry, and challenges in the current electronic systems design market.
International Test Conference
The world’s premier conference dedicated to electronic
test, ITC offers design and test
professionals the opportunity
to confront the challenges the industry faces and
learn how academia, design tool and equipment
suppliers, designers, and test engineers are combining their efforts to address these challenges.
As the cornerstone of Test Week, ITC offers a
wide variety of technical activities targeted at test
and design theoreticians and practitioners, including
formal paper sessions, tutorials, panel sessions, case
studies, lecture and application series, commercial
exhibits and presentations, and a host of ancillary
professional meetings. Some of the conference’s
most significant papers are now available online.
With the theme “Test: Survival of the Fittest,”
the 2005 conference will focus on evolving new
“out-of-the-box” ideas to meet the tough test challenges presented by very-deep-submicron technologies and the competition for dominance
among alternative solutions. ITC 2005 will be
held 8-10 November in Austin, Texas; www.
itctestweek.org.
IEEE VLSI Test Symposium
The IEEE VLSI Test Symposium
(VTS) explores emerging trends and
novel concepts in testing, circuits,
and systems. The three-day technical
sessions respond to the many trends and challenges
in the semiconductor design and manufacturing
industries, featuring papers covering design validation, debug, test, repair, failure analysis, and
fault tolerance for embedded IP cores, chips,
boards, and systems.
In addition to the technical sessions, VTS features two keynote addresses, panels, embedded
tutorials, hot topic sessions, and an Innovative
Practices track. This track highlights the cuttingedge challenges that test practitioners face and
explores the innovative solutions employed to
address them.
The VTS program addresses a wide range of
interests, including basic and continuing education
for test professionals, the latest research developments, new directions and hot topics in test technology, and expert perspectives on current issues.
Full-day tutorials and two full-day workshops are
also held in conjunction with VTS. The tutorials
are part of the Test Technology Education Program. In addition, VTS hosts a number of standardization Working Groups and IEEE Fringe
Meetings. An exciting social program at VTS provides an opportunity for informal technical discussions among participants.
VTS is sponsored by the TTTC and will take
place 1-5 May 2005 in Palm Springs, Calif.; www.
tttc-vts.org/public_html/VTS05_CFP.pdf.
IEEE P1500 Standard for Embedded Core Test
The IEEE Computer Society’s Test Technology Technical Council initiated IEEE P1500 Standard for Embedded Core Test
(SECT) in 1995 as a Technical Activities Committee to identify the common needs in the system-on-chip test domain. In
1997, the IEEE Standards Board granted permission to start
the IEEE P1500 standards activity. In January 2005, IEEE
P1500 went through a successful recirculation.
Since its inception, P1500 has focused on the critical aspects
of ease of reuse and interoperability with respect to testing when
IP cores originating from distinct core providers come together
in one SoC. P1500 standardizes core test information, model
transfer, and test access for embedded cores, concentrating on
areas that are at the interface between core provider and core
user. As a scalable standard, IEEE P1500 contributes to ease of
plug-and-play for testing, while maintaining the required flexibility to cope with different cores and system chips.
Leading volunteer experts in relevant industry segments, such
as systems companies, EDA vendors, core providers, IC manufacturers, and automated test equipment suppliers, have
actively participated in developing IEEE P1500. The first version of the standard focuses on nonmerged digital logic and
memory cores. In future extensions, P1500 will cover analog
and mixed-signal cores, as well as the design-for-test guidelines
for mergeable cores.
The two main elements of the IEEE P1500 standard are a
scalable core test architecture and an information model. For
the scalable architecture, P1500 does not standardize test pattern sources, sinks, or test access mechanisms—that is, the test
access “highway” from source to core to sink. With respect
to test access for embedded cores, P1500 only standardizes
the test wrapper around the core and its interface to one or
more test access mechanisms. The information model is meant
to standardize the core test knowledge transfer. The P1500
information model is based on IEEE P1450.6, a standard initiated to accommodate specific core test constructs. The IEEE
P1450.6 Working Group collaborates to provide the necessary language for P1500 core test knowledge transfer.
For additional information about this hot topic, visit the
P1500 Web site: http://grouper.ieee.org/groups/1500.
IEEE Design & Test of Computers
IEEE D&T, a bimonthly magazine copublished by the IEEE
Computer Society and the IEEE
Circuits and Systems Society, is specifically directed to design
and test engineers and researchers. D&T features peer-reviewed
original work describing methods and practices used to design
and test electronic product hardware and supportive software
as well as design automation tools and methodologies.
D&T publishes tutorials, perspectives, roundtable discussions, viewpoints, conference reports, panel summaries, and
standards updates contributed by authors working in the
industry.
Paper submission: Submit manuscripts for peer review to
D&T at [email protected]. Each submitted paper undergoes at least three technical reviews. All submissions must be
original, previously unpublished work. The theme issues for
2005 are Design & Test Methodologies for Scaled Technologies, Configurable Computing, Design for Manufacturability, Nanotechnology, Multiprocessor SoCs and Networks
on Chip, and 3D Integration.
Subscription: IEEE D&T offers full-year and half-year subscriptions for print issues. In addition, it offers electronic subscription options to IEEE CS and CAS members, with full-text
searchable access to all issues from 1995 forward.
Visit D&T’s Web page to access tables of content and article abstracts online at no cost: http://www.computer.org/dt.
These three articles address only a limited subset of the challenges facing the design and test community. The community regularly conducts conferences, workshops, symposia, and forums offering opportunities to explore potential solutions to these challenges. As the accompanying sidebars describe, key examples of these opportunities include conferences like Design, Automation, and Test in Europe (DATE), cosponsored by the TTTC and EDAC; the International Test Conference (ITC) and VLSI Test Symposium (VTS), both cosponsored by the TTTC; publications such as IEEE Design & Test of Computers; and numerous standards, such as IEEE P1500. You are encouraged to further explore the exciting challenges in nanoscale design and test by participating in these events or by subscribing to D&T. ■
Yervant Zorian is vice president and chief scientist
of Virage Logic. He received an MSc in computer
engineering from the University of Southern California and a PhD in electrical engineering from
McGill University. Zorian also received an executive
MBA from the Wharton School of the University
of Pennsylvania. He is an IEEE Fellow, serves as
IEEE Computer Society Vice President for Conferences & Tutorials, and is the editor in chief emeritus of IEEE Design & Test of Computers. Contact
him at [email protected].
COVER FEATURE
Robust System Design with Built-In Soft-Error Resilience
Transient errors caused by terrestrial radiation pose a major barrier to
robust system design. A system’s susceptibility to such errors increases in
advanced technologies, making the incorporation of effective protection
mechanisms into chip designs essential. A new design paradigm reuses
design-for-testability and debug resources to eliminate such errors.
Subhasish Mitra, Norbert Seifert, Ming Zhang, Quan Shi, and Kee Sup Kim, Intel
Soft errors, also called single-event upsets
(SEUs), are radiation-induced transient
errors caused by neutrons from cosmic
rays and alpha particles from packaging
material.
Traditionally, soft errors were regarded as a
major concern only for space applications. Yet, for
designs manufactured at advanced technology
nodes—such as 90 nm, 65 nm, and onward—system-level soft errors are much more frequent than
in the previous generations.
Further, customers demand stringent limits on
soft-error rates for enterprise servers and networking hardware. All these chips, sometimes hundreds
or thousands of them, must operate correctly, with
very high system data integrity and availability. An
IT executive quoted in Forbes Magazine1 expressed
how customers feel when the hardware fails to meet
expectations: “It’s ridiculous. I’ve got a $300,000
server that doesn’t work. The thing should be bulletproof.” That is why digital-system soft errors
have received significant attention.1,2
The soft-error rate of a system generally is measured in units of Failures in Time, or FIT. A soft-error rate of 1 FIT means that the mean time before
an error occurs is a billion device hours. IBM sets
its target for undetected errors caused by SEUs at
114 FITs,3 which would require a mean time before
an SEU causes an undetected error of roughly 1,000
years.
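Working backward from the definitions above (1 FIT equals one failure per billion device hours), the 114-FIT target does correspond to roughly 1,000 years between undetected errors:

```latex
% Sanity check of the ~1,000-year figure from the article's own numbers.
\[
  \text{MTBF at 114 FITs} \;=\; \frac{10^{9}\ \text{device hours}}{114}
  \;\approx\; 8.8\times 10^{6}\ \text{hours}
  \;\approx\; \frac{8.8\times 10^{6}}{24\times 365}\ \text{years}
  \;\approx\; 1{,}000\ \text{years}.
\]
```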
The high data-integrity and availability requirements for servers and networks4 make soft errors
an extremely important design aspect for microprocessors, network processors, high-end routers,
and network storage components. Thus, soft-error
protection is just as important as other product
characteristics such as performance, power consumption, yield, and test quality.
Chip designers must address soft errors very
early, starting from the product definition phase and
continuing through the architecture planning, circuit design, logic design, and postlayout phases.
Designers routinely use well-known techniques such
as error detection and correction to cope with soft
errors in static random access memory. Protecting
SRAMs isn’t enough, however, given the soft-error
rates and customer expectations. Designers must evaluate the effects of soft errors in flip-flops, latches, and
combinational logic, and effective protection mechanisms must be incorporated into the design.
Soft-Error Testing: Key Points
Michael Nicolaidis and Damien Chardonnereau, iRoC Technologies
Following a strategy similar to traditional burn-in for general reliability purposes, soft-error testing seeks to reproduce and then
accelerate the die’s real-life environment. Researchers use a neutron beam accelerator and alpha foils to conduct this testing.
Because each neutron beam has a specific and complex set of
neutron properties, the beams must be carefully qualified to
correlate the resulting data with real-time results. Beam qualification includes factors such as energy, spectrum, fluency, and
tail-effect correction.
Likewise, the actual die tester also must be specifically designed
for portability, ruggedness, flexibility, and dynamic testing.
These issues and the effort required to access a neutron beam
facility have prompted many companies to outsource this work
to a soft-error test consolidator. Doing so gives companies more
test-schedule flexibility, lowers the total costs of soft-error
testing, and strengthens their SER data value through test
independence.
Environmental acceleration
Real-time testing offers another means for accurate soft-error
rate detection. However, given that neither single-event upsets
nor soft-error-induced latch-ups occur frequently, testers
employ environmental acceleration, such as testing at high altitudes where the neutrons’ flux is stronger while the spectrum
remains equal to that at ground level. Table A shows the advantages of accelerated testing over real-time testing.
Consider, for example, the Jungfraujoch lab in Switzerland.
Located at 11,000 feet, the facility can accelerate sea-level test
times by a factor of 11. In testing conducted at this lab, iRoC
Technologies obtained a statistically significant number of soft
errors on different devices over a period of 4 to 6 months. This
test for soft-error rates covers all different phenomena, including multibit upsets.
SER trends
Predicting the soft-error rate and its impact on a specific die has always challenged physics experts. Many parameters influence SER, which is statistical in nature.
As processes migrate to nanometer scale, the reduction in activation energies and the increased amount of embedded memory
will cause soft errors to become an issue that designers must
deal with. Even as the per-unit FIT rate stabilizes with advanced
processes, system-level soft errors have been increasing.
iRoC Technologies has performed more than 1,000 SER
analyses on different process nodes and devices. This work has
revealed a clear trend for SRAM/CAM: The average FIT per
megabyte slightly decreases at each process node, through to
130 nm. From that point down to 90 nm, the FIT per megabyte
begins to stabilize.
Even with stabilization, however, researchers must consider
three additional trends:
• Several neutron-induced latch-ups have been observed in
nanometer memory devices.
• Multibit upsets have been observed more frequently.
• SEU-rate dispersion becomes more significant at 90 nm
than at 130 nm, indicating that SER is both a fixed element driven by a process and an element affected by design
methodology.
Silicon test results show that the average soft-error rate hovers
around 1,000 FIT per megabit (neutron + alpha). The small
expected FIT-per-megabit decrease per process node will not counteract the significant amount of memories designers expect to
embed in future SoCs. In addition, as designs move to newer
nodes, the logic elements in the design will become more sensitive.
Techniques must be put into place that will ensure developers take this new sensitivity into consideration.
Table A. Accelerated testing versus real-time testing.
Accelerated testing: Logistics: complex (requires qualified beam access and an expert team). Time: 2 to 3 months on average. Accuracy: good. Devices under test: memories, SoCs, and FPGA/system-level devices.
Real-time testing: Logistics: reasonable. Time: 4 to 6 months on average. Accuracy: excellent. Devices under test: all types.
Michael Nicolaidis is a cofounder of iRoC Technologies and the company’s chief technology officer.
Damien Chardonnereau is a project leader and product manager for iRoC Technologies.
SYSTEM-LEVEL SOFT-ERROR-RATE ESTIMATION
The soft-error rate (SER) of a design can be expressed in terms of the nominal soft-error rates of individual elements such as SRAMs, sequential elements such as flip-flops and latches, combinational logic, and factors that depend on the circuit design and the microarchitecture,5,6 as follows:
SERdesign = Σi SERnominal,i × (probability that an error in the ith circuit element produces a system-level error)
In this expression, SERnominal,i refers to the soft-error rate of the ith circuit element—for example, an SRAM cell, flip-flop, or latch—under static conditions when all inputs and outputs of the element are constant, independent of the system that uses the element. The SERnominal,i term is generally estimated using radiation testing and circuit simulation tools. The “Soft-Error Testing: Key Points” sidebar provides more details about these calibrations. The timing vulnerability factor (TVFi) and architectural vulnerability factor (AVFi) of circuit element i determine the probability component in the preceding expression, as follows:
Probability that an error in the ith element produces a system-level error = TVFi × AVFi
The TVF of circuit element i, TVFi, also called the
timing derating,5 is defined as the fraction of time
the element is susceptible to SEUs that will cause
an error in that element.
For example, consider the simple D-latch in Figure
1. When the clock input of the D-latch is 1, the
upstream combinational logic drives the latch’s D-input and writes the corresponding logic value into the
latch. During this time, any SEU that affects the transistors inside the latch has a negligible effect because
the correct value is being driven at the D-input.
However, when the clock input of the D-latch is
0, an SEU affecting transistors, such as those with
drains connected to nodes S and F, can flip the latch
content. Thus, the latch is susceptible to an SEU
that can cause an error during the fraction of the
total clock period for which the clock signal is 0,
which is the TVF of this latch.
If the clock duty cycle is 50 percent for a flip-flop-based design, the TVF of an individual D-latch inside
a flip-flop is 50 percent. A latch’s TVF can be less than
50 percent, however.6 The TVFs of SRAMs are very
close to 1.
Figure 1. A D-latch. When the clock (CLK) input is 0, a single-event upset affecting transistors, such as those with drains connected to nodes S and F, can flip the D-latch’s content, causing an error.
A glitch induced in the static combinational logic
by an SEU must arrive at the destination sequential element within its setup and hold time window
to create an error in that sequential element. The
TVF of combinational logic is impacted by the
clock speed and number of gates located between
the node where the glitch is induced and the destination sequential element. Since the setup time and
hold times of a sequential element are independent
of the clock speed, the TVF of static combinational
logic increases with increasing clock frequency.
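To make the expression concrete, the following toy calculation combines invented nominal SER, TVF, and AVF values for three element classes; the numbers are illustrative only and are not data from the article:

```python
# Toy illustration of the SERdesign expression above, with the probability term
# factored as TVF x AVF. All element names and numbers are invented.
elements = [
    # (name, nominal SER in FIT, TVF, AVF)
    ("unprotected SRAM block", 400.0, 1.00, 0.30),
    ("latches and flip-flops", 300.0, 0.50, 0.25),
    ("static combinational logic", 120.0, 0.10, 0.20),
]

ser_design = sum(ser_nominal * tvf * avf for _, ser_nominal, tvf, avf in elements)
print(f"Estimated design-level SER: {ser_design:.1f} FIT")
```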
The architectural vulnerability factor of the ith
circuit element, AVFi, also called logic derating,5 is
the probability that an error in an element results
in a system-level undetected error. AVF values
depend on the design’s architecture and input stimulus. Consider the following two simple examples.
First, suppose that a flip-flop’s content is erroneous. However, if the flip-flop output is ANDed
with another signal whose logic value is 0, the error
will have no effect.
Second, suppose that an error affects a register
holding the operand of an instruction in a microprocessor with speculative execution. If this instruction is executed speculatively and becomes a dead
instruction later, this error will not affect the results
produced by the program the microprocessor executes. Table 1 summarizes various AVF estimation
approaches.
Table 1. Architectural-vulnerability-factor (AVF) estimation approaches.

Manual
Description: –
Major issues: No systematic analysis.
Advantages: –
Disadvantages: Subjective, error-prone, time-consuming, difficult quantitative justification.

Fault injection7,8
Description: Inject error(s) and simulate to see if the injected error(s) causes system-level error(s) by comparing the system response with the simulated fault-free response.
Major issues: What inputs to simulate; how many errors to inject; which signals to inject errors to; which signals to use for comparison.
Advantages: Applicable to any design; easy automation.
Disadvantages: Long simulation time (several days or weeks) for statistically significant results; dependence on chosen stimuli.

Fault-free simulation5,9
Description: Perform architectural or logic simulation and identify situations that do not contribute to system-level errors, such as unused variables and dead instructions.
Major issues: What inputs to simulate; how to identify situations that do not contribute to system-level errors.
Advantages: Much faster compared to fault injection; easy automation.
Disadvantages: Applicable to very specific designs and not general enough; dependence on chosen stimuli.
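The following is a minimal, self-contained illustration of the fault-injection approach in Table 1, using a toy four-bit design rather than the cited simulation environments; the AVF of each storage bit is estimated as the fraction of injected single-bit flips that change the observed outputs.

# Minimal fault-injection sketch of AVF estimation (an illustration of the
# Table 1 "fault injection" approach, not the cited tools). A toy 4-bit state
# is simulated with and without a single injected bit flip; the AVF of a bit
# is the fraction of injected flips that change the output sequence.
import random

def run_design(state, stimulus):
    """Toy design: bits 0 and 1 feed the output; bits 2 and 3 are never read."""
    out = []
    for x in stimulus:
        out.append(state[0] & state[1])          # output sampled each cycle
        state = [(state[0] ^ x) & 1, state[1] ^ state[0], state[2], state[3]]
    return out

def estimate_avf(bit, trials=1000, length=32):
    mismatches = 0
    for _ in range(trials):
        stim = [random.randint(0, 1) for _ in range(length)]
        golden = run_design([0, 0, 0, 0], stim)
        faulty_state = [0, 0, 0, 0]
        faulty_state[bit] ^= 1                   # inject a single-bit upset
        if run_design(faulty_state, stim) != golden:
            mismatches += 1
    return mismatches / trials

if __name__ == "__main__":
    for bit in range(4):
        print(f"state bit {bit}: AVF ~ {estimate_avf(bit):.2f}")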
Figure 2. Contributions to the overall soft-error rate for a design manufactured using state-of-the-art technology: unprotected SRAM, 40 percent; sequential elements, 49 percent; static combinational logic, 11 percent.
Figure 2 shows the estimated soft-error-rate contributions of various elements for typical designs
such as microprocessors, network processors, and
network storage controllers. This analysis includes
both the TVFs and AVFs of the individual elements.
The soft-error-rate contribution of combinational logic for state-of-the-art processes is still considerably smaller than the contributions of
unprotected SRAMs and sequential elements such
as latches and flip-flops.
Designers routinely use parity or error-correcting codes (ECC) to protect large memories and register files. For applications requiring high data
integrity and availability, the unprotected memories
usually represent a small percentage of total memory
bits. These memories are composed of small memory arrays for which parity or ECC is useful, but
expensive.
For the design used in Figure 2, the combined
soft-error-rate contribution of sequential elements
and combinational logic exceeds that of the unprotected SRAMs. Hence, special attention is required
to develop techniques for protecting non-SRAM
portions of a design from soft errors.
TECHNOLOGY TRENDS
Several experimental and theoretical studies have
demonstrated that the nominal soft-error rate of an
SRAM bit, built with state-of-the-art processes, has
been saturating or even decreasing for both bulk
CMOS and SOI technologies.10,11 For latches and
flip-flops, available data in the literature shows less
consistency than that for SRAMs. Robert Baumann11
observed that the nominal soft-error rates of sequential elements increase with technology scaling. At
Intel, however, we have observed a different trend
for some of our latches. The nominal soft-error rates
for some latches are fairly constant or even decreasing slightly for the 130-nm to 65-nm technologies.10
The AVFs and TVFs do not change significantly
with technology generations.6
Soft-Error Protection: Test Results
Michael Nicolaidis and Damien Chardonnereau, iRoC Technologies
iRoC Technologies has optimized, designed, and manufactured different test chips and processor cores to characterize the
tradeoffs between various soft-error protection design schemes.
The company designed 32-bit and 8-bit RISC cores implementing memory-protection and logic-time redundancy techniques. These two silicon test cases validated that logic is
sensitive to soft errors and that the design process can detect,
isolate, and eliminate soft errors.
SPARC efforts
iRoC Technologies has optimized RoC-S81, an example of
soft-error detection based on time redundancy, by inserting
fault-tolerant mechanisms into the European Space Agency’s
B
CIN
LEON SPARC processor design.1 In addition to code-correction techniques implemented in its memory blocks, the processor includes a time-redundancy detection technique for logic
blocks (no correction).
Using radiation testing to compare the RoC-S81 with the
original LEON design showed the RoC-S81's integrated fault-tolerant mechanisms to be efficient, although its logic parts
proved to be sensitive to strikes and propagated transients.
The developers used a dedicated design scheme to estimate
a transient on-chip pulse width versus the particle's energy, validating the ability to detect transient pulse widths within logic blocks. Figure A shows this process in action, as an ion striking
a transistor causes a transient fault to become a soft error.
Figure A. Soft-error chain. An ion striking a transistor causes a transient fault to become a soft error.
Given that transient pulse propagation depends on the technology node and pulse width, understanding what energies
atmospheric neutrons can generate when colliding with a silicon atom becomes essential. Neutrons striking silicon can generate any of more than 100 different nuclear reactions.
Complete knowledge of the various combinations is necessary
to identify the pertinent pulse characteristics and allow accurate fault injection, making it possible to assess the logic's contribution to the SER.
Even if protecting the chip’s memories brings a significant
improvement in fault tolerance, time-out or application errors
could still occur in the nonprotected logic blocks, whose contribution to the overall SoC soft-error rate ties directly to the
particle’s energy. An average of 10 calculation errors per test
cycle have been observed in both chips without logic block
correction, only detection.
CoolRISC
Based on the CoolRISC core from CSEM (the Swiss Center
for Electronics and Microtechnology; www.csem.ch), iRoC
Technologies developed and manufactured, for the French
Space Agency (CNES), the RoC-CR11 in 180-nm silicon,
implementing soft-error detection and correction on both the
logic block and memory blocks. The company also manufactured a nonprotected version of CoolRISC.
Both chips integrate an 8-bit logic core block, a memory controller for external and internal memory, embedded program
and data memory blocks, and some external interfaces. After
manufacturing, these two chips were radiation tested to assess
the nonprotected CoolRISC’s sensitivity and the efficiency of
the protection implemented in the RoC-CR11.
During the radiation testing, both the nonprotected
CoolRISC and the protected RoC-CR11 underwent beam radiation at the same time. For a given application test and a fluence of 1.1e7, the CoolRISC's chip output showed 60 errors. For the same application test and a fluence of 1.5e6—10 times more fluence—the RoC-CR11's chip output showed no errors.
The RoC-CR11 also implemented error detection and uncovered 148 errors in its memories and 9 errors in the logic—all
of which were corrected.
Developers created different applications to run on the two
processors to test both the memory and logic blocks. All tests
showed the same results: The nonprotected CoolRISC showed
a significant number of errors, whereas the RoC-CR11
showed no die output errors.
The time-redundancy implementation resulted in a 90 percent area overhead for achieving both error detection and correction in the logic elements. This compared to a projected
200 percent area overhead penalty using a more traditional
time-redundancy approach. Using optimized ECC protection
for memories and time redundancy for logic blocks showed no
visible performance penalty.
Designers must consider this significant overhead for logic
protection within the overall logic-to-memory ratio in modern
chips, where logic might represent only 20 percent of the die
and the final application—networking, telecom, or consumer
application—doesn’t need 100 percent protection.
Simulating soft errors and pinpointing design hotspots will
optimize soft-error protection to meet end-user reliability
requirements.
Memory blocks protection and test

The CoolRISC and the RoC-CR11 contain 200 Kbits of embedded SRAM. The protection techniques implemented on the RoC-CR11, based on iRoC's specific methodology for error-corrected code, share the correction code among the different 8-bit memory words to save area. The RoC-CR11 also implemented an error-detection signal to monitor the error-correction mechanisms. Protecting 100 percent of the memory required a total area overhead of 29 percent; an ECC solution would have required an overhead of 50 percent.

Both chips underwent static and dynamic tests to measure the efficiency of iRoC's soft-error protection techniques. Among the different tests performed, the RoC-CR11 detected and corrected all 80 single-bit errors in its memories, while the unprotected CoolRISC incurred 90 single-bit errors.

Logic blocks protection and test

The CoolRISC and RoC-CR11's logic blocks are latch-based designs: all the registers are implemented as latches, not flip-flops, and the design uses two nonoverlapping gated clocks, which provides a power-efficient implementation.

Developers designed the RoC-CR11's soft-error detection based on iRoC's patented time redundancy schemes. Heavy ion radiation testing (more stressful than neutron beams) demonstrated that the implemented protection technique provided 100 percent protection.

Moving forward

Soft errors now form part of the design challenge because, like any other design constraint, there is a tradeoff between this variable and application requirements. At 90 nm and beyond, all parts of a SoC are soft-error sensitive. Reaching the 100 FIT per device target will require an in-depth understanding of the soft-error chain.

As with all other design variables, optimization is essential. A 100 percent soft-error protection rate is not truly needed and is too expensive for most ground-level applications. Making the most efficient tradeoff choices early in the design phase requires a predictive methodology. An SER prototyping and optimization tool well integrated in the current design flow will help designers and business unit managers make strategic decisions such as library and memory choices or even process or foundry choices.

Reference

1. D. Chardonnereau et al., "32-Bit RISC Processor Implementing Transient Fault-Tolerant Mechanisms and Its Radiation Test Campaign Results," Single-Event Effects Symp., NASA, Apr. 2002.
Michael Nicolaidis is a cofounder of iRoC Technologies
and the company’s chief technology officer.
Damien Chardonnereau is a project leader and product
manager for iRoC Technologies.
Table 2. Comparison of various soft-error protection techniques.

Circuit-level hardening: special circuit-level design techniques to decrease the implemented circuits' inherent vulnerability to soft errors.12

Hardware redundancy: classical techniques such as triple modular redundancy (TMR) and concurrent error detection, such as duplication, parity prediction, low-cost techniques for matrix operations, and lossless data compression.13

Time redundancy, software-implemented hardware fault tolerance: program instructions executed twice and results compared to detect errors; program control-flow errors detected using special control-flow checking techniques.14,15

Time redundancy, multithreading techniques: the same instruction sequence executed using two threads, then results compared to detect any errors.16,17

Time redundancy, multistrobe: errors detected and corrected by strobing the outputs of the same combinational logic block multiple times by delayed clocks.18

The table compares these techniques on the following parameters: undetected errors, errors logged, technology dependence, extra effort for recovery, integration with design flow, area overhead, performance overhead, power overhead, selective insertion, areas protected, architectural impact, and applicability.
As Figure 2 shows, the SER contribution of combinational logic for state-of-the-art processes is still considerably smaller than the contributions of unprotected SRAMs and sequential elements. Hence, the chip-level SER trend is dominated by the SER trends of SRAMs and sequential elements such as latches and flip-flops.
Even if the SER per SRAM bit or latch remains
constant over technology generations, integration
of more devices in advanced technologies results in
higher chip-level SER. In contrast, customer expectations for SERs will either remain constant or
become more stringent in advanced technologies.
SOFT-ERROR PROTECTION TECHNIQUES
Designers can use several strategies to provide
soft-error protection. These include circuit-level
hardening, classical hardware redundancy, and
time redundancy techniques.
The “Soft-Error Protection: Test Results” sidebar discusses radiation testing of some soft-error
protection techniques. Table 2 shows a comparative analysis of these techniques with respect to
several system-level metrics, exploring some variables and factors that determine their applicability to actual designs.
REUSE PARADIGM FOR BUILT-IN
SOFT-ERROR RESILIENCE
A new paradigm that leverages the reuse of onchip resources for multiple functions at various
stages of manufacturing and field use can overcome the drawbacks of existing soft-error protection techniques. For example, designers can reuse
on-chip scan design-for-testability resources for
soft-error protection during normal operation.
Scan design for testability has become a de facto
test standard because it provides an automated
solution to high-quality production testing. In addition, scan is extremely valuable for postsilicon
debug activities19,20 because it provides access to an
integrated circuit’s internal nodes.
Figure 3 shows a microprocessor scan flip-flop
design20 that comprises two distinct circuits: a system flip-flop and a scan portion. All scan flip-flops
in a design are connected together as one or more
shift registers. The SI input of a scan flip-flop is connected to the SO output of the preceding scan flip-flop in the shift register. The SO output of a scan
flip-flop is connected to the SI input of the following scan flip-flop in the shift register. The structure
of the scan portion of Figure 3 is similar to the system flip-flop, with the addition of interface circuits
to move data between the system flip-flop and the
scan portion, as well as shifting the test pattern and
test response, as required by the specific scan architecture.
This design has two operation modes: normal-system operation and test. In the test mode, clocks SCA
and SCB are applied alternately to shift a test pattern
into latches LA and LB. Next, the UPDATE clock is
applied to move the contents of LB to PH1. Thus a
test pattern is written into the system flip-flop.
Next, functional clock CLK is applied, which captures the system response to the test pattern. Finally,
the CAPTURE signal is applied to move the contents of PH1 to LA. The system response is then
shifted out by alternately applying clocks SCA and
SCB. During normal system operation, the scan portion is shut off by asserting logic-0 values to the scan
signals (SCA, SCB, UPDATE, and CAPTURE).
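The short behavioral model below sketches this shift/UPDATE/CLK/CAPTURE sequence for a chain of such cells; it abstracts the latches as Python lists and simplifies the SCA/SCB ordering, so it is an illustration of the protocol rather than the circuit in Figure 3.

# Behavioral sketch of the scan test sequence described above, for a chain of
# scan cells. Latches LA, LB, and PH1 are modeled as lists; this is not the
# transistor-level design of Figure 3.
class ScanChain:
    def __init__(self, length):
        self.LA = [0] * length    # scan latches LA
        self.LB = [0] * length    # scan latches LB (each drives its cell's SO)
        self.PH1 = [0] * length   # system flip-flop outputs

    def shift(self, bits_in):
        """Alternate SCB/SCA pulses (order simplified); returns the SO bits."""
        out = []
        for bit in bits_in:
            self.LB = list(self.LA)            # SCB pulse: LA -> LB
            out.append(self.LB[-1])            # sample SO of the last cell
            self.LA = [bit] + self.LB[:-1]     # SCA pulse: SI -> first LA, SO -> next LA
        self.LB = list(self.LA)                # final SCB pulse completes the transfer
        return out

    def update(self):
        """UPDATE pulse: move LB contents into the system flip-flops."""
        self.PH1 = list(self.LB)

    def clock(self, logic):
        """Functional CLK: capture the combinational response to the pattern."""
        self.PH1 = logic(self.PH1)

    def capture(self):
        """CAPTURE pulse: copy the system response (PH1) back into LA."""
        self.LA = list(self.PH1)

if __name__ == "__main__":
    chain = ScanChain(4)
    chain.shift([1, 0, 1, 1])             # shift a test pattern in
    chain.update()                        # write it into the system flip-flops
    chain.clock(lambda bits: bits[::-1])  # stand-in for the logic under test
    chain.capture()                       # capture the response
    print(chain.shift([0, 0, 0, 0]))      # shift the response out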
There are three basic reasons for using the scan
style of Figure 3: structural testing using automated
test pattern generation tools, functional testing using
signature analysis, and efficient postsilicon debug.18
The opportunity for scan reuse for soft-error protection arises from the redundant scan resources—latches LA and LB in Figure 3—that are unused during normal operation but still add to the chip's occupied area and leakage power.
Figure 4 shows how reusing the scan flip-flop
design can reduce the impact of soft errors that
affect latches. The flip-flop design's test-mode operation is identical to the design in Figure 3.
Figure 3. Microprocessor scan cell design. The design has two operation modes: normal-system operation and test.

Figure 4. Scan reuse. Soft-error-blocking flip-flop design with a C-element. Reusing the scan flip-flop reduces the impact of soft errors that affect the latches by more than 20 times. The C-element's truth table: when O1 = 0 and O2 = 0, Q = 1; when O1 = 1 and O2 = 1, Q = 0; when O1 and O2 differ, Q retains its previous value.
Figure 5. Error-trapping scan cell design. Latches LA and LB store redundant copies of PH2's and PH1's contents, respectively, during normal operation. A soft error in any latch causes the error signal (E) to become 1. Once E is 1, the logic values stored in LA and LB become complements of the contents of PH2 and PH1, respectively, and E remains 1, trapping the error until another soft error affects one of the latches, which rarely occurs.
In normal system operation mode, the scan clocks SCA, SCB, UPDATE, and TEST are forced low, while the CAPTURE signal is forced high. This converts the
scan portion into a master-slave flip-flop that operates as a shadow of the system flip-flop.
During normal operation, when the clock signal
CLK is 0, the C-element output drives flip-flop output Q, and the chip transfers the logic value at input
D into latches LA and PH2. During this time,
latches PH1 and LB are susceptible to soft errors
because their clock inputs are 0 and they are holding logic values. If a soft error occurs in PH1 or LB,
the logic value on O1 will not agree with O2. As a
result, the error will not propagate to output Q, and
the keeper will hold the correct logic value at Q. A
soft error in PH2 or LA when CLK = 1 produces
similar results. Depending on the system’s speed and
the leakage current, the keeper in Figure 4 might
not be necessary.
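A minimal behavioral sketch of the C-element's blocking action, based on the truth table given with Figure 4: the output is driven only when both latch copies agree, and otherwise the keeper holds the previous value.

# Behavioral sketch of the C-element plus keeper in Figure 4's error-blocking
# flip-flop: the output follows the (inverted) inputs only when they agree;
# a disagreement caused by a soft error leaves Q unchanged.
class CElement:
    def __init__(self, q=0):
        self.q = q                  # value held by the keeper

    def evaluate(self, o1, o2):
        if o1 == o2:                # inputs agree: drive the inverted value
            self.q = 1 - o1
        return self.q               # inputs disagree: previous value retained

if __name__ == "__main__":
    c = CElement()
    print(c.evaluate(0, 0))         # both 0 -> Q = 1
    print(c.evaluate(0, 1))         # mismatch (soft error) -> Q holds 1
    print(c.evaluate(1, 1))         # both 1 -> Q = 0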
Extensive SER simulations on an advanced
process technology using an internal tool5 show
that this design can reduce the SER by more than
20 times compared to the error rate for an unprotected flip-flop.
Any soft error affecting a single latch inside a flip-flop is guaranteed to be detected by a self-checking scan flip-flop that is obtained by removing the C-element and the associated keeper structure from the design in Figure 4. Various self-checking scan cell choices are possible.
During normal operation, at least one copy of
correct data exists, under the assumption of a single error in a latch. To perform self-checking, the
approach implements error-detection circuits such
as equality checkers that compare the Q and Q2
outputs of all such flip-flops in a design and indicate
an error each time a mismatch occurs.
A major drawback of such a self-checking
approach is the significant amount of area occupied by the logic network that accumulates the
error signals generated by individual flip-flops and
produces one or more global error signals.
The error-trapping scan cell shown in Figure 5
eliminates this problem. Latches LA and LB store
redundant copies of the PH2 and PH1 content,
respectively, during normal operation. A soft error
in any latch causes the error signal (E) to be 1. This
signal drives the top input of the exclusive-or gate
XOR2 so that when E equals 1, the output of
XOR2 (D1) becomes the complement of D.
Once the error signal E is 1, the logic values
stored in LA and LB become complements of the
contents of PH2 and PH1, respectively, and E continues to be 1. Thus, the error is trapped until
another soft error affects one of the latches of this
flip-flop, which is a rare event.
After a prespecified number of clock cycles, at a
recovery point the system shifts out this trapped
error signal using the existing scan path, which
eliminates the need for global routing of error signals at the cost of error-detection latency. Re-execution then achieves error correction.13
Table 3 shows the results generated by performing circuit simulations on a typical process corner for an advanced technology to compare the soft-error-resilient scan flip-flops and a conventional scan flip-flop.
To evaluate the system-level impact of soft-error-resilient scan cell designs, we estimated the chip-level area and power overheads of the new soft-error-resilient scan flip-flop designs in Table 4, assuming that 25 percent of the flip-flops are protected from soft errors.8 The results showed that the overall power and area overheads for all proposed designs are less than 5 percent and 0.3 percent, respectively. Such relatively low overheads, combined with the expected high gain in soft-error resilience, justify the use of the proposed designs in various applications. Several optimizations are possible to further reduce the system-level power overhead to 3 percent or less.
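As a back-of-the-envelope check of how cell-level numbers scale to the chip level, the sketch below multiplies a cell-level overhead by the protected fraction and by an assumed share of chip power or area attributable to flip-flops. The share values are hypothetical; the 2.13x power and 1.08x area figures come from Table 3's error-blocking design.

# Back-of-the-envelope sketch (assumptions ours, not the authors' model) of
# how cell-level overheads translate to chip level when only a fraction of
# flip-flops is protected. ff_power_share and ff_area_share are hypothetical
# shares of total chip power and area attributable to flip-flops.
def chip_overhead(cell_overhead, protected_fraction, share):
    """Chip-level overhead if protected cells cost `cell_overhead` extra."""
    return cell_overhead * protected_fraction * share

if __name__ == "__main__":
    protected = 0.25                                  # 25 percent of flip-flops protected
    ff_power_share, ff_area_share = 0.15, 0.10        # assumed shares (illustrative)
    power = chip_overhead(1.13, protected, ff_power_share)   # error-blocking cell: 2.13x power -> +113%
    area = chip_overhead(0.08, protected, ff_area_share)     # error-blocking cell: 1.08x area -> +8%
    print(f"chip power overhead ~ {power * 100:.1f}%, area overhead ~ {area * 100:.2f}%")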
Table 3. Relative cell-level timing, power, area, and soft-error rate comparisons.

Approach                 Scannable   D-to-Q   C-to-Q   Power   Area   Global interconnect              Undetected soft-error rate
Master/slave flip-flop   Yes         1.00     1.00     1.00    1.00   None                             1.00
Error-blocking design    Yes         1.00     1.08     2.13    1.08   None                             < 0.05
Self-checking design     Yes         0.99     0.99     2.02    0.95   Several for error accumulation   0
Error-trapping design    Yes         0.97     0.99     2.26    1.24   Reused from existing scan path   0
Table 4. Chip-level power, area, and performance overhead, by percent.

Approach                Power overhead   Area overhead   Performance overhead
Error-blocking design   4.5              0.10            0
Self-checking design    4.0              -0.06           0
Error-trapping design   5.0              0.30            0
The reuse paradigm for built-in soft-error
resilience offers the following unique advantages
over existing soft-error protection techniques:
• minimal area overhead because resources
already available for test and debug can be
reused for soft-error resilience;
• minimal routing overhead;
• no major architectural changes required;
• applicability to any design—microprocessors,
network processors, and ASICs; and
• a broad spectrum of design choices with several area, power, performance, and soft-error
rate tradeoffs. For example, the design shown
in Figure 4 can be redesigned to achieve a 50
percent rather than a 20 times reduction in the
SER, with a 30 percent reduction in the cell-level power overhead.
Soft-error rates are getting worse for systems
manufactured in advanced technologies with
very high levels of integration. The stringent data integrity and availability requirements of enterprise and networking applications demand special
attention to soft errors not only in SRAMs but also
in sequential elements and combinational logic
from the very early phases of product development
forward.
Applying the reuse paradigm for built-in soft-error
protection significantly reduces the system-level
soft-error rate and introduces minimal overhead.
Automated techniques for architectural-vulnerability-factor estimation are required to further reduce
the system-level power, performance, and area overheads of these techniques. ■
Acknowledgments
For their help with this article, we thank R.
Fuller, J. Maiz, and T.M. Mak of Intel, and E.J.
McCluskey of Stanford University.
References
1. D. Lyons, “Sun Screen,” Forbes Magazine, 2000;
www.forbes.com/forbes/2000/1113/6613068a.html.
2. R. Wilson and D. Lammers, “Soft Errors Become
Hard Truth for Logic," EE Times, 3 May 2004; www.eetimes.com/semi/news/showArticle.jhtml?articleID=19400052.
3. D.C. Bossen, “CMOS Soft Errors and Server Design,”
Workshop on Radiation Induced Soft Errors, Proc.
IEEE Int’l Reliability Physics Symp., IEEE Press,
2002.
4. “Increasing Network Availability”; www.cisco.com.
5. H.T. Nguyen and Y. Yagil, “A Systematic Approach
to SER Estimation and Solutions,” Proc. IEEE Int’l
Reliability Physics Symp., IEEE Press, 2003, pp. 60-70.
6. N. Seifert and N. Tam, “Timing Vulnerability Factors of Sequentials,” IEEE Trans. Device and Materials Reliability, Sept. 2004, pp. 516-522.
7. K.K. Goswami, R. Iyer, and L.Y. Young, “DEPEND:
A Simulation-Based Environment for System-Level
Dependability Analysis,” IEEE Trans. Computers,
Jan. 1997, pp. 60-74.
8. N.J. Wang et al., “Characterizing the Effects of Transient Faults on a High-Performance Processor
Pipeline,” Proc. Int’l Conf. Dependable Systems and
Networks, IEEE Press, 2004, pp. 61-70.
9. S.S. Mukherjee et al., “A Systematic Methodology
to Compute the Architectural Vulnerability Factors
for a High-Performance Microprocessor,” Proc. Int’l
Symp. Microarchitecture, IEEE CS Press, 2003, pp.
29-40.
10. P. Hazucha et al., “Neutron Soft Error Rate Measurements in a 90-nm CMOS Process and Scaling
Trends in SRAM from 0.25-µm to 90-nm Generation,” Proc. Int’l Electron Devices Meeting, 2003,
pp. 21.5.1-21.5.4.
11. R. Baumann, “The Impact of Technology Scaling on
Soft-Error Rate Performance and Limits to the Efficacy of Error Correction,” Proc. IEEE Int’l Electron
Devices Meeting (IEDM02), IEEE Press, 2002, pp.
329-332.
12. P. Hazucha et al., “Measurements and Analysis of
SER-Tolerant Latch in a 90-nm Dual Vt CMOS
Process,” IEEE J. Solid State Circuits, Sept. 2004,
pp. 1536-1543.
13. D.P. Siewiorek and R.S. Swarz, Reliable Computer
Systems Design and Evaluation, 3rd ed., A.K. Peters,
1998.
14. N. Oh, P.P. Shirvani, and E.J. McCluskey, “Error
Detection by Duplicated Instructions in Super-Scalar
Processors,” IEEE Trans. Reliability, Mar. 2002, pp.
63-75.
15. N. Oh, S. Mitra, and E.J. McCluskey, “ED4I: Error
Detection by Diverse Data and Duplicated Instructions," IEEE Trans. Computers, Feb. 2002, pp. 180-199.
16. N.R. Saxena et al., "Dependable Computing and On-Line Testing in Adaptive and Reconfigurable Systems," IEEE Design and Test of Computers, Jan.-Mar. 2000, pp. 29-41.
17. S.S. Mukherjee, M. Kontz, and S. Reinhardt,
“Detailed Design and Evaluation of Redundant Multithreading Alternatives,” Proc. Int’l Symp. Computer Architecture, IEEE CS Press, 2002, pp. 99-110.
18. M. Nicolaidis, “Time Redundancy-Based Soft-Error
Tolerance to Rescue Nanometer Technologies,” Proc.
IEEE VLSI Test Symp., IEEE Press, 1999, pp. 86-94.
19. A. Carbine and D. Feltham, “Pentium Pro Processor
Design for Test and Debug,” Proc. Int’l Test Conf.,
IEEE Press, 1997, pp. 294-303.
20. R. Kuppuswamy et al., “Full Hold-Scan Systems
in Microprocessors: Cost/Benefit Analysis"; http://developer.intel.com/technology/itj/2004/volume08issue01/.
Subhasish Mitra, a senior staff engineer at Intel, is
also a consulting assistant professor in the Electrical Engineering Department at Stanford University
and the associate director of the Stanford Center
for Reliable Computing. His research interests
include robust system design, VLSI design and test,
fault-tolerant computing, and computer architecture. Mitra received a PhD in electrical engineering from Stanford University. Contact him at
[email protected].
Norbert Seifert is a design and reliability engineer
at Intel. His research interests include the interdependence of design and system reliability. Seifert
received a PhD in physics from the Technical University of Vienna, Austria. Contact him at [email protected].
Ming Zhang is an intern at Intel and a PhD candidate in the Department of Electrical and Computer
Engineering at the University of Illinois at UrbanaChampaign. His research interests include design
and modeling of reliable circuits and systems.
Zhang received an MS in electrical engineering
from the University of Illinois at Urbana-Champaign. Contact him at [email protected].
Quan Shi is a senior design engineer at Intel. His
research interests include circuit-hardening techniques, circuit modeling and validation, and asynchronous circuits. Shi received a PhD in electrical
engineering from the University of New Mexico.
Contact him at [email protected].
Kee Sup Kim is the director of DFX—Design for
Test, Reliability, Manufacture, and Debug—for
communications products at Intel. His research
interests include the four DFX areas, especially
structural test, speed-defect coverage, BIST, and
quality risk assessment. Kim received a PhD in electrical engineering from the University of Wisconsin-Madison. Contact him at [email protected].
COVER FEATURE
Transistor-Level Optimization of Digital Designs with Flex Cells
The flex-cell approach, either alone or in combination with standard cells,
provides an optimally tuned set of building blocks for integrated circuit
design when optimality is measured using accepted and quantifiably definable metrics such as clock speed, die size, and power consumption.
Rob Roy
Debashis Bhattacharya
Vamsi Boppana
Zenasis Technologies
0018-9162/05/$20.00 © 2005 IEEE
As early as the mid-1980s, researchers
studying the performance gap between
handcrafted custom and standard-cell-based synthesized designs1 found that a fixed and limited set of library elements constitutes a major bottleneck in achieving target
quality. More recently, researchers have estimated
that as much as 25 percent of the quality gap
between automatically created and handcrafted
designs can be attributed to the fixed set of cells in
a predefined library. These libraries enable quick
generation of a broad range of possible designs,
but are not optimized for the timing context found
in any particular one.
A standard-cell-based automated design flow for
digital circuits offers a mixed blessing. Historically,
the use of precharacterized and silicon-verified standard cells has been driven by the designers’ need to
create and verify large digital circuits without having to verify the circuit’s behavior at the transistor
level. Transistor-level design and verification of
a multimillion-gate digital circuit is simply too
resource-intensive to be commercially viable for
most designs. Standard cells thus provided relatively
fine-grained control over the digital circuit’s structure, yet allowed a team of fewer than 10 engineers
to design complex digital integrated circuits using
automated synthesis tools.
On the other hand, the quality of automated standard-cell-based designs has always ranged from
poor to barely acceptable at best. Researchers have
estimated that designs created using these automated-design flows run slower by at least a factor
of 6 and consume a larger design area by at least a
factor of 10, compared to similar designs created
or optimized manually. Up to one-quarter of this
quality deficiency can be traced to using a fixed, predefined library of standard cells.
Over the years, it has become commonplace to
perform various forms of manual intervention on
designs generated using automated flows. Among
these, using special macrocells and tactical cells has
become virtually routine for all high-performance
designs created using automatic design tools, especially for designs that run into timing-closure problems. These problems occur because of inaccurate
timing estimations during automated design creation. They usually can be linked to incorrect
estimation of the interconnect load and delay characteristics. According to market research surveys
conducted by Collett International, more than 60
percent of all ASIC designs have timing-closure
problems.2
The quest to overcome the limitations of standard-cell-based design methods leads naturally to
the creation of new design- and context-specific
cells—designated flex cells—during the process of optimizing a given digital design.

Figure 1. IC design optimization process using flex cells. These design- and context-specific cells enhance system performance by increasing clock speed.
The design community openly acknowledges that
virtually every high-performance design project that
relies on automated design flows also uses design-specific tactical cells that developers identify and create manually. They then use these cells in the design
via a combination of a register-transfer-level coding
style and synthesis directives. Without the use of
these cells, the gap in quality between automated
and handcrafted designs would be even wider.
From a superficial viewpoint, flex-cell-based
design optimization automates the creation of tactical cells, thereby helping to bridge the quality gap.
However, a deeper examination of the flex-cell-based optimization process makes it amply clear
that the full impact of such optimization goes far
beyond providing a better framework for creating
tactical cells.
FLEX-CELL-BASED OPTIMIZATION
The IC design optimization process shown in
Figure 1 is geared toward enhancing the design’s
performance—specifically, increasing clock speed.
The time-tested manual process of locally optimizing a digital design driven by global analysis
inspired this design optimization approach. In this
design, the global analysis consists of accurate static-timing analysis,3 while local optimization consists of an overall control mechanism employing
two key steps: clustering and mapping.4,5
To optimize performance, the clustering process
identifies the best candidate regions in the design
for local optimization, a search driven by the static-timing analysis's results. The clustering process
yields a set of clusters—groups of one or more standard cells—that must be replaced by new flex cells
created for their respective timing contexts.
The mapping process takes as inputs the clusters
and their timing contexts, then determines a reasonably small set of best-candidate flex cells that
should be used to replace the clusters. The optimization control process searches through this
set of cells to determine whether replacing one or
more clusters with flex cells will improve the given
design’s overall timing.
The close coupling between the clustering and
mapping processes is key to this technique’s success. Specifically, the mapping process6 is tailored
to choose from a variety of techniques that can be
used to create new flex cells, based on the inputs it
receives from the clustering process.
Such mapping techniques can include time-tested
methods such as gate sizing and continuous transistor sizing, as well as techniques typically found
in custom design flows, such as creating new,
appropriately sized transistor-level implementations
of the function for a given standard-cell cluster.
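The following sketch outlines the overall control loop described in this section, with run_sta, find_clusters, map_to_flex_cells, and netlist.replace standing in as placeholders for the tool's actual engines; it is a schematic of the flow, not the product's implementation.

# Schematic sketch of flex-cell-based optimization: global static-timing
# analysis drives clustering and mapping, and a cluster is replaced by a
# candidate flex cell only if the design's overall timing improves.
def optimize(netlist, run_sta, find_clusters, map_to_flex_cells, target_slack=0.0):
    """Greedy loop: analyze globally, improve the most critical cluster, repeat."""
    while True:
        timing = run_sta(netlist)                 # global static-timing analysis
        if timing.worst_slack >= target_slack:
            return netlist                        # timing goal met
        improved = False
        for cluster in find_clusters(netlist, timing):        # critical regions first
            for flex_cell in map_to_flex_cells(cluster, timing):
                trial = netlist.replace(cluster, flex_cell)   # local transistor-level change
                if run_sta(trial).worst_slack > timing.worst_slack:
                    netlist, improved = trial, True
                    break                                     # accept and re-analyze
            if improved:
                break
        if not improved:
            return netlist                        # no further gain found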
Figure 2. Flex-cell mapping process. Tailored to choose from a variety of techniques that can be used to create new flex cells, this process ensures functional correctness, meets design targets and other implementation constraints, and minimizes the number of transistors in design-specific cells.
PRACTICAL MAPPING CHOICES
The mapping process can be quite involved. At a
minimum, it includes the following:
• ensuring functional correctness of the resultant transistor-level design;
• meeting design targets, for example, performance (possibly measured using propagation
delay) of the generated flex cells, given the timing contexts for their intended use;
• meeting other implementation constraints,
such as maximum length of N- or P-transistor
chains in the flex cell, the required output drive
strength for the design-specific cell, desired
input capacitive load of the design-specific cell,
and so on;
• minimizing the number of transistors in the
design-specific flex cells, subject to the IC
design’s characteristics; and
• sizing the transistors in the design-specific cells,
as necessary.
Figure 2 shows a more detailed view of the mapping process. The inputs to the process can include
the following:
• a set of structural netlists composed of standard cells, otherwise known as clusters;
• a set of performance constraints for each individual cluster; and
• important process-dependent parameters like
transistor SPICE models (developed originally
at UC Berkeley, SPICE is the most widely used
simulation tool for transistor-level designs).
A clustering process, which precedes the mapping
process shown in Figure 1, identifies the set of clusters and the performance constraints for them. The
clustering step essentially partitions a conventional
logic synthesis tool’s output, using either heuristics
to guide the partitioning or a systematic search procedure such as a branch-and-bound search.
Key steps of the mapping process include the following:
• creation of a transistor-level netlist;
• fast characterization that incorporates the flex
cells’ implementation context;
• transistor sizing;
• accurate but slower characterization of the
final transistor-level netlist;
• optional layout synthesis with transistor sizing, via a layout synthesis tool;
• parasitic extraction and accurate postlayout
characterization if layout synthesis is performed; and
• generation of views to fit the flex cells into a standard-cell-based design flow.

Figure 3. Flex-cell generation. Starting with (a) the original cluster of standard cells, the mapping process (b) creates a flex cell that replaces the cluster and (c) improves performance. The delays in (c), for a transition on each input: input a, original rise/fall 0.29/0.34 ns, optimized 0.13/0.11 ns; input b, original 0.18/0.30 ns, optimized 0.17/0.13 ns; input c, original 0.18/0.31 ns, optimized 0.16/0.15 ns; input d, original 0.18/0.27 ns, optimized 0.15/0.14 ns.
This process contrasts sharply with conventionally
automated, transistor-level design optimization
techniques that derive their benefits primarily from
transistor sizing.7
Creation of a transistor-level netlist during mapping includes the four key substeps shown in Figure
2: transistor netlist generation, netlist evaluation,
topology alteration, and sizing. Given the original
cluster, the mapping can use a variety of algorithms
and heuristics to generate a transistor netlist. For
example, several techniques for deriving transistor
netlists use binary decision diagrams as starting
points to represent the cluster’s function. Based on
acyclic directed graphs, BDDs can be used to represent common functions in digital circuits, including transistor netlist structures.8,9
Despite starting with multiple algorithms, the
transistor-netlist-generation process might not yield
topologies that meet the constraints its use context
imposes on the cluster. In such cases, the process
must alter the topology to explore multiple alternative implementations, given the cluster’s functionality. For example, the process can include
using a variable reordering in the cluster’s decision
diagram representations. The topology alteration
process also can use multiple decomposition methods, such as the Boole-Shannon, Kronecker, Roth-Karp, Positive Davio, Negative Davio, and Ashenhurst techniques.1
Researchers can use various metrics, derived
from the constraints mentioned earlier, to evaluate
the results from the topology alteration process and
obtain a ranked list of flex cells. In this list, a higher
rank indicates greater suitability for use in the given
context.
For timing optimization purposes, an appropriate mix of SPICE-like transistor-level timing analysis as well as faster and more approximate
switch-level timing analysis techniques—for example static-timing analysis at the switch and transistor level—is key to ranking the candidate transistor
netlist topologies properly.
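One way to realize this mix, sketched below under our own assumptions, is a two-pass ranking in which a fast approximate delay estimate prunes the candidates and only the survivors are evaluated with the slower, more accurate analysis; both evaluators are stand-ins for real timing engines.

# Sketch of a two-level ranking: a cheap switch-level delay estimate prunes
# the candidate topologies, and only the best few go to slow SPICE-like
# simulation. The evaluators below are stand-ins for real tools.
def rank_candidates(candidates, fast_delay, accurate_delay, keep=3):
    shortlist = sorted(candidates, key=fast_delay)[:keep]     # approximate pass
    return sorted(shortlist, key=accurate_delay)              # accurate pass

if __name__ == "__main__":
    cands = ["topo_a", "topo_b", "topo_c", "topo_d"]
    fast = {"topo_a": 0.21, "topo_b": 0.18, "topo_c": 0.30, "topo_d": 0.19}.get
    slow = {"topo_a": 0.20, "topo_b": 0.19, "topo_c": 0.31, "topo_d": 0.17}.get
    print(rank_candidates(cands, fast, slow))   # ['topo_d', 'topo_b', 'topo_a']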
Figure 3 shows some results of and uses for the
flex-cell generation process. This diagram shows
the flex cell that results when a portion of an IC
design is mapped to transistors, with the primary
goal being performance optimization. Figure 3a
shows the cluster in question. It has only one critical input, input a. In this context, a critical input is
the delay from this input to the cell’s output, which
limits the cell’s overall performance.
Figure 3b shows a candidate flex cell generated
by the mapping process. Figure 3c shows the performance improvement that would result from
replacing the cluster with the flex cell.
Although this description implicitly focuses on
the static CMOS family of logic circuits, if the target IC design implementation uses another family
of MOS circuit design—including various forms of
dynamic CMOS, a combination of static and
dynamic CMOS, and so on—this mapping process
also could be applied broadly to the creation of
NMOS or PMOS networks for such logic families.
MINIMIZING THE NUMBER OF
NEW FLEX CELLS CREATED
As Figure 1 shows, the flex-cell-based optimization process can invoke a process to minimize the
number of functionally unique flex cells created
when optimizing a given design. In the context of
flex-cell synthesis, this process focuses on identifying the minimum number of unique flex cells
required for optimization and thus differs from the
Computer
TEAM LinG - Live, Informative, Non-cost and Genuine!
uniquify command found in synthesis tools.
Further, given that developers fully expect a predefined standard-cell library to be available as an
optimization process input, they also can use the
process to identify near or exact matches, depending on the IC or flex-cell design’s tolerance.
As a matter of policy, if transistor-level implementations of the standard cells are available as
optimization process inputs, creating flex cells that
have equivalent or near-equivalent matches in the
available standard-cell library should be avoided,
within limits.
Clearly, practical considerations must drive the
choice to create a new flex cell or not. However,
exceptions to this policy might be necessary when
creating, for example, new flex cells that represent
library cell-sizing variations.
In the context of flex-cell-based synthesis, especially synthesis aimed at enhancing the target
design’s performance, this process must take into
account both functionality and the timing contexts
in which each unique cell will be used.
Thus, in addition to functionality, the matching
target is annotated with timing constraints related
to its intended use in a design. Although here we
focus on timing, in general, constraints specified as
part of the target can be related to various other
metrics such as power, area, noise margins, slew,
input and output capacitances, drive strength, footprint size, and pin placement.
CELL LAYOUT SYNTHESIS
During the mapping process shown in Figure 1,
automated-cell-layout synthesis10,11 plays a key role
in closing the loop with respect to creating actual
layouts of the flex cells designed as transistor-level
netlists.
Layout synthesis takes as input the flex cells’
netlists, various fabrication process technology
parameters—including layout design rules, desired
standard-cell architecture parameters such as cell
height, number of tracks, and implant specifications—and creates the detailed transistor layout.
This layout consists of polygons that eventually will
be fabricated on silicon substrate.
Layout synthesis commonly includes further tuning the transistors’ sizes in the flex cells with the
goal of ensuring that the cells’ timing characteristics, postlayout, closely match the desired timing
characteristic passed to layout synthesis as input.
During the layout synthesis step, designers strive
to achieve compatibility with standard cell library
blocks to seamlessly mix the created flex cells with
the predefined standard cells used in the rest of the
b
0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 ...
Z
f
c
0 0 0 1 1 1 0 0 0 0 1 0 0 0 1 ...
(a)
Context is known
Consider only some input transitions
a
00111
Inputs
.
.
.
b
00011
Z
f
c
01001
. Outputs
.
.
(b)
design. The compatibility of the flex cells and standard cells, at the layout level, ensures that the final
IC design can be highly customized while remaining flexible enough to allow using standard cells
where possible or needed.
PERFORMANCE ENHANCEMENT
As Figure 2 shows, our process repeatedly uses a
fast characterization step during mapping to obtain
estimates of the flex cells’ timing characteristics.
This is possible because the design constraints are
known and have been used as the basis for generating the flex cells. Using appropriately chosen
characterization mechanisms throughout the mapping and optimization process is key to the success
of such flex-cell-based design optimization techniques.
Broadly speaking, characterization mechanisms
used during optimization can be divided into two
phases: prelayout and postlayout.
The prelayout phase includes flex cell characterization at creation, taking the context of use into
account. Using context-dependent information at
this stage allows more accurate characterization
and reduces the resources needed for this function.
Figure 4 shows this process: Cell Z has three
inputs, a, b, and c. Conventional methodology can
instantiate Cell Z in any portion of the design.
Hence, a characterization method that lacks knowledge of the cell’s use context must consider the possibility that signal transition can propagate from
any input to the output, for any possible valid input
combination on the other inputs.
Figure 4. Context-dependent flex-cell characterization. (a) A conventional methodology can instantiate Cell Z in any portion of the design. (b) In contrast, the alternative design flows that generate flex cells on the fly allow only a subset of the possible input transitions to be used for characterization, which speeds processing.
Table 1. Propagation delay comparison of gate-level static-timing and transistor-level timing analyses.

Drive strength at each of four stages   Static timing (ns)   Transistor-level timing, SPICE (ns)
0, 0, 0, 0                              0.40                 0.186
1, 1, 1, 1                              0.22                 0.166
2, 2, 2, 2                              0.15                 0.157
4, 4, 4, 4                              0.61                 0.581
0, 1, 2, 4                              0.41                 0.332
In contrast, in alternative design flows that generate flex cells on the fly, the places where the cell is instantiated in the design define the cell's environment.
Hence, certain input value combinations may not
be applicable to Cell Z. In Figure 4b, for example,
the design flow only needs to consider a subset of
the possible input transitions because of Cell Z’s
known design context.
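The contrast can be made concrete with a small sketch: without context, a three-input cell must be characterized for every single-input transition, while with the context of Figure 4b only the transitions actually observed at its inputs need to be simulated. The bit streams below reuse the waveforms shown in Figure 4b.

# Sketch of why context helps: with no context, characterization must cover
# every single-input transition of a 3-input cell; with a known context, only
# the transitions actually observed at the cell's inputs need simulating.
from itertools import product

def all_single_input_transitions(num_inputs):
    cases = []
    for base in product((0, 1), repeat=num_inputs):
        for i in range(num_inputs):
            flipped = list(base)
            flipped[i] ^= 1
            cases.append((base, tuple(flipped)))
    return cases

def observed_transitions(traces):
    """traces: per-input bit streams recorded in the cell's context of use."""
    steps = zip(*[zip(t, t[1:]) for t in traces])
    return {tuple(zip(*step)) for step in steps}

if __name__ == "__main__":
    full = all_single_input_transitions(3)
    # Waveforms for inputs a, b, c as drawn in Figure 4b
    ctx = observed_transitions([[0, 0, 1, 1, 1], [0, 0, 0, 1, 1], [0, 1, 0, 0, 1]])
    print(len(full), "cases without context vs", len(ctx), "with context")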
The benefits of prelayout characterization
become more pronounced when applied to a group
of standard or flex cells. Consider, for example, a
simple four-stage chain of NAND gates. Table 1
compares the worst propagation delays through
this gate chain, as determined by both gate-level
static-timing analysis and transistor-level timing
analysis for five circuits. Each row in Table 1 represents one unique circuit configuration.
The first four columns in Table 1 represent the
drive strength of the gates at each stage, where zero
represents the smallest drive strength and four represents the largest. The static timing and transistor-level timing columns represent the worst propagation delay through the same design, based on the
analysis of representative state-of-the-art processes
and delay models. The same load capacitance and
input slew values were used for each run.
Clearly, transistor-level timing analysis is more
accurate than gate-level timing analysis. The differences in timing analyses can vary dramatically
in certain kinds of configurations, as seen in the first
circuit configuration in Table 1, where each NAND
gate has 0 drive strength.
These differences vary based on factors such as
the delay models being used, circuit topology, the
model extraction technology, the choice of transistor simulation techniques, the targeted fabrication
process, circuit design style, and the drive strengths
of the circuits under consideration. In general, gate-level static-timing analysis is inherently conservative to account for potential inaccuracies in deriving
the gate-level abstraction from the transistor-level
circuit.
The flex-cell approach mitigates the well-known conservative nature of gate-level static-timing analysis and improves accuracy because it considers clusters of standard cells and analyzes and optimizes these clusters at the transistor level.
Physically placing members of such clusters in close
proximity to each other to make a new cell with a
predefined shape, then characterizing the entire
group as one entity at the transistor level provides
a more accurate estimate of timing than can be
achieved with gate-level static-timing analysis.
PHYSICAL DESIGN AND OPTIMIZATION
As the minimum feature size of fabrication
processes has decreased to 0.18 µm and smaller, it
has become virtually impossible to create designs,
especially high-performance designs, without incorporating detailed physical design information into
the synthesis and optimization process.
The dominant factor guiding this development is
the greater role that interconnect delays play in
determining a design’s overall critical-path delay.
Static-timing analysis in flex-cell-based design must
take into account actual wire delays, loads, and slew
degradation differences between different parts of
the same interconnect net—or use good estimates
thereof derived from physical design knowledge.
The local optimization steps, including clustering
and mapping, must also consider the impact of these
factors with regard to nets of interest.
Various intermediate steps can be taken, as flex-cell-based optimization transitions from traditional
wire-load model-based computation to physical
design-based load computation. The necessary step
of understanding the standard and flex cells and
estimating the wire lengths of individual nets is best
done using well-known parameters such as half-perimeter, number of net terminals, and fraction of
the bounding box covered by cells and occupied by
blockages.12 At high utilization, or in the presence
of severe congestion, more detailed routing information is essential to allow accurate estimation of
the delays and loads that the flex-cell-based optimization tool must take into account at various
stages.
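The half-perimeter measure mentioned above is easy to state concretely; the sketch below computes it for one net, with purely illustrative coordinates.

# Half-perimeter wire length (HPWL), the standard quick estimate mentioned
# above: a net's wire length is approximated by half the perimeter of the
# bounding box of its terminals. Coordinates below are illustrative.
def hpwl(pins):
    """pins: list of (x, y) terminal locations of one net."""
    xs, ys = zip(*pins)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

if __name__ == "__main__":
    net = [(0.0, 0.0), (12.0, 3.0), (5.0, 9.0)]
    print(f"HPWL estimate: {hpwl(net):.1f} um")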
Key to incorporating this data into the optimization process is the use of fast incremental
placement algorithms. These algorithms can be
based on well-known techniques such as quadratic
placement, force-directed placement, and simulated
annealing13—or on some appropriate combination
of multiple placement techniques.
Important issues that require careful attention
for effective use of incremental placement include
the following:
• execution time of the incremental placement
algorithm, and
• quality of result—as measured by correlation
to the final placement that the place-and-route
tool used for the actual layout will generate.
Figure 5. Structural comparison of unoptimized cells versus an optimized flex cell.
Table 2. Performance comparison of an unoptimized cell and an optimized flex cell.

Transition on input   Original rise (ns)   Original fall (ns)   Optimized rise (ns)   Optimized fall (ns)
A                     0.29                 0.34                 0.13                  0.11
B                     0.18                 0.30                 0.17                  0.13
C                     0.18                 0.31                 0.16                  0.15
D                     0.18                 0.27                 0.15                  0.14
A practical solution may require making a variety of tradeoffs to achieve the desired speed, at the
potential cost of some quality degradation. These
tradeoffs might include the following:
• relaxing the requirements to generate legal
placements, thus allowing some design rule
violations like cell overlap;
• invoking incremental placement after a set of
optimization steps complete as opposed to
invoking incremental placement after every
change made during optimization; and
• using simpler algorithms like force-directed
placement, as opposed to more sophisticated
placement techniques.
CASE STUDIES
Results from experimental studies help to
demonstrate the substantial benefits that can be
derived from using crafted flex cells to custom-build a design context. These studies also provide
evidence that applying this methodology in current
designs is feasible.
The first experiment demonstrates the savings
achievable by replacing a set of conventional standard cells with a customized flex-cell implementation. Figure 5 and Table 2 show these results, which
document the structural and timing advantages of
flex cells, respectively.
Figure 5 shows that the conventional standard-cell implementation used five cells, 22 transistors, and nine wires, while the optimized flex-cell implementation reduced this to a single cell that consists of
only 13 transistors and zero global wires.
Table 2 shows various characteristics of the new
implementation, including significant improvement
in timing characteristics. In the design context for
this example, the project team created the flex cell
to optimize the critical path in the design between
the input and output. The worst-case delay for that
critical path improved from 0.31 ns in the conventional implementation to a remarkable 0.13 ns in
the flex-cell implementation.
Consider how the introduction of flex-cell-based
optimization can alter a design’s worst critical
paths. The graph in Figure 6 plots the number of
paths violating a specific timing constraint against
the timing constraint for two versions of another
adder design. A state-of-the-art, commercial, standard-cell synthesis tool produced the first design,
represented by the outer curve. The second design,
represented by the inner curve, was obtained by
applying flex-cell-based optimization to the original design.
Figure 6. Using flex cells to reduce the number of paths that violate timing constraints. The plot shows the number of paths versus time in nanoseconds for the original design and the flex-cell-optimized design.
Figure 7. Flex-cell-based optimization of critical-path timing. Optimization improved the performance of this timing-path set by 19 percent. The figure lists path delays before and after optimization, in nanoseconds.
Figure 7 shows how applying flex-cell-based optimization can improve individual timing paths. Here,
the original critical path ran at 2.23 ns, while flex-cell-based optimization reduced the critical path
delay to 1.83 ns, an improvement of 19 percent over
the combinational path delay on this timing path set.
Other data shows the feasibility of using flex cells
in an automatic design optimization procedure. One
potential difficulty with using flex cells is that the
time-consuming process of creating their layout can
interfere with the speed of front-end optimization
and synthesis tools. This can be overcome using
highly accurate estimators of postlayout parasitics,
given a flex cell’s prelayout SPICE netlist and various
details of the target cell architecture and process rules.
Experiments on a large set of flex cells indicate
that using such estimators, even for a characteristic
as sensitive as timing data, can bring prelayout data
to within 5 to 6 percent of postlayout data, as
shown in Figure 3c. Further, the prelayout data can
be tuned to be slightly pessimistic, thus eliminating
costly optimization and place-and-route iterations.
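A toy version of such a pessimistically tuned estimator, written here in C++ purely for illustration (the real estimators work from the prelayout SPICE netlist and cell-architecture details, and the margin value is an assumption), might look like this:

#include <iostream>

// Apply a fixed guard band so prelayout timing estimates err on the
// pessimistic side of the observed 5 to 6 percent error band.
double guarded_delay_ns(double prelayout_delay_ns, double margin = 1.06) {
    return prelayout_delay_ns * margin;
}

int main() {
    double prelayout = 0.50;   // hypothetical prelayout path delay in ns
    std::cout << "guard-banded estimate: " << guarded_delay_ns(prelayout) << " ns\n";
    return 0;
}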
Finally, we summarize the results of using flex-cell-based optimization on several industrial designs that range from 7,000 placeable
instances and approximately 30,000 gates to
roughly 80,000 placeable instances and 320,000
gates, designated CKT1 through CKT5. Table 4
shows the results of this approach.
Clearly, the flex-cell-based optimization methodology can achieve performance enhancements ranging from 10 to 20 percent. As tools
built with flex-cell-based optimization methodology mature, we expect to achieve a performance improvement significantly greater than 20
percent over and above what traditional standard-cell-based design optimization flows can
deliver.
ASICs have a place in digital circuit design. In certain applications, the economics and other practical considerations make them the best choice. Custom ICs achieve the best performance and will continue to do so in the foreseeable future. However, some techniques can be borrowed from the custom design method—optimization at the transistor level, for example—which can be automated and thus become a viable extension of current ASIC design methodology.
This is the essence of the flex-cell approach, which either alone or in combination with standard cells provides an optimally tuned set of building blocks for the target IC design when optimality is measured using accepted and quantifiably definable metrics such as clock speed, die size, and power consumption. By allowing manipulation of the transistor-level structures, flex cells open up a new dimension in the optimization of automatically created designs.
Table 4. Flex-cell-based industrial design optimizations.

Design | Total number of instances, initially | Number of unique flex cells added | Total number of flex-cell instances added | Final total number of instances | Initial clock frequency (MHz) | Final clock frequency (MHz) | Performance improvement (percent) | Runtime (hours)
CKT1 | 7,017 | 96 | 500 | 6,951 | 339 | 400 | 18 | 5
CKT2 | 18,265 | 49 | 183 | 18,275 | 167 | 193 | 16 | 8
CKT3 | 33,940 | 165 | 3,821 | 34,389 | 187 | 219 | 18 | 33
CKT4 | 38,310 | 132 | 5,927 | 36,192 | 297 | 345 | 16 | 35
CKT5 | 80,277 | 80 | 640 | 78,639 | 188 | 206 | 10 | 5
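The improvement column is essentially the relative clock-frequency gain; for CKT1, for example, (400 - 339)/339 ≈ 0.18, or 18 percent.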
Practical results using flex-cell-based optimization suggest that when employed properly, this
methodology holds the promise of significantly benefiting the automatic optimization of digital designs.
Achieving this would represent a significant step
toward bridging the performance gap between
custom and ASIC design. Besides performance, flex
cells can be used for optimizing power, both dynamic and static, in ASICs. Moreover, flex
cells naturally fit into the methodology of creating
variations of custom transistor topology, for example, the use of a ground gating transistor for leakage power control.14 ■
References
1. D. Chinnery and K. Keutzer, Closing the Gap
between ASIC & Custom, Kluwer Academic Publishers, 2002.
2. Collett International, “1999 IC/ASIC Functional &
Timing Verification Study,” 1999; www.collett.com.
3. R.B. Hitchcock, “Timing Verification and Timing
Analysis Program,” Proc. 19th Design Automation
Conf., IEEE CS Press, 1982, pp. 594-604.
4. J.L. Burns and J.A. Feldman, “C5M–A Control-Logic
Layout Synthesis System for High-Performance
Microprocessors,” IEEE Trans. Computer-Aided
Design of Integrated Circuits and Systems, vol. 17,
no. 1, 1998, pp. 14-23.
5. University of California, Berkeley, SIS Abstract Page;
www.eecs.berkeley.edu/IPRO/Software/Catalog/
Description/sis1.2.html.
6. M.R.C.M. Berkelaar and J.A.G. Jess, “Technology
Mapping for Standard-Cell Generators,” Proc. Int’l
Conf. Computer-Aided Design, IEEE CS Press, 1988,
pp. 470-473.
7. K. Taki, “A Survey for Pass-Transistor Logic Technologies—Recent Research and Developments and
Future Prospects,” Proc. Asia-South Pacific Design
Automation Conf., IEEE CS Press, 1998, pp. 223-226.
8. R.E. Bryant, “Graph-Based Algorithms for Boolean
Function Manipulation,” IEEE Trans. Computers,
Aug. 1986, pp. 677-691.
9. C.P. Liu and J.A. Abraham, “Transistor-Level Synthesis for Static Combinational Circuits,” Proc. 9th
Great Lakes Symp. VLSI, IEEE CS Press, 1999, pp.
172-175.
10. M. Cirit and P. Hurat, “Automated Cell Optimization,” Numerical Technologies white paper, 2002;
www.synopsys.com/products/ntimrg/abstracts/
AutomatedCellOptimization.html.
11. A. Reis et al., “The Library Free Technology Mapping Problem,” Proc. Int’l Workshop Logic Synthesis, 1997, IEEE CS Press, pp. 102-106.
12. S. Bodapati and F.N. Najm, “Prelayout Estimation
of Individual Wire Lengths,” IEEE Trans. VLSI Systems, vol. 9, no. 6, 2001, pp. 943-958.
13. N. Sherwani, Algorithms for VLSI Physical Design
Automation, 2nd ed., Kluwer Academic Publishers,
1995.
14. M. Johnson, D. Somasekhar, and K. Roy, “Leakage
Control with Efficient Use of Transistor Stacks in Single Threshold CMOS,” Proc. 36th Design Automation Conf., ACM Press, 1999, pp. 442-445.
Rob (Rabindra) Roy is vice president of marketing
and business development at Zenasis Technologies.
His research interests include VLSI design and test,
timing and power optimization, and wireless communication and computing systems. Roy received
a PhD in electrical and computer engineering from
the University of Illinois at Urbana-Champaign.
Contact him at [email protected].
Debashis Bhattacharya is the chief technology officer and cofounder at Zenasis Technologies. His
research interests include CAD for digital VLSI
design, high-performance design, and design for
test. Bhattacharya received a PhD in computer
engineering from the University of Michigan. Contact him at [email protected].
Vamsi Boppana is vice president of engineering and
cofounder at Zenasis Technologies. His research
interests include all aspects of VLSI design, test,
and verification. Boppana received a PhD in electrical and computer engineering from the University of Illinois at Urbana-Champaign. Contact him
at [email protected].
COVER FEATURE
Hardware/Software
Interface Codesign
for Embedded
Systems
Separate hardware- and software-only engineering approaches cannot
meet the increasingly complex requirements of embedded systems.
HW/SW interface codesign will enable the integration of components in
heterogeneous multiprocessors. The authors analyze the evolution of
this approach and define a long-term roadmap for future success.
Ahmed A.
Jerraya
TIMA Laboratory
Wayne Wolf
Princeton University
An embedded computing system is an
application-specific electronic subsystem
that is used in a larger system such as a
consumer appliance, medical device, or
automobile. Embedded systems can
embody complete system functionality in several
ways—for example, by using software running on
CPUs or in specialized hardware accelerators.
Technological evolution—particularly shrinking silicon fabrication geometries—is enabling the
integration of complex platforms in a single system on chip (SoC). In addition to specific hardware
subsystems, a modern SoC also can include one or several CPU subsystems to execute software, as well as sophisticated interconnects.
Mastering the design of these embedded systems
is a challenge for both system and semiconductor
houses that used to apply a software- or hardware-only strategy. In addition to classic software and hardware, SoC engineers must design hardware-dependent software and software-dependent hardware. Codesigning these HW/SW interfaces
requires a new kind of engineer who understands
both hardware and software design.
Ninety percent of new application-specific integrated circuits (ASICs) fabricated using 130-nm
technology already include a CPU,1 and 65-nm
SoCs with more than 100 processors could become
commonplace by 2007. Multimedia platforms such
as Nomadik and Nexperia are examples of multiprocessor SoCs that use digital signal processors,
microcontrollers, and other kinds of programmable
processors.2 These systems exploit heterogeneous
cores to meet tight performance and cost constraints. As the trend of building heterogeneous multiprocessor SoCs accelerates, they will be composed
of multiple, possibly highly parallel processors for
use in applications such as mobile terminals, set-top
boxes, and game, video, and network processors.
To facilitate communication, these chips will also
contain sophisticated networks-on-chips (NoCs).
Providing SoCs consisting of an assembly of
processors executing tasks concurrently will require
design methodologies to focus on selecting and
using either programmable or dedicated processors
in place of the gates and arithmetic logic units that
current methods use. Compared with conventional
ASIC design, such a multiprocessor SoC requires a
fundamental change in chip design.
MULTIPROCESSOR PLATFORMS
Application requirements force today’s system
designers to develop specific platforms for different
design spaces. Some have speculated that the
Figure 1. Evolution of interface-based design. (a) Current methodology prevents designing the software until the hardware platform design is complete. (b) HW/SW interface codesign requires abstract models of both types of components.
semiconductor industry is moving toward a universal
chip that will provide all the computation power
that all applications require. This solution should
be developed when it becomes feasible; using a
standard platform is likely to become the preferred
option because it would eliminate the cost and
effort required to build specific HW/SW platforms.
However, due to many factors, it is likely that
SoC designs will use multiple platforms in the foreseeable future. It is possible to build one or more
platforms that effectively provide the basis for
many products within a particular application
space—such as Texas Instruments’ TI-OMAP and
STMicroelectronics’ Nomadik for mobile terminals, and
Philips’ Nexperia for digital TV.2 However, enhancing such hardware platforms for use across multiple applications is not viable because they must
meet several stringent design constraints simultaneously: hard real-time performance, low power
consumption, and low cost. Under these circumstances, the platform must be specialized to exploit
a given application’s characteristics.
Further, applications that have different combinations of requirements demand multiple architectures. For example, although AVC/H.264 is a
common standard for video compression, different
types of video compression systems require different platforms. The computation complexity required
for video compression in digital cinema and high-definition video (HDV) cameras is more than 32 tera
instructions per second. A cell phone with a video
camera uses much smaller frames and lower frame
rates, which requires less computation but imposes
more stringent power consumption requirements.
Because cell phones must also be more physically
compact than high-end video cameras, they require
more highly integrated architectures. Thus, even this
one application can require different platforms.
HDV recording illustrates the need for heterogeneous platforms. Assume that the design uses a
pure software approach with a SoC platform consisting of programmable processors. To meet the
computation requirements, the SoC platform
requires 32,000 RISC processors running at 1 GHz.
In the foreseeable future, such a SoC platform may
not be realizable in terms of either chip area or
power consumption. Such a platform has a significant limitation in terms of power consumption
because it would require numerous transistors, and
the leakage current—which is proportional to the
number of devices—would dominate power consumption. However, when implemented as a mixed
HW/SW design, the same MPEG-2 encoder would
require only a four-processor solution for a digital
cinema application.3
INTERFACE-BASED DESIGN
For quite some time, Moore’s law has driven
advances in chip density that far outpace advances
in designer productivity. To get back on track,
designers must work at higher levels of abstraction.
The productivity of a designer who can generate
only 100 lines of Hardware Description Language
(HDL) code per day is higher if those lines represent
large blocks rather than logic gates.
SoC design generally requires developing complex
software, entailing hundreds of thousands of lines
of code, to run on the SoC platform. The designer
must accomplish this work while balancing the competing constraints of a short time-to-market window
and increasingly complex functionality. Scaling
current ASIC design approaches to such highly parallel multiprocessor SoCs is difficult, and using classic methods to design these new systems would result
in unacceptable realization costs and delays.
These constraints are pushing SoC design toward
an interface-based methodology that takes advantage of intellectual property.
Current design methodology
Traditional ASIC designers have a hardware-centric view of the system design problem. Similarly,
software designers have a software-centric view.
SoC designs require creating and using radical new
design methodologies because some of the key
problems in SoC design lie at the boundary between
hardware and software.
As Figure 1a shows, a SoC can include specific
hardware subsystems and one or several CPU subsystems to execute software. The design includes a
hardware adapter—a bridge or communication
coprocessor—to connect the CPU subsystems to
the other subsystems. Each CPU subsystem includes
a register transfer level (RTL) or gate model of the
CPU and a set of peripherals connected using the
CPU bus.
In the final design, the software is compiled into binary code that is loaded into the CPU subsystem’s memory. The current SoC design process uses separate teams working serially to create the hardware and software designs.
The first step consists of designing the hardware and validating it through RTL simulation using classic HDL simulators, which are much too slow to handle the embedded CPUs. Using a CPU instruction-set simulator can accelerate this simulation. Instruction-set simulators can use a cosimulation backplane to connect to the HDL simulator.4
The next step involves testing an operating system or middleware on the hardware platform and then porting the software to the OS or middleware. Thus, the software design team can begin only after the hardware platform design is complete. This often leads to poor hardware designs because problems caught during software development cannot be fixed in the platform. It also means that the design process takes far too long.5
A new approach
Concurrent HW/SW design requires abstract
models of both types of components, as Figure 1b
shows. Ideally, the design process would start with
a set of software tasks communicating with a set
of hardware subsystems. Because software components run on processors, the abstraction needed
to describe the interconnection between the software and hardware components is totally different
from the existing abstraction of wires between
hardware components as well as the function call
abstraction that describes the software.
The HW/SW interface abstraction must hide the
CPU, a hardware module that executes a software
program. On the software side, the abstraction
hides the CPU under a low-level software layer
ranging from basic drivers and I/O functionality to
sophisticated operating systems and middleware.
On the hardware side, the interface abstraction
hides CPU bus details through a hardware adaptation layer generally called the CPU interface. This
can range from simple registers to sophisticated I/O
peripherals including direct memory access queues
and complex data conversion and buffering systems.
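To make the two sides of this abstraction concrete, the following minimal C++ sketch shows a software-side driver layer hiding a hypothetical memory-mapped device behind a small API; the register block is an ordinary array standing in for the hardware adaptation layer, and every name here (write_reg, REG_CTRL, and so on) is an illustrative assumption rather than part of any real platform.

#include <cstdint>
#include <iostream>

// Hypothetical register block; on real hardware this would be a fixed
// memory-mapped address range exposed by the CPU interface logic.
static volatile std::uint32_t fake_device_regs[4] = {0, 0, 0, 0};

// Register offsets (illustrative only).
enum RegOffset { REG_CTRL = 0, REG_STATUS = 1, REG_DATA = 2 };

// Low-level layer: hides how the CPU reaches the device.
inline void write_reg(RegOffset off, std::uint32_t value) {
    fake_device_regs[off] = value;
}
inline std::uint32_t read_reg(RegOffset off) {
    return fake_device_regs[off];
}

// Driver-level API: what application software actually sees.
void device_enable()  { write_reg(REG_CTRL, read_reg(REG_CTRL) | 0x1u); }
void device_send(std::uint32_t word) { write_reg(REG_DATA, word); }
bool device_busy()    { return (read_reg(REG_STATUS) & 0x1u) != 0; }

int main() {
    device_enable();
    if (!device_busy()) {
        device_send(0x42);
    }
    std::cout << "ctrl=" << read_reg(REG_CTRL)
              << " data=" << read_reg(REG_DATA) << '\n';
    return 0;
}

The application calls device_enable and device_send without knowing how the CPU interface reaches the device, which is exactly the layering the HW/SW interface abstraction aims for.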
This heterogeneity complicates designing the
interface and makes it time-consuming because it
requires knowledge of both hardware and software
and their interaction. Consequently, HW/SW interface codesign remains a largely unexplored no-man’s-land.
General-purpose computer system designers
must also consider both hardware and software,
but the two are more loosely coupled than in SoC
design. Consequently, general-purpose systems typically model HW/SW interfaces twice: once to test the hardware design and the second time to validate software functionality. Using two separate models induces a discontinuity that wastes design time and results in less efficient, lower-quality hardware and software.
This overhead in cost and reduced efficiency are unacceptable for SoC design. Efficiently combining hardware and software to share a single interface requires a new type of HW/SW designer.5
Figure 2. Embedded system architecture. In addition to hardware, a SoC includes classic application software and hardware-dependent software that must be codesigned with hardware interfaces.
LINKING INTERFACES TO EMBEDDED
SOFTWARE
Some designers use the term “embedded software” to designate any software in an embedded
system, while others use it to mean only that part
of the software that is intimately related to hardware—for example, the hardware team generally
designs low-level software functions such as drivers and interrupt management.
The generic architecture shown in Figure 2 helps
to clarify the relationship between hardware and
software. An embedded system is an application-specific HW/SW architecture. Ideally, the application is a body of software to be executed on a
hardware platform. The SoC platform itself also
includes, in addition to hardware, a software layer
called hardware-dependent software that must be
codesigned with hardware interfaces.
From the software application point of view, the
reaction time is measured in milliseconds; at that
execution rate, the platform can be abstracted as
an application programming interface or programming model. The API hides hardware details
such as interrupt controllers or memory and I/O
systems. Software designers develop the application software and use real-time techniques to validate the software application properties.6
The hardware-dependent software supports the API, adapting it to the CPU subsystem. The CPU subsystem can hide the
complex architecture it generally requires to
meet performance demands. In addition to
the classical CPU, the subsystem can include
sophisticated I/O and memory subsystems.
The SoC can include several heterogeneous
subsystems, including specific hardware components and sophisticated interconnects.
When the subsystems and interconnect
designs are decoupled, hardware interfaces
are required to adapt them in the final SoC.
Both application software and hardware-dependent software may be distributed over
different subsystems.
To design the application software, classical real-time software designers can use specific analysis
tools that support complexity. The hardware-dependent software adapts the application software to a
CPU subsystem. In general-purpose computer systems, this layer can use standard components, such
as an operating system or middleware, that can be
ported to different hardware platforms. When
applied to a SoC, however, this solution induces significant overhead in code size, runtime, energy consumption, and other system costs.
Two factors cause this overhead. The operating
system and middleware must be ported from a
uniprocessor platform to heterogeneous multiprocessor platforms. The systems also must implement full-featured functionality to support various
types of embedded software.
A similar distinction exists on the hardware side,
where part of the design depends on software and
must be isolated.
HARDWARE/SOFTWARE INTERFACE CODESIGN
High-performance embedded systems consist of
multiple HW/SW subsystems, with application
software tasks distributed over heterogeneous
processor subsystems using sophisticated interconnects. The HW/SW interface and the CPU subsystems must handle the interaction between
software tasks and the interconnect structure. The
interface provides the application software layer
with an abstraction of the SoC architecture, called
a parallel programming model. It also includes a
network interface for both multiprocessor booting
and interprocessor communication that connects
the subsystem to the network.
When the SoC includes more than one CPU,
HW/SW interface design becomes more complicated.
Parallel programming models are more complex than
66
uniprocessor programming models; similarly, network interfaces are more complex than a unified
memory. Thus, as a recent multiprocessor SoC case
study confirms,3 the HW/SW interface could become
a key challenge in heterogeneous SoC design.
Bridging the gap
Because design teams traditionally have applied a
software- or hardware-only strategy, there is a temptation to continue using this approach to implement
large applications. Software teams claim that their
approach results in a shorter design cycle. For example, a pure software approach may reduce the design
cycle for derivative design because software is flexible enough to add new functionality. On the other
hand, hardware teams argue that their approach is
more efficient. While an embedded software
approach could result in a larger chip or even a
chipset, the ASIC approach will yield a smaller chip.
Even for a single product, achieving the best volume in a given market window, considering chip size and production yield, may require
combining hardware and software solutions. In
terms of yield in chip production, both ASIC and
embedded software approaches have pros and
cons. The ASIC approach can suffer from low yield
in the first few months of chip production until the
learning curve improves. However, the reduced
chip size may improve total chip production. An
embedded software approach can give a good initial yield since it reuses an already proven SoC platform. However, a larger chip size may reduce the
effects of yield improvement.
Ultimately, achieving optimal SoC production
will require some combination of hardware and
software solutions.
Figure 3 shows a simplified flow of concurrent
HW/SW design. This codesign scheme opens the
design process to several optimizations that are not
possible using the classic approach in which hardware and software are designed separately.
The most obvious improvement is better adaptation of the CPU to both hardware and software
interfaces. For example, designers can use new flexible processor technologies such as Tensilica7 to
optimize performance at the HW/SW interface by
introducing application-specific I/O operations. In
addition, using reconfigurable hardware, such as
the Xilinx Virtex II Pro, can optimize hardware
interfaces to an embedded CPU.
Interface codesign roadmap
The complexity of the HW/SW codesign process
will depend on the abstraction level at which the process starts.
Figure 3. HW/SW interface codesign flow. This scheme enables optimizations that cannot be achieved using separate hardware- and software-only approaches. (The flow proceeds from system specification and architecture exploration, through an abstract HW/SW interface model comprising an API for software modules, abstract interfaces for hardware modules, and a QoS specification, to HW/SW codesign of the CPU subsystem, hardware-dependent software, and software-dependent hardware, and finally to back-end and hardware prototypes.)
Researchers2,8,9 have clearly identified
five abstraction levels that will constitute key milestones for future HW/SW codesign automation.
Explicit interfaces. The currently used model for
SoC design describes hardware as RTL modules.
The CPU acts as the HW/SW interface, and designers use explicit memory and I/O architectures to
detail the software down to assembly code or low-level C programs.
Data transfer. At this level, the CPU is abstract.
Hardware and software modules interact by
exchanging transactions through an explicit interconnect structure, a model generally referred to as
transaction-level modeling. Most of the various TLM languages were developed using SystemC.4 In addition to designing interfaces for
different hardware modules, refining a TLM model
requires designing a CPU subsystem for each software subsystem.
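As a rough illustration of this transaction-level style, written in plain C++ rather than the SystemC TLM libraries the text refers to, an initiator can issue read and write transactions through an abstract interconnect interface without seeing any bus protocol or timing detail; all class and method names below are hypothetical.

#include <cstdint>
#include <iostream>
#include <map>

// Abstract transport interface: the only thing an initiator sees.
struct BusInterface {
    virtual void write(std::uint32_t addr, std::uint32_t data) = 0;
    virtual std::uint32_t read(std::uint32_t addr) = 0;
    virtual ~BusInterface() = default;
};

// Target model: a hardware block abstracted as an address-indexed store.
class MemoryTarget : public BusInterface {
    std::map<std::uint32_t, std::uint32_t> storage_;
public:
    void write(std::uint32_t addr, std::uint32_t data) override { storage_[addr] = data; }
    std::uint32_t read(std::uint32_t addr) override { return storage_[addr]; }
};

// Initiator model: a software task that exchanges transactions,
// not wires, with the rest of the system.
void software_task(BusInterface& bus) {
    bus.write(0x1000, 0xCAFE);            // transaction 1
    std::uint32_t v = bus.read(0x1000);   // transaction 2
    std::cout << "read back 0x" << std::hex << v << '\n';
}

int main() {
    MemoryTarget mem;
    software_task(mem);   // interconnect refinement would replace this direct binding
    return 0;
}

Refining this model toward RTL would replace the direct binding in main with an explicit bus or NoC model and per-subsystem CPU interfaces.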
Synchronization. At this level, the interconnect and
synchronization are abstractions. The hardware
and software modules interact by exchanging data
following well-defined communication protocols.
The Message Passing Interface (MPI)8 is an example of this approach. Refining an abstract HW/SW
interface model requires first designing the interconnect—a system bus or NoC—and then correcting the synchronization schemes. Data transfer
must also be refined down to the RTL.
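For readers unfamiliar with the message-passing style cited here, a minimal two-process MPI exchange looks roughly like the following; this is ordinary MPI usage, shown only to illustrate modules that interact purely through explicit send and receive operations, not a SoC-specific API.

// Build with an MPI compiler wrapper, e.g.: mpicxx mpi_demo.cpp && mpirun -np 2 ./a.out
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int payload = 0;
    if (rank == 0) {
        payload = 42;
        // Rank 0 plays the "software task"; rank 1 stands in for another subsystem.
        MPI_Send(&payload, 1, MPI_INT, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&payload, 1, MPI_INT, /*source=*/0, /*tag=*/0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        std::printf("rank 1 received %d\n", payload);
    }

    MPI_Finalize();
    return 0;
}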
Communication. At this level, the communication
protocol is abstract. The hardware and software
modules interact by exchanging abstract data with-
out regard to the protocol used or the synchronization and interconnect the design will implement.
The design typically uses the Specification and
Description Language8 to abstract communication.
Refining an SDL model requires first selecting a
communication protocol—for example, message
passing or shared memory—and then following the
refinement steps used in lower abstraction levels.
Partitioning. The ultimate abstraction level is the
functional model in which hardware and software
are not partitioned. Designers can use a variety of
models to abstract HW/SW partitioning, including sequential programming languages such as
C/C++, concurrent languages, and higher-level
models such as algebraic notation—for example,
the B language. Refining such a model requires first
separating the software and hardware functions
and then performing the refinements used in higher
abstraction levels.
Toward full codesign
The ultimate goal is to design both hardware and
software at all abstraction levels. Figure 4 details
one such full codesign scheme. Traditional codesign
research has concentrated on HW/SW partitioning,
but without solving the problem of abstracting the
hardware platform. Rather than using ad hoc hardware models, SoC designs demand a well-thoughtout approach to the HW/SW interface. The next
steps in automation would be data transfer synthesis, synchronization and interconnect, communication, and HW/SW partitioning.
Figure 4. Full HW/SW interface codesign scheme: (a) explicit interfaces, (b) data transfer, (c) synchronization, (d) communication, and (e) partitioning.
OPPORTUNITIES AND CHALLENGES
Successful HW/SW interface codesign will fundamentally improve the SoC design process by
increasing both hardware and software quality and
reliability and by enabling early verification,
reusability, and interoperability. It will also provide
opportunities for tackling a number of technical
challenges confronting embedded-system designers.
Early verification
Verifying the interface independent of its context
is not sufficient—the interface must be verified relative to a given hardware platform. It is not possible to delay performing this verification until the
hardware prototype is available. Abstracting the
HW/SW interface model will make it possible to
verify the interface abstract design itself without
using the physical prototype.
Improved software quality
Embedded software relies on the hardware platform to support complex quality-of-service (QoS)
requirements and ensure reliability. Current practice is to use an existing OS or middleware to validate the nonfunctional properties of application
software. Because these generally support real-time
and delay requirements but not nonfunctional
properties such as intersubsystem communication,
bandwidth, jitter, and reliable communication, they
cannot systematically monitor and guarantee QoS.
In this scenario, it is even difficult to guarantee the
reliability of the HW/SW interface design itself.
Overcoming this challenge requires a QoS-aware
HW/SW interface abstraction.
Reusability
Mastering embedded system design requires
using an efficient method to configure and optimize
the HW/SW interface. Using a general HW/SW
interface model makes it possible to reuse application software, hardware components, and platform
and middleware modules across different products,
product families, and even application domains.
However, a drawback of generality is inefficiency.
For applications that require only a small subset of
the complete HW/SW interface functionality, a
generic model imposes tremendous overhead that
cost-sensitive applications cannot tolerate. A highly
configurable and parameterized abstract interface
architecture enables designers to optimize and
streamline an instance of the interface to a given
application’s particular needs.
Interoperability
Creating abstract HW/SW interfaces facilitates
dialogue between design teams that can belong to
different companies or even market sectors.10 For
example, an automaker could use a standard interface HW/SW API to develop the car’s software
while reserving the right to select the hardware platform as late as possible.
The key issue when integrating the parts of an
embedded system is the creation of a continuum between the hardware and software,
which requires new technologies to effectively integrate components. For example, most conventional
parallel programming models—including MPI, the
open specifications for multiprocessing (OpenMP),
the bulk synchronous parallel (BSP) model, and
LogP—are designed for general-purpose computing. SoC APIs must specify application-specific
design constraints—for example, in terms of energy
consumption, runtime, cost, and reliability.
In addition, abstract HW/SW interface models are
widely available for single-processor subsystems and
homogeneous multiprocessors, but SoCs involve
complex interactions between heterogeneous subsystems. Abstracting multiprocessor platforms
requires a scalable, configurable interface architecture.
Embedded computing applications often combine several different kinds of algorithms.
Specializing cores by operation type would provide
substantial savings in cost and power consumption.
Embedded applications also show wide variations
in data loads during execution. Flexible networks
would allow using interconnect resources more efficiently. Finally, using reconfigurable fabrics as
embedded system components would make it possible to target a platform to far more products. ■
Acknowledgments
The authors thank all the members of TIMA’s
System-Level Synthesis Group, especially Sungjoo
Yoo, Wander Cesário, and Xi Chen, for their help
in preparing this article.
References
1. H. Jones, “Analysis of the Relationship between EDA
Expenditures and Competitive Positioning of IC Ven-
dors for 2003,” Int’l Business Strategies, 2002; www.edac.org/downloads/04_05_28_IBS_Report.pdf.
2. A.A. Jerraya and W. Wolf, Multiprocessor Systems-on-Chips, Morgan Kaufmann, 2004.
3. H. Iwasaki et al., “Single-Chip MPEG-2 422P@HL
CODEC LSI with Multi-Chip Configuration for
Large-Scale Processing beyond HDTV Level,” Proc.
Design, Automation and Test in Europe Conf. and
Exhibition (DATE 03 Designers’ Forum), IEEE CS
Press, 2003, p. 20,002.
4. SystemC Transaction Level Modeling Working
Group, www.systemc.org/projects/tlm.
5. M-W. Youssef et al., “Debugging HW/SW Interface
for MPSoC: Video Encoder System Design Case
Study,” Proc. 41st Design Automation Conf. (DAC
04), IEEE CS Press, 2004, pp. 909-913.
6. J.W.S. Liu, Real-Time Systems, Prentice Hall, 2000.
7. C. Rowen, Engineering the Complex SoC: Fast, Flexible Design with Configurable Processors, Prentice
Hall, 2004.
8. D.B. Skillicorn and D. Talia, “Models and Languages
for Parallel Computation,” ACM Computing Surveys, vol. 30, no. 2, 1998, pp. 123-169.
9. W. Wolf, Computers as Components: Principles of
Embedded Computing System Design, Morgan
Kaufmann, 2001.
10. S. Yoo and A.A. Jerraya, “Introduction to Hardware
Abstraction Layers for SoC,” Proc. Design, Automation and Test in Europe Conf. and Exhibition (DATE
03), IEEE CS Press, 2003, pp. 10,336-10,337; http://
sigda.org/Archives/ProceedingArchives/Date/papers/
2003/date03/pdffiles/04e_1.pdf.
Ahmed A. Jerraya leads the System-Level Synthesis Group at TIMA (Techniques of Informatics and
Microelectronics for Computer Architecture) Laboratory in Grenoble, France. His research interests
include embedded computing, flexible multiprocessor SoCs, hardware-dependent software,
and computer-aided design. Jerraya received a PhD
in computer sciences from the University of Grenoble. He is a member of the IEEE and the ACM and
is a board member of the European Design
Automation Association. Contact him at [email protected].
Wayne Wolf is a professor in the Department of
Electrical Engineering at Princeton University. His
research interests include embedded computing,
multimedia systems, VLSI, and computer-aided
design. Wolf received a PhD in electrical engineering
from Stanford University. He is a Fellow of the IEEE
and the ACM. Contact him at [email protected].
RESEARCH FEATURE
A New Framework for
Power Estimation of
Embedded Systems
A proposed modular framework for assessing power consumption of
embedded systems early in the design cycle can be extended to any
performance metric and uses a high level of abstraction, leading to a
faster execution time. Experimental results indicate that the approach is
within 20 percent of gate-level estimation and executes three orders of
magnitude faster.
Claudio Talarico and Jerzy W. Rozenblit, University of Arizona
Vinod Malhotra, University of Hawaii at Manoa
Albert Stritter, Infineon Technologies AG
The overall goal of system design is to
minimize development time and costs,
subject to various performance and functionality constraints. To cope with the
rapidly growing complexity of embedded
systems, designers must work at higher levels of
abstraction.1
Depending on the abstraction layer—the level of
detail used to describe the system—designers can
address different concerns. The key is to model the
system at each abstraction layer with as little detail
as possible and then collect performance metrics
that help the development team make sound engineering decisions.
Among the many metrics used to characterize the
quality of an embedded system-on-chip (SoC)
design, power consumption has emerged as one of
the most important. This is largely due to the proliferation of mobile battery-powered computing
devices, the increasing speed and density of CMOS
(complementary metal-oxide semiconductor) VLSI
(very large-scale integration) circuits, and continuous shrinking of the transistor feature size of deepsubmicron technologies.2
Designers can estimate power consumption at
four different abstraction levels:
• Circuit-level approaches simulate the circuit at
the transistor or switch level and monitor the
supply current.3
• Logic-level techniques simulate a design at the
logic-gate level and calculate power by considering the switching activity and node capacitance. Logic-level approaches execute orders of
magnitude faster than circuit-level approaches
but at the expense of accuracy.4
• Register-transfer-level approaches5 model the
power consumption of more abstract components such as muxes, adders, multipliers,
and registers. They have satisfactory accuracy
(5-10 percent of gate-level power estimates),
but their computational time, while orders of
magnitude smaller than with logic-level
approaches, is too slow when applied to large
designs.
• System-level approaches6 estimate power consumption based on simple high-level descriptions of the system’s behavior and its intended
application, using an abstract notion of capacitance and switching.
Different estimation techniques are best suited to
different parts of a design or different stages in the
design flow.
We have developed a technique that derives
power figures from the execution of high-level models rather than gate- or transistor-level precharacterizations. This technique makes it possible to
assess embedded SoC designs much earlier in the
design cycle, contributing to sounder decisions throughout the entire development process and leading to a faster execution time.
Figure 1. Power estimation framework. The framework acts as a generic wrapper around the system components, each of which has an associated simulation model and monitor. The monitor observes the model’s execution and probes the data needed to characterize the component’s behavior. Power analyzers then compute the performance indices of interest.
To validate our methodology, we applied it to a
peripheral core—a baud rate generator—and compared the results with those obtained using a gate-level approach.
POWER CONSUMPTION MODELS
Researchers have developed several techniques
for estimating software power consumption for
microprocessor and digital signal processor cores,
mainly at the instruction level.2,7,8 Given a program
execution trace, this approach computes the energy
that each executed instruction consumes. Energy
consumption depends on the specific instruction
being executed as well as on previously executed
instructions and on the data on which the instruction operates. This process can be accelerated by
deriving a trace file of reduced size that generates
equal power dissipation.9
Other researchers have explored software power
optimization techniques.10 In addition, a proposed
mathematical model of a generic 32-bit processor,
obtained through functional decomposition, classifies instructions based on the functional units exercised.11 This model estimates the static power
consumption of the single instructions executed, but
it does not consider the dynamic power information associated with the actual applied input data.
Another technique estimates power consumption
of peripheral cores.12 Finally, a number of proposed
system-level models for cache, memory, and bus
power consumption consist mainly of closed-form
equations that express power consumption as a
function of usage/traffic and component parameters.13-15
All of these system-level techniques use gate- or
transistor-level precharacterizations, which require
detailed knowledge of the components’ internal
structure, to develop energy consumption models.
However, such information may not be available
early in the design process, or IP providers may not
want to disclose it. In addition, a given application’s
power consumption provides little information
about the power consumption of other applications
for the same system. Consequently, characterization-based power models are highly accurate only
if evaluated in the same context as that used for
characterization.
POWER ESTIMATION FRAMEWORK
Figure 1 illustrates our proposed framework for
estimating the power consumption of a generic
embedded system. The framework functions as a
generic wrapper around the system components,
each of which has an associated simulation model
and monitor. The monitor observes the model’s
execution and probes the data needed to characterize the component’s behavior. Various power
analyzers then compute the performance indices of
interest.
Our framework generalizes and extends the
schema developed by Tony Givargis, Frank Vahid,
and Jörg Henkel.12 The key difference is that our
framework does not rely on gate-level simulation to
characterize each core’s per-instruction power consumption. In our view, a core’s behavior can be seen
as the execution of a sequence of instructions, in
which the term “instruction” is synonymous with
“action” and is not necessarily atomic.
The framework’s distinguishing feature is its
modularity, which helps isolate the various system
components from one another and abstract their implementation details. This makes it possible to assess designs early in the process, when the impact of decisions is critical to avoid expensive and time-consuming iterations. The modeling concepts’ generality also extends our framework beyond power
consumption for use in evaluating other performance metrics.
System power consumption
Our framework consists of four steps that lead to
an estimate of overall system power consumption:
• translating each core’s functionality to a set of
primitive instructions,
• simulating the application program,
• mapping the instructions requested by the
application program into abstract functional
units, and
• computing aggregate power consumption of
the entire system.
The first step consists of breaking each core’s
functionality into a set of instructions. A component’s functionality represents all possible behaviors it can assume, with behavior meaning the set
of actions that the component performs during execution of an application. The goal is to devise a
high-level executable model of the core that can
output power consumption data during system
simulation.
This first step hides the complexity of the core’s internal implementation behind the simple interface offered by the instruction set. There is a tradeoff in selecting the right set of instructions: Having many fine-grained instructions can lead to greater accuracy, but it requires a longer simulation time than having fewer coarse-grained instructions.12 The framework associates with each core’s instruction set only the information needed to describe the performance metric of interest—in this case, power consumption.
The second step involves simulating the application program and extracting a trace file for the core. A trace is the sequence of instructions/data items a core executes during its simulation. The aim is to estimate the core’s switching activity.
The third step consists of mapping the
instructions requested by the various tasks
performed by the core into abstract functional units that are used to estimate complexity—
that is, gate count. Given switching activity and
complexity, the framework can compute the core’s
power per instruction.
The fourth step involves connecting all the core
models to compute the power consumption of the
entire system.
Power analyzer modules
Each of the power analyzer modules shown
in Figure 1 embodies the analytical expressions
needed to compute the power consumed by the various types of cores: processor, cache, main memory, bus, and peripherals.
The input of the power estimation flow is the
application program, which feeds into the target
CPU’s instruction set simulator (ISS) to produce a
program trace. A software power analyzer then
postprocesses the program trace to estimate the
power the processor consumes during software execution.
The application program also feeds into a memory trace profiler, which records all memory access
traces and then calculates the number of cache
demand misses for both data and instructions. The
software power analyzer uses this information to
account for additional power consumption due to
cache-miss stalls. The main memory power analyzer and cache power analyzer also use this data to
compute the power consumption of the main memory and cache accesses.
Depending on whether peripherals are accessed
through memory-mapped or dedicated I/O, it is
possible to extract a peripheral access trace from
either the memory access traces or program traces.
Any access to or from main memory, caches, and
peripherals translates eventually into information
traffic over the communication buses. Specific bus
power analyzers compute the power that
each bus in the system consumes.
Component power consumption
Because gate-level representation of most
cores may not be available early in the design
process, our framework computes power dissipation analytically, combining the technology parameters obtainable from data sheets
with the data gathered by executing the core’s
high-level model.
The framework uses ad hoc correction
methods to evaluate the power consumed by
nonlinear components. A typical example is the
interaction between cache and processor. In this
case, it is necessary to first evaluate processor power
consumption by assuming the ideal case in which
all instructions and data can be retrieved from the
cache and then account for the energy penalty
caused by the processor stalling due to read or write
misses in the data cache and fetch misses in the
instruction cache.
Processor. Our framework relies on an ISS to estimate the power the CPU consumes to execute the
application software. The ISS maintains detailed
statistics of the processor’s internal activity—such
as fetches, stalls, instruction execution frequency,
and internal register accesses—that the software
power analyzer can postprocess to compute power
consumption. This technique is an extension of
earlier instruction-based approaches.2,12 The idea
behind such approaches is that “by measuring the
current drawn by the processor as it repeatedly
executes certain instructions, it is possible to obtain
most of the information that is needed to evaluate
the power cost of a program for that processor.”2
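A minimal sketch of this instruction-based idea, not the authors’ actual analyzer, simply tallies a per-instruction-class energy cost over an execution trace; the instruction classes and energy values below are invented placeholders.

#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical per-instruction energy costs in nanojoules,
    // of the kind obtained by characterizing each instruction class.
    std::map<std::string, double> energy_nj = {
        {"alu", 1.2}, {"load", 2.8}, {"store", 2.5}, {"branch", 1.6}};

    // A toy execution trace as produced by an instruction-set simulator.
    std::vector<std::string> trace = {"load", "alu", "alu", "branch", "store"};

    double total_nj = 0.0;
    for (const std::string& instr : trace) {
        total_nj += energy_nj.at(instr);   // base cost per executed instruction
    }
    std::cout << "estimated program energy: " << total_nj << " nJ\n";
    return 0;
}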
Cache. To estimate cache energy consumption,
we adapted analytical models developed by Milind
Kamble and Kanad Ghose.13 Accurate estimation
requires that the cache simulator maintains activity statistics for several metrics including number
of hits and misses, number of tag comparisons,
word-line activity, and bit-line activity.
The major components of energy consumption
are in the bit lines, word lines, output lines, and
input lines: Ecache = Ebitline + Ewordline + Eoutput + Einput.
The energy dissipated in other cache components
such as comparators, registers, data steering logic,
control logic, and sense amplifiers is relatively small
and can be neglected.
Main memory. To compute the energy that main
memory consumes, our framework uses the analytical models described by Kiyoo Itoh.14 The main
sources of power dissipation are the memory cell
array, row decoder, column decoder, and periphery circuits.
Bus. In deep-submicron technologies, bus power
is a significant part of total power. Execution time
and bus power are inversely related: A smaller bus
width implies less wire capacitance and hence less
power, but it requires more bus transfers and hence
a longer execution time.
Every memory and peripheral access implies a
data transfer over a communication bus. The total
number of cache accesses Nacc measures traffic on
the CPU-cache bus, the number of cache misses
Nmiss measures traffic on the cache-main memory
bus, and the number of peripheral references Nper
measures traffic on the peripheral bus—the bus
between main memory and the peripheral devices.
Given this traffic and assuming that on average at
most half of the bits will toggle, our framework can
then compute bus switching activity. It uses this
value and bus capacitance to compute power consumption.
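A rough C++ rendering of that calculation, with all parameter values invented for illustration, multiplies the transfer count by the energy of the toggling lines:

#include <iostream>

// Rough bus-energy estimate under the "at most half the bits toggle" assumption.
// All parameter values are illustrative, not taken from the article.
double bus_energy_joules(long transfers, int bus_width_bits,
                         double line_capacitance_f, double vdd_volts) {
    double toggling_bits_per_transfer = 0.5 * bus_width_bits;                    // half the bits switch
    double energy_per_toggle = 0.5 * line_capacitance_f * vdd_volts * vdd_volts; // (1/2)CV^2
    return transfers * toggling_bits_per_transfer * energy_per_toggle;
}

int main() {
    // Example: cache-to-memory bus traffic driven by the number of cache misses.
    long n_miss = 10000;          // hypothetical miss count from the trace profiler
    double e = bus_energy_joules(n_miss, /*bus width*/32,
                                 /*capacitance per line*/5e-12, /*Vdd*/2.5);
    std::cout << "estimated bus energy: " << e << " J\n";
    return 0;
}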
Peripherals. For a processor, the term “instruction” generally means an atomic action for programming the desired behavior. However, for a
peripheral, an instruction is an action that,
together with all other instruction set actions,
describes the peripheral’s functionality.12 Our
framework models the peripheral in terms of a set
of instructions and a set of power modes. Power
modes take into account that certain instructions
can significantly change power consumption.
The framework follows a four-step procedure to
obtain peripheral power consumption. First, it profiles the application program for requests to and
from the peripheral. The number and frequency of
peripheral accesses is a measure of its switching
activity. Second, it decomposes the various types of
tasks requested into instructions and maps them
into abstract functional units for use in estimating
complexity. Given switching activity and complexity, the framework then creates a power-perinstruction lookup table. Third, the framework
executes the peripheral model to generate the corresponding trace. Finally, given the instructions
trace, it uses the power-per-instruction lookup table
to compute power consumption.
EXAMPLE SIMULATION
To validate our system-level approach, we used
SystemC to model a baud generator unit that clocks
the universal asynchronous receiver/transmitter
inside the Infineon XC161CJ microcontroller.
Embedded systems are inherently heterogeneous—they consist of an intricate intermix of both hardware and software components.
Figure 2. Baud generator architecture and power model. (a) The baud generator consists of a prescaler containing a selectable fractional divider and two fixed-integer dividers, a 13-bit timer, and an output stage providing the baud rate. (b) Each state represents a power mode, and the transition from state to state depends on the instructions from the application program.
Using the same
high-level language to describe both hardware and
software makes the modeling task easier.
As Figure 2a shows, the baud generator consists
of three functional units: a prescaler containing a
selectable fractional divider and two fixed-integer
dividers, a 13-bit timer, and an output stage providing the baud rate.
Modeling power consumption requires only a
small amount of detail. As Figure 2b illustrates, our
framework uses a finite state machine to describe
power behavior. Each state represents a power
mode, and the transition from state to state depends
on the instructions from the application program.
The total energy that the baud generator consumes during execution of the application program
is given by
Ebg = Tclock × Σ_{j=1}^{N} (ncyc,j × pinstr,j) = Σ_{j=1}^{N} (Tclock × ncyc,j × pinstr,j)
where Tclock is the clock-cycle period, pinstr,j is the
power dissipated during execution of instruction j,
and ncyc,j is the number of clock cycles taken to execute instruction j. The power per instruction can
be computed as
pinstr,j = (1/2) × Vdd² × Σ_{k=1}^{NF} (Ck × αk,j)
where Vdd is the power supply voltage, NF is the
number of functional units composing the baud gen-
erator, Ck is the total capacitance of functional unit
k, and αk,j is the switching activity occurring within
the functional unit k to execute instruction j.
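The following C++ sketch shows how such a model can be driven, in the spirit of the equations above and the power modes of Figure 2b; the capacitance-derived power values, cycle counts, and transition rules are simplified placeholders, not the article’s characterization data.

#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Sketch of the per-instruction energy sum for a peripheral modeled with two
// power modes (active and sleep). Numbers are invented placeholders.
int main() {
    const double t_clock = 1.0 / 40e6;   // hypothetical 40-MHz clock period (s)

    // Power per instruction, 0.5 * Vdd^2 * sum_k(Ck * alpha_kj), precomputed
    // here into a lookup table keyed by (power mode, instruction).
    std::map<std::pair<std::string, std::string>, double> power_w = {
        {{"active", "reset"},        190e-6}, {{"active", "enable"},       170e-6},
        {{"active", "no_operation"}, 150e-6}, {{"sleep",  "reset"},         60e-6},
        {{"sleep",  "enable"},        55e-6}, {{"sleep",  "no_operation"},  20e-6}};

    // Toy instruction trace with the cycles each instruction takes.
    std::vector<std::pair<std::string, int>> trace = {
        {"enable", 1}, {"no_operation", 100}, {"reset", 1}, {"no_operation", 50}};

    std::string mode = "sleep";
    double energy_j = 0.0;
    for (const auto& [instr, cycles] : trace) {
        // Simplified transition rules; the article's FSM switches on the enable signal.
        if (instr == "enable") mode = "active";
        if (instr == "reset")  mode = "sleep";
        energy_j += t_clock * cycles * power_w.at({mode, instr});  // Tclock * ncyc * p
    }
    std::cout << "baud generator energy: " << energy_j << " J\n";
    return 0;
}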
Experimental setup
To test our approach, we implemented a system-level model of the baud generator. The model represents the peripheral module of the power
estimation framework shown in Figure 1. We used
the C++ language—the availability of SystemC
makes this an ideal match for a unified HW/SW
framework.
The baud generator model includes only the
power behavior. To achieve good accuracy, we used
a state machine to express each instruction’s dependency on the previous ones, with the instructions
triggering transitions from one state to the other.
To compute the power per cycle of each core’s
instruction, we employed Infineon’s 0.25-µm
CMOS technology. We estimated the average
capacitance of combinational cells and sequential
cells separately, averaging the intrinsic capacitance
of cells in the target technology, and stored the
resulting values in a lookup table to facilitate access
during model execution.
Figure 3a shows implementation details of our
framework’s peripherals module.
The design explorer analyzes the functional
units forming the various components, estimates
their complexity (total capacitance) based on the
target technology, and evaluates each unit’s power
consumption.
The application profiler parses the application
program and extracts the instructions that affect
the peripherals and distributes them accordingly.
Figure 3. Two approaches to measuring power consumption in a baud generator: (a) the system-level approach and (b) the gate-level approach. (FU: functional unit; LUT: lookup table; RTL: register transfer level; HDL: Hardware Description Language.)
Different models and monitors are associated
with different peripherals or various refinements of
the same peripheral. The monitors observe the associated models’ execution and characterize their
power behavior. The power analyzer collects the
information that the monitors capture and computes power consumption.
A distinguishing feature of our system-level
approach is that it does not require gate-level synthesis and simulation. However, to validate its effectiveness and efficiency, we compared our results
against those obtained using gate-level power estimation, as shown in Figure 3b.
The experiments consisted of running 20 randomly
generated application programs for 2,000 clock
cycles. To perform a comparative analysis, we used
a VHSIC Hardware Description Language model of
the baud generator implemented at the register transfer level and then used Synopsys tools to synthesize
it down to the gate level. To perform the gate-level
power estimation, we used Synopsys power-estimation tools and a set of VHDL test benches generated
by replicating the application programs.
Experimental results
Figure 4 summarizes the power per cycle dissipated by each instruction using both approaches. The average error is 9.71 percent, and the standard deviation is 9.36 percent. System-level estimation is consistently lower than gate-level estimation because the former always considers a lower level of detail than the latter. Power consumption underestimation represents a serious problem in scenarios that focus on worst-case design analysis rather than design tradeoffs.
Profiling energy consumption is particularly useful to gain insight into system hot spots. Figure 5a
shows the energy the system consumes while executing three of the benchmarked application programs. Figure 5b shows a scatter plot of the power
that all 20 benchmarks dissipated over 2,000
cycles. The average error is less than 20 percent,
and the standard deviation is 8.26 percent.
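For reference, the error statistics quoted here (the mean percentage error and its standard deviation over the 20 benchmarks) could be computed as in the sketch below; the helper is our own and assumes the gate-level figures serve as the nonzero reference values.

#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative helper: percentage error of each benchmark's system-level
// estimate relative to its gate-level reference, then the mean and standard
// deviation over all benchmarks. Not part of the authors' framework.
struct ErrorStats { double mean; double stddev; };

ErrorStats estimationError(const std::vector<double>& systemLevel,
                           const std::vector<double>& gateLevel) {
    std::vector<double> errors;
    for (std::size_t i = 0; i < systemLevel.size() && i < gateLevel.size(); ++i)
        errors.push_back(100.0 * std::fabs(gateLevel[i] - systemLevel[i]) / gateLevel[i]);
    if (errors.empty()) return {0.0, 0.0};

    double mean = 0.0;
    for (double e : errors) mean += e;
    mean /= static_cast<double>(errors.size());

    double var = 0.0;
    for (double e : errors) var += (e - mean) * (e - mean);
    var /= static_cast<double>(errors.size());

    return {mean, std::sqrt(var)};
}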
Each point on the scatter plot depicts a benchmark’s average power. The abscissa represents the
average power obtained using our system-level
approach, while the ordinate represents the average
power obtained using the gate-level technique. If
there was no difference between the two methods, all
dots would lie on the solid line. Compared to gatelevel power estimation, our approach achieves a
speedup of 1,343—three orders of magnitude faster.
Figure 4. Power per cycle (µW) dissipated by each instruction (active reset, active enable, active no operation, sleep reset, sleep enable, sleep no operation) under the gate-level and system-level approaches. System-level estimation is consistently lower than gate-level estimation.
Figure 5. Energy consumption profile. (a) Energy (µJ) versus clock cycles elapsed for three benchmarked applications (Benchmarks A, B, and C). (b) Scatter plot of average power (µW) for 20 benchmarks over 2,000 cycles, plotting the average power using the system-level approach against the average power using the gate-level approach.
The primary goal of our approach is to make power-related system-level design decisions as early as possible in the design cycle. Therefore, 20 percent accuracy can be considered satisfactory and, in fact, may be the only viable alternative when gate-level or transistor-level precharacterization is impossible. Indeed, at this level, the key is to provide fidelity (a high percentage of correctly predicted comparisons between design implementations) rather than very high estimation accuracy.
Future work will include validation on larger-scale designs and iterative refinements of the models based on earlier results. We also plan to extend the framework to as many performance indices as possible, including response time, throughput, chip area, software size, and production costs. ■
References
1. A. Sangiovanni-Vincentelli and G. Martin, “Platform-Based Design and Software Design Methodology for Embedded Systems,” IEEE Design & Test of
Computers, Nov./Dec. 2001, pp. 23-33.
2. V. Tiwari, S. Malik, and A. Wolfe, “Power Analysis
of Embedded Software: A First Step toward Software
Power Minimization,” IEEE Trans. VLSI Systems,
Dec. 1994, pp. 437-445.
3. S.M. Kang, “Accurate Simulation of Power Dissipation in VLSI Circuits,” IEEE J. Solid-State Circuits,
Oct. 1986, pp. 889-891.
4. T.H. Krodel, “PowerPlay—Fast, Dynamic Power
Evaluation Based on Logic Simulation,” Proc. 1991
IEEE Int’l Conf. Computer Design: VLSI in Computers & Processors (ICCD 91), IEEE CS Press, 1991,
pp. 96-100.
5. S. Ravi, A. Raghunathan, and S. Chakradhar, “Efficient RTL Power Estimation for Large Designs,” Proc. 16th Int’l Conf. VLSI Design (VLSI 03), IEEE CS Press, 2003, pp. 431-439.
6. M. Nemani and F.N. Najm, “High-Level Area and
Power Estimation for VLSI Circuits,” IEEE Trans.
CAD of Integrated Circuits and Systems, June 1999,
pp. 697-713.
7. R.Y. Chen, M.J. Irwin, and R.S. Bajwa, “Architecture-Level Power Estimation and Design Experiments,” ACM Trans. Design Automation of
Electronic Systems, Jan. 2001, pp. 50-66.
8. A.C.S. Beck et al., “CACO-PS: A General-Purpose Cycle-Accurate Configurable Power Simulator,” Proc. 16th Symp. Integrated Circuits and Systems Design (SBCCI 03), IEEE CS Press, 2003, pp. 349-354.
9. C-T. Hsieh et al., “Profile-Driven Program Synthesis
for Evaluation of System Power Dissipation,” Proc.
34th Design Automation Conf., IEEE CS Press, 1997,
pp. 576-581.
10. V. Dalal and C.P. Ravikumar, “Software Power Optimizations in an Embedded System,” Proc. 14th Int’l
Conf. VLSI Design (VLSI 01), IEEE CS Press, 2001,
pp. 254-259.
11. C. Brandolese et al., “Static Power Modeling of 32-Bit Microprocessors,” IEEE Trans. CAD Integrated Circuits and Systems, Nov. 2002, pp. 1306-1316.
12. T. Givargis, F. Vahid, and J. Henkel, “Instruction-Based System-Level Power Evaluation of System-on-a-Chip Peripheral Cores,” IEEE Trans. VLSI Systems, Dec. 2002, pp. 856-863.
13. M.B. Kamble and K. Ghose, “Energy-Efficiency of
VLSI Caches: A Comparative Study,” Proc. 10th Int’l
Conf. VLSI Design: VLSI in Multimedia Applications (VLSI 97), IEEE CS Press, 1997, pp. 261-267.
14. K. Itoh, “Trends in Low-Voltage Embedded-RAM
Technology,” Proc. 23rd Int’l Conf. Microelectronics (MIEL 02), IEEE Press, 2002, pp. 497-501.
15. W. Fornaciari, D. Sciuto, and C. Silvano, “Power Estimation for Architectural Exploration of HW/SW Communication on System-Level Buses,” Proc. 7th Int’l Workshop Hardware/Software Co-Design (CODES 99), IEEE CS Press, 1999, pp. 152-156.
Claudio Talarico is a research assistant professor
in the Electrical and Computer Engineering
Department at the University of Arizona. His
research interests include design methodologies for
integrated circuits and systems with emphasis on
system-level design, embedded systems, HW/SW
codesign, low-power design, system specification
languages, and early design assessment, analysis,
and refinement of complex SoCs. Talarico received
a PhD in electrical engineering from the University
of Hawaii at Manoa. He is a member of the IEEE
Computer Society. Contact him at [email protected].
Jerzy W. Rozenblit is a professor and heads the
Electrical and Computer Engineering Department
at the University of Arizona. His research interests
are in the areas of complex systems design and simulation modeling. Rozenblit received a PhD in
computer science from Wayne State University. He
is a senior member of the IEEE Computer Society
and the ACM. Contact him at [email protected].
Vinod Malhotra is an associate professor in the
Department of Electrical Engineering at the University of Hawaii at Manoa. His research interests
include wet and dry processes for passivation of
GaAs and InP surfaces and surface-sensitive
devices, such as HBTs, MSM photodetectors, and
VCSELs. Malhotra received a PhD in electrical
engineering from Colorado State University. He is
a member of the American Vacuum Society, the
Electrochemical Society, and the IEEE. Contact
him at [email protected].
Albert Stritter is vice president of design automation at Infineon Technologies AG in Munich, Germany. His research focuses on all aspects of
electronic design automation and technology integration. Stritter received an MS in electrical engineering from the Università degli Studi di Genova.
He is a member of the Virtual Socket Interface
Alliance and EDA (Electronic Design Automation)
Zentrum, Hannover, Germany. Contact him at
[email protected].
PRODUCTS
Network Test Accepts Data Traffic
Developers can use Scalable Network Technologies’ latest software
release, QualNet 3.8, to design and test
a wide variety of communication networks, including ad hoc wireless networks. The new Real-Time Interfaces
Module directly supports hardware- and software-in-the-loop simulation
by accepting packet data traffic from
a real network.
QualNet 3.8 also includes a new 3D
GUI for realistic visualizations. The
new release features more than a dozen
new network models, including a
model for background traffic and randomly occurring interface faults;
www.scalable-networks.com.
Feedthrough Filters Reduce Noise
AVX Corporation’s new series of
feedthrough filters offers automotive
engineers a way to significantly reduce
noise in digital circuits up to 5 GHz.
The W2F4 and W3F4 feedthrough
arrays contain four elements with a
common ground connection for multiline designs. The capacitor provides
low parallel inductance and offers
excellent broadband EMI attenuation
for all circuitry in need of passing SAE,
FCC, and IEC EMC requirements.
Applications include data lines to
dashboard and diagnostic units, as
well as RGB lines to LCD displays
within entertainment centers such as
DVD players or GPS units; www.
avxcorp.com.
Advanced Web-Browsing
Technology
Lonopono is a new client-side Web-browsing technology designed to collect, organize, share, and display
information from Web pages as well
as other sources such as RSS feeds,
photos, videos, OPML lists, and documents. Lonopono runs on any platform, including Windows, Mac OS X,
Linux, and Unix, and it includes an
integrated knowledge base (using the
Web Ontology Language) and advanced scripting support.
The beta version of Lonopono,
available in Standard, Professional,
and Power User versions, is free; www.
lonopono.com.
WinWedge Pro Gets Upgrade
TAL Technologies has released version 3.1 of its data collection program,
WinWedge Pro, designed to interface
RS232 and TCP/IP devices to any
Windows application. Different
instruments can simultaneously send
data to different applications or
“fields” within the same application.
The new version supports 32-bit features such as preemptive multitasking,
up to 100 com ports simultaneously,
and up to 56,000-baud data rates, and
it is 30 percent faster than 16-bit
versions.
Users can now resize the “Analyze”
window as well as leave it open while
defining the structure of incoming
serial data, add a delay between keystrokes sent to other programs, or
minimize WinWedge to the system
tray instead of the taskbar. They also
can send keystrokes to DOS programs or other Windows applications
that do not respond to keyboard
input.
WinWedge Pro v3.1 costs $159 for
owners of previous versions; www.
taltech.com.
LM3557 Can Drive Up to
Five LEDs in Series
National Semiconductor’s LM3557, a fixed-frequency current-mode step-up DC-DC converter, is suitable for white-LED applications. An external feedback resistor sets a constant current through the LEDs.
The LM3557 can drive up to five LEDs in series with a 20-mA current and has a fixed 24-V overvoltage protection option, eliminating the need for an extra protection diode. Other key features include a 2.7 V to 7.5 V VIN range, a 1.25-MHz constant-switching frequency, and input undervoltage protection.
The LM3557 converter is available in an eight-lead LLP package and costs 96 cents in 1,000-unit quantities; www.national.com.
Please send new product announcements to [email protected].
Scalable Network Technologies’ QualNet 3.8 includes a new 3D GUI for realistic
visualizations of communication networks.
BOOKSHELF
Forensic Discovery: The Definitive
Guide to Computer Forensics,
Dan Farmer and Wietse Venema.
This book covers both the theory and
hands-on practice of forensic discovery, introducing a powerful approach
that can often recover evidence considered forever lost.
The authors draw on firsthand experience to cover subjects ranging from file
systems to memory and kernel hacks to
malware. They expose a wide variety of
computer forensics myths that often
stand in the way of success. Readers will
find extensive examples from Solaris,
FreeBSD, Linux, and Microsoft Windows as well as practical guidance for
writing their own forensic tools.
This book can help readers understand essential forensics concepts such
as volatility, layering, and trust; gather
the maximum amount of reliable evidence from a running system; recover
partially destroyed information and
make sense of it; and timeline their system to understand what really happened and when. Readers will also learn
how to uncover secret changes to everything from system utilities to kernel
modules, avoid cover-ups and evidence
traps set by intruders, and identify the
digital footprints associated with suspicious activity. Other topics covered
include understanding file systems from
a forensic analyst’s point of view, analyzing malware without giving it a
chance to escape, capturing and examining the contents of main memory on
running systems, and how to unravel an
intrusion one step at a time.
A companion Web site contains
complete source and binary code for
the open source software the authors
describe. The site also offers additional
computer forensics case studies and
resource links.
Addison-Wesley Professional; www.
awprofessional.com; 0-201-63497-X;
240 pp.; $39.99.
Practical Software Testing: A
Process-Oriented Approach, Ilene
Burnstein. Software testing is rapidly
evolving as a critical software engineering subdiscipline. To meet the
needs of software professionals in this
field, the author explains how to effectively plan for testing, design test cases,
test at multiple levels, organize a testing team, and optimize testing tool use.
Using the Testing Maturity Model as
a framework, the book introduces testing in a systematic, evolutionary way;
describes industrial TMM applications; and covers testing topics with
either procedurally based or objectoriented programming code.
The book includes a sample test plan,
comprehensive exercises, and definitions for software testing and quality.
It introduces both technical and managerial aspects of testing in a clear and
precise style and provides a balanced
perspective of all aspects of testing.
Springer; www.springeronline.com;
0-387-95131-8; 706 pp.; $69.95.
Internet Denial of Service: Attack and
Defense Mechanisms, Jelena Mirkovic, Sven Dietrich, David Dittrich,
and Peter Reiher. This book sheds light
on a complex form of computer attack
that impacts the confidentiality,
integrity, and availability of millions
of computers worldwide. It tells the
network administrator, corporate chief
technical officer, incident responder,
and student how hackers prepare and
execute distributed denial of service
attacks, how to think about DDoSs,
and how to arrange computer and network defenses. It also provides a suite
of actions that can be taken before,
during, and after an attack.
The book gives readers comprehensive information on how denial-of-service attacks are waged, how to improve a network’s resilience to denial-of-service attacks, what to do when
targeted by a denial-of-service attack,
and the laws that apply to these attacks
and their implications. It also describes
how often denial-of-service attacks
occur and the kind of damage they can
cause and provides real examples of
denial-of-service attacks as experienced by the attacker, victim, and unwitting accomplices.
Prentice Hall PTR; www.phptr.com;
400 pp.; 0-13-147573-8; $39.99.
Autonomy Oriented Computing:
From Problem Solving to Complex
Systems Modeling, Jiming Liu, Xiao-Long Jin, and Kwok Ching Tsui. This
book provides a comprehensive reference for scientists, engineers, and other
professionals concerned with this
promising development in computer
science. It can also be used as a text in
graduate and undergraduate programs
in computer-related disciplines, including robotics and automation, amorphous computing, image processing,
and computational biology.
In addition to describing the basic
concepts and characteristics of an
autonomy-oriented computing system,
the book enumerates the critical design
and engineering issues faced in AOC
system development. The authors offer
detailed analyses of methodologies and
case studies that evaluate AOC’s use in
problem solving and complex system
modeling. The book’s many illustrative
examples, experimental case studies,
and exercises at the end of each chapter help consolidate the methodologies
and theories presented.
Kluwer Academic Publishers; www.
wkap.nl; 1-4020-8121-9; x pp.; $136.
Editor: Michael J. Lutz, Rochester Institute of
Technology, Rochester, NY; mikelutz@mail.
rit.edu. Send press releases and new books
to Computer, 10662 Los Vaqueros Circle,
Los Alamitos, CA 90720; fax +1 714 821
4010; [email protected].
COMPUTER SOCIETY CONNECTION
2005 Class of IEEE Fellows Announced
The IEEE Board of Directors recently conferred the title of Fellow upon 268 senior members of the IEEE, including 44 Computer Society members. The IEEE’s practice of naming Fellows dates back to 1912: The constitution of the American Institute of Electrical Engineers, a progenitor of the IEEE, included procedures for selecting Fellows. Today, Fellow status recognizes a person who has established an extraordinary record of achievements in any of the IEEE fields of interest. Senior IEEE members have already demonstrated outstanding achievement in engineering.
IEEE policy limits the total number of Fellows selected in any one year to one-tenth of one percent of the IEEE’s total voting membership. With IEEE membership going strong at more than 359,000 professionals, this year’s Fellows class is smaller and more elite than the policy mandates.
The Computer Society members whose names appear below are now IEEE Fellows, effective 1 January. An accompanying citation details the accomplishments of each new Fellow. In cases where a Computer Society member has been named a Fellow based upon contributions to a field other than computing, the name of the evaluating IEEE society appears after the citation.
Two IEEE members with no society affiliation were named 2005 Fellows for their contributions to computing: John Millar Carroll, Pennsylvania State University, for contributions to human-computer interaction methods and science; and Charles E. Stroud, Auburn University, for contributions to the built-in self-test of integrated circuits.
A
Robert Thomas Harold Alden,
McMaster University, for contributions to eigenvalue analysis of power
system stability. (Power Engineering)
Minoru Asada, Osaka University, for
contributions to robot learning and
applications. (Robotics and Automation)
B
Dines Bjorner, University of Denmark, for contributions to formal
methods software development
and its applications in industry.
Duane S. Boning, Massachusetts
Institute of Technology, for contributions to modeling and control
in semiconductor manufacturing.
(Electron Devices)
C
Roy H. Campbell, University of Illinois at Urbana-Champaign, for
contributions to concurrent programming, system software, security, and ubiquitous computing.
Francky Catthoor, Interuniversity
Microelectronics Center, for contributions to data and memory
management for embedded system-on-chip applications. (Circuits and
Systems)
Chang Wen Chen, Florida Institute
of Technology, for contributions to
digital image and video processing,
analysis, and communication. (Circuits and Systems)
Yung-Chang Chen, National Tsing
Hua University, for contributions
to low-bit-rate modeling-based
coding. (Circuits and Systems)
Alok Nidhi Choudhary, Northwestern University, for contributions to
high-performance computing systems.
Edmund Melson Clarke, Carnegie
Mellon University, for contributions to model-checking methods
for formal verification.
Thomas M. Conte, North Carolina
State University, for contributions
to computer architecture, compiler
code generation, and performance
evaluation.
F
Jeanne Ferrante, University of California, San Diego, for contributions to optimizing and parallelizing compilers.
H
Glenn Edward Healey, University of
California, Irvine, for contributions to the modeling and processing of multispectral and hyperspectral images.
Michael N. Huhns, University of
South Carolina, for contributions
to artificial intelligence applications in distributed computational
environments.
J
Jing-Yang Jou, National Chiao Tung
University, for contributions to the
computer-aided design of digital
circuits.
K
Mohamed Kamel, University of Waterloo, for contributions to pattern recognition and intelligent systems. (Systems, Man, and Cybernetics)
Willis K. King, University of Houston, for contributions to computer
science and engineering education.
Fadi Joseph Kurdahi, University of
California, Irvine, for contributions to design automation of digital systems and to reconfigurable
computing. (Circuits and Systems)
L
Bruce Gilbert Lindsay, IBM Research, for contributions to the
technologies of relational database
systems.
William Peter Loftus, Gestalt LLC,
for leadership in the development
of middleware for interoperability
of large complex software systems.
(Engineering Management)
Ronald Lumia, University of New
Mexico, for leadership in the development of open-architecture control systems for applications in
robotics and automation. (Robotics and Automation)
M
Anthony A. Maciejewski, Colorado
State University, for contributions
to the design and control of kinematically redundant robots.
(Robotics and Automation)
Bangalore S. Manjunath, University
of California, Santa Barbara, for
contributions to the research and
standardization of face animation
and object-based video coding.
(Signal Processing)
James Randal Moulic, IBM Research, for leadership in the
advancement of technology and
architecture of personal and high-performance computing systems.
N
Paul Nikolich, Lynnfield, Massachusetts, for leadership in enabling
ubiquitous broadband Internet
access and associated standards.
O
Mohammad S. Obaidat, Monmouth
University, for contributions to
adaptive learning, pattern recognition, and system simulation. (Systems, Man, and Cybernetics)
Jorn Ostermann, University of Hannover, for contributions to research
and standardization of face animation and object-based video
coding. (Signal Processing)
P
Hoang Pham, Rutgers University, for contributions to analytical techniques for modeling the reliability of software and systems. (Reliability)
Rosalind Wright Picard, Massachusetts Institute of Technology, for contributions to image and video analysis and affective computing.
R
Daniel A. Reed, University of North
Carolina, for contributions to
high-performance computing.
Johan H. C. Reiber, Leiden University, for contributions to medical
image analysis and its applications.
(Engineering in Medicine and Biology)
Christian Roux, National School of
Telecommunications, France, for
contribution to the theory of functional shapes and its applications
in medical imaging. (Engineering
in Medicine and Biology)
S
Bjarne Stroustrup, Texas A&M University, for contributions to the creation of the C++ programming language and its applications.
Richard Szeliski, Microsoft Research,
for contributions to image-based
modeling and rendering, and
Bayesian and optimization-based
techniques in computer vision.
T
Hidehiko Tanaka, University of Tokyo, for contributions to high-performance computation models.
Fred James Taylor, University of Florida, for contributions to high-performance digital signal processing. (Signal Processing)
Shoji Tominaga, Osaka Electro-Communication University, for contributions to the analysis of physical phenomena in digital color imaging.
U
Javier Uceda, Polytechnic University of Madrid, for contributions to the development of switched-mode power supplies. (Industrial Electronics)
V
Adrianus Johannes Vinck, University
of Essen, for contributions to
coding techniques. (Information
Theory)
W
Lois D. Walsh, US Air Force, for leadership in electronic device reliability. (Reliability)
Paul B. Wesling, Saratoga, California, for contributions to multimedia education development within
IC packaging. (Components, Packaging, and Manufacturing)
Donald Coolidge Wunsch, University
of Missouri, for contributions to
hardware implementations of reinforcement and unsupervised learning. (Neural Networks)
Y
Kazuo Yano, Hitachi Research, for
contributions to nanostructured
silicon devices and circuits and to
advanced CMOS logic. (Solid-State Circuits)
Z
Zhengyou Zhang, Microsoft, for
contributions to robust computer
vision techniques.
IEEE Fellow Nominations Due 1 March
Leonard L. Tripp, Chair, 2005 Computer Society Fellows Nomination Committee
The IEEE and its member societies
cooperate each year to select a
small group of outstanding professionals for recognition as IEEE Fellows. A senior IEEE member who has
achieved distinction in his or her field
can be named an IEEE Fellow only
after being nominated for the honor.
All such nominations undergo rigorous review before the IEEE Board of
Governors votes on bestowing the
prestigious rank of Fellow.
To nominate a candidate for IEEE
Fellow recognition, begin the process
by visiting www.ieee.org/fellows/. The
Electronic Fellow Nomination Process
is detailed at http://elektra.ieee.org/
Fellows/FellowNo.nsf. The deadline for
2006 Fellow nominations is 1 March.
In the event that the online nomination
process is unsuitable, paper nomination
materials can be obtained from the
IEEE Fellow Committee, 445 Hoes
Lane, PO Box 1331, Piscataway, NJ
08855-1331; voice +1 732 562 3840;
fax +1 732 981 9019. Hard copies
may also be obtained by request from
[email protected]. Nominators should
avoid submitting the forms via fax.
Nominators
A nominator need not be an IEEE
member. However, nominators cannot
be IEEE staff or members of the IEEE
Board of Directors, the Fellow Committee, the technical society, or council
evaluation committee.
Preparing a nomination
Essential to a successful nomination
is a concise account of a nominee’s
accomplishments, with emphasis on
the most significant contribution. The
nominator should identify the IEEE
society or council that can best evaluate the nominee’s work and must send
the nomination form to the point of
contact for that group. For the IEEE
Computer Society, the point of contact
is Lynne Harris, whose address
appears at the end of this article.
Careful preparation is important.
Endorsements from IEEE entities
such as sections, chapters, and committees and from non-IEEE entities
and non-IEEE individuals are optional but may be useful when these
entities or individuals are in the best position to provide credible statements.
IEEE Computer Society Press Considers
Editor in Chief Shafer for Reappointment
The IEEE Computer Society Press, the nonperiodical publishing arm of the
IEEE Computer Society, is considering the reappointment of its current editor
in chief, Don Shafer. Shafer is a cofounder, director, and chief technology officer of the Athens Group, an employee-owned technology and software consulting firm. He has also developed hardware and software products for
Motorola, Advanced Micro Devices, and Crystal Semiconductor. Shafer is a
senior member of the IEEE and an adjunct professor of software engineering
at Texas State University.
To provide feedback on Shafer’s contributions to the IEEE Computer Society
Press, please e-mail comments to Deborah Plummer at [email protected].
Nominees
A nominee must be a senior member
at the time of nomination and must
have been an IEEE member in any
grade for the previous five years. This
includes exchange, student, associate,
member, senior, and honorary member,
as well as the life category of membership. It excludes affiliates, however,
because this category does not comprise
IEEE members. The five-year requirement must be satisfied at the date of
election, 1 January 2006; thus, a nominee must have been in any member
grade continuously since 31 December
2000. The five-year membership
requirement may be waived in the case
of nominees in Regions 8, 9, and 10.
Fellows are never named posthumously.
Computer Society Staffers Receive Harry Hayman Awards
At a recent meeting in New Orleans, the IEEE Computer Society Board of
Governors honored both Computer Society publisher Angela Burgess and continuing education coordinator Stacy Saul with 2004 Harry Hayman
Distinguished Service Awards.
Burgess was recognized for her many years of outstanding work on
Computer Society publications. Saul was honored for her outstanding work
in support of the Computer Society International Design Competition and for
her leadership on the Certified Software Development Professional initiative.
Honorees receive a plaque and a $5,000 honorarium in recognition of “long
and distinguished service of an exemplary nature in the performance of duties
over and above those called for as a regular employee of the Society.”
The Hayman Award is the highest service award given to an active member
of the Computer Society staff. Award criteria and a list of previous winners are
available at www.computer.org/awards/.
References
The nominator should select references who are familiar with the nominee’s contributions and can provide
insights into these achievements. For
nominees in the US and Canada, references must be from IEEE Fellows;
outside the US and Canada, senior
members can provide references if necessary. References cannot be from the
IEEE staff or members of the IEEE
Board of Directors, the Fellow Committee, the technical society, or council
evaluation committee. While a minimum of five references are needed, it is
strongly recommended that the maximum of eight be sought.
Grant Funds Available for Educational Projects
To advocate for and support wide-reaching educational projects in fields of
IEEE interest, the IEEE Foundation each year awards several generous grants.
The IEEE Foundation, an independent philanthropic body, was established in
1973 “exclusively to support the scientific and educational purposes of the
IEEE.” The Foundation is now soliciting proposals for grant funds to be distributed later this year.
The Foundation bestows program grants and subsidies that support education, history, and other special initiatives. Proposals must meet a number of criteria. The proposed project must promise to improve education in mathematics,
science, and technology from precollege through continuing education; preserve, study, or promote the history of IEEE-associated technologies; recognize
major contributions to these technologies; or provide a major contribution to
communities served by the IEEE. Guidelines on applying for a 2005 IEEE
Foundation grant are available at www.ieee.org/foundation.
Early 2005 grant proposals are due by 15 April. For consideration later in
2005, proposals are due by 16 September.
Past recipients
At its November 2004 meeting, the IEEE Foundation awarded two new
education grants totaling $50,000. One $25,000 grant will fund the 2005
IEEE Sections Congress, a triennial gathering of hundreds of delegates from
all 10 regions of the IEEE. The Foundation also awarded a two-year, $25,000
grant to the IEEE’s emeritbadges.org, an international preuniversity technology education program for boys and girls.
Other grant recipients recognized in 2004 included the IEEE Nigeria Section,
which received $24,400 for “Networking Nigeria.” This project supplements a
Hewlett-Packard Foundation-funded computer lab at the University of Ibadan
and two years of complimentary access to IEEE Xplore by providing Internet
access and a lab coordinator.
In addition, Rutgers University received $10,000 in support of “Edison
Across the Curriculum,” an educational project that is integrating documents
from Rutgers’ Thomas A. Edison Papers collection into a preK-12 multiple-subject curriculum.
A full list of the grants awarded in 2004 is available at www.ieee.org/
organizations/foundation/html/2004grants.html.
The IEEE Computer Society International Design Competition is also a past
recipient of IEEE Foundation funds. The CSIDC program, which provides undergraduate students a start-to-finish real-world hardware and software engineering challenge, received $50,000 in support of its 2003 and 2004 competitions.
Evaluation of nominees
The IEEE Fellow Committee considers the following criteria:
• individual contributions as an
engineer or scientist, technical
leader, or educator;
• technical evaluation by one IEEE
society or council;
• tangible and verifiable evidence of
technical accomplishment, such as
technical publications, patents,
reports, published product descriptions, and services, as listed on the
nomination form;
• confidential opinions of referrers
who can attest to the nominee’s
work;
• IEEE and non-IEEE professional
activities, including awards, services, offices held, committee memberships, and the like; and
• total years in the profession.
Resubmission of nominations
Typically, less than half of the nominations each year are successful.
Therefore, highly qualified individuals
may not succeed the first time. Because
reconsideration of a nominee is not
automatic, nominators are encouraged
to update and resubmit nominations
for unsuccessful candidates. To resubmit these materials, ensure that the
nomination forms are current. The
deadline for resubmission is the same
as for new nominations.
Nomination deadline
The IEEE Fellow Committee must
receive 2005 nomination forms by
1 March. The staff secretary must also
receive at least five Fellow-grade reference
letters directly from the referrers by that
date. In addition, the evaluating society
or council must also receive a copy of the
nomination by 1 March. The deadline
will be strictly enforced. If the evaluation
is to be conducted by the Computer
Society, send a copy, preferably via e-mail,
to Lynne Harris, IEEE Computer Society,
1730 Massachusetts Ave. NW, Washington, DC 20036-1992; voice +1 202 371
0101; fax +1 202 728 9614; l.harris@
computer.org. ■
Computer Society Announces
2005 Meetings and Election Schedule
Recently, the IEEE Computer Society released its official 2005
administrative schedule. The
three administrative meeting series of
the Society’s governing boards provide focal points for other deadlines.
Groups that are scheduled to meet
during the weeklong sessions include
the Chapters Activities Board, the
Publications Board, and the Electronic
Products and Services Board. Also
noted are deadlines for both
Computer Society and IEEE election
materials.
2005 ELECTION
The 2005 calendar includes significant dates in the 2005 election cycle.
The 4 October election will name the
2006 first and second vice presidents;
the 2006 president-elect, who will
serve as president in 2007; and seven
members of the Board of Governors,
who serve three-year terms. Officers
elected in the 2005 elections begin their
terms on 1 January 2006.
Nomination recommendations for
candidates in this year’s election must
be received by the Nominations
Committee no later than 10 May.
Recommendations must be accompanied by the nominee’s biographical
information, which should include
facts about past and present participation in Society activities. Nomination materials should be sent to Carl
K. Chang, Nominations Committee
Chair, IEEE Computer Society, 1730
Massachusetts Ave. NW, Washington,
DC 20036-1992; voice +1 202 371
0101; fax +1 202 296 6896; c.chang@
computer.org.
2005 SCHEDULE
Member participation and volunteer
involvement are welcomed throughout
the year. The following calendar highlights dates of note for the Society.
11 March: Computer Society Board
of Governors Meeting, Portland,
Oregon. Culminates weeklong administrative meetings series for Society
governing boards.
10 May: The Nominations Committee sends its slate of officer and board
candidates to the Board of Governors.
21 May: Deadline for recommendations from membership for board and
officer nominees to be mailed to Nominations Committee.
31 May: Last day to send candidates’ petitions, signed by members of
the 2005 Board of Governors, to
Stephen B. Seidman, Society Secretary,
IEEE Computer Society, 1730 Massachusetts Ave. NW, Washington, DC
20036-1992; voice +1 202 371 0101;
fax +1 202 296 6896; s.seidman@
computer.org.
10 June: Computer Society Board
of Governors Meeting, Long Beach,
California. Culminates weeklong administrative meetings series for Society
governing boards.
10 June: Last day to submit 2007
IEEE delegate-director-elect petition
candidates to the IEEE.
30 June: Position statements, photos,
and biographies of those candidates
approved by the Board of Governors
are due at the Society’s publications
office in Los Alamitos, California, for
publication in the September issue of
Computer.
July: Computer publishes the Boardapproved slate of candidates and a call
for petition candidates for the same
officer and Board positions.
31 July: Member petitions and petition candidates’ position statements,
biographies and photos due to Society
Secretary Stephen B. Seidman at the
address above.
August: Computer publishes schedule and call for 2006 IEEE delegate-director-elect recommendations to
Nominations Committee.
8 August: Ballots are mailed to all
members who are eligible to vote.
September: Computer publishes position statements, photos, and biographies of the candidates.
4 October: Ballots from members
are received and tabulated.
7 October: The Nominations Committee makes recommendations to the
Board of Governors for 2008 IEEE delegate-director-elect.
4 November: Computer Society
Board of Governors Meeting, Philadelphia, Pennsylvania. Culminates
weeklong administrative meetings
series for Society governing boards.
4 November: The IEEE delegate-director-elect slate is approved by the
Board of Governors.
December: Computer publishes election results.
Editor: Bob Ward, Computer;
[email protected]
RENEW your IEEE Computer Society
membership for...
✔ 12 issues of Computer
✔ Access to 350 distance learning course modules
✔ Access to the IEEE Computer Society online bookshelf
✔ Membership in your local Society chapter
http://www.ieee.org/renewal
CALL AND CALENDAR
CALLS FOR IEEE CS PUBLICATIONS
An August/September 2005 special
issue of IEEE Intelligent Systems aims
to bridge the gap between data mining
and bioinformatics. The guest editors
seek papers that propose novel data
mining techniques for challenges in
areas that include genomics, proteomics, and metabolomics; the diagnosis, prognosis, and treatment of
diseases; drug design; systems biology;
and the structure and function of proteins and RNA.
The deadline for submitting manuscripts is 7 March. See the full call for
papers at www.computer.org/intelligent/
cfp14.htm.
IEEE Internet Computing invites
contributions for a November/
December 2005 special issue on security for P2P systems and ad hoc networks. Topics include key management,
access control, secure MAC protocols,
performance and security trade-offs,
and denial of service.
Manuscripts are due by 1 April. See
the complete call at www.computer.
org/internet/call4ppr.htm#v9n6.
For an October/November 2005 special issue on artificial intelligence and homeland security, IEEE Intelligent Systems is encouraging submissions of practical and novel AI technologies, techniques, methods, and systems. Submissions on all research areas relating to both AI and national or homeland security are welcome. Topics include bioterrorism tracking, alerting, and analysis; criminal data mining; deception detection systems; and crime and intelligence visualization.
Manuscripts are due by 1 April. See the complete call at www.computer.org/intelligent/cfp16.htm.
Submission Instructions
The Call and Calendar section
lists conferences, symposia, and
workshops that the IEEE Computer
Society sponsors or cooperates in
presenting. Complete instructions
for submitting conference or call listings are available at www.computer.
org/conferences/submission.htm.
A more complete listing of upcoming computer-related conferences
is available at www.computer.org/
conferences/.
OTHER CALLS
VL/HCC 2005, IEEE Symp. on Visual
Languages & Human-Centric Computing, 21-24 Sept., Dallas, Texas.
Papers due 6 Mar. http://viscomp.
utdallas.edu/vlhcc05/
HASE 2005, 9th IEEE Int’l Symp. on
High-Assurance Systems Eng., 12-14
Oct., Heidelberg, Germany. Papers due
15 Mar. http://hase.informatik.
tu-darmstadt.de/submit/
ISESE 2005, ACM-IEEE 4th Int’l
Symp. on Empirical Software Eng., 17-18 Nov., Noosa Heads, Australia.
Papers due 4 Apr. http://attend.it.uts.
edu.au/isese2005/cfp.htm
ICDM 2005, 5th IEEE Int’l Conf. on
Data Mining, 26-30 Nov., New
Orleans. Papers due 1 Jun. www.cacs.
louisiana.edu/~icdm05/cfp.html
CALENDAR

MARCH 2005

7-10 Mar: RTAS 2005, 11th IEEE Real-Time and Embedded Technology and Applications Symp., San Francisco. www.cis.upenn.edu/rtas05/

7-11 Mar: DATE 2005, Design, Automation, & Test in Europe, Munich. www.date-conference.com/

8-12 Mar: Percom 2005, Int’l Conf. on Pervasive Computing & Comm., Koloa, Hawaii. www.percom.org/

12-16 Mar: IEEE VR 2005, IEEE Virtual Reality Conf., Bonn, Germany. www.vr2005.org/
14-16 Mar: ASYNC 2005, 11th Int’l
Symp. on Asynchronous Circuits &
Systems, New York. http://vlsi.cornell.
edu/async2005/
20-22 Mar: ISPASS 2005, IEEE Int’l
Symp. on Performance Analysis of
Systems & Software, Austin, Texas.
http://ispass.org/
20-23 Mar: CGO 2005, 3rd Ann.
IEEE/ACM Int’l Symp. on Code
Generation & Optimization, San Jose,
Calif. www.cgo.org/
28-30 Mar: AINA 2005, IEEE 19th
Int’l Conf. on Advanced Information
Networking & Applications, Taipei.
www.takilab.k.dendai.ac.jp/conf/aina/
2005/
29-31 Mar: DCC 2005, IEEE Data
Compression Conf., Snowbird, Utah.
www.cs.brandeis.edu/~dcc/
29 Mar.-1 Apr: EEE 2005, IEEE Int’l
Conf. on e-Technology, e-Commerce,
& e-Service, Hong Kong. www.comp.
hkbu.edu.hk/~eee05/
30 Mar.-2 Apr: LATW 2005, 6th
IEEE Latin-American Test Workshop,
Salvador, Brazil. www.latw.net/
APRIL 2005

3-8 Apr: SEW-29, 29th IEEE/NASA Software Eng. Workshop, Greenbelt, Md. http://sel.gsfc.nasa.gov/
4-5 Apr: ECBS 2005, 12th Ann. IEEE
Int’l Conf. and Workshop on Eng. of
Computer-Based Systems (with SEW29), Greenbelt, Md. http://abe.eng.uts.
edu.au/ECBS2005/
4-8 Apr: ISADS 2005, 7th Int’l Symp.
on Autonomous Decentralized Systems, Chengdu, China. http://isads05.
swjtu.edu.cn/
4-8 Apr: IPDPS 2005, Int’l Parallel &
Distributed Processing Symp., Denver,
Colo. www.ipdps.org/
5-8 Apr: ICDE 2005, 21st Int’l Conf.
on Data Eng., Tokyo. http://icde2005.
is.tsukuba.ac.jp/
6-7 Apr: EDPS 2005, Electronic
Design Process Symp., Monterey,
Calif. www.eda.org/edps/
11-13 Apr: ITCC 2005, Int’l Conf. on
IT, Las Vegas. www.itcc.info/
11-14 Apr: MSST 2005, 22nd IEEE
Conf. on Mass Storage Systems and
Technologies, Monterey, Calif. www.
storageconference.org/
20-22 Apr: Cool Chips VIII, Int’l Symp. on Low-Power & High-Speed Chips, Yokohama, Japan. www.coolchips.org/

MAY 2005
1 May: DBT 2005, IEEE Int’l
Workshop on Current & Defect-Based
Testing (with VTS-05), Rancho
Mirage, Calif. www.cs.colostate.edu/
~malaiya/dbt.html
1-5 May: VTS 2005, 23rd IEEE VLSI
Test Symposium, Rancho Mirage,
Calif. www.tttc-vts.org/
9-12 May: CCGrid 2005, 5th IEEE
Int’l Symp. on Cluster Computing &
the Grid, Cardiff, UK. www.cs.cf.ac.
uk/ccgrid2005/
11-13 May: NATW 2005, IEEE 14th
North Atlantic Test Workshop, Essex
Junction, Vt. www.ee.duke.edu/NATW/
15-16 May: IWPC 2005, 13th Int’l
Workshop on Program Comprehension (with ICSE), St. Louis. www.ieeeiwpc.org/iwpc2005/
15-21 May: ICSE 2005, 27th Int’l
Conf. on Software Eng., St. Louis.
www.cs.wustl.edu/icse05/Home/index.
shtml
18-21 May: ISMVL 2005, 35th Int’l Symp. on Multiple-Valued Logic, Calgary, Canada. www.enel.ucalgary.ca/ISMVL2005/

22-25 May: ETS 2005, 10th European Test Symp., Tallinn, Estonia. http://sise.ttu.ee/ati/ETS/

25-26 May: EBTW 2005, European Board Test Workshop (with ETS 2005), Tallinn, Estonia. www.molesystems.com/EBTW05/

30-31 May: EMNETS-II 2005, 2nd IEEE Workshop on Embedded Networked Sensors, Sydney, Australia. www.cse.unsw.edu.au/~emnet/

JUNE 2005

6-8 June: Policy 2005, IEEE 6th Int’l Workshop on Policies for Distributed Systems & Networks, Stockholm. www.policy-workshop.org/2005/
6-9 June: ICDCS 2005, 25th Int’l
Conf. on Distributed Computing
Systems, Columbus, Ohio. www.cse.
ohio-state.edu/icdcs05/
12-13 June: MSE 2005, Int’l Conf.
on Microelectronic Systems Education (with Design Automation Conf.),
Anaheim, Calif. www.mseconference.
org/
13-16 June: ICAC 2005, 2nd IEEE
Int’l Conf. on Autonomic Computing,
Seattle. www.autonomic-conference.
org/
13-16 June: WOWMOM 2005, Int’l
Symp. on A World of Wireless, Mobile,
& Multimedia Networks, Taormina,
Italy. http://cnd.iit.cnr.it/wowmom2005/
13-17 June: SMI 2005, Int’l Conf. on
Shape Modeling & Applications, Cambridge, Mass. www.shapemodeling.
org/
16-20 June: ICECCS 2005, Int’l Conf.
on Eng. of Complex Computer
Systems, Shanghai. www.cs.sjtu.edu.
cn/iceccs2005/
19-24 June: MSST 2005, 2nd Int’l.
IEEE Symp. on Mass Storage Systems
& Technologies, Sardinia, Italy. www.
storageconference.org/
20-26 June: CVPR 2005, IEEE Int’l
Conf. on Computer Vision & Pattern
Recognition, San Diego, Calif. www.
cs.duke.edu/cvpr2005/
26-29 June: LICS 2005, 20th Ann.
IEEE Symp. on Logic in Computer
Science, Chicago. http://homepages.
inf.ed.ac.uk/als/lics/lics05/
27-29 June: ARITH-17, 17th IEEE
Symp. on Computer Arithmetic, Cape
Cod, Mass. http://arith17.polito.it/
27-29 June: CollaborateCom 2005, 1st
IEEE Int’l Conf. on Collaborative
Computing: Networking, Applications,
& Worksharing, Cape Cod, Mass.
www.collaboratecom.org/
27-30 June: ISCC 2005, 10th IEEE
Symp. on Computers & Communication, Cartagena, Spain. www.comsoc.
org/iscc/2005/
28 June-1 July: DSN 2005, Int’l Conf.
on Dependable Systems & Networks,
Yokohama, Japan. www.dsn.org/
30 June-1 July: DCOSS 2005, Int’l
Conf. on Distributed Computing in
Sensor Systems, Marina del Rey, Calif.
www.dcoss.org/
JULY 2005
5-8 July: ICALT 2005, 5th IEEE Int’l
Conf. on Advanced Learning Technologies, Kaohsiung, Taiwan. www.
ask.iti.gr/icalt/2005/
6-8 July: ICME 2005, IEEE Int’l Conf.
on Multimedia & Expo, Amsterdam.
www.icme2005.com/
CAREER OPPORTUNITIES
Tenure Track Position
Faculty of Engineering and Computer Science
The Department of Computer Science and Software Engineering at Concordia University
invites applications for one tenure-track faculty position. We are looking for an excellent
candidate in any area of Computer Science or Software Engineering. The rank and salary
will be commensurate with qualifications and experience. The new position requires a
PhD degree in Computer Science or Software Engineering, or a closely related field,
completed or near completion.
The department places a strong emphasis on teaching and on fundamental and applied
research. For the rank of Assistant Professor we seek primarily candidates who have
recently obtained a PhD or who are about to receive it. A publication record or very
strong research potential is required, as is strong interest and ability in teaching at the
undergraduate and graduate levels. For the rank of Associate Professor, excellent
credentials in both research and teaching are required. A successful candidate is expected
to contribute to course and laboratory development at all levels of instruction, to be
active in research, and to secure external research funding. The department encourages
interdisciplinary research partnerships, industrial collaborations, and technology transfer.
The department offers Bachelor of Computer Science and Bachelor of Engineering
(Software Engineering) programs. The programs are accredited by CSAC and CEAB,
respectively. At the graduate level, it offers a Master of Computer Science, a Master of
Applied Computer Science, a PhD (Computer Science), and a Postgraduate Diploma in
Computer Science. The department houses about 1100 undergraduates and 500 graduate
students. The 38 full-time faculty members are assisted by 18 full-time staff members.
The Department has a research centre CENPARMI (Centre for Pattern Recognition and
Machine Intelligence), is involved in two inter-university research centres (mathematical
computing and VLSI architectures), and participates in the Network of Centres of
Excellence. The University has several programs to provide seed grants for research in the
beginning years.
The department is located in downtown Montreal, an exciting and dynamic city noted for
fine restaurants, excellent entertainment, and an urban setting with many opportunities
for a rich social life. Montreal combines the excitement of a modern, multicultural city
with affordable housing and easy access to outdoor activities. It is also noted for its four
major universities and more than two hundred local high-tech companies.
Montreal is currently enjoying high growth in the software industry's main areas of
development, especially in telecommunications, aerospace, software development, and
multimedia. Montreal is rapidly gaining a reputation as one of Canada's leading high-tech
centres. There is ample opportunity for industrial collaboration.
Although the primary language at Concordia University is English, proficiency in French
is considered an asset. Interested applicants should send a detailed curriculum vitae, a list
of publications, and at least three references to:
Prof. C. Lam, Chair
Department of Computer Science and Software Engineering
Concordia University
1455 de Maisonneuve West, Montreal, Quebec H3G 1M8, Canada
Tel: (514) 848-2424 ext. 3001
Fax : (514) 848-2830
Email : [email protected]
Website : http://www.cse.concordia.ca
All qualified candidates are encouraged to apply; however, Canadians and permanent
residents will be given priority. Concordia University is committed to employment equity.
WASHINGTON STATE UNIVERSITY.
The School of Electrical Engineering and
Computer Science at Washington State
University invites applications for one
tenure-track position in computer science
and two in computer engineering at the
rank of assistant professor. Applicants
must have earned a Ph.D. in computer
engineering, computer science, or electrical engineering by August 15, 2005.
Candidates must have a strong interest in
and a demonstrated capacity to conduct
publishable research and must display the
potential for successful teaching. A record
of publication in refereed journals and
conference proceedings is essential. Areas
in computer engineering given highest
priority will be computer architecture,
digital system design, embedded systems, networks, hardware-software codesign, design automation, test and verification, fault tolerance, reconfigurable
systems, VLSI design, and nanotechnology. In computer science the highest priority areas will be software engineering,
databases, security, networking/distributed systems, and bioinformatics. However, outstanding candidates who have
specialized in other areas will also be considered. The successful candidate will be
expected to teach, effectively communicate and interact with students and colleagues, conduct funded research, and
direct MS and PhD student research programs. WSU is a Carnegie Research I University located in Pullman, a diverse university town near the Washington/Idaho
border 75 miles south of Spokane. Pullman is known for its high quality of life,
including an excellent K-12 public school
system. The region is renowned for its
scenic beauty and recreational opportunities. Pullman combines the facilities of a
major university with the convenience of
a small town to provide a highly productive work environment for its faculty and
students. The University of Idaho is
located seven miles to the east of WSU.
The proximity of these universities
ensures an academic and cultural atmosphere which is far more vibrant and stimulating than that of other towns of similar size. WSU's location also facilitates
collaboration with Seattle-area companies. Such collaborations are supported
financially by the State of Washington
through the Washington Technology
Center and the Washington Research
Foundation. The School of EECS has forty
faculty. Research expenditures in the
School exceed $3.5 million per year.
Research facilities are excellent, including
several hundred workstations, high-speed
networking, and dedicated laboratory
space. The School offers new junior faculty a reduced teaching load for the first
three years of their appointment. To learn
more about WSU, the School, and faculty
research interests, consult http://www.
eecs.wsu.edu/. Screening of applications
will begin upon receipt and continue until
the positions are filled. Appointments will
start August 16, 2005. Applicants should
send a cover letter, a curriculum vitae,
and the names and addresses of three references qualified to comment on the
applicant's research and teaching qualifications to: Chair, EECS Search Committee, School of Electrical Engineering and
Computer Science, Washington State
University, P O Box 642752, Pullman WA
99164-2752. WSU is an EO/AA educator
and employer.
THE UNIVERSITY OF NEBRASKA AT
OMAHA, Tenure Track Assistant
Professor or Above, Computer Science Department. Candidates are
sought in the following areas: software
engineering, network security, parallel
and distributed computing, and databases in mobile environment. Exceptional
candidates in other areas will also be considered. One of the requirements is an
earned doctorate in computer science or
a closely related area. Applicants interested in the position are asked to apply
on line at: http://careers.unomaha.edu,
where the full text of this advertisement
can be found. The Information Science
and Technology (IST) College, which
includes the computer science department, is housed in the Peter Kiewit Institute, a new state-of-the-art facility for
teaching and research. Financial support
of the institute, now exceeding $180M,
reflects a unique business-academic partnership. Degrees of B.S., M.S., and Ph.D.
are offered. We invite you to visit our website at: http://www.ist.unomaha.edu. The
University of Nebraska at Omaha has a
commitment to developing and maintaining a diverse faculty and staff and
strongly encourages those from underrepresented groups to apply for this position. The University of Nebraska at
Omaha is an Affirmative Action/Equal
Opportunity employer.
UNIVERSITY OF LOUISIANA AT LAFAYETTE, The Center for Advanced
Computer Studies, Faculty Position, Graduate Fellowships. Candidates with a strong research record and
an earned doctorate in computer science
or computer engineering are invited to
apply for a tenure-track assistant professor
faculty position starting August 17, 2005.
Target areas include Grid Computing,
Visualization, and Distributed Software
Systems. Consideration will also be given
to outstanding candidates in other areas.
The candidate must have demonstrated
potential to achieve national visibility
through accomplishments in research
contract and grant funding, publications,
teaching and supervising graduate students. Faculty teach mostly at the graduate-level and offer a continuing research
seminar. State and university funds are
available to support research initiation
efforts. Salaries are competitive along
with excellent support directed towards
the attainment of our faculty's professional goals. The Center's colloquium
series brings many world known professionals to our campus each year. The Center is primarily a graduate research unit of
18 faculty, with programs leading to
MS/PhD degrees in computer science
and computer engineering. Approximately 220 graduate students are
enrolled in these programs, including 80
PhD students. The Center has been
ranked 46th in a recent NSF survey based
on research and development expenditures, and ranked 35th among the top
100 graduate programs in North America by the Communications of the ACM,
based on research publications. The Center has state-of-the-art research and
instructional computing facilities, consisting of several networks of SUN workstations and other high performance platforms. In addition, the Center has
dedicated research laboratories in Computer Vision and Pattern Recognition,
Intelligent Systems, Computer Architecture and Networking, Cryptography,
FPGA and Reconfigurable Computing,
Internet Computing, Virtual Reality, Software Research, VLSI and SoC, and Wireless Technologies. Related university programs include the CSAB (ABET)
accredited undergraduate program in
Computer Science, and the ABET accredited undergraduate program in Electrical
and Computer Engineering. Additional
information about the Center may be
obtained at http://www.cacs.louisiana.
edu/. A number of PhD fellowships, valued at up to $18,000 per year including
tuition and most fees, are available. They
provide support for up to four years of
study towards the PhD in computer science or computer engineering. Eligible
candidates must be U.S. citizens or must
have earned an MS degree from a U.S. or
Canadian university. Recipients also
receive preference of low-cost campus
housing. The University of Louisiana at
Lafayette is a Research Intensive University, with an enrollment of 16,500 students. Additional information may be
obtained at http://www.louisiana.edu/.
The University is located in Lafayette, the
hub of Acadiana, which is characterized
by its Cajun music and food, and joie de
vivre atmosphere. The city, with its population of over 120,000, provides many
recreational and cultural opportunities.
Lafayette is located approximately 120
miles west of New Orleans. The search
committee will review applications and
continue until the position is filled. Candidates should send a letter of intent, curriculum vitae, statement of research and
teaching interests, and names, addresses
and telephone numbers of at least four
references. Additional materials, of the
candidate's choice, may also be sent to:
Dr. Magdy A. Bayoumi, Director, The Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, LA 70504-4330. Tel: 337.482.6147; Fax: 337.482.5791. The University is an Affirmative Action/Equal Opportunity Employer.
DARTMOUTH COLLEGE, Director of
the William H. Neukom Institute of
Computational Science. Dartmouth
College invites nominations and applications for the position of Director of the
William H. Neukom Institute of Computational Science. We seek a prominent scientist whose vision and leadership will
establish this interdisciplinary Institute at
Dartmouth. Made possible by a generous
gift, the William H. Neukom Institute will
strengthen and broaden research in computational science. At the same time it will
increase the undergraduate student
awareness of and accessibility to computational science across the College. Dartmouth is committed to building a world-class Institute that will leverage existing
efforts on campus. The director will hold
a tenured full-professor position in the
department of Computer Science.
She/he must be a skilled academic administrator who has the vision and ability to
position the Institute at the forefront of
research and education. In addition to the
Director, three new tenure-track faculty
positions are available with at least two
being in departments other than Computer Science. Resources for undergraduate research, graduate and post-doctoral
fellowships, equipment, outreach, and
administrative support will be available.
With over 4,000 undergraduate and
1,500 graduate students and a tenure-track faculty of 355 in arts and sciences
and 743 in professional schools, Dartmouth College combines the best features of an undergraduate liberal arts college with the intellectual vitality and
resources of a research university. This
highly selective institution has been a
leader of American higher education
since 1769. Sixteen graduate programs
are offered in the arts and sciences, and
the three professional schools of business,
engineering, and medicine. Dartmouth
College is committed to diversity and
encourages applications from women
and minorities. Dartmouth is an Equal
Opportunity, Affirmative Action employer. Additional information is available at
http://www.dartmouth.edu. All applications, nominations and inquiries should
be directed to: Professor Scot Drysdale,
Department of Computer Science, Dartmouth College, 6211 Sudikoff Laboratory, Hanover, NH 03755-3510. E-mail:
[email protected]. Applicants
should send a letter of application, a curriculum vitae, research and teaching statements, and the names of at least four
professional references. Review of applications will begin February 15 and continue until the position is filled.
INDIANA UNIVERSITY BLOOMINGTON
School of Informatics/Computer Science
Tenure-track faculty position in Cybersecurity
Starting Fall 2005
The School of Informatics and the Department of Computer Science at Indiana University at
Bloomington are expanding their Cybersecurity research team and invite applications for a
tenure-track or tenured position starting Fall 2005. Applicants must possess an outstanding
record of research and a sincere commitment to teaching. Additionally, a Ph.D. degree or equivalent in computer science or a related discipline is required. Preference will be given to candidates with demonstrated strength in network security but applications from extraordinary candidates in all areas of computing research are welcome.
We are going through an exciting growth phase and have recently added 42 tenure-track faculty. With a current combined academic faculty of 62, the School of Informatics and the
Department of Computer Science cover a broad range of research areas. Further, research centers like CACR (Center for Applied Cybersecurity Research) and PTL (Pervasive Technology
Labs) support a wide variety of focused as well as collaborative research projects spanning multiple academic units on campus. More information about the School of Informatics can be found
at www.informatics.indiana.edu. The department of Computer Science can be visited at
www.cs.indiana.edu.
The Indiana University at Bloomington campus has been named the "most unwired" campus by
Intel and among the "most wired" campuses by Yahoo! Internet Life magazine in their surveys,
providing excellent facilities for teaching and research. The attractive wooded campus is located
in the rolling hills of southern Indiana, only an hour from the Indianapolis airport. Bloomington is a college town with an abundance of cultural and recreational activities and has been
chosen as one of the most cultural and livable small cities in the US.
We encourage early applications but full consideration will be given to all applications that are
received by January 1, 2005. Positions may be filled at any time, but we expect to make most
of our hiring decisions by May 1, 2005. Applicants should submit a curriculum vitae, a statement of research and teaching emphasizing informatics and computer science, and 3 reference
letters for junior faculty and 6 reference letters for associate and full professors. Applications
should be sent to:
Faculty Search Committee
901 E. 10th Street
Bloomington, Indiana 47408
Or online at: http://www.informatics.indiana.edu/positions/
Indiana University is an Affirmative Action/Equal Opportunity Employer. Applications
from women and under-represented minorities are strongly encouraged.
University of
Bridgeport
Faculty Positions in Electrical/
Computer Engineering and
Computer Science
The Electrical Engineering and Computer Science and Engineering Departments at the University of Bridgeport invite applications for full
time tenure-track positions at the Assistant/
Associate Professor level. Candidates for the
tenure-track positions must have a Ph.D. in electrical/computer engineering or computer science. A strong interest in teaching undergraduate and graduate courses and an excellent
research record are required. The ability to teach
lab-based courses is also required. Applicants
are sought in the areas of wireless design, VLSI,
communications, FPGA analysis, solid-state electronics, fiber optics, speech analysis, circuit theory, Image Processing, IC Design, Digital and Analog Controls, Medical Electronics, biomedical
engineering, biometrics, computer architecture,
software engineering, programming, database
design, algorithms, e-commerce, data mining,
artificial intelligence and data structures. There
are opportunities to participate in the external
engineering programs, which include weekend
and evening graduate and continuing education
classes, on-site instruction in local industry and
distance learning initiatives.
Applicants should send a cover letter, resume, and the addresses and e-mail addresses of four references to: Faculty Search Committee, School of
Engineering, c/o Human Resources Department,
Wahlstrom Library, 7th Floor, 126 Park Avenue,
Bridgeport, CT 06604. Fax: (203) 576-4601.
[email protected].
The University of Bridgeport is an equal
opportunity, affirmative action employer.
GANNON UNIVERSITY, Electrical and Computer Engineering Faculty. Gannon University, a Catholic university located in Erie, Pennsylvania, invites applications for a tenure-track assistant professor position in Computer Engineering for Fall 2005. Committed to excellence in teaching, Gannon has a strong relationship with regional industry. Areas of priority and specialization include object-oriented programming, embedded systems,
computer architecture, mixed-signal VLSI
circuits and systems; however, exceptional candidates from other areas will be
considered. Must be able to support and
promote the University’s mission, its
Catholic identity, and its liberal arts tradition. Requirements include an earned
Ph.D. in Electrical and Computer Engineering or a closely related field. Successful candidates will be expected to
provide evidence of outstanding potential in teaching and securing external
funding. Submit a cover letter, CV, teaching and research statements, and the contact information for three references to:
Gannon University, Engineering Search,
109 University Square, Erie, PA 16541-0001; Fax to: (814) 871-7514; or Email
to: [email protected]. Review of
resumes will begin January 31, 2005, and
this position will remain open until filled.
For more information about Gannon visit
www.gannon.edu. Gannon University is
an Equal Opportunity Employer that
encourages diversity and invites women
and underrepresented groups to apply.
TEXAS TECH UNIVERSITY. The Department of Computer Science invites applicants in all areas of Computer Science for
a position as the Director of the Abilene
Institute and Tenured Professor of Computer Science starting the academic year
2005-06. This position will be at the Texas
Tech University graduate campus in Abilene, Texas. The Abilene Institute is dedicated to conducting leading-edge
research in computer science and related
areas. The Director of the Abilene Institute
is expected to lead efforts to secure extramural funding, develop external collaborations, provide the vision and leadership
to establish new initiatives and programs,
recruit Institute staff, and provide support
for Agency Liaison activities. It is critical for
the Director to develop relationships with
state and federal funding agencies and
other government officials. The Professor
position provides a synergy between the
Institute and the Computer Science
Department graduate campus in Abilene.
The successful candidate’s academic
achievement and professional reputation
should be superior and should have
resulted in national recognition. The candidate should have an outstanding record
of peer-reviewed publications, and funded
research. The Computer Science Department in the College of Engineering at
Texas Tech University offers a Ph.D., M.S.,
and B.S. in Computer Science and an M.S.
in Software Engineering. The Abilene
campus offers only the graduate degrees.
We offer competitive salaries, a friendly
and cooperative environment, and excellent research facilities. State-of-the-art
two-way video facilities allow the Lubbock
and Abilene sites to exchange course
offerings and to conduct collaborative faculty and committee meetings. More information on the department is available at
http://www.cs.ttu.edu/. Applicants must
have a Ph.D. degree in computer science
or a closely related field and should send
a curriculum vita, contact information for
at least five references, and supporting
documents such as statements about past
accomplishments in research and leadership. Electronic submission of application
materials in PDF is preferred. Electronic
submission should be sent to [email protected] or hardcopy mailed to Dr. Jack A. Barnes, Chair, Faculty
Search Committee, c/o Ms. Muriel Bockenstedt, Department of Computer Science, Texas Tech University, 302 Pine, Abilene, TX 79601. Review of applications
will begin as they are received. Applications will be accepted until the position is
filled, assuming the anticipated availability of funds. Candidates must be currently
eligible to work in the United States. Texas
Tech University is an equal opportunity/affirmative action employer and
actively seeks the candidacy of women
and minorities.
DRESDEN UNIVERSITY OF TECHNOLOGY, Master of Computer Science
(MCS), Dresden, Germany. The computer science department of TU Dresden
is a top-tier department in Germany. We
offer two International Master Programs
in computational engineering and computational logic. Because TU Dresden is a state-supported institution, tuition for these programs is free! The focus of the computational engineering program is on building software-intensive systems. For more information please see http://www.computational-engineering.de.
ILLINOIS INSTITUTE OF TECHNOLOGY, Chicago, IL, Chair, Department of Computer Science. Applications and nominations are invited for the
position of Chair of the Department of
Computer Science. The candidate’s mission is to lead the department to national
prominence, while providing strategic
vision, decisive leadership in research and
education, and fostering interactions with
government and industry. This growing
department currently has 17 tenure track
faculty members. In the past five years the
department has hired ten new faculty –
all from top tier CS departments.
Research funding has dramatically
increased in this time frame and is now
close to the funding levels found at top
CS departments. New facilities include an
84-node Sun Microsystems ComputeFarm, one 14-node IBM Linux-based cluster, one Cray XD1 parallel computer, and
many high-end graphic workstations and
servers. The department is connected to
and shares advanced computing facilities
at the National Center for Supercomputing Applications (NCSA), the Argonne
National Laboratory, Northwestern University, the University of Chicago, and the
University of Illinois at Chicago via a $7.5 million dedicated 10-gigabit-per-second fiber-optic research data network.
Several new courses in information
retrieval, data mining, and information
security have recently been added at the
undergraduate level. Faculty research
interests include computer networking,
distributed and parallel systems, information retrieval and databases, intelligent
information systems, and software engineering. The department offers B.S.,
M.S., and Ph.D. degrees. The department has established cooperative
research activities with local research and
government agencies. IIT is a private,
Ph.D.-granting institution, which offers
programs in engineering, science, architecture, psychology, law, and business.
The main campus is located within 10
minutes of downtown Chicago. In addition, we have a campus located along the
high technology corridor in Chicago's
western suburbs. Qualifications for the
position include an international reputation in Computer Science, excellent communication and administrative skills, and
a desire to take a growing department
and continue our long-term goal of transforming it into one of the major centers
for CS research in the country. Applications should be sent to: Professor Fouad
A. Teymour, Chair, CS Search Committee,
10 W. 33rd Street, Illinois Institute of
Technology, Chicago, IL, 60616.
http://www.cs.iit.edu. Illinois Institute of
Technology is an equal opportunity, affirmative action employer.
KANSAS STATE UNIVERSITY, Faculty
Position, Department of Computing and Information Sciences. The
department of Computing and Information Sciences at Kansas State University
invites applications for a tenure track position beginning in Fall 2005. Preference
will be given to candidates in the areas of
Bioinformatics data mining, data management, and data integration. Applicants must be committed to both teaching and research. Applicants should have
a PhD degree in computer science with
demonstrated expertise in Bioinformatics; salary will be commensurate with
qualifications. Applications must include
descriptions of teaching and research
interests along with copies of representative publications. Kansas State University
is committed to the growth and excellence of the CIS department. Details of
the department can be found at
http://www.cis.ksu.edu/. Details about
Bioinformatics research at K-State can be
found at http://www.cis.ksu.edu/bioinformatics. Please send applications to
Chair of the Recruiting Committee,
Department of Computing and Information Sciences, 234 Nichols Hall, Kansas
State University, Manhattan, KS 66506
(email: [email protected]). Review
of applications will commence February
1 and continue until the position is filled.
Kansas State University is an Affirmative
Action Equal Opportunity Employer. The
department is committed to diversity,
and women and minority candidates are
encouraged to apply.
BOISE STATE UNIVERSITY, Assistant
Professor, Computer Science Department, College of Engineering,
Boise, Idaho 83725-2075. The department has a tenure-track faculty position
open for Fall 2005. Screening will begin
immediately. Applicants with a Ph.D. in
Computer Science are sought, with preference to bioinformatics, systems, databases or graphics (all areas will be considered). Applicants must provide
evidence of strong teaching and research
potential. Send letter of application, vita,
graduate transcripts, and three letters of
professional reference to: CS Search Committee, Computer Science Department,
Boise State University, Boise, ID 83725-2075. Boise State University is an EOE/AA
Employer. Vets Preferences. For further
info, see the website: http://cs.boisestate.edu/ad05.html or http://hrs.boisestate.edu/joblistings.
SOFTWARE ENGINEER. For company
in Littleton, CO, research, design, develop
computer software systems in conjunction with product development for engineering, energy-related e-business and
geographic information systems for use
by telecom companies and/or energy/gas
utilities to support a wide range of
requirements including outage/distribution management, asset management
and mobile workforce management.
Design models for spatial data storage
and applications to maintain, manage
and migrate data to support system operation and provide land information and
property data. Implement systems in data
relational data stores using Oracle and
SQL Server. Design and develop software
to retrieve and manipulate spatial data
and customize various mapping and geographical web-based applications and/or
desktop client/server products, and share
this data with other systems using a variety of enterprise integration applications
and/or middleware technologies. Analyze
software requirements and consult with
hardware engineers and other engineer-
ing staff to evaluate interface between
hardware and software. Document performance, availability, reliability and
extensibility. Formulate and design software system using previous project
benchmarks and prototypes. Develop
and direct software system testing procedures, programming and documentation. Consult with customer concerning
requirements, development, deployment
and maintenance of software system.
8am-5pm, 40 hrs/wk, $97,500/yr. Req
Bachelor's (or foreign equiv) in Geomatics, Comp Sci or related field and 3 yrs
exp as Software Engineer, Systems Engineer, Project Manager, Consultant or
Technical Lead and working knowledge
of Oracle, Spatial Data Manager and utility company business practices,
processes. Mail resume to: WORKFORCE
DEVELOPMENT PROGRAMS, PO Box
46547, Denver, CO 80202. Refer to job
order no. CO5104447.
GEORGIA STATE UNIVERSITY, Neuroscience and Computational Biomedicine. BRAINS & BEHAVIOR: The
Brains & Behavior Program (B&B) at
Georgia State University (manager@
cs.gsu.edu) offers graduate fellowships
for its new interdisciplinary initiative in
neuroscience and behavior. The B&B Program brings together seventy faculty
members from eight participating departments, Biology, Chemistry, Computer Science, Computer Information Systems,
Mathematics & Statistics, Philosophy,
Physics & Astronomy, and Psychology, to
conduct collaborative research and graduate training. Interdisciplinary research
groups within B&B include Brains &
Computers, Neurons & Networks, Molecules & Brains, Adaptability & Behavior,
and Brains & Social Behavior. Each B&B
Fellow will matriculate in a member
department and be jointly supervised by
a neuroscientist and a member from their
home department. The Brains & Behavior
Program is affiliated with the Center for
Behavioral Neuroscience (http://www.
cbn-atl.org/research/index.cfm), a National Science Foundation Science and
Technology Center. MOLECULAR BASIS
OF DISEASE: The Molecular Basis of Disease Area of Focus (MBD) at Georgia
State University, Atlanta, GA, (manager@
cs.gsu.edu) is recruiting students for its
newly established Ph.D. fellowship program. The MBD Area of Focus is an interdisciplinary program in computational
biomedicine that includes over seventy
faculty members in the Departments of
Biology, Chemistry, Computer Science,
Physics and Astronomy, Mathematics and
Statistics, and Computer Information Systems. Interdisciplinary research foci within
the MBD include Structural Biology, Computational Biology and Bioinformatics,
Cancer and Infectious Diseases. Applications should be made directly to the Ph.D.
programs of the participating departments. MBD fellows receive an annual
stipend of $22,000 plus a full tuition
waiver. INFORMATION: For more information and to request application materials, please contact Ms. Adrienne Martin,
[email protected], phone: 404-6510610.
SUBMISSION DETAILS:
Rates are $290.00 per column inch ($300 minimum). Eight lines per column inch and average five typeset words per line. Send copy at least one
month prior to publication date to: Marian Anderson, Classified Advertising, Computer Magazine, 10662 Los Vaqueros Circle, PO Box 3014, Los
Alamitos, CA 90720-1314; (714) 821-8380; fax (714) 821-4010. Email: [email protected].
In order to conform to the Age Discrimination in Employment Act and to
discourage age discrimination, Computer may reject any advertisement
containing any of these phrases or similar ones: “…recent college grads…,”
“…1-4 years maximum experience…,” “…up to 5 years experience,” or
“…10 years maximum experience.” Computer reserves the right to append
to any advertisement without specific notice to the advertiser. Experience
ranges are suggested minimum requirements, not maximums. Computer
assumes that since advertisers have been notified of this policy in advance,
they agree that any experience requirements, whether stated as ranges or
otherwise, will be construed by the reader as minimum requirements only.
Computer encourages employers to offer salaries that are competitive, but
occasionally a salary may be offered that is significantly below currently
acceptable levels. In such cases the reader may wish to inquire of the
employer whether extenuating circumstances apply.
ADVERTISER / PRODUCT INDEX
FEBRUARY 2005
Advertiser / Product
Page Number
Advertising Sales Representatives
Air Force Research Laboratory
89
Concordia University
88
Cool Chips 2005
25
D.E. Shaw & Company
89
Embedded Systems Conference 2005
18
Fraunhofer Gesellschaft
IEEE Computer Society Membership
Indiana University Bloomington
5
91
Cover 3
Intel Developer Forum 2005
Cover 4
IRI 2005
New England (product)
Jody Estabrook
Phone:
+1 978 244 0192
Fax:
+1 978 244 0103
Email: [email protected]
New England (recruitment)
Robert Zwick
Phone:
+1 212 419 7765
Fax:
+1 212 419 7570
Email: [email protected]
40-42
Hot Chips 2005
International Conferences 2005
Mid Atlantic (product/recruitment)
Dawn Becker
Phone:
+1 732 772 0160
Fax:
+1 732 772 0161
Email: [email protected]
11
Northwest (product)
Peter D. Scott
Phone: +1 415 421 7950
Fax:
+1 415 398 4156
Email: [email protected]
Southeast (recruitment)
Thomas M. Flynn
Phone:
+1 770 645 2944
Fax:
+1 770 993 4423
Email: [email protected]
7
Systems and Software Week 2005
Cover 2
University of Bridgeport
91
University of Karlsruhe
90
Classified Advertising
Boldface denotes advertisements in this issue.
88-92
Midwest (product)
Dave Jones
Phone:
+1 708 442 5633
Fax:
+1 708 442 7620
Email: [email protected]
Will Hamilton
Phone:
+1 269 381 2156
Fax:
+1 269 381 2556
Email: [email protected]
Joe DiNardo
Phone:
+1 440 248 2456
Fax:
+1 440 248 2594
Email: [email protected]
Midwest/Southwest (recruitment)
Darcy Giovingo
Phone:
+1 847 498 4520
Fax:
+1 847 498 5911
Email: [email protected]
Southwest (product)
Josh Mayer
Email: [email protected]
Phone:
Fax:
+1 972 423 5507
+1 972 423 6858
Connecticut (product)
Stan Greenfield
Phone:
+1 203 938 2418
Fax:
+1 203 938 3211
Email: [email protected]
Southern CA (product)
Marshall Rubin
Phone:
+1 818 888 2407
Fax:
+1 818 888 4907
Email: [email protected]
Northwest/Southern CA
(recruitment)
Tim Matteson
Phone:
+1 310 836 4064
Fax:
+1 310 836 4067
Email: [email protected]
Southeast (product)
Bob Doran
Phone:
+1 770 587 9421
Fax:
+1 770 587 9501
Email: [email protected]
Japan
Tim Matteson
Phone:
+1 310 836 4064
Fax:
+1 310 836 4067
Email: [email protected]
Europe (product/recruitment)
Hillary Turnbull
Phone:
+44 (0) 1875 825700
Fax:
+44 (0) 1875 825701
Email: [email protected]
Advertising Personnel
Computer
IEEE Computer Society
10662 Los Vaqueros Circle
Los Alamitos, California 90720-1314
USA
Phone: +1 714 821 8380
Fax: +1 714 821 4010
http://computer.org
[email protected]
Marion Delaney
IEEE Media, Advertising Director
Phone:
+1 212 419 7766
Fax:
+1 212 419 7589
Email: [email protected]
Marian Anderson
Advertising Coordinator
Phone:
+1 714 821 8380
Fax:
+1 714 821 4010
Email: [email protected]
Sandy Brown
IEEE Computer Society,
Business Development Manager
Phone:
+1 714 821 8380
Fax:
+1 714 821 4010
Email: [email protected]
ENTERTAINMENT COMPUTING
Ender’s Game Redux
Michael Macedonia, Georgia Tech Research Institute
The US military’s new
training simulations
prepare soldiers for war,
peace, and everything
in between.
My brother, a US Army surgeon in Iraq, sent me an
e-mail message not long
ago in which he concisely
summarized the convergence of entertainment technology and
military training to create what Ben
Sawyer, a high-tech freelance writer
and technology consultant, has
dubbed serious games. My brother
wrote the following:
Our medics are taking care of fresh
casualties every day. That is why I
think it would be great to have the
sim out here because:
• We are fairly well restricted to
the compound as we are
surrounded by the enemy.
• Soldiers have not much else to
do during their “down time.”
• They have real-life experiences
they can compare the sims to.
• They love computer games.
This situation shows that we live
now in a complex world that changes
dramatically from day to day. Game
technology, now ubiquitous in places
like Camp Falluja, Iraq, offers a path
to sharing and realizing interactive
experiences that mirror reality.
To capitalize on this opportunity, the
military is developing several simulations that help soldiers train for waging
war, keeping the peace, and recovering
from the emotional trauma that combat can cause.
ENTERTAINMENT AND TRAINING
If necessity is the mother of invention, war is often the midwife.
Consider the rapid development of
radio in World War I and radar and
computing in World War II. The symbiosis between military training and
entertainment technology follows a
similar pattern. The first flight simulator used by the Navy, the Link Blue
Box, was originally sold to the Coney
Island amusement park while the Navy
awaited funding in the 1930s.
This legacy lives on in simulators,
with the technology going both ways.
For example, Disney’s Mission to Mars
ride at Walt Disney World’s EPCOT
Center provides a technologically complex simulation of space flight that
relies on advanced simulation technology from the military aerospace industry.
Stanford University’s Timothy Lenoir
has documented in detail this long-standing relationship between the
military and entertainment (www.
stanford.edu/dept/HPST/TimLenoir/
Publications/Lenoir_TheatresOfWar.pdf).
21ST-CENTURY WAR
War in the 21st century brings new
military training challenges.
Long before the invasion of Iraq and the South Asian tsunami, US Marine General Charles Krulak wrote a seminal article in 1999 about the dilemmas facing modern soldiers. In “The Strategic Corporal: Leadership in the Three-Block War” (www.au.af.mil/au/awc/awcgate/usmc/strategic_corporal.htm), Krulak described the need for soldiers who can fight conventional combat in one city neighborhood, transition to peacekeeping operations in another, then provide humanitarian relief in a third.
Moreover, every decision a soldier
makes, from private to general, has
become strategically relevant. Thus,
Krulak wrote that “The inescapable
lesson of Somalia and of other recent
operations, whether humanitarian
assistance, peacekeeping, or traditional
warfighting, is that their outcome may
hinge on decisions made by small unit
leaders, and by actions taken at the
lowest level.”
Krulak offers the following prescription for training US soldiers: “The
common thread uniting all training
activities is an emphasis on the growth
of integrity, courage, initiative, decisiveness, mental agility, and personal
accountability.”
Crisis simulation
One approach the US Army has
taken to training for the three-block
war involves actually building the
blocks in highly instrumented urban
war training facilities such as Fort
Polk’s Joint Readiness Training Center
in Louisiana, with actors who role-play civilians and terrorists. The experience at JRTC resembles a theme park: Soldiers arrive at some mythical main street in a distant land, except they’re there not for amusement but serious business.

Figure 1. Full Spectrum Leader. This military simulation puts the player in charge of an entire US Army platoon, which must be successfully guided through challenging combat missions.
Much like Bill Murray in Groundhog Day, soldiers at these facilities
undergo discovery learning. The seriousness comes across in the After
Action Review. An often brutally candid self-examination of what each individual did right and wrong, the AAR
is facilitated by experienced observers
armed with video and data that document every mistake participants made
during the exercise.
The army also applies the AAR to
simulators such as the Engagement
Skills Trainer 2000—a marksmanship
system that employs both synthetic 3D
graphics and filmed scenarios to teach
troops sound judgment in the use of
force.
The Israeli experience with their version of the EST 2000 provides a good
example of how this process works.
The Israeli Defense Force’s Lieutenant
Colonel Golan, an engineer and enthusiastic computer gamer, believes that
units stationed in confrontation zones
can use the mobile range to continue
to train and maintain their skills.
Golan believes that the state-of-the-art
simulator can help the IDF do far more
than improve its soldiers’ shooting
skills during a time when its ethical
image is being eroded in its own eyes
by one horrific incident after another.
Golan noted that every real event is
analyzed and the observers’ conclusions drawn up. The officers then
decide what lessons can be learned
from the incident and work with military personnel experienced in film production to create a video that conveys
this information (www.haaretz.com/
hasen/spages/522671.html).
Gaming warfare
Unfortunately, combat units rarely
get to attend training at instrumented
facilities like the JRTC more than once
a year. Further, large virtual simulators
such as EST 2000 cost significantly
more than consumer devices. Thus,
during the 1990s, the military began
exploring the use of PCs and video
game consoles as affordable alternatives to their big simulators.
In 1999, the US Army established the
University of Southern California’s
Institute for Creative Technologies to
foster new training and simulation
research that would address Krulak’s
Strategic Corporal challenge. The institute exploits entertainment technologies
that could enable better methods for
developing leadership and decision
making.
Game technologies became the leading candidate for research because they
provide increasingly sophisticated interactive experiences that can be networked via the Internet (www.ict.usc.
edu/disp.php?bd=proj_games). This
work spurred a flowering of developments in using game technology for
training (www.dodgamecommunity.
com).
James Korris, creative director of
ICT, and Pandemic Studios developed
perhaps the best-known product of
this type, Full Spectrum Warrior, for
the Xbox and PC. FSW employs
advanced artificial intelligence and user
interface techniques that let a player
experience the role of a squad leader
in urban combat.
The game replaces the typically fast-paced first-person shooter design with
a more deliberate, 3D real-time strategy game. As the squad leader, you
can’t even shoot your weapon—your
job is to maneuver your squad through
hostile territory without losing members of your team. The version developed by the military so impressed
commercial publishers that THQ
released it commercially in 2004.
The commercial version differs significantly from the military one, however. The Army version received a
thorough review for realism by subject-matter experts. Given that it accurately models the challenges of urban
combat, it is harder to “win” in this
version.
First-person thinker
ICT is now at work on the new
game shown in Figure 1, Full Spectrum
Leader, in which you role-play an
infantry platoon leader. This makes
your job more complex—you must
lead 30 soldiers instead of eight—and
the missions available are more
diverse.
FSL will include AI advancements
such as an opponent’s ability to recognize players’ actions and respond
dynamically. Designed to develop cognitive skills, FSL requires tactical decision making, resource management,
and adaptive thinking. Its scenarios
focus on asymmetric threats within
peacekeeping and peace-enforcement
operations.
ICT’s Mike van Lent leads the effort
to put explainable AI into these simulations. This will give the AI entities—
enemies, civilians, and friendly troops—
the ability to explain the rationale for
their behavior. This, in turn, will enable
the games to provide more relevant
AARs.
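To make that idea concrete, here is a minimal, hypothetical sketch in Python: a simulated entity records a short rationale alongside every action it takes, so an after-action review can replay not just what it did but why. The class, rule set, and names below are illustrative assumptions, not ICT's implementation.

from dataclasses import dataclass, field

@dataclass
class ExplainableAgent:
    name: str
    log: list = field(default_factory=list)

    def choose_action(self, observation: dict) -> str:
        # A toy rule set stands in for the real behavior model.
        if observation.get("under_fire"):
            action, why = "take_cover", "incoming fire detected"
        elif observation.get("civilians_present"):
            action, why = "hold_fire", "civilians within the weapon arc"
        else:
            action, why = "advance", "route clear and objective ahead"
        # Record the decision and its rationale for the after-action review.
        self.log.append({"observation": observation, "action": action, "rationale": why})
        return action

def after_action_review(agent: ExplainableAgent) -> None:
    # Replay every decision together with the reason the entity gives for it.
    for step, entry in enumerate(agent.log, 1):
        print(f"{step}. {agent.name} chose {entry['action']}: {entry['rationale']}")

agent = ExplainableAgent("opposing-squad-1")
agent.choose_action({"under_fire": True})
agent.choose_action({"civilians_present": True})
after_action_review(agent)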
Given the cerebral nature of the challenges facing players, FSL may well be
the first of its kind: a first-person
thinker that takes the fast-paced checkers experience of a first-person shooter
and turns it into a strategic speed-chess
match.
Simulation therapy
Some ICT researchers have proposed using game technology to treat
soldiers suffering from post-traumatic
stress disorder.
Skip Rizzo and Jarrell Pair are
extending the research Pair did with
Larry Hodges at Georgia Tech by using
FSW to treat PTSD. Their version of
FSW employs an interface that provides
the clinician with the capacity to monitor a patient’s behavior and customize
the therapy experience by placing individuals in virtual-environment locations that resemble the setting in which
the traumatic events occurred initially.
To foster the anxiety modulation
needed for therapeutic habituation, the
interface also facilitates the gradual
introduction and control of trigger
stimuli in the environment, in real time.
Military psychologists in Iraq support Rizzo and Pair’s research. Soldiers
and marines there already play a host
of games during their down time.
Psychologists who treat combat stress
recommend video games for marines
to unwind and boost morale.
Erin Simmons, a lieutenant and psychologist with Bravo Surgical Company, noted that tastes in games vary
widely: Some soldiers like games with
aggressive military content, while others prefer games that provide an experience as different from the war zone as
possible. However, the troops find the
games relaxing regardless of their preferred genre (http://msnbc.msn.com/id/
6780587).
Digital linguistics
The US military faces a major challenge in providing language and cultural-awareness training to the 150,000
soldiers in Iraq. Thus, another game
getting attention within the military is
the DARPA-sponsored DARWARS
Tactical Language Training System.
Lewis Johnson and Stacy Marcelis
at USC’s Information Sciences Institute
lead the project, which aims to overcome the bottleneck of classroom
training.
Language training in the military
often involves months of classroom
instruction and is limited to intelligence specialists. DARWARS, based
on a modification of the commercial
game Unreal Tournament, provides
interactive language learning through
speech recognition.
Players initially practice on vocabulary items and learn gestures, then
apply them in simulated missions. In
the simulation, they interact with virtual characters in a variety of scenarios that introduce the player to Arabic
culture and language (www.isi.edu/
isd/carte/proj_tactlang).
Massively multiplayer classroom
The Army’s Simulation Technology Center is working with Forterra to develop a massively multiplayer online role-playing game for training soldiers throughout the world. Based on the same technology used for the massively multiplayer online game There (www.there.com), the Asymmetric Warfare Environment project is being designed along the lines of games such as Everquest, in which thousands of role-players interact in the same 3D persistent world over the Internet.

Forterra president Robert Gehorsam explains that AWE will let a commander tailor his unit’s training to a specific environment and scenario or modify an existing one out of a repository. The relevant training exercises can last anywhere from a couple of days to months, and all of the action will take place online (www.homelanfed.com/index.php?id=20830).

The AWE environment now includes a Baghdad database and has been used to develop checkpoint training scenarios for distributed teams in the US National Guard and the active US Army.

In Orson Scott Card’s visionary 1986 science fiction novel, Ender’s Game, the hero, Andrew “Ender” Wiggin, is drafted into Battle School. Although only a child, Ender tackles increasingly difficult simulator missions against an alien race. After he wins the brutally difficult final scenario, Ender realizes that the simulated battle he has just fought was no sim, but the real conflict’s ultimate conclusion. Thus did Card envision the emergence of simulations and games for training and the power of the Internet to influence ideas and change.

Ender Wiggin’s bizarre world is no longer fiction. The military is already using the simulation technology of first-person thinkers to help soldiers learn how to fight the three-block war, learn the language of allies and adversaries, heal mental scars, and even save the lives of others. ■
Michael Macedonia is a senior scientist
at the Georgia Tech Research Institute,
Atlanta. Contact him at macedonia@
computer.org.
INVISIBLE COMPUTING
Creating and Protecting Digital Worlds
Bill N. Schilit and Roy Want, Intel Research
Over the past year, mobile storage capacities have skyrocketed as prices have plummeted. CompactFlash (CF)
memory cards that store one
gigabyte are now available for about
the cost of a 128-Mbyte card one year
ago. Tiny rotating media is keeping
up: Hitachi is boosting the capacity of
its Microdrive, used in the iPod Mini,
from 4 to 10 Gbytes. Larger-format
2.5-inch notebook hard drives can
now store up to 100 Gbytes.
The exponential growth in flash and
disk storage is likely to continue for
several years as manufacturers digitally
encode more types of media and
demand increasingly compact formats.
For example, Secure Digital flash-memory cards, which now store up to
2 Gbytes, will hold eight times as much
data by 2009, while the capacity of
2.5-inch magnetic disks will soar to
500 Gbytes.
One use for such large mobile storage capacity is to help deal with information overload. In today’s fast-paced
world, people don’t have time to
process the overwhelming amount of
content available on TV, the radio, the
Web, and other media sources. Mobile
devices with high memory capacity can
personalize and filter media streams in
the same way that personal video
recorders (PVRs) such as TiVo make it
possible to manage hundreds of cable
and satellite TV channels.
Users can now personalize
large collections of digital
data and access it in real
time.
These technical achievements highlight the ongoing “invisible computing” revolution that is enabling people,
for the first time, to use large amounts
of digital data in their everyday activities. Mobile devices are becoming
smarter and less reliant on wireless
communication, giving users real-time
access to an entire digital world.
However, as the industry approaches
miniature mobile storage devices that
can hold 1,000 songs or a 2-Mbyte
snapshot from every minute in a day,
the danger of losing that world also
increases, necessitating simpler and
more reliable backup solutions.
PERSONAL DIGITAL VIDEO
In 2004, numerous portable media
players (PMPs) hit the market including the 20-Gbyte Creative Zen Portable Media Center, Samsung’s Yepp YH-999 Portable Media Center, the iRiver
PMC-140 series (available in 20- and
40-Gbyte versions), and the 80-Gbyte
Archos AV480 Pocket Video Recorder.
At this year’s International Consumer Electronics Show (CES), manufacturers displayed PMPs like the one shown in Figure 1 that are a step beyond portable DVD players because they can connect to a PC or set-top box and download content for later viewing. The most powerful of these pocket-sized devices can store up to 320 hours of movies, TV programming, or home video.

Figure 1. Portable media players can store hundreds of hours of video.

In the next decade, the availability of vast storage capacity could significantly change the way we use PMPs. Instead of specifying which TV programs, movies, and so on to store, users will store content that has general interest for them and then use
video-search software to find exactly
what they want. For example, if a user
enters “Lord Sainsbury of Turville” in
the blinkx search engine (www.blinkx.
tv), it will select digitized TV clips and
cue them up to the point where they
mention the British Minister for Science and Innovation.
Simple video-search engines can
extract information from the program
guide such as a show’s title, description,
cast, and credits. They can also work at
a deeper level by extracting text-based
keywords from closed-caption transcripts or using an audio-mining system
that processes speech into a stream of
text and then indexes the text with references to the original audio track.
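A minimal sketch of that indexing step, assuming caption or audio-mined text arrives already tagged with a clip identifier and a timestamp; blinkx and other commercial engines work differently, and every name below is illustrative.

from collections import defaultdict

# keyword -> list of (clip_id, seconds) cue points back into the video
index = defaultdict(list)

def add_caption(clip_id, seconds, text):
    # Index every word of a caption snippet, keeping its cue point.
    for word in text.lower().split():
        index[word.strip(".,!?\"'")].append((clip_id, seconds))

def search(query):
    # Return cue points whose captions contain every query word.
    hits = [set(index.get(word, [])) for word in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []

add_caption("news-0207", 134.0, "Lord Sainsbury of Turville announced new science funding")
add_caption("news-0207", 410.5, "and now the weekend weather")
print(search("sainsbury turville"))   # [('news-0207', 134.0)]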
Looking further into the future,
PMPs could become proactive. For
example, after you purchase a tour
package from an online travel service,
your PMP could extract information
from your notebook’s Web browser
and present a series of Travel Channel
clips corresponding to your itinerary.
A future PMP might likewise notice
that your digital wallet receipt includes
a purchase of Portland cement and
then prepare a do-it-yourself masonry
video using clips from This Old House
and other shows.
Massive video storage capacity
enables users to personalize a large
media collection and access it in real
time. With no network connection to
backend servers, latency is low and
availability is high. With the push of a
button, a user can fast-forward to
specific entertainment or educational
content.
PERSONAL DIGITAL AUDIO
Portable music players use solid-state flash memory to store from tens to hundreds of hours of MP3, Windows Media Audio (WMA), Advanced Audio Coding (AAC), and other digital music files. For example, the 1-Gbyte version of the palm-size SanDisk
Digital Audio Player can store up to 19
hours of audio—roughly equivalent to
280 songs—in 128-Kbps WMA format. Within the next few years, the
availability of 32-Gbyte flash memory
will allow such devices to store nearly
25 days’ worth of audio content.
With so much storage, manufacturers are looking for new ways to flow
media into devices. A recent feature
appearing in portable music players is
an FM tuner with a record mode to
capture fresh content. Although FM is
not very high fidelity, music capture of
digital radio is also at hand: The
Delphi MyFi stores five hours of high-quality XM Satellite Radio content
without a computer download.
A new digital radio format is appearing in automobiles and could have a
huge impact on recordable music players. With high-definition radio, AM
and FM station owners can broadcast
digital-CD-quality audio using the
existing infrastructure and spectrum in
a way that coexists with analog signals.
Unlike XM Satellite Radio and Sirius
Satellite Radio, which charge a subscription fee, HD radio is free for consumers who have purchased receivers
from select manufacturers.
Digital radio transmits song title,
artist, and other metainformation
using standards such as ID3 (www.
id3.org). This stream of descriptive
data makes it feasible to record particular audio content—for example all
the Beatles songs, traffic reports, or
BBC news—over the course of a day,
offering the same time-shifting capability for audio that PVRs provide for
video. Moreover, many of the features
of personalized Internet radio will soon be available on wireless Internet radio devices.
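A minimal sketch of that kind of metadata-driven recording, assuming each broadcast segment arrives as an ID3-style dictionary of fields plus its audio; the field names and the listener's wish list are assumptions made for illustration.

WANTED_ARTISTS = {"The Beatles"}
WANTED_PROGRAMS = {"Traffic Report", "BBC News"}

def keep_segment(meta):
    # Keep a segment if its artist or program matches the listener's profile.
    return meta.get("artist") in WANTED_ARTISTS or meta.get("program") in WANTED_PROGRAMS

def record_day(stream):
    # stream yields (metadata, audio_bytes) pairs over the course of a day.
    saved = []
    for meta, audio in stream:
        if keep_segment(meta):
            saved.append((meta.get("title", "untitled"), audio))
    return saved

day = [({"artist": "The Beatles", "title": "Yesterday"}, b"..."),
       ({"program": "BBC News", "title": "0900 bulletin"}, b"..."),
       ({"artist": "Someone Else", "title": "Filler"}, b"...")]
print([title for title, _ in record_day(day)])   # ['Yesterday', '0900 bulletin']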
However, the future of unprotected digital radio remains in doubt. The Recording Industry Association of America has submitted a brief to the US Federal Communications Commission arguing that consumers should be able to record digital broadcasts for later playback but not split a broadcast into individual songs.

CREATING A DIGITAL WORLD

The ability to store large collections
of digital video and audio as well
as other data and access it anytime,
anywhere has emerged as a killer application. As portable device storage
capacity increases, so will the diversity
of things the devices can store and their
utility.
Table 1 shows the size of various types of media (www.sims.berkeley.edu/research/projects/how-much-info-2003/execsum.htm#stored), which begs the question, how much is enough?

Table 1. How much media is enough?

Media                                        Size
Typewritten page                             5 Kbytes
Low-resolution photo                         100 Kbytes
Short novel                                  1 Mbyte
Minute of MP3 audio                          1 Mbyte
High-resolution photo                        2 Mbytes
Minute of high-fidelity sound                10 Mbytes
Hour of standard-definition video            2 Gbytes
Hour of high-definition video                10 Gbytes
10,000 songs in 128-Kbps AAC format          40 Gbytes
Library floor of academic journals           100 Gbytes
Academic research library                    2 Tbytes
US Library of Congress print collections     10 Tbytes
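As a rough sanity check of one row in the table, assuming four-minute songs encoded at 128 Kbps:

# 128 Kbps = 128,000 bits per second; a 4-minute song is 240 seconds.
bytes_per_song = 128_000 * 240 / 8            # about 3.84 Mbytes per song
total_gbytes = 10_000 * bytes_per_song / 1e9
print(round(total_gbytes, 1))                 # 38.4, close to the 40 Gbytes listed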
Ideally, users would like to combine
entertainment content with personal
data. Today, advances in mobile solid-state and magnetic storage capacity are
enabling portable devices to store not
only video and music, but all types of
digital data.
Following the same downward price
trends as CF cards, Universal Serial Bus
(USB) flash-memory sticks have become
ubiquitous in the past five years. In
addition to backing up spreadsheets,
presentations, and other business data,
these devices, with storage capacities
exceeding one gigabyte, serve as a readily available archive of contacts, documents, photos, music files, e-mail, Web
bookmarks, and other personal information that mobile professionals can
access serendipitously as opportunities
arise.
If you use a typical 50-Gbyte notebook for mobile computing, you won’t
need to delete any content for the
machine’s lifetime. In addition, when
it’s time to upgrade in three to five years,
you’ll be able to transfer all of the old
data, which will occupy a fraction of
the new laptop’s advertised capacity.
As personal collections of digital
data grow, users are finding it increasingly difficult to manually organize this
information on their computer. As one
partial response to this problem,
Google, Yahoo, and Microsoft are
adapting their popular search-engine
technologies for all PC content. For
example, with Google’s Desktop
Search (http://desktop.google.com),
users can search the full text of viewed
Web pages, e-mail, chats, and document files in various formats, updating
information continuously.
Declining media prices are providing new opportunities for value-added
services based on digital content. For
example, manufacturers could preload
read/write disks or flash-memory cards
with gigabytes of compelling content.
In the case of audio and video, this
would avoid the long delays and costs
associated with downloading files from
a network—particularly for mobile
devices that only have access to low-bandwidth cellular connections.
Developers could use well-known
cryptographic techniques to protect
copyrighted media. To access this content, a user could simply purchase the
appropriate cipher key to decrypt specific files. In addition, to prevent burdening users with undesirable preloaded information, a simple profile
describing personal preferences could
determine which encrypted data to
write over.
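One way to realize the cipher-key idea is per-file symmetric encryption, sketched here with the Fernet recipe from the widely used Python cryptography package; the preload-and-purchase workflow is an assumption for illustration, not a description of any shipping scheme.

from cryptography.fernet import Fernet

def preload(content):
    # The vendor encrypts each file with its own key and retains the key.
    key = Fernet.generate_key()
    return key, Fernet(key).encrypt(content)

def unlock(purchased_key, encrypted):
    # Runs on the device after the user buys the key for this specific file.
    return Fernet(purchased_key).decrypt(encrypted)

key, blob = preload(b"feature-length video payload ...")
assert unlock(key, blob) == b"feature-length video payload ..."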
PROTECTING YOUR DIGITAL WORLD
As mobile devices gain the capacity
to store entire digital worlds, the danger of losing all that information in a
single disk crash also grows. Ironically,
the very technology that makes it possible to create that world can fail catastrophically, causing it to disappear in
an instant.
Although most users recognize the
importance of backing up data, many
computing systems remain vulnerable
to this kind of calamity. One reason for
this is the increasing reliability of disk
drives over the past 15 years. A combi-
nation of various safety features, small
size, and low head mass has made
today’s disk drives less susceptible to
damage when dropped or exposed to
sudden acceleration. Further, disk-drive
failure rates are just low enough to lull
users into a false sense of security.
This problem is becoming more critical with exponential increases in disk
capacity. Not only can you lose more
information than ever before, but it
takes longer to back up the system,
strengthening the psychological temptation to do nothing about it.
Solutions abound—the challenge is
finding one that suits you and being disciplined enough to use it on a regular
basis. Organizations with well-administered computer systems typically
make tape backups at night, but for
home computing, most people rely on
various disk-based technologies.
This was once the domain of floppy
disks, which writeable CD-ROMs
have replaced. However, it takes more
than 78 640-Mbyte CD-ROMs to
back up a fully loaded 50-Gbyte drive.
Reasonably priced writeable 4.7-Gbyte
DVDs have been available for a couple of years, but creating these disks
remains a long, monotonous process.
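The figure above follows from simple division, using the decimal units that drive and disc makers quote:

import math

drive_mbytes = 50_000     # a fully loaded 50-Gbyte drive
disc_mbytes = 640         # one CD-ROM
print(math.ceil(drive_mbytes / disc_mbytes))   # 79 discs, i.e. more than 78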
Not to worry—more optical disk
capacity is on the way. The latest double-layer Blu-ray Disc format (www.
blu-ray.com) holds 50 Gbytes of data,
offering 10 times more storage than a
standard DVD—the right ballpark for
notebook computers. However, this
new technology is expensive, and by
the time it drops to an affordable price,
notebook disks will have moved up the
exponential memory density curve.
A popular alternative to the relatively safe optical-disk backup solution
is an external (but conventional) disk
drive connected either by a USB 2.0 or
FireWire cable. It’s highly unlikely that
both disks will crash simultaneously,
and if one does fail, the other will probably survive.
In addition, external drives are available for less than $1 per gigabyte that
let users initiate a backup by simply
pressing a button on the side.
By combining ease of use with a
pragmatic 300-Gbyte capacity, external disk drives currently offer the most
attractive solution to the problem of
protecting mobile digital worlds.
Storage density for hard disks has been outpacing Moore’s law for some time, with density approximately doubling every year. A palm-size computer can now store large
quantities of digital video, audio, and
data, making a digital world readily
available to mobile users.
At the 2005 CES, a bewildering
array of personal media devices was on display. Currently, MP3 and photo-slideshow players rule, but in a few
years, full PMP capability will be available on even the smallest devices.
However, with more to lose than ever
before, backing up data will be a necessity, not an option. ■
Bill N. Schilit is codirector of Intel
Research Seattle. Contact him at bill.
[email protected].
Roy Want is a principal engineer at
Intel Research. Contact him at roy.
[email protected].
THE PROFESSION
The Profession and the Big Picture
Neville Holmes, University of Tasmania
This column has on occasion emphasized that the responsibilities of computing professionals—indeed of any learned profession’s members—go beyond acquiring and applying technical experience and wisdom
and beyond acting in their clients’ best
interests. The greatest professional
responsibility is to act in the best interests of the society to which the profession owes its status.
By definition, human society’s greatest interest is its continued existence. It
seems the greatest threat to that continuance—climate change—is at last
being given credence beyond scientific
and environmental circles. The computing profession will play a crucial
role in the battle to mitigate the effects
of climate change and, if possible,
adapt to them.
Therefore, we now share a responsibility to inform ourselves of the facts
and press for the strategies and measures that we believe will be most effective in combating this threat.
CLIMATE CHANGE
The term climate refers to both the
pattern of weather over a typical year
and variations from typicality over
longer periods. Climate varies from
place to place, determining the kind of
plants and wildlife that can live in a
given location and, to a lesser degree,
the kind and quality of life people can
have there.
The profession must
tackle social challenges,
the biggest of which is
human society’s
continued survival.
Geological records show quite
clearly that climate can change greatly
over various time scales. Historically,
meteorological records show that the
global climate has been gradually
warming over the past century, while
more recent meteorological records
show that extremes of weather are
becoming more frequent.
Although these meteorological
records build a picture of anthropogenic
climate change validated by scientific
modeling and accepted by practicing climate scientists (www.sciencemag.org/
cgi/content/full/306/5702/1686), some
people still deny the facts. As professionals avowing rationality, we should
become familiar with these facts and be
ready to promptly counter such denials.
The most popular denial at the moment is Michael Crichton's deceptive State of Fear, authoritatively rebutted at www.realclimate.org.
Geological records show that many of the catastrophic discontinuities that separate major eras coincide with elevated temperatures that result from atmospheric changes. Simple projections of current trends show that a similar, if not worse, elevation in temperature is probable by the end of this century. This strongly implies that human society faces a catastrophe. Yet this impending disaster is still not widely realized. As professionals, we have a duty to spread this information.
The differences between today's climate changes and those of past catastrophes derive from their source: Human activity largely drives today's change, primarily through the extraction of huge amounts of carbon—as gas, oil, and coal—from underground. When used, these minerals subsequently convert to atmospheric carbon dioxide, CO2, presently at double the rate at which the Earth can absorb it.
This CO2 buildup has been accepted
at an international level as harmful and
man-made, although the acceptance
lacks any sense of emergency. Indeed,
critics typically advance arguments
against doing anything about it
because a discounted cash flow analysis suggests that countermeasures
would be cheaper if postponed.
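To see how a discounted cash flow analysis reaches that conclusion, consider a minimal Python sketch; the cost, delay, and discount rate below are invented for illustration and come from no actual climate-economics study.

# Illustrative sketch only: how discounting makes a postponed expense look
# cheap on paper. All figures below are hypothetical assumptions.

def present_value(cost, rate, years):
    """Value today of a cost paid 'years' from now, at a given discount rate."""
    return cost / (1.0 + rate) ** years

cost = 1_000_000_000.0   # hypothetical countermeasure cost, in dollars
rate = 0.05              # hypothetical 5 percent annual discount rate

print(present_value(cost, rate, 0))    # pay now:          1,000,000,000
print(present_value(cost, rate, 50))   # pay in 50 years:  roughly 87,000,000

# The comparison treats the cost as fixed and the climate's response as
# linear; it ignores the nonlinear, possibly irreversible behavior described
# in the next paragraph, which is precisely the engineer's objection.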
An engineer could easily demolish
this absurd kind of argument, which is
based on a superficial estimate of the
costs involved and an assumption that
climate has a simple linear behavior.
Like all complex systems, Earth has
time constants that could well induce
dramatically nonlinear behavior, even
unto catastrophe, that no delayed
action could avoid.
PREDICTION
Human society must react to climate
change. Just how it should react
depends on reliable predictions of
what will happen and how quickly,
and what effect various countermeasures will have. Such predictions will
rely heavily on digital technology. The
quality of a digital climate model
depends on two factors: the data available and the model itself.
Having large amounts of the right
kind of data available ensures that the
simulation is more reliable and allows
refining the model by comparing predictions to outcomes. Digital technology could and should be more widely
used to gather, store, and distribute
meteorological and geological data.
Global climate models are already
tremendously complex, and their development makes them steadily more so.
Such models are based on 3D spatial
grids, so halving the grid spacing to
improve accuracy requires eight times
as many data points. Refining the
model, as discrepancies between prediction and outcome are explained,
leads to the need for more data and
computation at each grid point.
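As a rough illustration of that cube-law growth, the short Python sketch below counts the cells in a cubic domain; the domain size and spacings are arbitrary assumptions, not parameters of any particular climate model.

# Rough illustration of 3D grid scaling; domain size and spacings are
# arbitrary assumptions chosen only to show the cube-law growth.

def grid_cells(domain_km, spacing_km):
    """Number of cells when a cubic domain is divided at the given spacing."""
    per_axis = domain_km // spacing_km
    return per_axis ** 3

domain_km = 1000
for spacing_km in (100, 50, 25):
    print(f"{spacing_km:3d} km spacing -> {grid_cells(domain_km, spacing_km):6d} cells")

# 100 km spacing ->   1000 cells
#  50 km spacing ->   8000 cells   (halving the spacing: eight times as many)
#  25 km spacing ->  64000 cells

In practice the cost grows faster still, because finer grids generally also force shorter time steps, so each halving of the spacing multiplies the total computation by more than a factor of eight.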
Presently, there are two approaches to climate simulation: distributed, as practiced at www.climateprediction.net, and using supercomputers. Both approaches must be improved continually.
This raises the basic problem that
prediction cannot simply be a matter
of projection. Too many contingencies
also need scientific study and modeling: natural contingencies such as
methane burps and ocean current
changes and human contingencies such
as mass migration resulting from the
imminent Peruvian parching and the
eventual Bangladeshi submersion. As
researchers develop a better understanding of these contingencies, they
must build the likely effects, singly and
in combination, into the overall climate model.
From my reading, I suspect that
these computations will require special-purpose multiprocessors with, for
example, one processor per grid point.
The arithmetic might also need
improvement to lessen the accumulation of error in such large calculations.
Whatever the case, the computing profession will play a crucial role in developing climate modeling.
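One long-established way to lessen such accumulation is compensated (Kahan) summation. The sketch below uses contrived values purely to show the idea; it says nothing about how any real climate code organizes its arithmetic.

# Minimal sketch of compensated (Kahan) summation, one standard technique for
# lessening rounding-error accumulation in very long sums. The input values
# are contrived to make the effect visible; they are not climate data.

import math

def naive_sum(values):
    total = 0.0
    for v in values:
        total += v            # each addition can silently drop low-order bits
    return total

def kahan_sum(values):
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y   # recover what the addition just discarded
        total = t
    return total

values = [1.0] + [1.0e-16] * 1_000_000   # one large term, then many tiny ones
reference = math.fsum(values)            # correctly rounded reference sum

print("naive error:", abs(naive_sum(values) - reference))   # about 1e-10
print("kahan error:", abs(kahan_sum(values) - reference))   # near zero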
MITIGATION
The international community already
recognizes that the human activity causing global warming must be curbed.
The Kyoto Agreement (http://unfccc.int/essential_background/kyoto_protocol/items/2830.php) intended just that but seems unlikely to have any nonpolitical effect.
The problem seems to be that key
political agents either do not know or
do not accept the relevant facts. For
example, at the recent two-week UN
conference on climate change in
Buenos Aires, the US representatives
reportedly said that their government
wants to concentrate on long-term programs to develop cleaner-burning
energy technologies (www.state.gov/g/
oes/rls/fs/2004/38641.htm). This statement implies that they do not understand that dirty burning has been
lessening global warming, while the
burning of fossil fuels adds CO2 to the
atmosphere and—perhaps just as seriously in the long run—any burning
removes oxygen from it.
There is no doubt that we need to
slow the accelerating addition of net
CO2 to the atmosphere, even though it
is not the only cause of climate change.
Indeed it’s possible that soon we will
need to actually reduce the CO2 content
to avoid catastrophe. Better modeling
would give us a better idea of what’s
needed and provide a more persuasive
argument to make to the politicians.
Because the situation calls for political action, informing the public of the
relevant facts and projections might
seem a practical activity, one in which
computing professionals could use the
Web and media as useful conduits.
However, this assumes that an effec-
tive percentage of the public can understand the relevant scientific evidence
and reasoning.
This assumption might well be
wrong. For example, The Nation’s
Report Card: Mathematics 2000 noted
that more than one-third of US high
school seniors lack basic proficiency
in mathematics (http://nces.ed.gov/nationsreportcard/pdf/main2000/2001517.pdf). Worse, fewer than one-sixth
have better than a basic proficiency.
Clearly, the computing profession
should be pushing for the use of computers in schools simply to inculcate
such basic skills, not just to ensure that
younger people can understand what
the climate has in store for them.
If we accept the need for practical
and intense mitigation, a boost to education is crucial because of the underlying need for more scientists to analyze
and model the climate and for more
engineers to design and implement the
machinery for mitigation. Further,
given digital technology’s potential to
help educators, scientists, and engineers
be more effective, more computing professionals will be needed. And their
education must focus on the problems
they will face.
ADAPTATION
Engineers in general, and computing
professionals in particular, understand
professionally the likely short-term and
long-term behavior of complex systems. The Earth’s climate is changing
now, will change dramatically within
a few human generations, and could
change catastrophically in the longer
term. These changes will drastically
affect human society. If this is not soon
accepted globally and officially, the
world’s scientists and engineers must
take a large part of the blame.
The greatest danger is that human
society will not adapt to the inevitable
changes. Both mitigation and adaptation must be technologically based, just
as the climate change itself is. In the
worst case, the human effects of widespread starvation and thirst brought
on by glacier disappearance alone
could cause social disruption widespread enough to block the development and use of mitigation technology.
Governments will need to use technology of many kinds, necessarily supported by digital technology, in critically
threatened areas in the immediate
future simply to keep people there supplied with food and water.
In the medium term, agriculture as
we know it might not survive if we cannot stop the spread of deserts. Should
this occur, scientists would need to
develop ways to industrially manufacture food. The flooding of low-lying
coastal areas will require either constructing enormous levee banks or
relocating many of the world’s largest
cities and densest rural populations.
Increasing heat will mean that a large
proportion of the world’s population
will depend on air conditioning for its
very survival—even now, thousands
die of extreme summer heat each year.
Extremes of weather will require constructing buildings and infrastructure more sturdily or even completely redesigning them.
In the long run, if mitigation is
unsuccessful, the human race will be
forced to live in a completely artificial
environment, isolated from Earth’s climate. Achieving this will present a huge
technological challenge. On the bright
side, if we succeed, we should be able to
colonize the Moon and Mars as well.
MOTIVATION
Some readers may view this essay as
mere scaremongering. I intend it to
scare, but only because my reading has
convinced me that the human race
faces truly frightening prospects and
that we might indeed already be
doomed, at least as a civilization.
I ask only that those of you who
remain unconvinced of the reality of
these threats at least read some of the
resources I have found—all directly or
indirectly available through our wonderful Web and discoverable using its
search engines. George Monbiot’s
short essay, “Goodbye, Kind World”
(www.monbiot.com/archives/2004/08/
10/goodbye-kind-world-/), shows that
I am not alone in my apprehensions.
Mark Lynas’s book, High Tide (Flamingo, 2004; www.marklynas.org), is
a persuasive and well-documented eyewitness account of some climate
change effects already being felt.
More details can be found at government Web sites such as the Intergovernmental Panel on Climate Change
(www.ipcc.ch), academic Web sites such
as that for the American Institute of
Physics (www.aip.org/history/climate/
summary.htm), and activists’ Web sites
such as www.worldwatch.org and
www.climateark.org.
Technology has almost entirely
shaped the outward aspects of our
civilization, and civilization’s use of
technology will certainly determine its
own fate. Given that digital technology
has become the main enabler of other
technologies, this issue has undeniable
relevance to the computing profession.
In this case, we face the real danger that
inaction by our profession and others
might force us to share the fate of the
apocryphal frog who, oblivious to his
imminent demise, boiled to death in a
gradually warming saucepan. ■
Neville Holmes is an honorary research
associate at the University of Tasmania’s School of Computing. Contact
him at [email protected].
Details of citations in this essay and links to further material are at www.comp.utas.edu.au/users/nholmes/prfsn.